
Friday, January 29, 2016

Will One Line Of Code Help Your SEO?

There's been quite a lot of discussion online (and a little offline) about a recent blog article called "How I Sped Up My Site 68.35% With One Line of Code".

I think the biggest buzz about this article has been in the SEO community, which suddenly got all excited about a magical way to speed up web pages. With a mention from Moz (the organic optimisation industry's catnip), you could be fooled into thinking that one person had suddenly found a way to massively boost a site to the top of the results pages.
Note: For those who don't know, the speed at which a page downloads is cited as one of the numerous factors search engines such as Google take into consideration when ranking (judging) your site... so a much faster page load speed from just one little line of code would be fabulous.
But alas, that's not the case.

You see, I think this article is misleading, as what it actually explains is how to use an HTML link element with the attribute rel="prerender".

For those who don't know, the rel-prerender tag tells the browser to fetch and render, in the background, the next page the site developer expects users to click on. For example, Google sometimes use it in their search engine results pages (SERPs) to make clicking on the first result feel much quicker.

To explain how this works on your own website, let's imagine you are on page 1 and want to automatically call up page 2 behind the scenes (so that it appears very quickly). You therefore insert the rel-prerender tag into page 1 to fetch page 2 before it is clicked on.
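As a minimal sketch (the URL here is just a placeholder), the tag sits in the <head> of page 1 and looks like this:

<!-- In the <head> of page 1: asks the browser to pre-render page 2 -->
<link rel="prerender" href="http://www.example.com/page-2.html">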

Where might you use this?
Well, you might use it on a login page (page 1) where the logged-in page (page 2) is usually the next step. You could even use it on an eCommerce site to pre-render the shopping cart, I guess... BTW: DO NOT DO THIS!

But as you would expect, there's a catch. Pre-rendering page 2 is the act of requesting a view of it in advance, so people arriving on page 1 can trigger a page 2 view without ever seeing it... and in many cases they never will. This means that some analytics packages record it as a page impression (not Google Analytics, it's clever like that), and ads on that page may be triggered even when nobody is there to see them. It also adds load to your servers every time a page is pre-rendered, so good luck telling your tech support person you're adding extra load onto the system for pages that may never be seen.

So does it have an effect on SEO? Well, I may be wrong... but I really can't see how it helps organic site optimisation, as you are not speeding up the render of the page you want to appear in the SERPs (page 1). What you are actually doing is speeding up the potential delivery of the next page (page 2) you expect the user to see. And that's not SEO, that's a caching strategy.

Sunday, September 15, 2013

Embedding a Google+ post inside a blog article


I find I'm using Google Plus more these days, so I thought I'd work out how to use the embed post feature. And as this blog runs on Google's Blogger platform, inserting a G+ post into this blog seemed the obvious choice.

However, it was actually easier than I thought. Selecting the drop-down menu at the top of a post gives you a short list of options.


Then, by selecting "Embed Post", you get access to the HTML code, which you then copy and paste into the HTML editor of your blog.
Note: editing HTML might not be everyone's idea of 'easy', but for most technically proficient users this should be relatively simple.
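For reference, the code Google+ generated looked something like this from memory (the post URL is a placeholder, and the exact markup may have varied):

<!-- Loads Google's platform script, which renders the embedded post -->
<script src="https://apis.google.com/js/platform.js" async defer></script>
<div class="g-post" data-href="https://plus.google.com/+SomeUser/posts/XXXXXXXXXXX"></div>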

Friday, August 23, 2013

Further musings about Meta tags

In a recent posting, I mentioned how the Meta Keywords tag is no longer used by search engines to rank websites. Even Google now officially states that they don't bother with it... so as a search engine optimisation technique, I wouldn't spend any time on it.
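For reference, this is the tag in question, which sits in the page <head> (the keywords here are illustrative):

<meta name="keywords" content="red patent shoe, footwear, scarlet platform brogue">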


This therefore raises the question of whether you should even include it in your site or if you should remove it.

So here's some thoughts on the pros and cons of keeping this tag in your site.

Remove them:
  • Your site's HTML code can easily be seen by viewing the source in a desktop browser. This means the keywords are always on display and can therefore give your competitors insight into the keywords you are targeting.
  • Although a lot of people are now on super-fast home broadband and work connections, there are still a number of users on slower download speeds... including those on mobile devices. Removing a line of HTML code isn't going to make your site noticeably quicker, but as one UK supermarket slogan goes... every little helps.
Keep them:
  • HTML / accessibility standards change and evolve from time to time, so there is a chance that the Meta Keywords tag could be brought back into use (although very unlikely, I guess).
  • Some on-site search mechanisms might still use it to classify pages within your own web presence.
  • If you're after throwing your competition off the scent of the keywords you're actually targeting, you could always put false ones in your meta tags... but then, that might be taking things a little too far.

Monday, June 3, 2013

The philosophy of content marketing

In a previous post I posed the Content Marketing equivalent of a long-standing philosophical question: "if words are written and nobody reads them, are they really content?"

But the creation of content doesn't exist in a vacuum. To succeed at content marketing you don't just need great content... You also need:

100% code:
Developing HTML that only just qualifies as 'fit for purpose' at the time of testing not only means that you may have issues down the line (e.g. when a specific browser is slightly updated) but may also hamper some of your SEO efforts. For example, some blogging platforms (e.g. WordPress) can take quite a lot of effort to make SEO-friendly.
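As a rough sketch of what 'coding to standards' looks like in practice (all content here is placeholder), a clean, valid page skeleton gives both browsers and search engine spiders an unambiguous structure to work with:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>A descriptive, keyword-relevant page title</title>
</head>
<body>
  <h1>One clear heading per page</h1>
  <p>Supporting content goes here.</p>
</body>
</html>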

Killer UX:
Creating a fantastic user experience helps visitors browse your site with ease and complete the tasks you want them to, using functionality and content to inform them at every relevant step of the user journey.
So does content marketing include the use of A/B and multivariate testing (MVT) approaches to optimise the user experience? You betcha! Alternative versions of content can have a significant influence on visitor bounce rates and understanding... which can lead to improved conversion.

Exemplary 'white hat' SEO techniques:
Forget the grey and murky areas of questionable search engine optimisation tactics; your content marketing efforts have to be based on sound and utterly legitimate techniques. Why? Well, thanks to the recent Google algorithm updates, there is now an even greater chance that less-than-honourable techniques could negatively affect your website's organic rankings.

Insight from digital analytics:
A good analytical understanding of what your visitors are doing when they get to your website gives you the knowledge to evolve your content (text, imagery, video, animations, etc.) by changing it rapidly to respond to trends.

Friday, February 8, 2013

Tag Management basics

Tag Management Systems (TMSs, also known as container tags) are a way of simplifying the deployment and ongoing maintenance of JavaScript tags across a website. They are typically implemented in partnership with applications such as digital analytics, personalisation and online marketing, although others can be integrated too.

The concept of a TMS is simple... in true Lord of the Rings style, you have “one tag to rule them all”. In other words, you insert a single tag into your HTML and this then calls any number of other tags to the page (sometimes based on display rules, which fire certain tags in specific situations).
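As a simplified, hypothetical sketch (every TMS vendor has their own version of this snippet, and the container URL here is a placeholder), the single container tag is typically an asynchronous script loader along these lines:

<script>
  // Hypothetical container snippet: loads the TMS script without blocking
  // the page; that script then injects analytics, marketing and other tags
  (function() {
    var s = document.createElement('script');
    s.async = true;
    s.src = '//tms.example.com/container/ABC123.js'; // placeholder container URL
    document.getElementsByTagName('head')[0].appendChild(s);
  })();
</script>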

The business case for Tag Management is made by calculating the effort and cost of manually deploying new tags (or updating incumbent ones), along with the increased speed of implementation and the reduced vendor lock-in. There are also potential benefits in having a single place to manage your tags, and therefore to keep track of the potentially numerous tags across your online estate, as well as performance improvements from delivering only known JavaScript efficiently to your end user.

Thursday, August 4, 2011

Screen scraping: not dead, just renamed

A few years back I blogged about the act of screen scraping and why it was wrong to do this.

Since that time I've seen clients (primarily eCommerce ones) develop feeds for:



  • Affiliates
  • Google product search
  • 3rd party applications (e.g. mobile apps)
  • and other such things

This, however, means that a decent website may quickly end up with a number of different feeds that all do different things, creating a mesh of XML files and APIs that can become quite complex to maintain and manage.

One approach is to create a single feed that is then used for everything, perhaps going via a marketing agency who can reformat it for different purposes. However, this can turn out to be a pretty bulky file (if you have a large catalogue, it can quickly become several megabytes in size) or can contain details that you might not wish all parties to have (e.g. links to the hi-res images on your Content Delivery Network that you may be paying for by the megabyte).

So I was reasonably interested in this article from eConsultancy that seemed to address this very issue. Had they really found a decent solution to a problem that I think will only get worse over time as the needs of eCommerce sites grow?

Well the answer lies in this part of the posting:



“Next-generation data feed solutions allow feeds to be generated and deployed quickly and at low cost by extracting the ‘front end’ product-related HTML code from the website, with no requirement for any ‘back end’ data – or expertise on the part of the merchant. By harvesting elements such as pricing, availability and product attributes directly from the merchant’s website, it is possible to ensure that the extracted data feed is comprehensive and accurate.”


So let me get this straight. This 'next generation' method doesn't use an actual data feed from the site owner. It works by 'harvesting elements' from the HTML of the merchant's site, without their active involvement.

And that's not screen-scraping how exactly?

Monday, May 16, 2011

Advanced SEO : Using canonical references

What is Canonicalization?
The process of picking a single URL, from a range of URLs, to be indexed by the search engines. This happens when there are duplicate versions of the same or similar content on one site. Think of it as a way of suggesting the 'true' URL for your page(s) to the search engines.

Why would I have multiple versions of the same content?
This usually occurs when you have dynamic content delivered from a database or there is more than one URL for a single page (typically the homepage, but not necessarily).
An example of this is an ecommerce site that enables users to get the same or very similar results from different actions.
E.g. if you have the URL of a product catalogue listing page built up from a series of queries or filters.
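For instance (hypothetical URLs), all of the following might return essentially the same product listing:

http://www.example.com/shoes?colour=red&sort=price
http://www.example.com/shoes?sort=price&colour=red
http://www.example.com/shoes/red/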

The search engines understand this stuff?
Yes, they more than understand it; they use it as a signal for search engine optimisation. Since early 2009, Google, Yahoo and Microsoft have all stated that they support this way for website owners to say: “hey search engines, all these pages may have different URLs, but they are actually the same”.

What is the issue with duplicate content?
Search engines assume that everyone is trying to game them by submitting duplicate content to push themselves up the organic rankings. So when a search engine finds duplicate content, it won't know whether you have done this accidentally or on purpose. This means that it may well display the URL you don't want displayed, miss pages you want indexed or, even worse, downgrade the value of the page(s) it finds... affecting the rankings and potentially the optimisation of your entire site.

So how do I do this on my site?
If you read articles about this on the web, you would think that it is as simple as embedding a link in the header of your HTML pages (see: http://en.wikipedia.org/wiki/Canonical_meta_tag).
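A minimal example, placed in the <head> of every duplicate version of the page (the URL here is a placeholder):

<link rel="canonical" href="http://www.example.com/shoes/red/" />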

In theory this tells search engines the preferred location of the page to index (the “canonical” location) instead of the one it has found.

In practice it is more complex to create and maintain a working Canonical structure within an eCommerce site, especially one that has an evolving product catalogue.

Tuesday, November 23, 2010

Why isn't your annual report in HTML?

There's been a growing trend over the last few years to put company annual reports online. However, just sticking the print version on your website as a PDF (Portable Document Format) document doesn't cut the mustard any more. In fact, recent research from Nexxar shows that in the UK, two thirds of our top companies now produce their annual reports in HTML format.

Now, I've previously covered the topic in Company Reports - The Next Generation, and in last year's posting I explained the benefits of delivering your annual report in HTML format. However, what surprised me most about the findings of Nexxar's research was not that there are still a lot of companies holding onto their old formats, but that a number of the top French companies in the CAC40 have actually stopped producing HTML reports (and seemingly gone back to PDF or image-based reports in JPG format).

Have I missed something here and there's a reason for this? Or has the overall business climate contributed to a complete backwards step in how annual reports are now delivered online?

Thursday, September 23, 2010

SEO and eCommerce Merchandising

At Ideal Interface we have various clients who have eCommerce websites. Having done SEO work for them, including strategy and implementation consulting, I thought I'd post on some ways you can leverage your online trading site to benefit your company's search engine optimisation efforts.
  1. Give your products the names that people are looking for
    If you want to target search users who are looking for a "red patent shoe", then calling your product "scarlet platform brogue" isn't going to help as much.
  2. Provide decent product descriptions
    The supporting content you provide on the page will help the search engine spiders to understand your page better. Also try to include alternative words to target the long tail of search (Hint: you might want to mention "scarlet platform brogue" here, but again only if people will search for that term)
  3. Ensure your site navigation (and therefore your directory structure) includes keywords and that these are replicated in your page titles and breadcrumbs.
    E.g. footwear > shoes > smart shoes > red patent shoe
  4. Use of on-site search for keyword research
    Take a look at the terms that users type into your on-site search and you'll learn a lot about what they are looking for. Obviously these will be different to the terms that users type into the major search engines (e.g. they don't tend to search too often for your site name in on-site search, rather your brands or products) but they will be terms that real users type in expecting to find things.
    You'll also find out (if your search is clever enough) the terms that bring up no products. (Hint: this could either be highlighting a problem with the way you describe products or be an opportunity in the making).
  5. Optimise your entire site to ensure spidering and indexing by search engines
    As well as making sure every page of your site is coded to standards and that you're taking full advantage of semantic HTML, you should use tools such as Google Webmaster Tools, which is freely available.
  6. Create a dynamic sitemap.xml
    If your product catalogue is constantly changing, then I recommend the use of a dynamic sitemap.xml file. This is a technical file that sits in the root directory of your site and tells the search engines all the indexable pages you have. The sitemap.xml file should be regenerated each time your product catalogue changes, which saves you the effort of manually updating it (a minimal example follows).
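A minimal sitemap.xml looks like this (the URL and date are placeholders); a dynamic version simply generates one <url> entry per indexable page from the catalogue database:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/shoes/red-patent-shoe</loc>
    <lastmod>2010-09-23</lastmod>
  </url>
  <!-- ...one <url> entry per indexable page... -->
</urlset>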
Does anyone have any further suggestions?

Tuesday, August 31, 2010

How to optimise your press releases for search engines

The roles of the PR company and the digital agency are blurring all the time, particularly in respect of the use of Press Releases for SEO (Search Engine Optimisation). As you may have seen in a previous post, my view is:

If your PR agency is not thinking about the ways to make Press Releases more SEO friendly, then I suggest you need to reconsider your PR agency
But if you're either an in-house PR person or are looking to improve the SEO capability of your Press Releases, here are some pointers (assuming you already know how to write one).

  1. Keep your word count between 250 and 900
    Although opinion differs from SEO consultant to SEO consultant on the precise number of words to use in a Press Release, there is a general consensus that focusing on the first 250 is the right approach and that there is minimal value in going beyond 900.
    (Note: a press release must also be at least 250 words to be listed on Google News)
  2. Ensure you have the right keywords in the content and in the right proportions
    Whether your Press Release is going on your own website, posted to news wires or has a different home on the web altogether, search engines can only index content to appear for those search terms the content actually contains. Keyword targeting has been around almost as long as the search engines themselves and is well used by sites to generate traffic from organic search results.
    However, avoid the practice of 'keyword stuffing': packing as many repeated keywords into your content as possible in the misguided belief that this will push your page to the top of the search engines... you will, apparently, eventually get penalised for it.
    But how many mentions of your keyword count as 'stuffing' and how much is fair usage? The answer to this question is usually "it depends", or even "who cares, it's a myth that search engines actually look at this stuff". It's a question that is pretty hard to answer even for those who focus on the mathematics of it (which essentially means reverse-engineering the Google search algorithm, so good luck to anyone attempting that!), and each has a slightly different opinion on the subject. However, the general advice is to write 'normally' and not create content that obviously abuses the number of times keywords are mentioned. That doesn't really help if you are writing content to target certain search terms and want an actual figure, though... many experts suggest a keyword density of around 1 - 3% of total word usage, which for a 500-word release works out at roughly 5 to 15 mentions.
  3. Hyperlink keywords back to your own site
    Many Press Releases are copied in their entirety to news sites, and this includes embedded hyperlinks. These links all contribute to your SEO efforts and build traffic to your site.
  4. Ensure the target site(s) uses correct HTML code
    Does it utilise semantic code? Does it use your Press Release header as the page 'title'? (The page title is typically given more weight than any other text on the page.)
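As a sketch (the company and product names are made up), a well-coded target page carries the release headline through to both the page title and the main heading:

<head>
  <title>Acme Corp launches red patent shoe range</title>
</head>
<body>
  <h1>Acme Corp launches red patent shoe range</h1>
  <p>Press release body copy...</p>
</body>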

As you can see, there are many ways to improve upon the traditional Press Release to help your Search Engine Optimisation efforts. However, if anyone has any further suggestions, I look forward to hearing them and discussing their relative merits.

Wednesday, September 23, 2009

Company Reports - the next generation

Up until 2008, it was a requirement for any UK listed company to send a printed copy of its annual report to every single shareholder. Thanks to the Companies Act 2006, that is no longer the case, and a lot of companies have realised that providing their report online (in HTML format) has obvious benefits.

For example:
  • It assists their overall Search Engine Optimisation (SEO) efforts
  • It facilitates viewing/downloading only part of the report (useful if the user is on a low bandwidth connection)
  • It contributes to a more environmentally friendly company profile
Nexxar, a leader in providing online company reports, carry out an annual survey benchmarking international companies and their use of online reports.
http://www.nexxar.com/marketresearch.html

The report for 2009, which was released this month, found that almost half (48.7%) of those surveyed had a full HTML version of their report and, according to Nexxar:

HTML is the only way to make full use of the Internet’s potential.
HTML usage has gradually increased, compared to just PDFs or images (usually in JPG format). And surprisingly, it is UK companies that have led the way, with 64% of the FTSE 100 found to use HTML.


It is also interesting to note that in a piece of research focusing on corporate responsibility carried out by Radley Yeldar earlier this year, only 9% of the companies surveyed (mainly FTSE 100 and FTSE 250 constituents) used just HTML for their reports, with 47% using HTML and PDF combined.

Even US regulations now encourage company reporting in formats such as HTML, rather than just providing PDFs of the report pages. However, using HTML is just the start of what you can achieve with your online company report. Animations and even audio & video (whilst remaining accessible to as many users as possible, so all sound media must have a transcript) add interactivity and help in the communication of company information to shareholders, employees, journalists and others.

So where's the future going for company reports? Well, depending upon who you ask, you get different answers....
  • Those who have a CEO & board who are professionally media trained would say you should put as much (relevant) video as possible into your online report
  • Those who are standards based would say that you should make a lot of use of XBRL (eXtensible Business Reporting Language) or at least have sensible tagging of relevant content
  • And those who are on a budget will wait and see what works or becomes legislation