SEO Tutorial For Beginners 2015
LAST UPDATED: FEBRUARY 6TH, 2015
Here are some free search engine optimisation tips to help you create a successful site, based on over 10 years’ experience making websites rank in Google.
This is a beginner’s guide. I deliberately steer clear of techniques that might be ‘grey hat’, as what is grey today is often ‘black hat’ tomorrow, as far as Google is concerned.
No one-page guide can explore this complex topic in full – but what you’ll read here is how I approach the basics – and these are the basics – as far as I remember them. Or at least, the questions I had when I was starting out many years ago.
The first thing you should be aware of is that Google aims to reward high-quality content and remarkable ‘white hat’ web marketing techniques. It also aims to penalise websites that manage to rank in Google by breaking the rules.
These rules are not laws, only guidelines for ranking in Google, laid down by Google. You should note that some methods of ranking in Google are, in fact, actually illegal (hacking, for instance).
You can choose to follow and abide by these rules, bend them or ignore them – all with different levels of success (and levels of retribution from Google’s web spam team). White hats do it by the ‘rules’; black hats ignore the ‘rules’.
What you read in this article is perfectly within the laws and within the guidelines, and will help you increase the traffic to your website through organic, or natural, search engine results pages (SERPs).
What is SEO?
There are a lot of definitions of SEO (spelled search engine optimisation in the UK, Australia and New Zealand, or search engine optimization in the United States and Canada) but let’s face it, organic SEO in 2015 is about getting free traffic from Google, the most popular search engine in the world. The guide you are reading is for the more technically minded.
The art of SEO is understanding how people search for things, and understanding what type of results Google wants to display to its users.
It’s Google’s job to MAKE MANIPULATING ITS RESULTS HARD. So – it keeps moving the ‘goalposts’, modifying the ‘rules’ and raising quality standards for pages that compete for top ten rankings.
Google is very secretive about its ‘secret sauce’ and offers sometimes helpful and sometimes vague advice – and some say offers misdirection – about how to get more valuable traffic from Google.
Google is on record as saying the engine is intent on ‘frustrating’ search engine optimisers’ attempts to improve the amount of high-quality traffic to a website – at least (but not limited to) when they use low-quality strategies classed as web spam.
At its core, Google search engine optimisation is about KEYWORDS and LINKS. It’s about RELEVANCE, REPUTATION and TRUST. It is about QUALITY OF CONTENT & VISITOR SATISFACTION.
USER EXPERIENCE is how Google likes to bundle it all up.
Web page optimisation is about a web page being relevant enough for a query, and being trusted enough to rank for it.
It’s about ranking for valuable keywords for the long term, on merit. You can play by ‘white hat’ rules laid down by Google, or you can choose to ignore those and go ‘black hat’ – a ‘spammer’. MOST SEO tactics still work, for some time, on some level, depending on who’s doing them, and how the campaign is deployed.
Whichever route you take, know that if Google catches you trying to modify your rank using overtly brute-force methods, then they will class you as a web spammer, and your site will be penalised (normally you won’t rank high for important keywords).
These penalties can last years if not addressed, as some penalties expire and some do not.
Google does not want you to try and modify your rank. They would prefer you paid them to do that using Google AdWords.
The problem for Google is – ranking high in Google’s organic listings is real social proof for a business, a way to avoid PPC costs and still, simply, the BEST WAY to drive REALLY VALUABLE traffic to a site.
It’s FREE, too, once you’ve met the always increasing criteria it takes to rank top.
In 2015, you need to be aware that what works to improve your rank can also get you penalised (faster, and a lot more noticeably).
In particular, the Google webspam team is currently waging a PR war on sites that rely on unnatural links and other ‘manipulative’ tactics (and handing out severe penalties if it detects them) – and that’s on top of the many algorithms already designed to look for other manipulative tactics (like keyword stuffing).
Google is making sure it takes longer to see results from black and white hat SEO, and is intent on ensuring a flux in its SERPs (Search Engine Results Pages) based largely on where the searcher is in the world at the time of the search, and where the business is located near to that searcher.
There are some things you cannot directly influence legitimately to improve your rankings, but there is plenty you CAN do to drive more Google traffic to a web page.
Google has HUNDREDS of ranking factors with signals that can change daily to determine how it works out where your page ranks in comparison to other competing pages.
You will not ever find them all. Many ranking factors are on page, on site and some are off page, or off site. Some are based on where you are, or what you have searched for before.
I’ve been in online marketing for 15 years. In that time, I’ve learned to focus on optimising elements in campaigns that offer the greatest return on investment of one’s labour.
Learn SEO Basics….
Here are a few simple SEO tips to begin with:
- If you are just starting out, don’t think you can fool Google about everything all the time. Google has VERY probably seen your tactics before. So, it’s best to keep your plan simple. GET RELEVANT. GET REPUTABLE. Aim for a good, satisfying visitor experience. You may as well learn how to do it within Google’s Webmaster Guidelines first. Make a decision, early, on whether you are going to follow Google’s guidelines, or not, and stick to it. Don’t be caught in the middle. Do not always follow the herd.
- If your aim is to deceive visitors from Google, in any way, Google is not your friend. Google is hardly your friend at any rate – but you don’t want it as your enemy. Google will send you lots of free traffic though if you manage to get to the top of search results, so they are not all that bad.
- A lot of optimisation techniques that are effective in boosting a site’s rankings in Google are against Google’s guidelines. For example: many links that may have once promoted you to the top of Google may in fact today be hurting your site and its ability to rank high in Google. Keyword stuffing might be holding your page back… You must be smart, and cautious, when it comes to building links to your site in a manner that Google *hopefully* won’t have too much trouble with in the FUTURE. Because they will punish you in the future.
- Don’t expect to rank number 1 in any niche for a competitive keyword without a lot of investment and work. Don’t expect results overnight. Expecting too much too fast might get you in trouble with the spam team.
- You don’t pay anything to get into Google, Yahoo or Bing natural, or free listings (SERPS). It’s common for the major search engines to find your website pretty easily by themselves within a few days. This is made so much easier if your website actually ‘pings’ search engines when you update content (via XML sitemaps or RSS for instance).
- To be listed and rank high in Google and other search engines, you really should consider and largely abide by search engine rules and official guidelines for inclusion. With experience, and a lot of observation, you can learn which rules can be bent, and which tactics are short term and perhaps should be avoided.
- Google ranks websites (relevancy aside for a moment) by the number and quality of incoming links to a site from other websites (amongst hundreds of other metrics). Generally speaking, a link from a page to another page is viewed in Google’s “eyes” as a vote for the page the link points to. The more votes a page gets, the more trusted a page can become, and the higher Google will rank it – in theory. Rankings are HUGELY affected by how much Google ultimately trusts the DOMAIN the page is on. BACKLINKS (links from other websites) trump every other signal.
- I’ve always thought if you are serious about ranking – do so with ORIGINAL COPY. It’s clear – search engines reward good content it hasn’t found before. It indexes it blisteringly fast, for a start (within a second, if your website isn’t penalised!). So – make sure each of your pages has enough text content you have written specifically for that page – and you won’t need to jump through hoops to get it ranking.
- If you have original quality content on a site, you also have a chance of generating inbound quality links (IBL). If your content is found on other websites, you will find it hard to get links, and it probably will not rank very well as Google favours diversity in its results. If you have decent original content on your site, you can let authority websites, those with online business authority, know about it, and they might link to you – this is called a quality backlink.
- Search engines need to understand that a link is a link. Links can be designed to be ignored by search engines (the nofollow attribute effectively cancels out a link, for instance – see the example after this list).
- Search engines can also find your site by other web sites linking to it. You can also submit your site to search engines direct, but I haven’t submitted any site to a search engine in the last 10 years – you probably don’t need to do that.
- Google and Bing use a crawler (Googlebot and Bingbot) that spiders the web looking for new links. These bots might find a link to your home page somewhere on the web and then crawl and index the pages of your site if all your pages are linked together (in almost any way). If your website has an XML sitemap, for instance, Google will use that to include that content in its index. An XML sitemap is INCLUSIVE, not EXCLUSIVE. Google will crawl and index every single page on your site – even pages left out of an XML sitemap.
- Many think Google will not allow new websites to rank well for competitive terms until the web address “ages” and acquires “trust” in Google – I think this depends on the quality of the incoming links. Sometimes your site will rank high for a while then disappear for months. A “honeymoon period” to give you a taste of Google traffic, no doubt.
- Google WILL classify your site when it crawls and indexes it – and this classification can have a DRASTIC effect on your rankings – it’s important for Google to work out WHAT YOUR ULTIMATE INTENT IS – do you want to be classified as an affiliate site made ‘just for Google’, a domain holding page, or a small business website with a real purpose? Ensure you don’t confuse Google – be explicit with all the signals you can, to show on your website you are a real business, and your INTENT is genuine – and even more importantly today – FOCUSED ON SATISFYING A VISITOR.
- NOTE – If a page exists only to make money from Google’s free traffic – Google calls this spam.
- The transparency you provide on your website in text and links about who you are, what you do, and how you’re rated on the web or as a business is one way that Google could use (algorithmically and manually) to ‘rate’ your website. Note that Google has a HUGE army of quality raters and at some point they will be on your site if you get a lot of traffic from Google.
- To rank for specific keyword phrase searches, you generally need to have the keyword phrase or highly relevant words on your page (not necessarily altogether, but it helps) or in links pointing to your page/site.
- Ultimately what you need to do to compete is largely dependent on what the competition for the term you are targeting is doing. You’ll need to at least mirror how hard they are competing, if a better opportunity is hard to spot.
- As a result of other quality sites linking to your site, the site now has a certain amount of PageRank that is shared with all the internal pages that make up your website, and that, in turn, helps provide a signal as to where those pages rank in the future.
- Yes, you need to build links to your site to acquire more PageRank, or Google ‘juice’ – or what we now call domain authority or trust. Google is a links-based search engine – it does not quite understand ‘good’ or ‘quality’ content – but it does understand ‘popular’ content. It can also usually identify poor, or THIN, content – and it penalises your site for that – or, at least, it takes away the traffic you once had. Google doesn’t like calling the actions it takes a ‘penalty’ – it doesn’t look good. They blame your ranking drops on their engineers getting better at identifying quality content or links, or the inverse – low-quality content and unnatural links. If they do take action against your site for paid links, they call this a ‘Manual Action’ and you will get notified about it in Webmaster Tools if you sign up.
- Link building is not JUST a numbers game, though. One link from a “trusted authority” site in Google could be all you need to rank high in your niche. Of course, the more “trusted” links you build, the more trust Google will have in your site. It’s pretty clear that you need MULTIPLE trusted links from MULTIPLE trusted websites to get the most from Google.
- Try and get links within page text pointing to your site with relevant, or at least natural-looking, keywords in the text link – not, for instance, in blogrolls or site-wide links. Try to ensure the links are not obviously “machine generated”, i.e. site-wide links on forums or directories. Get links from pages that, in turn, have a lot of links to them, and you will soon see the benefits.
- Onsite, consider linking to your other pages by linking to them within text. I usually only do this when it is relevant – often, I’ll link to relevant pages when the keyword is in the title elements of both pages. I don’t really go in for auto-generating links at all. Google has penalised sites for using particular auto link plugins, for instance, so I avoid them.
- Linking to a page with actual key phrases in the link helps a great deal in all search engines when you want to feature for specific key terms, e.g. “seo scotland” as opposed to http://www.hobo-web.co.uk or “click here“. Saying that – in 2015, Google is punishing manipulative anchor text very aggressively, so be sensible – and stick to brand links and plain URL links that build authority with less risk. I rarely ever optimise for grammatically incorrect terms these days (especially in links).
- I think anchor text in internal navigation links is still valuable – but keep it natural. Google needs links to find and help categorise your pages. Don’t underestimate the value of a clever, keyword-rich internal link architecture and be sure to understand, for instance, how many words Google counts in a link, but don’t overdo it. Too many links on a page could be seen as a poor user experience. Avoid lots of hidden links in your template navigation.
- Search engines like Google ‘spider’ or ‘crawl’ your entire site by following all the links on your site to new pages, much as a human would click on the links of your pages. Google will crawl and index your pages, and within a few days normally, begin to return your pages in search engine results pages (SERPS).
- After a while, Google will know about your pages, and keep the ones it deems ‘useful’ – pages with original content, or pages with a lot of links to them. The rest will be de-indexed. Be careful – too many low quality pages on your site will impact your overall site performance in Google. Google is on record talking about good and bad ratios of quality content to low quality content.
- Ideally you will have unique pages, with unique page titles and unique page descriptions if you choose to use the latter. Google does not seem to use the meta description when actually ranking your page for specific keyword searches if it is not relevant, and unless you are careful you might end up just giving spammers free original text for their site, and not yours, once they scrape your descriptions and put the text in the main content on their site. I don’t worry about meta keywords these days as Google and Bing say they either ignore them or use them as spam signals.
- Google will take some time to analyse your entire site, analysing text content and links. This process is taking longer and longer these days but is ultimately determined by your domain authority / real Pagerank as Google determines it.
- If you have a lot of duplicate low quality text already found by Googlebot on other websites it knows about, Google will ignore your page. If your site or page has spammy signals, Google will penalise it, sooner or later. If you have lots of these pages on your site – Google will ignore your efforts.
- You don’t need to keyword stuff your text until it is barely readable to beat the competition.
- You optimise a page for more traffic by increasing the frequency of the desired key phrase, related key terms, co-occurring keywords and synonyms in links, page titles and text content. There is no ideal amount of text – no magic keyword density. Keyword stuffing is a tricky business, too, these days.
- I prefer to make sure I have as many UNIQUE relevant words on the page that make up as many relevant long tail queries as possible.
- If you link out to irrelevant sites, Google may ignore the page, too – but again, it depends on the site in question. Who you link to, or HOW you link, REALLY DOES MATTER – I expect Google to use your linking practices as a potential means by which to classify your site. Affiliate sites, for example, don’t do well in Google these days without some good quality backlinks.
- Many search engine marketers think who you actually link out to (and who links to you) helps determine a topical community of sites in any field, or a hub of authority. Quite simply, you want to be in that hub, at the centre if possible (however unlikely), but at least in it. I like to think of this one as a good thing to remember in the future as search engines get even better at determining topical relevancy of pages, but I have never really seen any granular ranking benefit (for the page in question) from linking out.
- I’ve got by, by thinking external links to other sites should probably be on single pages deeper in your site architecture, with those pages receiving your Google juice once it’s been “soaked up” by the higher pages in your site structure (the home page, your category pages). This is old school though – but it still gets me by. I don’t think you really need to worry about that in 2015.
- Original content is king and will attract a “natural link growth” – in Google’s opinion. Too many incoming links too fast might devalue your site, but again, I usually err on the safe side – I always aimed for massive diversity in my links, to make them look ‘more natural’. Honestly, I go for natural links in 2015, full stop.
- Google can devalue whole sites, individual pages, template generated links and individual links if Google deems them “unnecessary” and a ‘poor user experience’.
- Google knows who links to you, the “quality” of those links, and who you link to. These – and other factors – help ultimately determine where a page on your site ranks. To make it more confusing – the page that ranks on your site might not be the page you want to rank, or even the page that determines your rankings for this term. Once Google has worked out your domain authority – sometimes it seems that the most relevant page on your site Google HAS NO ISSUE with will rank.
- Google decides which pages on your site are important or most relevant. You can help Google by linking to your important pages and ensuring at least one page is really well optimised amongst the rest of your pages for your desired key phrase. Always remember Google does not want to rank ‘thin’ pages in its results – any page you want to rank should have all the things Google is looking for. PS – that’s a lot these days!
- It is important you spread all that real ‘PageRank’ – or link equity – to your keyword / phrase-rich sales pages, while keeping as much as possible with the rest of the site’s pages, so Google does not “demote” pages into oblivion – or “supplementals” as we old-timers used to see them presented. Again – this is slightly old school – but it gets me by even today.
- Consider linking to important pages on your site from your home page, and other important pages on your site.
- Focus on RELEVANCE first. Then, focus your marketing efforts and get REPUTABLE. This is the key to ranking ‘legitimately’ in Google in 2015.
- Every few months Google changes its algorithm to punish sloppy optimisation or industrial manipulation. Google Panda and Google Penguin are two such updates, but the important thing is to understand Google changes its algorithms constantly to control its listings pages (over 600 changes a year, we are told).
- The art of rank modification is to rank without tripping these algorithms or getting flagged by a human reviewer – and that is tricky!
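To illustrate the nofollow point in the list above, here is a minimal sketch of the difference between a normal link and a nofollowed one (the URLs and anchor text are placeholders, not real pages):

<!-- a normal link: passes anchor text and counts as a 'vote' for the target page -->
<a href="http://www.example.com/seo-tutorial/">seo tutorial for beginners</a>

<!-- a nofollow link: asks search engines not to count this link as a vote -->
<a href="http://www.example.com/paid-advert/" rel="nofollow">paid advert</a>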
Welcome to the tightrope that is SEO!
Keyword Research is ESSENTIAL
The first step in any SEO campaign is to do some keyword research.
There are many tools on the web to help with basic keyword research (including the free Google Keyword Research Tool), and there are more useful third-party tools to help you do this, too.
You can use these keyword research tools to quickly identify opportunities to get more traffic to a page:
Example Keyword | Search Volume |
---|---|
web search tutorial for beginners | 1900 |
ecommerce search optimization tutorials | 1600 |
how to seo a website | 880 |
seo tutorial step by step | 720 |
how to seo your website | 720 |
google seo tutorial | 320 |
best seo tutorial for novices | 260 |
free optimization tutorials | 210 |
on page seo tutorial | 170 |
seo tutorials for beginners | 170 |
all in one optimization tutorial | 170 |
website optimize tutorial | 140 |
how to seo website | 140 |
seo basics tutorial | 110 |
how to seo my website | 110 |
ethical seo tutorial download | 91 |
joomla optimizer tutorial | 91 |
online seo tutorial | 91 |
diy seo tutorial | 91 |
define optimization tutorial free | 73 |
optimize tutorial | 73 |
best seo tutorial | 58 |
basic seo tutorial | 58 |
bing optimization tutorial | 58 |
step by step seo tutorial | 46 |
beginners seo tutorial course | 46 |
seo tutorial google | 46 |
optimisation definition | 36 |
search engine optimization tutorial | 36 |
optimize website for free tutorial | 28 |
website seo tutorial | 28 |
easy optimisation tutorial for beginners lesson 1 | 28 |
website seo tutorial youtube | 22 |
google seo tutorials | 22 |
black hat optimisation tutorial | 22 |
seo tips and tricks tutorial | 16 |
local optimisation tutorial | 16 |
learn search tutorial blog | 16 |
expert seo tutorial 2014 | 12 |
seo tutorial online | 12 |
seo training tutorial printable | 12 |
natural optimization tutorial beginners | 12 |
online website optimisation tutorials |
Google Analytics Keyword ‘Not Provided’
Google Analytics was the very best place to look at keyword opportunity for some (especially older) sites, but that all changed a few years back.
Google stopped telling us which keywords are sending traffic to our sites from the search engine back in October 2011, citing privacy concerns for its users.
Google will now begin encrypting searches that people do by default, if they are logged into Google.com already through a secure connection. The change to SSL search also means that sites people visit after clicking on results at Google will no longer receive “referrer” data that reveals what those people searched for, except in the case of ads.
Google Analytics now displays keyword “not provided“ instead.
In Google’s new system, referrer data will be blocked. This means site owners will begin to lose valuable data that they depend on, to understand how their sites are found through Google. They’ll still be able to tell that someone came from a Google search. They won’t, however, know what that search was. SearchEngineLand
You can still get some of this data if you sign up for Google Webmaster Tools (and you can combine this in Google Analytics), but the data even there is limited and not always entirely accurate. The keyword data can be useful, though – and access to backlink data is essential these days.
This is another example of Google making ranking in organic listings HARDER – a change for ‘users’ that seems to have the most impact on ‘marketers’ outside of Google’s ecosystem – yes – search engine optimisers.
Now, consultants need to be page centric (abstract, I know), instead of just keyword centric when optimising a web page for Google. There are now plenty of third party tools that help when researching keywords but most of us miss the kind of keyword intelligence we used to have access to.
Proper keyword research is important because getting a site to the top of Google eventually comes down to your text content on a page and keywords in external & internal links. All together, Google uses these signals to determine where you rank if you rank at all.
There’s no magic bullet to this.
At any one time, your site is probably feeling the influence of some sort of algorithmic filter (for example, Google Panda or Google Penguin) designed to keep spam sites under control and deliver relevant results to human visitors.
One filter may be kicking in, keeping a page down in the SERPs, while another filter is pushing another page up. You might have poor content but excellent incoming links, or vice versa.
Try and identify the reasons Google doesn’t ‘rate’ a particular page higher than the competition – the answer is usually on the page or in backlinks pointing to the page.
Too few quality incoming links? Too many incoming links? No keyword rich text? Too much keyword rich text? Linking out to irrelevant sites? Too many ads above the fold? Affiliate links on every page of your site, found on a thousand other websites?
Whatever they are, identify issues and fix them.
Get on the wrong side of Google and your site might well be flagged for MANUAL review – so optimise your site as if, one day, you will get that website review from a Google Web Spam reviewer.
The key to a successful campaign, I think, is persuading Google that your page is most relevant to any given search query. You do this by good unique keyword rich text content and getting “quality” links to that page. The latter is far easier to say these days than actually do!
Next time you’re developing a page, consider that what looks spammy to you is probably spammy to Google. Ask yourself which pages on your site are really necessary. Which links are necessary? Which pages are getting the “juice” or “heat“? Which pages would you ignore?
You can help a site along in any number of ways (including making sure your page titles and meta tags are unique) but be careful. Obvious evidence of ‘rank modifying’ is dangerous.
I prefer simple SEO techniques, and ones that can be measured in some way. I have never just wanted to rank for competitive terms; I have always wanted to understand the reasons why I ranked for these key phrases. I try to create a good user experience for humans AND search engines. If you make high-quality text content relevant and suitable for both these audiences, you’ll more than likely find success in organic listings, and you might not ever need to get into the technical side of things, like redirects and URL rewriting.
To beat the competition in an industry where it’s difficult to attract quality links, you have to get more “technical” sometimes – and in some industries – you’ve traditionally needed to be 100% black hat to even get in the top 100 results of competitive, transactional searches.
There are no hard and fast rules to long term ranking success, other than developing quality websites with quality content and quality links pointing to it. The less domain authority you have, the more text you’re going to need. The aim is to build a satisfying website and build real authority!
You need to mix it up and learn from experience. Make mistakes and learn from them by observation. I’ve found getting penalised is a very good way to learn what not to do.
Remember there are exceptions to nearly every rule, and in an ever-fluctuating landscape you probably have little chance of determining exactly why you rank in search engines these days. I’ve been doing it for over 10 years and every day I’m trying to better understand Google, to learn more and learn from others’ experiences.
It’s important not to obsess about granular ranking specifics that have little return on your investment, unless you really have the time to do so! THERE IS USUALLY SOMETHING MORE VALUABLE TO SPEND THAT TIME ON. That’s usually either good backlinks or great content.
There ARE some things that are evident, with a bit of experience on your side:
Keep it simple. Don’t Build Your Site With Flash or HTML Frames
Well… not entirely in Flash, and especially not if you know very little about the ever-improving accessibility of Flash.
Flash is a proprietary plug-in created by Macromedia to deliver (admittedly) fantastically rich media for your websites. The W3C advises you avoid the use of such proprietary technology to construct an entire site. Instead, build your site with CSS and HTML, ensuring everyone, including search engine robots, can sample your website content. Then, if required, you can embed media files such as Flash in the HTML of your website.
Flash, in the hands of an inexperienced designer, can cause all types of problems at the moment, especially with:
- Accessibility
- Search Engines
- Users not having the Plug In
- Large Download Times
…and Flash doesn’t even work at all on some devices, like the Apple iPhone. Note that Google sometimes highlights if your site is not mobile friendly on some devices.
HTML5 is the preferred option over Flash these days for most designers. A site built entirely in Flash could cause an unsatisfactory user experience and could affect your rankings, especially in mobile search results. For similar accessibility and user satisfaction reasons, I would also say don’t build a site with website frames.
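If you do need rich media, here is a minimal sketch of embedding video with plain HTML5 rather than Flash (the file names are placeholders), so that devices without a plug-in, and search engine robots, can still read the fallback text:

<video controls width="640" poster="intro-poster.jpg">
  <source src="intro.mp4" type="video/mp4">
  <!-- fallback for browsers (and bots) that cannot play the video -->
  <p>Watch our <a href="intro.mp4">introduction video</a> (MP4).</p>
</video>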
Keep It Simple, Stupid
As in any form of design, don’t try and re-invent the wheel when simple solutions will suffice. The KISS philosophy has been around since the dawn of design.
KISS does not mean boring web pages. You can create stunning sites with smashing graphics – but you should build these sites using simple techniques – HTML & CSS, for instance. If you’re new to web design, avoid things like Flash and JavaScript, especially for elements like scrolling news tickers etc. These elements work fine for TV – but generally only cause problems for website visitors.
Keep layouts and navigation arrays consistent and simple too. Don’t spend time, effort and money (especially if you work in a professional environment) designing fancy navigation menus if, for example, your new website is an information site.
Same with website optimisation – keep your documents well structured and keep your page Title Elements and text content relevant, use Heading tags sensibly and try and avoid leaving too much of a footprint – whatever you are up to.
Page Title Tag Best Practice
<title>What Is The Best Title Tag For Google?</title>
The page title tag (or HTML Title Element) is arguably the most important on-page ranking factor (with regards to web page optimisation). Keywords in page titles can HELP your pages rank higher in Google’s results pages (SERPs). The page title is also often used by Google as the title of a search snippet link in search engine results pages.
For me, a perfect title tag in Google is dependent on a number of factors, and I will lay down a couple below, but I have since expanded page title advice on another page (link below):
- A great page title for Google is highly relevant to the page it refers to. It will probably be displayed in a web browser’s window title bar, and will probably be the clickable search snippet link in Google, Bing & other search engines. The title element is the “crown” of a keyword-targeted article, with the important keyword, or a KNOWN SYNONYM, featuring AT LEAST ONCE, as all search engines place a lot of weight on what words are contained within this HTML element.
- Last I checked, Google displayed as many characters as it could fit into “a block element that’s 512px wide and doesn’t exceed 1 line of text”, although this, too, is subject to change. THERE IS NO EXACT AMOUNT OF CHARACTERS any optimiser can lay down as bullet-proof best practice to GUARANTEE your title will display, in full, in Google. Ultimately – the characters and words you USE in your page title will determine IF your entire page title will be seen in a Google search snippet – and THAT is even if Google uses your page title as your search snippet at all. Google used to count 70 characters in a title – but not in 2015. If you want to ENSURE your full title tag shows in Google SERPs, stick to about 65 characters. I have seen up to 69 characters displayed recently – but as I said – it depends on the characters you use. People use too many devices, and Google changes things so much, that it’s a thankless task to lay down a best practice for the exact amount of words or characters to use in your page Title Element for it to display properly ALL THE TIME. It was designed, by Google, to be this way.
- Google will INDEX perhaps 1000s of characters in a title… but no-one knows exactly how many characters or words Google will actually count AS a TITLE when determining relevance for ranking purposes. It is a very hard thing to try to isolate accurately. I have had ranking success with longer titles – much longer titles – Google certainly reads ALL the words in your page title (unless you are spamming it silly, of course). I often pick a long descriptive title over a shorter title. I just remember my important keyword phrases should be in the first few words of any Title Element.
- You can probably fit up to 12 words that will be counted as part of a page title, and consider using your important keywords in the first 8 words.
- Some page titles do better with a call to action – one which reflects exactly a searcher’s intent (e.g. to learn something, or buy something, or hire something). Remember this is your hook in search engines, if Google chooses to use your page title in its search snippet, and there are now a lot of competing pages out there!
- When optimising a title, you are looking to rank for as many terms as possible, without keyword stuffing your title. Often, the best bet is to optimise for a particular phrase (or phrases) – and take a more long-tail approach. Yes – that does mean more pages on your site – that’s the reality in 2015. Content. Content. Content.
- The perfect title tag on a page is unique to other pages on the site. In light of Google Panda, an algorithm that looks for a ‘quality’ in sites, you REALLY need to make your page titles UNIQUE, and minimise any duplication, especially on larger sites.
- I like to make sure my keywords feature as early as possible in a title tag but the important thing is to have important keywords and key phrases in your page title tag SOMEWHERE.
- For me, when ranking in Google is more important than branding, the company name goes at the end of the tag, IF I ADD IT AT ALL, and I use a variety of dividers to separate the phrases, as no one way performs best (see the example after this list). If you have a recognisable brand – then there is an argument for putting this at the front of titles.
- I like to think I write titles for search engines AND humans. Mostly for Google though…
- Know that Google tweaks everything regularly – why would what the perfect title keys off be any different? So MIX it up…
- Don’t obsess! Natural is probably better, and will only get better as engines evolve. I optimise for important key-phrases, rather than just keywords.
- Generally speaking, the more domain trust/authority your SITE has in Google, the easier it is for a new page to rank for something. So bear that in mind. There is only so much you can do with your page titles – your website’s rankings in Google are a LOT more to do with OFFSITE factors than ONSITE ones.
- Also bear in mind, in 2015, the HTML title element you choose for your page may not be what Google chooses to include in your SERP snippet. The search snippet title and description are very much QUERY dependent these days. Google often chooses what it thinks is the most relevant title for your search snippet, and it can use information from your page, or in links to that page, to create a very different SERP snippet title.
- Click through rate is something that is likely measured by Google when ranking pages (Bing say they use it too, and they now power Yahoo), so it is really worth considering whether you are best optimising your page titles for click-through rate or optimising for more search engine rankings.
- I would avoid keyword stuffing your page titles as it is an easy thing for Google to detect and mark you down for – if they care.
- Remember… think ‘keyword phrase‘ rather than ‘keyword‘, ‘keyword‘, ‘keyword‘… optimise for very specific key phrases, with pages that are highly relevant to them.
- A good page title will not necessarily make a thin, low-quality page rank better. Low-quality pages need to be improved to get the most out of your page title.
- Google will select the best title it wants for your search snippet – and it will take that information from multiple sources, NOT just your page title element. A small title is often appended with more information about the domain. Sometimes, if Google is confident in the BRAND name, it will replace your title with that (often adding it to the beginning of your title with a colon, or sometimes appending the end of your snippet title with the actual domain the page is on).
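Pulling a few of the points above together, here is a hypothetical example of the kind of title pattern described (keyword phrase first, a divider, brand at the end – the phrase and brand name are made up for illustration):

<title>SEO Tutorial For Beginners - Step By Step Guide | Example Brand</title>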
A Note About Title Tags;
When you write a page title, you have a chance right at the beginning of the page to tell Google (and other search engines) if this is a spam site or a quality site – such as – have you repeated the keyword 4 times or only once? I think title tags, like everything else, should probably be as simple as possible, with the keyword once and perhaps a related term if possible.
I always aim to keep my HTML page title elements as simple, and looking as human-generated and unique, as possible.
I’m certainly cleaning up the way I write my titles all the time. How do you do it?
Meta Keywords Best Practice
A hallmark of shady natural search engine optimisation companies – the meta keywords tag. Companies that waste time and resources on these items waste clients’ money – that’s a fact:
<meta name="Keywords" content="s.e.o., search engine optimisation, optimization">
I have one piece of advice about the meta keywords tag, which, like the title tag, goes in the head section of your web page: forget about it.
If you are relying on meta keyword optimisation to rank for terms, you’re dead in the water. From what I see, Google and Bing ignore meta keywords – or at least place no weight on them to rank pages. Yahoo may read them, but really, a search engine optimiser has more important things to worry about than this nonsense.
What about other search engines that use them? Hang on while I submit my site to those 75,000 engines first [sarcasm!]. Yes, 10 years ago early search engines liked looking at your meta keywords. I’ve seen OPs in forums ponder which is the best way to write these tags – with commas, with spaces, limiting how many characters. Forget about meta keyword tags – they are a pointless waste of time and bandwidth. We could probably save a rainforest with the bandwidth saved if everybody removed their keyword tags.
Tin Foil Hat Time
So you have a new site… you fill your home page meta tags with the 20 keywords you want to rank for – hey, that’s what optimisation is all about, isn’t it? You’ve just told Google, by the third line of text, what to sandbox you for. And wasn’t meta name=”Keywords” originally for words that weren’t actually on the page, words that would help classify the document? Sometimes competitors might use the information in your keywords to determine what you are trying to rank for, too…
If everybody removed them and stopped abusing meta keywords, Google would probably start looking at them again, but that’s the way of things in search engines. I ignore them. Not even a ‘second order’ effect, in my opinion.
Meta Description Best Practice
Like the title element and unlike the meta keywords tag, this one is important, both from a human and search engine perspective.
<meta name="Description" content="Get your site on the first page of Google,
Yahoo and Bing. Call us on 0845 094 0839. A company based in Scotland." />
Forget whether or not to put your keyword in it; make it relevant to a searcher and write it for humans, not search engines. If you want this 20-word snippet, which accurately describes the page you have optimised for one or two keyword phrases, to appear when people use Google to search, make sure the keyword is in there.
I must say, I normally do include the keyword in the description, as this usually gets it into your SERP snippet, but I think it would be a fair guess that more trusted sites would benefit more from any boost a keyword in the meta description tag might have than an untrusted site would.
Google looks at the description but there is debate whether it actually uses the description tag to rank sites. I think they might at some level, but again, a very weak signal. I certainly don’t know of an example that clearly shows a meta description helping a page rank.
Sometimes, I will ask a question in my titles and answer it in the description; sometimes I will just give a hint.
It’s also very important, in my opinion, to have unique title tags and unique meta descriptions on every page of your site. It’s a preference of mine, but I don’t generally auto-generate descriptions with my CMS of choice either – normally I’ll elect to remove the tag entirely rather than do that, and my pages still do well (and Google generally pulls a decent snippet out on its own, which you can then go back and optimise for SERPs). There are times when I do auto-generate descriptions, and that’s when I can still make them unique to the page using some sort of server-side PHP.
Tin Foil Hat Time
Sometimes I think that if your titles are spammy, your keywords are spammy, and your meta description is spammy, Google might stop right there – even Google probably wants to save bandwidth at some point. Putting a keyword in the description won’t take a crap site to number 1 or raise you 50 spots in a competitive niche – so why optimise for a search engine when you can optimise for a human? I think that is much more valuable, especially if you are in the mix already – that is, on page one for your keyword.
So, the meta description tag is important in Google, Yahoo and Bing and every other engine listing – very important to get it right. Make it for humans.
Oh, and by the way – Google seems to truncate anything over about 156 characters in the meta description, although this may actually be limited by pixel width in 2015.
More Reading:
- http://www.hobo-web.co.uk/create-unique-meta-descriptions/
- http://www.hobo-web.co.uk/definitive-guide-to-using-important-meta-tags/
Robots Meta Tag
Thus far I’ve theorised about the Title Element, the Meta Description Tag and Meta Keywords Tag. Next:
The Robots Meta Tag;
<meta name="robots" content="index, nofollow" />
I could use the above meta tag to tell Google to index the page but not to follow any links on the page. If, for some reason, I did not want the page to appear in Google search results at all, I would use “noindex” instead.
By default, Googlebot will index a page and follow links to it. So there’s no need to tag pages with content values of INDEX or FOLLOW. GOOGLE
There are various instructions you can make use of in your Robots Meta Tag, but remember Google by default WILL index and follow links, so you have NO need to include that as a command – you can leave the robots meta out completely – and probably should if you don’t have a clue.
Googlebot understands any combination of lowercase and uppercase. GOOGLE.
Valid values for the Robots Meta Tag “CONTENT” attribute are: “INDEX“, “NOINDEX“, “FOLLOW“ and “NOFOLLOW“. Pretty self-explanatory.
Examples:
- <META NAME="ROBOTS" CONTENT="NOINDEX, FOLLOW">
- <META NAME="ROBOTS" CONTENT="INDEX, NOFOLLOW">
- <META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">
- <META NAME="ROBOTS" CONTENT="NOARCHIVE">
- <META NAME="GOOGLEBOT" CONTENT="NOSNIPPET">
Google understands and interprets the following robots meta tag values:
- NOINDEX – prevents the page from being included in the index.
- NOFOLLOW – prevents Googlebot from following any links on the page. (Note that this is different from the link-level NOFOLLOW attribute, which prevents Googlebot from following an individual link.)
- NOARCHIVE – prevents a cached copy of this page from being available in the search results.
- NOSNIPPET – prevents a description from appearing below the page in the search results, as well as preventing caching of the page.
- NOODP – blocks the Open Directory Project description of the page from being used in the description that appears below the page in the search results.
- NONE – equivalent to “NOINDEX, NOFOLLOW”.
Robots META Tag Quick Reference
Terms | Googlebot | Slurp | BingBot | Teoma |
---|---|---|---|---|
NoIndex | YES | YES | YES | YES |
NoFollow | YES | YES | YES | YES |
NoArchive | YES | YES | YES | YES |
NoSnippet | YES | NO | NO | NO |
NoODP | YES | YES | YES | NO |
NoYDIR | NO | YES | NO | NO |
NoImageIndex | YES | NO | NO | NO |
NoTranslate | YES | NO | NO | NO |
Unavailable_After | YES | NO | NO | NO |
I’ve included the robots meta tag in my tutorial as this IS one of only a few meta tags / html head elements I focus on when it comes to managing Googlebot and Bingbot. At a page level – it is a powerful way to control if your pages are returned in search results pages.
These meta tags go in the HEAD section of an HTML page and represent the only tags for Google I care about. Just about everything else you can put in the HEAD of your HTML document is quite unnecessary and maybe even pointless (for Google optimisation, anyway).
If you are interested in using methods like on-page robots instructions and the robots.txt file to control which pages get indexed by Google and how Google treats them, Sebastian knows a lot more than me.
H1-H6: Headers
I can’t find any definitive proof online that says you need to use Heading Tags (H1, H2, H3, H4, H5, H6) or that they improve rankings in Google, and I have seen pages do well in Google without them – but I do use them, especially the H1 tag on the page.
For me it’s another piece of a ‘perfect’ page, in the traditional sense, and I try to build a site for Google and humans.
<h1>This is a page title</h1>
I still generally only use one <h1> heading tag on my keyword-targeted pages – I believe this is the way the W3C intended it to be used in HTML4 – and I ensure it appears at the top of the page, above relevant page text, and written with my main keywords or keyword phrases incorporated.
I have never experienced any problems using CSS to control the appearance of heading tags, making them larger or smaller.
You can use multiple H1s in HTML5, but most sites I work on still use HTML4.
I use as many H2 – H6 as is necessary depending on the size of the page, but generally I use H1, H2 & H3. You can see here how to use header tags properly (although basically just be consistent, whatever you do, to give your users the best user experience).
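For illustration, a minimal sketch of the kind of heading structure described above – one H1 at the top of the page, with H2s and H3s breaking up the rest of the copy (the headings themselves are made up):

<h1>SEO Tutorial For Beginners</h1>
<p>Introductory copy relevant to the main keyword phrase…</p>

<h2>Keyword Research</h2>
<p>Copy about keyword research…</p>

<h3>Free Keyword Research Tools</h3>
<p>Copy about specific tools…</p>

<h2>Page Title Best Practice</h2>
<p>Copy about page titles…</p>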
How many words in the H1 Tag? As many as I think is sensible – as short and snappy as possible usually.
Aaron Wall at SEOBook recommends not making your h1 tags the exact same as your page titles, although I personally have never seen a problem with this on a quality site. I also discovered Google will use your Header tags as page titles at some level if your title element is malformed.
As always be sure to make your heading tags highly relevant to the content on that page and not too spammy, either.
How Many Words & Keywords?
I get asked this all the time –
how much text do you put on a page to rank for a certain keyword?
The answer is there is no optimal amount of text per page, but how much text you’ll ‘need’ will be based on your DOMAIN AUTHORITY, your TOPICAL RELEVANCE and how much COMPETITION there is for that term, and HOW COMPETITIVE that competition actually is.
Instead of thinking about the quantity of the text, you should think more about the quality of the content on the page. Optimise this with searcher intent in mind. Well, that’s how I do it.
I don’t find that you need a minimum amount of words or text to rank in Google. I have seen pages with 50 words outrank pages with 100, 250, 500 or 1,000 words. Then again, I have seen pages with no text rank on nothing but inbound links or other ‘strategy’. In 2015, Google is a lot better at hiding away those pages, though.
At the moment, I prefer long-form pages with a lot of text, although I still rely heavily on keyword analysis to make my pages. The benefit of longer pages is that they are great for long-tail key phrases. Creating deep, information-rich pages really focuses the mind when it comes to producing authoritative, useful content.
Every site is different. Some pages, for example, can get away with 50 words because of a good link profile and the domain it is hosted on. For me, the important thing is to make a page relevant to a user’s search query.
I don’t care how many words I achieve this with and often I need to experiment on a site I am unfamiliar with. After a while, you get an idea how much text you need to use to get a page on a certain domain into Google.
One thing to note – the more text you add to the page, as long as it is unique, keyword rich and relevant, the more that page will be rewarded with more visitors from Google.
There is no optimal number of words on a page for placement in Google. Every website – every page – is different from what I can see. Don’t worry too much about word count if your content is original and informative. Google will probably reward you on some level – at some point – if there is lots of unique text on all your pages.
TIP: The ‘inverted pyramid‘ style of writing – leading with your most important points – is useful when creating pages for the web too – very useful.
Keyword Density?
The short answer to this is – no. There is no one-size-fits-all keyword density, no optimal percentage. Most web optimisation professionals agree there is no ideal percentage of keywords in text to get a page to number 1 in Google. Search engines are not that easy to fool, although the key to success in many fields is doing simple things well (or at least better than the competition). I write natural page copy where possible, always focused on the key terms – I never calculate density in order to identify the best % – there are way too many other things to work on. I have looked into this. If it looks natural, it’s OK with me. Normally I will try and get related terms into the page, and if I have 5 paragraphs, I might have the keyword in 4 or 5 of those, as long as it doesn’t look like I stuffed them in there.
Optimal keyword density is a myth, although there are many who disagree. Crazy stuff. (I think the page I just linked to is the longest page on the internet debunking keyword density!)
Internal Links To Relevant Pages
I’ll lay down my thoughts on internal link optimisation later in this tutorial, but on the page itself, I link internally to relevant pages in my site all the time.
I silo any relevance or trust mainly through links in text content and secondary menu systems, and between pages that are relevant in context to one another.
I don’t worry about perfect silo’ing techniques any more, and don’t worry about whether or not I should link to one category from another, as I think the ‘boost’ many proclaim is minimal on the size of sites I manage.
Sometimes I will ensure 10 pages link to 1 page in a theme, and not reciprocate the link. Other times, I will. It depends on the PageRank or Google juice I have to play with and, again, whether it feels right in the circumstances to do so, or on the size of the site and how deep I am in the structure.
There’s no set method I find works for every site, other than to link to related internal pages often and where appropriate – it’s where I find some creativity.
Link Out To Related Sites
Sticking firmly in on-page SEO territory, I regularly link out to other quality, relevant pages on other websites where possible and where a human would find it valuable.
I don’t link out to other sites from my home page. I want all the PR residing in the home page to be shared only with my internal pages. I don’t link out to other sites from my category pages either, for the same reason.
I link to other relevant sites (a deep link where possible) from individual pages and I do it often, usually. I don’t worry about link equity or PR leak because I control it on a page to page level.
This works for me, it allows me to share the link equity I have with other sites while ensuring it is not at the expense of pages on my own domain. It may even help get me into a ‘neighbourhood’ of relevant sites, especially when some of those start linking back to my site.
Linking out to other sites, especially using a blog, also helps tell others that might be interested in your content that your page is ‘here’. Try it.
Generally I won’t link out to sites using the exact keyword/phrase I am targeting, but I will be considerate, and usually try and link out to a site using keywords those bloggers / site owners would appreciate.
The recently leaked Quality Raters Guidelines document clearly tells web reviewers to identify how USEFUL or helpful your SUPPLEMENTARY NAVIGATION options are – whether you link to other internal pages or to pages on other sites.
Redirect Non WWW To WWW
I can’t even say this word properly – Canonicalization (US spelling). Does your site have canonicalization problems?
Simply put, http://www.hobo-web.co.uk/ can be treated by Google as a different url than http://hobo-web.co.uk/ even though it’s the same page, and it can get even more complicated.
It’s thought REAL PageRank and Google juice can be diluted if Google gets confused about your URLs and, speaking simply, you don’t want this PR diluted (in theory).
That’s why many, including myself, redirect non-www to www (or vice versa) if the site is on a Linux/Apache server (in the .htaccess file):
# redirect all non-www requests to the www version with a 301 (permanent) redirect
Options +FollowSymLinks
RewriteEngine on
RewriteCond %{HTTP_HOST} ^hobo-web\.co\.uk [NC]
RewriteRule ^(.*)$ http://www.hobo-web.co.uk/$1 [L,R=301]
Basically you are redirecting all the Google juice to one canonical version of a url.
Do you need to do this? No. But as standard these days, I do see it as best practice. It keeps things simple when optimising for Google. It should be noted, it’s incredibly important not to mix the two types of www/non-www on site when linking your own internal pages!
Google can handle most sites no problem even without this measure being taken, and it’s certainly no magic bullet implementing this canonicalisation fix. On its own, I see little boost. I am not an expert when it comes to the server side, of course, so I would love to hear other views.
In my experience it depends on the type of site. Are people linking to your site other than you?
If there are a lot of people linking to you, I would implement it. Imagine you have 10 links from relatively untrusted sites with the www and all of a sudden you get a link from a trusted site without the www (non www) – that’s when you might not get the most out of a link, it’s thought.
Note in 2015 Google asks you which domain you prefer in Google Webmaster Tools.
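As an aside, and in addition to the redirect above, another widely supported way to tell Google which URL you prefer is the rel=canonical link element in the head of the page. It is not a substitute for consistent internal linking, but it is a simple belt-and-braces signal – a minimal sketch, using this site’s own address as the example:

<link rel="canonical" href="http://www.hobo-web.co.uk/" />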
Alt Tags
NOTE: Alt Tags are counted by Google (and Bing), but I would be careful over-optimizing them. I’ve seen a lot of websites penalized for over-optimising invisible elements on a page. Don’t do it.
ALT tags are very important and I think a very rewarding area to get right. I always put the main keyword in an ALT once when addressing a page.
Don’t optimise your ALT tags (or rather, attributes) JUST for Google!
Use ALT tags (or rather, ALT Attributes) for descriptive text that helps visitors – and keep them unique where possible, like you do with your titles and meta descriptions.
Don’t obsess. Don’t optimise your ALT tags just for Google – do it for humans, for accessibility and usability. If you are interested, I ran a simple test using ALT attributes to determine how many words I could use in IMAGE ALT text that Google would pick up.
And remember – even if, like me most days, you can’t be bothered with all the image ALT tags on your page, at least use a blank ALT (or NULL value) so people with screen readers can enjoy your page.
Update 17/11/08 – Picked This Up At SERoundtable about Alt Tags:
JohnMu from Google: alt attribute should be used to describe the image. So if you have an image of a big blue pineapple chair you should use the alt tag that best describes it, which is alt=”big blue pineapple chair.” title attribute should be used when the image is a hyperlink to a specific page. The title attribute should contain information about what will happen when you click on the image. For example, if the image will get larger, it should read something like, title=”View a larger version of the big blue pineapple chair image.”
Barry continues with a quote:
As the Googlebot does not see the images directly, we generally concentrate on the information provided in the “alt” attribute. Feel free to supplement the “alt” attribute with “title” and other attributes if they provide value to your users! So for example, if you have an image of a puppy (these seem popular at the moment) playing with a ball, you could use something like “My puppy Betsy playing with a bowling ball” as the alt-attribute for the image. If you also have a link around the image, pointing a large version of the same photo, you could use “View this image in high-resolution” as the title attribute for the link.
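Putting that advice into markup, here's a rough example (the file names and wording are made up, not taken from the quote above):
<!-- Descriptive alt text on a content image -->
<img src="blue-pineapple-chair.jpg" alt="Big blue pineapple chair" />
<!-- A linked image: the title attribute describes what clicking will do -->
<a href="blue-pineapple-chair-large.jpg" title="View a larger version of the big blue pineapple chair image">
  <img src="blue-pineapple-chair.jpg" alt="Big blue pineapple chair" />
</a>
<!-- A purely decorative image: blank (null) alt so screen readers can skip it -->
<img src="divider.png" alt="" />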
Search Engine Friendly URLs (SEF)
Clean URLS (or search engine friendly urls) are just that – clean, easy to read, simple. You do not need clean urls in a site architecture for Google to spider a site successfully (confirmed by Google in 2008), although I do use clean urls as a default these days, and have done so for years. It’s often more usable.
Is there a massive difference in Google when you use clean urls?
No, in my experience it's very much a second or third order effect, perhaps even less, if used on its own. EDIT: Recent observations I have made seem to indicate they might be more valuable in 2010.
The thinking is that you might get a boost in Google SERPS if your URLS are clean – because you are using keywords in the actual page name instead of a parameter or ID number. Google might reward the page some sort of relevance because of the actual file / page name.
On its own, this boost is, in my experience, virtually non-detectable. Where the benefit is slightly detectable is when people (say, in forums) link to your site with the URL as the link text. Then it is fair to say you do get a boost, because keywords are in the actual anchor text of the link to your site – but again, that depends on the quality of the page linking to your site, i.e. whether Google trusts it and it passes PageRank (!) and anchor text relevance. And of course, you'll need citable content on that site of yours.
Sometimes I will remove the stop-words from a url and leave the important keywords as the page title because a lot of forums garble a url to shorten it.
I configure URLs the following way (a rough sketch of the rewrite follows the list):
- www.hobo-web.co.uk/?p=292 — is automatically changed by the CMS using url rewrite to
- www.hobo-web.co.uk/websites-clean-search-engine-friendly-urls/ — which I then break down to something like
- www.hobo-web.co.uk/search-engine-friendly-urls/
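If you are curious what that kind of rewrite looks like at the server level, here is a minimal, hypothetical .htaccess sketch – in practice a CMS like WordPress generates its own rules, so treat this purely as an illustration of the idea:
RewriteEngine on
# Hypothetical sketch – serve the parameter-based URL under a clean, keyword-based address
RewriteRule ^search-engine-friendly-urls/?$ /index.php?p=292 [L]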
It should be remembered that although Googlebot can crawl sites with dynamic URLs, many webmasters assume there is a greater risk it will give up if the URLs are deemed unimportant and contain multiple variables and session IDs (theory).
As standard, I use clean URLS where possible on new sites these days, and try to keep the URLS as simple as possible and do not obsess about it. That’s my aim at all times when I optimise a website to work better in Google – simplicity.
Be aware though – Google does look at keywords in the URL, even at a granular level. Having a keyword in your URL might be the difference between your site ranking and not – check out Does Google Count A Keyword In The URI (Filename) When Ranking A Page?
Keywords In Bold Or Italic
As I mentioned in my ALT Tags post, some webmasters claim putting your keywords in bold or putting your keywords in italics is a benefit in terms of search engine optimizing a page – as if they are working their way through a check list.
It’s impossible to test this, and I think these days, Google might be using this to identify what to punish a site for, not promote it in SERPS.
I use bold or italics these days specifically for users – only when it's natural, or when it's really what I want to emphasise!
Don’t tell Google what to sandbox you for that easily! I’m currently cleaning up the Hobo blog to reflect this, too.
I've been meaning (and perhaps forgetting) to point out in these posts that I think Google treats every website a little differently in some respect. That is, more trusted sites might get treated differently than untrusted sites.
2c.
Which Is Best? Absolute Or Relative URLS
This is another one of those areas in optimisation or website development that you shouldn’t be concerned about. My advice would be to keep it consistent.
Which Is Better? – Absolute Or Relative URLS?
I prefer absolute urls. That’s just a preference. Google doesn’t care so neither do I, really. I have just gotten into the habit of using absolute urls.
- What is an absolute URL? Example – http://www.hobo-web.co.uk/search-engine-optimisation/
- What is a relative URL? Example – /search-engine-optimisation.htm
Relative just means relative to the document the link is on. Move that page to another site and it won’t work. With an absolute URL, it would work.
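To illustrate, here's how the same internal link might be written both ways (the URLs are just examples):
<!-- Absolute URL – complete on its own, works wherever the link appears -->
<a href="http://www.hobo-web.co.uk/search-engine-optimisation/">SEO tips</a>
<!-- Relative URL – only complete once resolved against the site/page it appears on -->
<a href="/search-engine-optimisation/">SEO tips</a>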
Which Is Best For Google – Subfolders or Files?
Another one to forget about. Sometimes I use subfolders and sometimes I use files. I have not been able to decide if there is any real benefit (in terms of ranking boost) to using either. A lot of CMS these days (2014) seem to use subfolders in their file path, so I am pretty confident Google can deal with either.
I used to prefer files like .html when I was building a new site from scratch, as they were the ’end of the line’ for search engines, as I imagined it, and a subfolder (or directory) was a collection of pages.
I used to think it could take more to get a subfolder trusted than an individual file, and I guess that swayed me towards using files on most websites I created (back in the day). Once subfolders are trusted, it's six of one, half a dozen of the other in terms of ranking in Google – usually, rankings are more determined by how RELEVANT or REPUTABLE a page is to a query.
In the past, subfolders could be treated differently than files (in my experience).
Subfolders can be trusted less than other subfolders or pages in your site, or ignored entirely. Subfolders *used to seem to me* to take a little longer to get indexed by Google, than for instance .html pages.
People talk about trusted domains but they don’t mention (or don’t think) some parts of the domain can be trusted less. Google treats some subfolders….. differently. Well, they used to – and remembering how Google used to handle things has some benefits – even in 2015.
Some say don’t go beyond 4 levels of folders in your file path. I haven’t experienced too many issues, but you never know.
UPDATED – I think in 2015 it's even less of something to worry about. There are far more important elements to check.
Which Is Better For Google? PHP, HTML or ASP?
Google doesn’t care. As long as it renders as a browser compatible document, it appears Google can read it these days.
I prefer php these days even with flat documents as it is easier to add server side code to that document if I want to add some sort of function to the site.
Does W3C Valid HTML / CSS Help?
Above – a Google video confirming this advice I first shared in 2008.
Does Google rank a page higher because of valid code? The short answer is no, even though a small-scale test I ran produced mixed results.
Google doesn’t care if your page is valid html and valid css. This is clear – check any top ten results in Google and you will probably see that most contain invalid HTML or CSS. I love creating accessible websites but they are a bit of a pain to manage when you have multiple authors or developers on a site.
If your site is so badly designed with a lot of invalid code even Google and browsers cannot read it, then you have a problem.
Where possible, if commissioning a new website, demand at least minimum accessibility compliance (there are three levels of priority to meet), and aim for valid HTML and CSS. This is actually the law in some countries, although you would not know it – and be prepared to put in a bit of work to keep your rating.
Valid HTML and CSS are a pillar of best practice website optimisation, not strictly a part of professional search engine optimisation. It is one form of optimisation Google will not penalise you for.
Where can you test the accessibility of your website – Cynthia Says – http://www.contentquality.com/ – not for the faint hearted!
Addition – I usually still aim to follow W3C recommendations that actually help deliver a better user experience;
301 Old Pages
Rather than tell Google via a 404 or some other command that this page isn’t here any more, I have no problem permanently redirecting a page to a relatively similar page to pool any link power that page might have.
My general rule of thumb is to make sure the information (and keywords) are contained in the new page – stay on the safe side.
Most already know the power of a 301 and how you can use it to power even totally unrelated pages to the top of Google for a time – sometimes a very long time.
Google seems to think server side redirects are OK – so I use them.
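For illustration, a simple server-side 301 in an Apache .htaccess file might look like the sketch below – the page names here are made up:
# Hypothetical example – permanently redirect an old page to a closely related new one
Redirect 301 /old-article.html http://www.hobo-web.co.uk/new-article/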
You can change the focus of a redirect but that’s a bit black hat for me and can be abused – I don’t really talk about that sort of thing on this blog. But it’s worth knowing – you need to keep these redirects in place in your htaccess file.
Redirecting multiple old pages to one new page – works for me, if the information is there on the new page that ranked the old page.
NOTE – This tactic is being heavily spammed in 2015. Be careful with redirects. I think I have seen penalties transferred via 301s. I also WOULDN’T REDIRECT 301s blindly to your home page. I’d also be careful of redirecting lots of low quality links to one url. If you need a page to redirect old urls to, consider your sitemap or contact page. Audit any pages backlinks BEFORE you redirect them to an important page.
I’m seeing CANONICALS work just the same as 301s in 2015 – though they seem to take a little longer to have an impact.
Hint – a good tactic at the moment is to CONSOLIDATE old, thin under performing articles Google ignores, into bigger, better quality articles. I usually then 301 all the pages to a single source to consolidate link equity and content equity. As long as the intention is to serve users and create something more up-to-date – Google is fine with this.
Penalty For Duplicate Content On-Site?
I am always on the lookout for duplicate content issues. I think I have seen -50 positions for nothing more than a lot of duplicate content, although I am looking into other possible issues. Generally speaking, Google will identify the best pages on your site if you have a decent on-site architecture. It's usually pretty decent at this, but it totally depends on where you are building links to within the site and how your site navigation is put together.
Don't invite duplicate content issues. I don't consider it a penalty you receive in general for duplicate content – you're just not getting the most benefit. Your website content isn't all it could be.
But this should be common sense. Google wants and rewards original content. Google doesn't like duplicate content, and it's a footprint of most spam sites. Google perceives a lot of duplicate or overlapping content as a BAD USER EXPERIENCE. You don't want to look anything like a spam site.
The more you can make it look like a human built every page on a page-by-page basis, with content that doesn't appear exactly in other areas of the site, the more Google will like it. Google does not like sloppy automation when it comes to building a website, that's for sure. (Unique titles, meta descriptions, keyword tags, content.)
I don’t mind Category duplicate content – as with WordPress – it can even help sometimes to spread PR and theme a site. But I generally wouldn’t have tags and categories, for instance.
I'm not bothered enough about 'theming' at this point to recommend siloing your content or no-indexing your categories. If I am not theming enough with proper content, and mini-siloing links to and from related pages, I should go home. Most sites, in my opinion, don't need to silo their content – the scope of the content is just not that broad.
Keep in mind, for instance, that Google won't thank you for making it spider a calendar section of your website with 10,000 blank pages on it – why would they? They may even algorithmically tick you off.
PS – Duplicate content found on other sites? Now that’s a totally different problem.
UPDATED: See Google Advice on Duplicate Content.
Broken Links Are A Waste Of Link Power
The simplest piece of advice I ever read about creating a website / optimising a website was years ago and it is still useful today:
make sure all your pages link to at least one other in your site
This advice is still sound today and the most important piece of advice out there in my opinion. Yes it’s so simple it’s stupid.
Check your pages for broken links. Seriously, broken links are a waste of link power and could hurt your site, drastically in some cases. Google is a link based search engine – if your links are broken and your site is chock full of 404s you might not be at the races.
Here's the second best piece of advice, in my opinion, seeing as we are talking about website architecture:
link to your important pages often internally, with varying anchor text in the navigation and in page text content
…. especially if you do not have a lot of Pagerank to begin with!
Do I Need A Google XML Sitemap For My Website?
What is a xml sitemap and do I need one to seo my site for Google?
(The XML Sitemap protocol) has wide adoption, including support from Google, Yahoo!, and Microsoft
No. You do NOT, technically, need an XML Sitemap to optimise a site for Google if you have a sensible navigation system that Google can crawl and index easily. HOWEVER – in 2015 – you should have a Content Management System that produces one as a best practice – and you should submit that sitemap to Google in Google Webmaster Tools. Again – best practice. Google has said very recently that XML and RSS are still very useful discovery methods for them to pick out recently updated content on your site.
An XML Sitemap is a file on your server with which you can help Google easily crawl & index all the pages on your site. This is evidently useful for very large sites that publish lots of new content or update content regularly.
Your web pages will still get into search results without an xml sitemap if Google can find them by crawling your website, if you:
- Make sure all your pages link to at least one other in your site
- Link to your important pages often, with varying anchor text, in the navigation and in page text content (if you want best results)
Remember – Google needs links to find all the pages on your site, and links spread Pagerank, that help pages rank – so an xml sitemap is not quite a substitute for a great website architecture.
Sitemaps are an easy way for webmasters to inform search engines about pages on their sites that are available for crawling. In its simplest form, a Sitemap is an XML file that lists URLs for a site along with additional metadata about each URL (when it was last updated, how often it usually changes, and how important it is, relative to other URLs in the site) so that search engines can more intelligently crawl the site.
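A minimal sketch of what such a file can look like, using a hypothetical URL and dates – only the loc element is required, the rest are optional hints:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- the page's canonical address -->
    <loc>http://www.hobo-web.co.uk/search-engine-friendly-urls/</loc>
    <lastmod>2015-02-06</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>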
Most modern CMS auto-generate XML sitemaps, and Google does ask you to submit a sitemap in Webmaster Tools, so I do these days.
I prefer to manually define my important pages by links and depth of content, but an XML sitemap is a best practice in 2015 for most sites.
Does Only The First Link Count In Google?
Does the second anchor text link on a page count?
One of the more interesting discussions in the webmaster community of late has been trying to determine which links Google counts as links on pages on your site. Some say the link Google finds higher in the code, is the link Google will ‘count’, if there are two links on a page going to the same page.
Update – I tested this recently with the post Google Counts The First Internal Link.
For example (and I am talking about internal links here) – if you took a page and I placed two links on it, both going to the same page (OK – hardly scientific, but you should get the idea): will Google only 'count' the first link? Or will it read the anchor text of both links, and give my page the benefit of the text in both, especially if the anchor text is different in each? Will Google ignore the second link?
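To picture the kind of test I mean, imagine markup like this (purely hypothetical):
<!-- Two links on the same page pointing at the same internal URL, with different anchor text -->
<a href="/search-engine-friendly-urls/">clean URLs</a>
<a href="/search-engine-friendly-urls/">search engine friendly URLs</a>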
What is interesting to me is that knowing this leaves you with a question. If your navigation array has your main pages linked to in it, perhaps your links in content are being ignored, or at least, not valued.
I think links in body text are invaluable. Does that mean placing the navigation below the copy to get a wide and varied internal anchor text to a page?
Perhaps.
Here’s some more on the topic;
- You May Be Screwing Yourself With Hyperlinked Headers
- Single Source Page Link Test Using Multiple Links With Varying Anchor Text
- Results of Google Experimentation – Only the First Anchor Text Counts
- Debunked: Only The 1st Anchor Text Counts With Google
- Google counting only the first link to a domain – rebunked
As I said, I think this is one of the more interesting talks in the community at the moment, and perhaps Google works differently with internal links as opposed to external links to other websites.
I think quite possibly this could change from day to day if Google pressed a button, but I optimise a site on the assumption that only the first link will count – based on what I monitor, although I am still testing this – and in practice I usually only link once from page to page on client sites, unless it's useful for visitors.
Canonical Tag – Canonical Link Element Best Practice
When it comes to Google SEO, the rel=canonical link element has become *VERY* IMPORTANT over the years. This element is employed by Google, Bing and other search engines to help them specify the page you want to rank out of duplicate and near duplicate pages found on your site, or on other pages on the web.
In the video above, Matt Cutts from Google shares tips on the new rel=”canonical” tag (more accurately – the canonical link element) that the 3 top search engines now support. Google, Yahoo!, and Microsoft have all agreed to work together in a
“joint effort to help reduce duplicate content for larger, more complex sites, and the result is the new Canonical Tag”.
Example Canonical Tag From Google Webmaster Central blog:
<link rel="canonical" href="http://www.example.com/product.php?item=swedish-fish" />
The process is simple. You can put this link tag in the head section of the duplicate content urls, if you think you need it.
I add a self referring canonical link element as standard these days – to ANY web page.
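For example, a page living at the (hypothetical) URL below would simply point at itself:
<!-- Self-referencing canonical in the head of the page it sits on -->
<link rel="canonical" href="http://www.hobo-web.co.uk/search-engine-friendly-urls/" />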
Is rel=”canonical” a hint or a directive?
It's a hint that we honor strongly. We'll take your preference into account, in conjunction with other signals, when calculating the most relevant page to display in search results.
Can I use a relative path to specify the canonical, such as <link rel="canonical" href="product.php?item=swedish-fish" />?
Yes, relative paths are recognized as expected with the <link> tag. Also, if you include a <base> link in your document, relative paths will resolve according to the base URL.
Is it okay if the canonical is not an exact duplicate of the content?
We allow slight differences, e.g., in the sort order of a table of products. We also recognize that we may crawl the canonical and the duplicate pages at different points in time, so we may occasionally see different versions of your content. All of that is okay with us.
What if the rel="canonical" returns a 404?
We'll continue to index your content and use a heuristic to find a canonical, but we recommend that you specify existent URLs as canonicals.
What if the rel="canonical" hasn't yet been indexed?
Like all public content on the web, we strive to discover and crawl a designated canonical URL quickly. As soon as we index it, we'll immediately reconsider the rel="canonical" hint.
Can rel="canonical" be a redirect?
Yes, you can specify a URL that redirects as a canonical URL. Google will then process the redirect as usual and try to index it.
What if I have contradictory rel="canonical" designations?
Our algorithm is lenient: We can follow canonical chains, but we strongly recommend that you update links to point to a single canonical page to ensure optimal canonicalization results.
Can this link tag be used to suggest a canonical URL on a completely different domain?
Update on 12/17/2009: The answer is yes! We now support a cross-domain rel="canonical" link element.
Rich Snippets
Rich Snippets in Google enhance your search listing in Google search engine results pages. You can include reviews of your products or services, for instance. Rich Snippets help draw attention to your listing in serps. You’ve no doubt seen yellow star ratings in Google natural results listings, for instance.
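As a rough, hypothetical sketch only – check Google's current documentation for the recommended format – those star ratings typically come from structured data markup along these lines:
<!-- Hypothetical schema.org microdata describing an aggregate review rating -->
<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Blue Pineapple Chair</span>
  <div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
    Rated <span itemprop="ratingValue">4.5</span> out of <span itemprop="bestRating">5</span>
    based on <span itemprop="reviewCount">27</span> reviews.
  </div>
</div>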
More Reading at https://support.google.com/webmasters/answer/99170?hl=en
What Not To Do In Website Search Engine Optimisation
Google has now released a basic organic search engine optimisation starter guide for webmasters, which they use internally:
Although this guide won’t tell you any secrets that’ll automatically rank your site first for queries in Google (sorry!), following the best practices outlined below will make it easier for search engines to both crawl and index your content. Google
It is still worth a read, even if it is VERY basic, best practice search engine optimisation for your site. No search engine will EVER tell you what actual keywords to put on your site to improve your rankings or get more converting organic traffic – and in Google – that’s the SINGLE MOST IMPORTANT thing you want to know!
Here’s a list of what Google tells you to avoid in the document;
- choosing a title that has no relation to the content on the page
- using default or vague titles like “Untitled” or “New Page 1″
- using a single title tag across all of your site’s pages or a large group of pages
- using extremely lengthy titles that are unhelpful to users
- stuffing unneeded keywords in your title tags
- writing a description meta tag that has no relation to the content on the page
- using generic descriptions like “This is a webpage” or “Page about baseball cards”
- filling the description with only keywords
- copy and pasting the entire content of the document into the description meta tag
- using a single description meta tag across all of your site’s pages or a large group of pages
- using lengthy URLs with unnecessary parameters and session IDs
- choosing generic page names like “page1.html”
- using excessive keywords like “baseball-cards-baseball-cards-baseball-cards.htm”
- having deep nesting of subdirectories like “…/dir1/dir2/dir3/dir4/dir5/dir6/page.html”
- using directory names that have no relation to the content in them
- having pages from subdomains and the root directory (e.g. “domain.com/page.htm” and “sub.domain.com/page.htm”) access the same content
- mixing www. and non-www. versions of URLs in your internal linking structure
- using odd capitalization of URLs (many users expect lower-case URLs and remember them better)
- creating complex webs of navigation links, e.g. linking every page on your site to every other page
- going overboard with slicing and dicing your content (it takes twenty clicks to get to deep content)
- having a navigation based entirely on drop-down menus, images, or animations (many, but not all, search engines can discover such links on a site, but if a user can reach all pages on a site via normal text links, this will improve the accessibility of your site)
- letting your HTML sitemap page become out of date with broken links
- creating an HTML sitemap that simply lists pages without organizing them, for example by subject (Edit Shaun – safe to say especially for larger sites)
- allowing your 404 pages to be indexed in search engines (make sure that your webserver is configured to give a 404 HTTP status code when non-existent pages are requested)
- providing only a vague message like “Not found”, “404”, or no 404 page at all
- using a design for your 404 pages that isn’t consistent with the rest of your site
- writing sloppy text with many spelling and grammatical mistakes
- embedding text in images for textual content (users may want to copy and paste the text and search engines can’t read it)
- dumping large amounts of text on varying topics onto a page without paragraph, subheading, or layout separation
- rehashing (or even copying) existing content that will bring little extra value to users
Pretty straightforward stuff, but sometimes it's the simple stuff that gets overlooked. Of course, you put the above together with Google's Guidelines for Webmasters.
Search engine optimization is often about making small modifications to parts of your website. When viewed individually, these changes might seem like incremental improvements, but when combined with other optimizations, they could have a noticeable impact on your site’s user experience and performance in organic search results.
Don’t make these simple but dangerous mistakes…..
- Avoid duplicating content on your site found on other sites. Yes, Google likes content, but it *usually* needs to be well linked to, unique and original to get you to the top!
- Don’t hide text on your website. Google may eventually remove you from the SERPS (search engine results pages).
- Don’t buy 1000 links and think “that will get me to the top!”. Google likes natural link growth and often frowns on mass link buying.
- Don't get everybody to link to you using the same "anchor text" or link phrase. This could flag you as a 'rank modifier'. You don't want that.
- Don't chase Google PR by chasing 100s of links. Think quality of links... not quantity.
- Don’t buy many keyword rich domains, fill them with similar content and link them to your site. This is lazy and dangerous and could see you ignored or worse banned from Google. It might have worked yesterday but it sure does not work today without some grief from Google.
- Do not constantly change your site pages names or site navigation without remembering to employ redirects. This just screws you up in any search engine.
- Do not build a site with a JavaScript navigation that Google, Yahoo and Bing cannot crawl.
- Do not link to everybody who asks you for reciprocal links. Only link out to quality sites you feel can be trusted.
…and that’s all for now.
This is a complex topic as I said at the beginning.
I hope you enjoyed my seo guide for beginners, and remember – DO keep up to date with Google Webmaster Guidelines. :)