A few months ago, I showed you a simple SEO tip: adding a meta description. But few beginner bloggers know that this meta element can sometimes cause duplicate meta descriptions and content if you do nothing to prevent it. So how do you find out whether you have this duplication problem? By checking your site's health in Google Webmaster Tools.
On the GWT homepage, click the site you want to check. Open the Search Appearance section in the left menu and choose HTML Improvements. If you have this error, you will see a list of affected pages. The URLs will have this structure:
An archive URL in this format means you have duplicate content for posts published in May 2014.
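For illustration, a Blogger monthly archive URL typically looks like this (the blog name here is just a placeholder):

```text
http://yourblog.blogspot.com/2014_05_01_archive.html
```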
Or something like this:
People who access this URL from a desktop, smartphone, or tablet will open the same page, and that is why search engines read it as duplicate content.
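These mobile-parameter duplicates usually look like this (again, the blog and post names are placeholders). All three URLs serve the exact same post:

```text
http://yourblog.blogspot.com/2014/05/my-post.html
http://yourblog.blogspot.com/2014/05/my-post.html?m=1
http://yourblog.blogspot.com/2014/05/my-post.html?m=0
```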
If you have these errors, you need to fix them immediately, because I can guarantee that the next time Google updates its algorithm you will be hit with a penalty, and your indexed pages and keyword rankings will drop. So how do you fix these mistakes? Here are the three solutions I use on all my blogs. Googlebot is sometimes a little slow to crawl our blogs, so you need to apply all of them.
A. Rel=Canonical to Fix URL Duplicate Content.
As I mentioned before, if you see URLs that end with ?m=0 or ?m=1, it means Google indexed your post when someone accessed it from a mobile device. The more people access it from mobile, the more duplicate content you have. To solve this problem, follow the instructions below:
1. On your blog dashboard, go to the Template section and click the Edit HTML button.
2. Search for the </title> tag and place the element below right after it.
<link expr:href='data:blog.url' rel='canonical'/>
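One caveat, based on my reading of the Blogger template data tags: on mobile pages, data:blog.url may itself still include the ?m=1 parameter. If you see that happen on your blog, the data:blog.canonicalUrl tag always points to the desktop URL, so this variant may work better:

```html
<link expr:href='data:blog.canonicalUrl' rel='canonical'/>
```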
B. Meta Robots Noindex for Archive and Search Pages.
Other pages that Google sees as duplicates are the archive and search result pages. In the list you will see URLs containing archive.html and /search/label. That is why you need to add a tag that tells robots to noindex these two kinds of pages.
In your template HTML, look for the </title> tag and add the tag below after it.
<b:if cond='data:blog.pageType == "archive"'>
<meta content='noindex' name='robots'/>
</b:if>
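The snippet above only covers archive pages. To also noindex search and label result pages, a pair of conditionals like this should work, assuming your template exposes the standard data:blog.searchLabel and data:blog.searchQuery tags:

```html
<b:if cond='data:blog.searchLabel'>
<meta content='noindex' name='robots'/>
</b:if>
<b:if cond='data:blog.searchQuery'>
<meta content='noindex' name='robots'/>
</b:if>
```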
C. Custom Robots Tags.
If you are afraid to play with your HTML template, the new Blogger dashboard lets you customize crawling and indexing with the robots.txt and custom robots header tags tools in the Search Preferences section. With the correct settings you can stop Googlebot from crawling unwanted pages.
Click on the images for tutorials on both robots tags tools.
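As an example, here is the kind of custom robots.txt I would use on a Blogger blog; the blog address is a placeholder, and the Disallow: /search line is what keeps label and search result pages out of the crawl:

```text
User-agent: *
Disallow: /search
Allow: /

Sitemap: http://yourblog.blogspot.com/sitemap.xml
```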
There is one final step to complete this fix. When you finish applying the three instructions above, you need to remove the duplicate URLs from Google's search results with the URL removal tool in Webmaster Tools. Remember what I always say: a clean and healthy blog is a good blog. See you next time, bloggers!