Good SEO practices to deal with duplicated content

Google's Panda updates have kept news and issues related to duplicated content firmly in the spotlight.

Search Engine Optimisation (SEO) involves adhering to a series of guidelines to make sure that your pages are found by the automated crawlers of search engines and ranked highly in the results they return; a high position on page one is worth far more than a low position on page two.

One of the significant SEO no-nos is duplicated content. As you are probably aware, search engines depend on text content both to find your site and to rank it, and they prefer original, keyword-rich text. This is because cloned information is perceived as less valuable, or less useful, to search engine users. Websites that scrape content directly from another site are even punished with a lower ranking, making scraped content a form of SEO poison.

Sometimes, particularly on some kinds of commercial site, it is necessary for information to be duplicated on more than one page of your website. There are ways, however, to deal with this and prevent it from undermining your search engine optimisation strategy. If particular pages of your site contain similar information, one can rank above another in the results, and that may be detrimental to your business. This can be dealt with by employing good SEO practices.

One way of achieving this is to select the particular page that you want to rank for that text and include its URL in your sitemap, while omitting the URL of the page with the duplicated content. This way, the page that ranks for the relevant information is the one you actually want to rank, and you will not be negatively affected by having that material on another page too.
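As a rough sketch of the approach above, the snippet below builds a sitemap that lists only the preferred page and leaves out its duplicate. The URLs and the `build_sitemap` helper are hypothetical, purely for illustration; they are not from any particular site or tool.

```python
# Hypothetical example: generate a sitemap that includes only the
# preferred page for a piece of content and omits its duplicate.
# All URLs below are placeholders.

PAGES = [
    ("https://example.com/widgets", True),        # preferred page: include
    ("https://example.com/widgets-copy", False),  # duplicate: omit
]

def build_sitemap(pages):
    """Return sitemap XML containing only the pages flagged for inclusion."""
    entries = "\n".join(
        f"  <url><loc>{url}</loc></url>" for url, include in pages if include
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )

print(build_sitemap(PAGES))
```

The duplicate page still exists for visitors; it simply is not offered to the crawler as a page you want indexed and ranked.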
