It is easy to get lost in the process of deciding which method to use and to what extent. There are many examples of bad SEO services that deliver only short-term improvements while causing damage in the long run. This chapter outlines why some methods are harmful and how to avoid them.
One of the main reasons for choosing what is essentially a bad SEO service is the promise of a high ranking and a large number of clicks in a short period. That expectation is unreasonable: SEO is a gradual process that should improve a website's rankings over time. Today, no one can guarantee a number-one ranking, simply because modern search engines show different results to different users, depending on geographic location and personalization. Rankings alone are a poor metric of overall performance, since the whole purpose of a good website is to gain customers, not to attract a great number of visitors who leave the page as soon as they open it.
It is possible to achieve high rankings fast, but doing so usually means using black hat SEO and should therefore be avoided. As mentioned, black hat methods usually do more harm to rankings than good.
Some of the most common black hat techniques include the following:
Stuffing as many keywords as possible into the text of a webpage, with no attempt to give useful information; long lists of keywords or their repetition;
Pages created specifically to rank in search engines, but that automatically redirect visitors elsewhere;
Text of the same color as the background, intended to deceive search engine bots;
Offering payment for inbound links, or creating new sites whose only purpose is to link back to the main site; placing many inbound links on unrelated pages.
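For illustration only, the hidden-text technique listed above might look like this in HTML; the styling and keywords are invented, and search engines penalize this pattern:

```html
<!-- Black hat: white text on a white background, visible only to bots -->
<p style="color:#ffffff; background-color:#ffffff;">
  cheap shoes discount shoes buy shoes online best cheap shoes
</p>
```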
Not every bad SEO service relies on methods that are a priori black hat. Even the previously mentioned white hat method of link exchanges can become bad practice: not all inbound links are valuable, and some may even be harmful. Using the same phrase for all links can lead search engines to discount the webpage or even consider it spammy.
Another common mistake is targeting the wrong keywords. Choosing too broad a term, or a synonym that is not common among the target population, will draw the wrong visitors. A service that boosts a webpage's ranking and traffic is not worth much if it targets the wrong population.
Redesigning a website or creating new pages without 301 redirects can cause the loss of valuable existing rankings.
Content is the key to success, and creating content that is heavy on keywords but light on actual value is a bad long-term strategy. Low-quality texts stuffed with keywords can downgrade a website because of recent changes in search algorithms. Content scraping (taking content from high-ranking websites without permission) is also considered very bad practice; it is associated with spam sites and violates copyright law. Finally, putting too much focus on metadata is no longer worthwhile: because it was widely misused, major search engines no longer pay as much attention to it as they used to.
Search engine bots never forget: content, once crawled and indexed, remains indexed until its URL returns a server header telling the bot that the page is no longer available. In most cases, old content is simply useless and harmlessly forgotten. In some cases, however, forgotten content, especially content whose sole purpose was to improve SEO, may contribute to a search engine's algorithmic conclusion that a site offers little value to searchers. Google's indexing updates are designed to address exactly this.
The "Farmer/Panda Update" was designed to devalue sites that were algorithmically determined to be of low value to Google searchers. The update affected 12 percent of Google's results, decreasing organic traffic for bad sites. As a result, some companies saw their Google organic traffic fall by about 38 percent, a decrease of 15 percent of their total site traffic, when the algorithm launched on February 24th, 2011. Some of the methods that resulted in the devaluation of websites were:
Many websites publish pages that are related as part of a series, such as products in the same category on an eCommerce website, articles that span more than one page, and galleries of related images, often with descriptive text. These often include pagination that lets visitors move from page to page, usually with numbered links at the bottom of each page, and sometimes with a link to see all of the content on a single page.
These paginated pages often share the same HTML title element and meta description, and because of that, search engines may perceive them as duplicative. Google has defined a way for websites to overcome this perception by marking the pages as related: link elements within the head section of each page should carry rel="prev" pointing to the previous page and rel="next" pointing to the next page in the series. That tells Google that the content spans more than one page.
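As a minimal sketch, the head of the middle page in a hypothetical three-page series that also offers a View All page might combine the elements described here and in the next paragraph (the URLs are placeholders):

```html
<!-- head of page 2 of 3 in a paginated series -->
<head>
  <link rel="prev" href="https://example.com/article?page=1">
  <link rel="next" href="https://example.com/article?page=3">
  <!-- where a View All page exists, point each component page at it -->
  <link rel="canonical" href="https://example.com/article/view-all">
</head>
```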
If a series of paginated pages has an "all" page, where all of the content is available at once, canonical link elements can be used on those pages. A good practice is to add a rel="canonical" link to each component page to tell Google that the View All version is the one that should appear in search results. In addition, 301 redirects should be used to send visitors to the canonical version of the URL.
The algorithms mentioned above, such as Google Panda and Penguin, changed the world of SEO professionals and the thinking about what makes a good webpage. The focus is on quality content, but high-quality, keyword-optimized content is not the only important thing; it is just one ingredient of a successful webpage. This chapter covers common beliefs about SEO that are usually wrong or misleading.
One of the biggest myths is the idea that a webpage must be submitted to Google to appear in search results. Crawlers will find and index the website sooner or later, and submission guarantees nothing. As already mentioned in chapter IV, ranking is also not everything; additionally, it lost some of its value when search results started to appear with rich text, short previews, and author tags.
Another common myth is that keywords need to be an exact match or that there is an ideal keyword density for every page. One should use a keyword in a way that makes the most sense. The keyword, or a variation of it, should be included in the page title, the headline, the URL, and at least once in the content; there is no perfect number of keyword occurrences.
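A sketch of such sensible placement, assuming an invented keyword ("hiking boots") and an invented URL:

```html
<!-- keyword appears once in the URL: https://example.com/hiking-boots -->
<head>
  <title>Hiking Boots – Choosing the Right Pair</title>
</head>
<body>
  <h1>How to Choose Hiking Boots</h1>
  <p>A good pair of hiking boots should fit snugly around the ankle.</p>
</body>
```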
Headings used to be more important, but search engines are smarter these days, and headings were widely misused. SEO is no longer just about optimizing for search engines; it is also about optimizing for users. The home page should state who you are, what you do, your location, your proposition, and what visitors should do next. As mentioned in chapter V, on-page SEO is important, but to rank highly, a webpage needs to be optimized on-page, off-page, and in every other way possible.
Another aspect to consider today is the influence of social media. Social media plays a greater role every day and should not be considered separately from SEO. The intersection of SEO and social media even has its own name, social search. It is a growing field, and major companies are investing in its research and development. Content that has a social connection is prioritized.
A myth related to links and pages is that the more indexed pages one has, the better. But not everything published gets indexed, and some pages that are do not remain so. Furthermore, some link-related problems have been solved by Google, as mentioned in a previous chapter. Domains with the same owner that link to each other do not count towards a higher rank; search engines are smart enough to know who the registered owner is, and registering under a different name is considered spamming.
Geographical location has also become more important. Search engines now know where users are located and often show location-specific results, which means that information about a business's location should be included in SEO.
A common misconception is that SEO is something that can be handed off to the IT department. That may have been true some time ago, but with all the new algorithms and methods, SEO has become an expertise of its own, and professional results require a professional to deliver them.
To validate the use of SEO in e-commerce, a study was conducted to determine the optimization level of some well-known Internet shops in Croatia. We analyzed which SEO methods were used while creating pages for electronic sales and which were left out. The study covered six internet stores selling clothing, footwear, technical devices, commercial goods, car equipment, and groceries. The optimization level was checked using available online tools. These tools have predefined fields; for each field, it is checked whether it exists and, if so, to what extent it is satisfied. The results are presented as a percentage, where 80% or more is considered a great result. Each page was tested with several optimization tools to obtain reliable information.
The optimization level of the pages averaged 49.2%, ranging from 44% to 55%. Although the importance of optimizing a site for search engines has been highlighted in the last few years, Croatian websites specializing in e-commerce still do not pay enough attention to adapting to search engines. A possible reason is weaker competition in their e-commerce market. We also found that world-famous Internet commerce sites are better optimized: well-known webshops selling items in the same categories reached optimization levels averaging 65%, ranging from 62% to 68%, which is almost a 25% better result than the Croatian sites.
The basic presupposition during optimization is that a site should have particularly good on-page SEO, but our results show drawbacks even here. Page titles are well structured and of good length, but the metadata is poorly defined. Although entering keywords and a page description in the meta tags of the HTML code is one of the oldest known methods, this rule is not respected. The keywords and page descriptions either do not exist or are set far too long, assuming 'the more the better'. The recommended number of keywords in the meta tags is 5-10, while the studied sites contain three to four times more. The recommended length of the description meta value is up to 160 characters (about two sentences), but the studied sites contain on average 3.5 times more characters.
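Under the recommendations above, well-formed metadata might look like this (the shop, keywords, and description are invented):

```html
<head>
  <!-- description: up to 160 characters, roughly two sentences -->
  <meta name="description"
        content="Order winter jackets and boots online. Free delivery across Croatia on larger orders.">
  <!-- 5-10 keywords; the studied sites used three to four times more -->
  <meta name="keywords"
        content="winter jackets, boots, webshop, free delivery, outerwear">
</head>
```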
Linking to internal pages is, as mentioned in previous chapters, a good optimization method. Given the nature of sales websites, with large volumes of products that have to be linked, we found a very large number of internal links in all observed cases. The disadvantage is that over 90% of the links had no value defined for the title attribute, which is considered bad. A similar failure occurred with the omission of the alt attribute for images: less than 35% of images have a description specified.
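For illustration, an internal link and a product image carrying the attributes that were missing on most of the studied sites (the names and paths are invented):

```html
<!-- internal link with a descriptive title attribute -->
<a href="/products/winter-boots" title="Winter boots category">Winter boots</a>

<!-- image with an alt description for crawlers and screen readers -->
<img src="/img/boot-brown-42.jpg" alt="Brown leather winter boot, size 42">
```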
In Section III, we argued that directory submission is considered a good and effective white hat method. On the other hand, submitting a web page to an appropriate directory or catalog requires a significant investment of resources. That is confirmed here, since only one of the six studied sites is listed in catalogs. How well a site is connected with other sites varies from case to case: some of the studied sites have very good connections from outside (over 100 inbound links), while others have far fewer, for example only 30.
A site should also handle access through its IP address using a 301 redirect: if the site's IP address is entered in the browser, the user should be automatically redirected, with a 301 redirect, to the site's domain address, and the site should open in the browser. We noticed that none of the tested sites support IP canonicalization and that none of those web pages can be reached via their IP address.
Each of the studied sites is indexed by Google, but none is indexed by the Bing and Yahoo search engines. Furthermore, none of the sites are optimized for mobile devices, and none apply responsive web design.
Internet shops are mostly linked to social networks. Linking from social networks is a popular white hat method, and e-commerce sites often use it. The most common links on the studied sites are like, share, and comment links, realized through Facebook and Google Plus.
SEO is a process that takes time and effort to achieve its final objective: a good, easy-to-find, optimized web page. This is especially important in the expanding field of e-commerce, where the essence is to draw potential customers and offer them exactly what they are looking for.
A webshop page should always be up to date, and every update carries the possibility of unwanted consequences in the form of devaluation by a search engine or targeting the wrong population. Several methods exist to avoid that, and well-optimized web pages should always use them. In e-commerce, special emphasis falls on individualizing each paginated page and on the right use of keywords: not too few nor too many, and words that targeted customers actually use.
The e-commerce websites we studied show that, compared to popular worldwide sites, Croatia is still well behind, with much room for improvement, mostly in the use of keywords, optimization for indexing by other search engines, and optimization for mobile devices.