Simple SEO Guide for Companies in Sri Lanka – Part One

By Ragulan Tharmakulasingam

 

The power of Search Engine Optimization has not yet been unleashed in Sri Lanka. Apart from a few tech-savvy entrepreneurs and modern business owners who already practise SEO, the vast majority of businessmen and companies in Sri Lanka are not benefiting from it.

 

Take my company as an example. We have a strong marketing consulting arm, and we do corporate training, marketing planning, market research, marketing communication, etc. So far we have not advertised or run any awareness programme; the business depends entirely on search behaviour. We also find that the inquiries are highly qualified, because they come from companies that need our services, individuals looking for consulting advice, and entrepreneurs looking for business ideas.

 

For example, do a search for corporate training in Sri Lanka or marketing consulting in Sri Lanka, and you will find the epitom.org website appearing at the top.

 

I thought of sharing some valuable insights on how to move your website up the search results for your target keywords.

 

All in all, to get the best search advantage, you need a good website. Let us first take you through the technical audit. The first things to set up are:

 

  1. Create a Google Webmaster Central account
  2. Create a Google Analytics account

 

Once that is done, let's open the site and run a simple technical audit, because well-structured websites perform well in SEO. We are not going to cover any web design or new development work. If you already have a simple site and want to implement proper SEO, we will cover how to do a simple, effective SEO audit, the aspects to consider, and how to implement them.

 

First, you have to do a website technical audit. Once you have made sure the technical side of your website is sound, you can expect an improvement in ranking.

 

How do you perform a technical audit? It covers the following areas.

 

1. Website Crawlability 

  • Site Search and Indexed Pages
  • Cached Pages in Google
  • XML Sitemap
  • HTML Sitemap
  • Correct Use of Robots.txt File
  • Internal Linking Structure

 

2. Internal SEO Health

  • Page Titles
  • Use of H1, H2 Tags
  • Meta Descriptions
  • Availability of Unique Content on Pages
  • Site Structure and Navigation
  • URL Structure
  • Breadcrumbs
  • Duplicate Content
  • Content Quality and Content Strategy
  • Pagination
  • Site Errors / Server Errors
  • Correct Use of 404 Pages
  • Use of www and Sub-Domains
  • Image Optimisation

 

3. Link Popularity and Link Profile

  • Inbound Links Count
  • Link Juice and its Distribution
  • Link Diversity
  • Anchor Text Profile

Website Crawlability Check

1. Site Search & Indexed Pages

Let's start with the website crawlability check. How do you check whether your website has been indexed by Google? Simply go to Google and type site:yourwebsite.com (for example, site:epitom.org). This will show you how many of your pages have been indexed by Google.

 

There are two possible situations. If the site is indexed, there are no issues and you simply have to work on increasing the number of indexed pages. If it is not, and your site has been live for some time, then that is certainly a problem.

 

A common culprit is the robots meta tag in your pages' HTML. There are three variations to watch for:

 

<META NAME="ROBOTS" CONTENT="NOINDEX, FOLLOW"> – Search engines can crawl your pages and follow the links on them, but will not list the pages in search results.

<META NAME="ROBOTS" CONTENT="INDEX, NOFOLLOW"> – Search engines can list the pages in search results, but will not follow the links on them.

<META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW"> – Neither is allowed: the pages are not indexed and their links are not followed.

 

Sometimes when you build a site on a content management system (WordPress/Joomla), the theme may have automatically set the noindex/nofollow tags. If nothing appears indexed, use http://www.seoreviewtools.com/bulk-meta-robots-checker/ to check whether your pages are set to index and follow.
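
If you prefer to check a page yourself, here is a minimal sketch using only Python's standard library; the URL below is a placeholder you would swap for your own page.

from html.parser import HTMLParser
from urllib.request import urlopen

# Collect the content of any <meta name="robots"> tag on a page.
class MetaRobotsParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.robots = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr = dict(attrs)
            if (attr.get("name") or "").lower() == "robots":
                self.robots = attr.get("content", "")

page = urlopen("https://www.example.com/").read().decode("utf-8", errors="replace")
parser = MetaRobotsParser()
parser.feed(page)
print(parser.robots or "No robots meta tag found (pages are indexable by default)")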

 

Let’s see the detailed implementation of these in the next post.

 

We have a large investment company as a client, and they had a good presence in search. But soon after they re-launched the site, that presence was not as strong as before. When we checked the site, the robots meta tag said "noindex, nofollow". Once we changed it, all the pages were back on track.
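
The fix was a one-line change to the tag discussed above (removing the tag entirely also works, since index, follow is the default behaviour):

<META NAME="ROBOTS" CONTENT="INDEX, FOLLOW">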

 

2. Cached Pages

A cached page shows how your web page looked to Google the last time it visited.

 

You can use this information as a check. Google mainly reads the text of a website, not the entire user interface. To see how Google sees your site, go to the Google search results page, type your domain name and click on the arrow next to the result. This will give you the following information:

 

  1. How Google reads your site (click "Text-only version")
  2. The date and time Google's bots last visited your page
  3. How the website looked when Google's bots last visited

 

3. XML Sitemaps

XML sitemaps are one tool that helps content creators establish their stake as the content originator. An XML sitemap gives Google's bots directions about the site structure. Even without a sitemap Google can still find a website, but with one the bots can get through more efficiently and make sure they look at all of your pages.

 

Check whether you have an XML sitemap installed by visiting yourdomainname.com/sitemap.xml.
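
If you don't have one, a basic sitemap is easy to generate. Here is a rough sketch using Python's standard library; the domain and page paths are placeholders for your own pages.

import xml.etree.ElementTree as ET

# Build a minimal sitemap.xml listing a handful of pages.
NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
urlset = ET.Element("urlset", xmlns=NS)
for path in ["/", "/services/", "/contact/"]:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = "https://www.example.com" + path
ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)

Upload the resulting file to your web root and submit it in Webmaster Central so Google knows where to find it.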

 

Similarly, an HTML sitemap helps site visitors navigate the site easily.

 

 

4. Correct Use of Robots.txt File

Website owners use the /robots.txt file to give directions and information about their website to web robots. This is known as the Robots Exclusion Protocol.

 

This is how it works: if a robot wants to visit a website URL such as http://www.example.com/welcome.html, it first checks for http://www.example.com/robots.txt. This is what it might find:

 

User-agent: *

Disallow: /

 

The "User-agent: *" line indicates that this section applies to all robots, while "Disallow: /" tells the robot that it should not visit any page on the site.
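
As a quick sanity check, Python's standard library can read a robots.txt file and tell you whether a given page may be crawled; the URLs below are placeholders.

from urllib.robotparser import RobotFileParser

# Fetch robots.txt and test whether any robot ("*") may crawl a page.
rp = RobotFileParser("https://www.example.com/robots.txt")
rp.read()
print(rp.can_fetch("*", "https://www.example.com/welcome.html"))

With the Disallow: / rule shown above, this would print False.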

 

5. Internal Linking Structure

Different web practitioners use different terms for this, but internal linking is the term best understood by the SEO community. In general terms, internal linking refers to any link from one web page on a domain to another web page on the same domain. This can mean the main site navigation or the links within articles to related content. In this article we will focus more on the latter, the editorial links within articles, because it is a more commonplace SEO tactic controlled by the site's editors and writers rather than by a tech team.
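
To get a feel for a page's internal linking, here is a rough sketch that counts internal versus external links using only Python's standard library; the page URL is a placeholder.

from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

PAGE = "https://www.example.com/"

# Collect every <a href> on the page, resolving relative links against PAGE.
class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(urljoin(PAGE, href))

parser = LinkParser()
parser.feed(urlopen(PAGE).read().decode("utf-8", errors="replace"))
site = urlparse(PAGE).netloc
internal = [link for link in parser.links if urlparse(link).netloc == site]
print(f"{len(internal)} internal links out of {len(parser.links)} total")

Links pointing back to your own domain count as internal; everything else is external.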
