Technical Optimization for SEO

RECAP: ON-SITE SEARCH ENGINE OPTIMIZATION

Search engine optimization (SEO) is the blend of creative and technical methods designed to make a website more attractive to search engines like Google and Bing. In our last blog article, we talked about creative SEO tactics, like keywords and consistent content publishing, that help keep a site relevant and showing up in search results. But as that definition states, the whole process is a blend of creative and technical methods, so this week our Dev team is breaking down the tools and tactics that strengthen the technical side of SEO.

TECHNICAL OPTIMIZATION ASPECTS

Certain aspects of SEO are built directly into a website's code, like speed and a search engine’s ability to crawl and index the site. These are referred to as technical SEO, a part of on-page SEO.

SPEED

Load time plays an important role in both technical SEO and visitor satisfaction. People don’t want to sit around waiting for a web page to load. Ideally, a website should load in two to four seconds; once loading stretches past four seconds, nearly 40% of visitors are likely to leave the site.

To help our client sites load quickly, our Dev team uses Google’s Lighthouse tool to test performance, speed, accessibility and SEO. This tool runs audits on the site and then generates a report on how well the page(s) did and how to fix any issues. From there, our Dev team knows what to focus on to make sure the site is performing at optimal levels.
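
Lighthouse can be run from inside Chrome DevTools or from the command line. As a minimal sketch (assuming Node.js is installed; example.com is a placeholder, not a client site):

    # Audit a page and save an HTML report
    npx lighthouse https://example.com --output html --output-path ./report.html

    # Audit only the performance and SEO categories
    npx lighthouse https://example.com --only-categories=performance,seo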

CRAWLABILITY

Search engines use tools called crawlers to scour the internet looking for content so they have the best and freshest information to show the next time an internet user types a question into the search bar. Moz defines crawling as “the discovery process in which search engines send out a team of robots (known as crawlers or spiders) to find new and updated content.”

The robots.txt file is primarily used to tell search engines which sections of a site they should not crawl and to provide a reference to the sitemap.xml. For a website to be optimized for crawlers, the robots.txt file needs to be in place and properly configured; not having a robots.txt file is better than having a misconfigured one. Our client websites, when built in WordPress, automatically have this file in place and active to make sure the sites are accessible to crawlers and search engines. And as part of the post-launch process, the Dev team makes sure there aren’t any robots.txt rules or robots meta tags blocking search engines.
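
For illustration, a typical robots.txt for a WordPress site might look something like this (the paths and sitemap URL below are placeholders, not a specific client configuration):

    User-agent: *
    Disallow: /wp-admin/
    Allow: /wp-admin/admin-ajax.php

    Sitemap: https://www.example.com/sitemap.xml

The first block tells every crawler to skip the WordPress admin area (while still allowing the admin-ajax.php endpoint some front-end features rely on), and the Sitemap line points crawlers to the sitemap.xml discussed later in this article.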

CANONICAL LINK ELEMENT

Duplicate content on a website can hurt SEO. But in some cases, the same product or content fits into multiple areas of a website. To keep these pages from harming your SEO rankings, you can set up canonical URLs: a link tag designates one page as the “preferred” version, and the other versions point back to it, preventing a duplicate content issue.

WordPress allows site content to be displayed at many different URLs depending on the needs of the site and the context in which it was accessed, including:

  • website.com
  • website.com/home
  • website.com/?pagename=home

All three of these are potentially valid URLs for the home page of a WordPress website.

In most cases, if a WordPress URL is accessed using a valid alternative, the search engine or user will be redirected via the canonical link to the URL dictated by the site settings (the main home page in this example). URLs can be further modified by appending arguments to the canonical URL, as is done by search engines and social networking sites for the purposes of tracking, for example: website.com/?fbclid=IwAR12[…]sZfw

The canonical link URL in each of these scenarios will always be the same, telling the browser, search engines, and social sites the preferred URL for linking to the content, regardless of the URL that was actually used for access. This benefits SEO by deduplicating the content, which prevents keyword dilution and false flags for “blackhat” techniques.
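
In practice, the preferred URL is declared with a link element in the page’s head. A simplified sketch for the home page scenario above, using example.com as a stand-in domain:

    <head>
      ...
      <link rel="canonical" href="https://www.example.com/" />
    </head>

Whether the page is reached at /, /home, or /?pagename=home, the same canonical href is served, so search engines treat all of them as one page.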

For episodic content (like blogs), or long sections of text broken into multiple pages, the canonical link element is used in conjunction with prev/next link elements to provide a sense of the content’s position within the site’s organizational hierarchy.
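
For example, page two of a paginated blog archive might carry markup roughly like this (the URLs are illustrative only):

    <link rel="canonical" href="https://www.example.com/blog/page/2/" />
    <link rel="prev" href="https://www.example.com/blog/" />
    <link rel="next" href="https://www.example.com/blog/page/3/" />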

HTTPS

HTTPS, short for hypertext transfer protocol secure, is the secure extension of HTTP (hypertext transfer protocol). HTTP is the primary method used to transfer data between a browser and a website, which is why you see it at the beginning of every URL. Search engines have always encouraged business owners and developers to adopt the HTTPS protocol because of its increased security, but in 2018 Google went a step further and started flagging non-HTTPS sites as “not secure.”

According to this Search Engine Land article, Google identifies several reasons to switch to HTTPS in their website migration guide:

Data sent using HTTPS is secured via Transport Layer Security protocol (TLS), which provides three key layers of protection:

  • Encryption. Encrypting the exchanged data to keep it secure from eavesdroppers. That means that while the user is browsing a website, nobody can “listen” to their conversations, track their activities across multiple pages or steal their information.
  • Data integrity. Data cannot be modified or corrupted during transfer, intentionally or otherwise, without being detected.
  • Authentication. Proves that your users communicate with the intended website. It protects against man-in-the-middle attacks and builds user trust, which translates into other business benefits.

The switch from HTTP to HTTPS requires a full website migration, but the increase in security makes the process worth it. To ensure our client sites are secure, Terrostar does not host or deploy sites without HTTPS.
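
Part of that migration is making sure every HTTP request is permanently redirected to its HTTPS equivalent. A minimal sketch for an Apache server using an .htaccess file (an assumed hosting setup, not a description of any specific configuration):

    # Force HTTPS with a permanent (301) redirect
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]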

XML SITEMAP

An XML sitemap is an organized list of all the pages on a website. The way Google “crawls” your site is by loading a page, following all of the links on that page, and then doing the same on each page it finds, over and over. Search engines should be able to find most pages on a website this way, but a page without any internal links pointing to it can be hard to find. The XML sitemap acts as a table of contents with links to every page on the site so none are missed when a search engine is crawling. It also allows developers to tell the search engine how the pages are organized and what type of media each one includes.
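
A minimal example of what a sitemap.xml looks like (the URLs and dates are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/</loc>
        <lastmod>2020-01-15</lastmod>
      </url>
      <url>
        <loc>https://www.example.com/about/</loc>
        <lastmod>2020-01-10</lastmod>
      </url>
    </urlset>

Each <url> entry lists a page’s location and, optionally, when it was last modified, giving crawlers a complete table of contents for the site.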

DEV THOUGHTS

When trying to understand the technical elements of SEO, it helps to think of search engines as non-sighted visitors. For them to successfully consume and understand the website, the content and code must be well organized and semantically structured, so that the markup imparts actual meaning to the structure of the document and to that document’s position within the greater hierarchy.
