A meta robots tag is a short snippet of HTML code in the head section of a webpage that tells search engine robots what they can or can’t do. There are many types of meta robots tags, but the most commonly used are noindex, noarchive, and nofollow. If you do search engine optimization, particularly technical SEO, you will already
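As a quick illustration, here is what those common directives look like in practice (a minimal sketch; the directive values shown are the standard ones, combined here purely for demonstration):

```html
<head>
  <!-- Tells crawlers not to index this page and not to follow its links -->
  <meta name="robots" content="noindex, nofollow">
  <!-- Asks search engines not to keep a cached copy of the page -->
  <meta name="robots" content="noarchive">
</head>
```

Note that the tag must live inside the head section; crawlers generally ignore meta robots tags placed in the body.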
Site speed is an important ranking factor for Google and an often overlooked aspect of search engine optimization. Simply put: the better your site speed, the better your site’s chances of climbing the search engine results pages (SERPs). Google’s prioritization of site speed makes sense, as faster-loading sites tend to deliver a better visitor experience
Site speed is one of the strongest ranking signals under the technical SEO umbrella. The speed at which your pages fully load in browsers is a hallmark of solid performance, which Google likes to reward in the sites it indexes. We recently deployed a pilot version of our new property Mediko.Ph, a medical information resource
Good news for bloggers and content creators who like sharing process guides (like me!): Google has announced the official launch of the HowTo and FAQ schema markups. Now, content that shares step-by-step processes and frequently asked questions can be declared using structured data so that Google and other search engines can easily identify it. Announced
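To give a feel for the FAQ markup, here is a minimal JSON-LD sketch of the FAQPage type (the question and answer text are placeholder examples, not taken from any real page):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is FAQ schema?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Structured data that marks up a list of questions and their answers so search engines can identify them."
    }
  }]
}
</script>
```

The HowTo type works along the same lines, using a list of HowToStep items instead of questions.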
The best just got a little better as Yoast SEO version 11.1 added support for Schema.org markup for videos and images. With this enhancement, you no longer have to manually code structured data for your media assets to help search engines better understand what they’re all about. The latest version of the plugin
Understanding 301 Redirects A 301 redirect is a permanent redirection from one URL to another. Whenever site visitors or search engine bots land on a redirected page, the redirection code forwards them from the URL they typed in or selected from the SERPs to a specified destination URL. Search engines in particular are given express
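On Apache servers, a single-page 301 redirect can be declared in the site’s .htaccess file. A minimal sketch, with example.com and the two paths standing in as hypothetical placeholders:

```apache
# Permanently redirect the old URL to its new location (301 = moved permanently)
Redirect 301 /old-page/ https://www.example.com/new-page/
```

Other stacks have their own equivalents (for example, Nginx uses a `return 301` directive inside a `location` block), but the effect on visitors and crawlers is the same.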
An XML sitemap is a file that helps search engines crawl and index a website’s pages more intelligently. Through this protocol, web crawlers can locate more URLs, determine their place in the site’s information architecture, and understand their level of importance in the site’s hierarchy of pages.
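For reference, a bare-bones sitemap following the sitemaps.org protocol looks like this (the URL and values are illustrative placeholders; `priority` is the optional field that hints at a page’s relative importance):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://www.example.com/about/</loc>
    <priority>0.8</priority>
  </url>
</urlset>
```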
Robots.txt is the standard means by which websites tell search engine spiders and other crawlers which pages are open to scanning and which are off-limits. Also known as the robots exclusion standard or robots exclusion protocol, it’s used by most websites today and honored by most web crawlers. The protocol is often used on
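A simple robots.txt, placed at the site root, might read as follows (the `/admin/` path and sitemap URL are hypothetical examples):

```
User-agent: *
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml
```

Here `User-agent: *` addresses all crawlers, `Disallow` marks the off-limits path, and the optional `Sitemap` line points crawlers to the XML sitemap.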
In web development and SEO parlance, a crawl error occurs when a search spider fails to access a webpage or an entire website. Technical issues, content management mishaps, or improperly configured bot restrictions can lead to these errors. While most sites have crawl errors in small quantities and are virtually unaffected by
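Crawl reports typically group page-level errors by the HTTP status code the spider received. As a rough sketch of that grouping (the function name and category labels are my own, not from any particular tool):

```python
def classify_crawl_error(status_code):
    """Map an HTTP status code to a crawl-report-style category."""
    if 200 <= status_code < 300:
        return "ok"                  # page fetched successfully
    if status_code in (301, 302, 307, 308):
        return "redirect"            # crawler was sent elsewhere
    if status_code == 403:
        return "access denied"       # bot restrictions blocked the fetch
    if status_code == 404:
        return "not found"           # broken link or deleted page
    if 500 <= status_code < 600:
        return "server error"        # technical issue on the host
    return "other"
```

For instance, `classify_crawl_error(404)` returns `"not found"`, the category most commonly inflated by content management mishaps such as deleting pages without redirecting them.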
Adding Schema markup microdata to our webpages helps our sites acquire more organic traffic. Though Google has not declared it a ranking factor, the search giant has acknowledged that it uses structured data to better understand the HTML elements of a page so it can use them for rich snippets.
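Unlike JSON-LD, microdata is woven directly into the HTML elements themselves. A minimal sketch using the Schema.org Article type (the headline and author are placeholder text):

```html
<!-- itemscope/itemtype declare the entity; itemprop labels its properties -->
<article itemscope itemtype="https://schema.org/Article">
  <h1 itemprop="headline">Sample Headline</h1>
  <span itemprop="author">Sample Author</span>
</article>
```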