Think about how Google can algorithmically and manually determine the commercial intent of your website. Consider the signals that differentiate a real small business website from a website created JUST to send visitors to another website, with affiliate links on every page, for instance. Adverts above the fold on your site can likewise be a clear indicator of a webmaster’s particular commercial intent – hence why Google has a Top Heavy Algorithm.
Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. Using metadata to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches.[10] Web content providers also manipulated some attributes within the HTML source of a page in an attempt to rank well in search engines.[11] By 1997, search engine designers recognized that webmasters were making efforts to rank well in their search engine, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as AltaVista and Infoseek, adjusted their algorithms to prevent webmasters from manipulating rankings.[12]
One of the things Google looks at when ranking a page is the content on that page – the words on the page. Now picture this: if every word in, for instance, a blog post about a digital piano is used twice, then all words are of equal importance, and Google won’t have a clue which of those words are important and which aren’t. The words you use are clues for Google; they tell Google and other search engines what the page or post is about. So if you want Google to understand what your page is about, you need to use the words that matter fairly often.
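The intuition above can be sketched with a toy term-frequency count. This is only an illustration of the idea that a word used more often stands out as a topical signal – it is not Google's actual algorithm, and the sample text is made up:

```python
from collections import Counter
import re

def term_frequencies(text):
    """Count how often each word appears in a page's text.

    A rough stand-in for the idea above: words used more often
    stand out as candidates for what the page is about.
    """
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

# Hypothetical blog-post snippet about a digital piano.
post = (
    "Choosing a digital piano: a digital piano suits small rooms, "
    "and a digital piano rarely needs tuning."
)
counts = term_frequencies(post)
print(counts.most_common(3))
```

Here "digital" and "piano" are each used three times while most other words appear once, so even this crude count hints at the topic; real search engines weigh far more signals than raw frequency.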
Redirecting is the act of sending a user to a different URL than the one initially requested. There are many good reasons to redirect from one URL to another, for example, when a website moves to a new address. However, some redirects are designed to deceive search engines and users. These create a very poor user experience, and users may feel tricked or confused. We will call these “sneaky redirects.” Sneaky redirects are deceptive and should be rated Lowest.
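For contrast, a legitimate redirect of the kind described above – a page that has moved to a new address – might look like this in an Apache .htaccess file (both paths are made-up examples, not from the original text):

```apache
# Permanent (301) redirect after a page moves to a new address.
# The old and new paths here are hypothetical.
Redirect 301 /old-page/ https://www.example.com/new-page/
```

A 301 tells both users and search engines that the move is permanent; a sneaky redirect, by contrast, shows search engines one destination while sending users somewhere else.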
QUOTE: “The average duration metric for the particular group of resources can be a statistical measure computed from a data set of measurements of a length of time that elapses between a time that a given user clicks on a search result included in a search results web page that identifies a resource in the particular group of resources and a time that the given user navigates back to the search results web page. …Thus, the user experience can be improved because search results higher in the presentation order will better match the user’s informational needs.” High Quality Search Results based on Repeat Clicks and Visit Duration
QUOTE: “Supplementary Content contributes to a good user experience on the page, but does not directly help the page achieve its purpose. SC is created by Webmasters and is an important part of the user experience. One common type of SC is navigation links which allow users to visit other parts of the website. Note that in some cases, content behind tabs may be considered part of the SC of the page.” Google Search Quality Evaluator Guidelines 2017
QUOTE: “7.4.3 Automatically Generated Main Content Entire websites may be created by designing a basic template from which hundreds or thousands of pages are created, sometimes using content from freely available sources (such as an RSS feed or API). These pages are created with no or very little time, effort, or expertise, and also have no editing or manual curation. Pages and websites made up of auto-generated content with no editing or manual curation, and no original content or value added for users, should be rated Lowest.” Google Search Quality Evaluator Guidelines 2017
Many search engine marketers think that who you link out to (and who links to you) helps determine a topical community of sites in any field, or a hub of authority. Quite simply, you want to be in that hub – at the centre if possible (however unlikely), but at least in it. I like to think of this as a good thing to remember for the future, as search engines get even better at determining the topical relevancy of pages, but I have never actually seen any granular ranking benefit (for the page in question) from linking out.
SEO techniques can be classified into two broad categories: techniques that search engine companies recommend as part of good design ("white hat"), and those techniques of which search engines do not approve ("black hat"). The search engines attempt to minimize the effect of the latter, among them spamdexing. Industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO, or black hat SEO.[50] White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.[51]
Congrats Floyd! To answer your question: a big part of the success depends on how much your content replaces the old content… or is a good fit for that page in general. In the example I gave, my CRO guide wasn’t a 1:1 replacement for the dead link. But it did make sense for people to add it to their pages because they tended to be “list of CRO resources” type things. Hope that helps.
Site. Migration. No two words elicit more fear, joy, or excitement in a digital marketer. When the idea was shared three years ago, the company was excited. They dreamed of new features and efficiency. But as SEOs we knew better. We knew there would be midnight strategy sessions with IT. More UAT environments than we could track. Deadlines, requirements, and compromises forged through hallway chats. ... The result was a stable transition with minimal dips in traffic. What we didn't know, however, was the amount of cross-functional coordination that was required to pull it off. Learn more in this video!
Let's say, for example, you're researching the keyword "how to start a blog" for an article you want to create. "Blog" can mean a blog post or the blog website itself, and what a searcher's intent is behind that keyword will influence the direction of your article. Does the searcher want to learn how to start an individual blog post? Or do they want to know how to actually launch a website domain for the purposes of blogging? If your content strategy is only targeting people interested in the latter, you'll need to make sure of the keyword's intent before committing to it.
Ever since its April 2015 update, Google has been taking the user's mobile experience into consideration in its search results. This means that when a user is doing a search on a mobile device, Google's search results will favor websites that are mobile friendly over the ones that aren't. If you want to capture that mobile search audience, you will need to have a mobile version of your website in addition to your desktop version.
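One common baseline for a mobile-friendly page – not the whole job, but a typical first step – is the standard viewport meta tag in the page's head:

```html
<!-- Tells mobile browsers to scale the page to the device width
     instead of rendering a zoomed-out desktop layout. -->
<meta name="viewport" content="width=device-width, initial-scale=1">
```

Paired with responsive CSS, this lets a single site serve both desktop and mobile visitors rather than maintaining two separate versions.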

To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots (usually <meta name="robots" content="noindex">). When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[47]
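A minimal robots.txt covering the cases mentioned above might look like this (the directory paths are hypothetical examples, not taken from any particular site):

```text
# robots.txt in the domain root.
# Ask all crawlers to skip internal search results and the cart.
User-agent: *
Disallow: /search/
Disallow: /cart/
```

Note that robots.txt is advisory: well-behaved crawlers honor it, but it does not guarantee a URL stays out of the index, which is why the noindex robots meta tag exists as a separate mechanism.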
This helpful tool scans your backlink profile and turns up a list of contact information for the links and domains you'll need to reach out to for removal. Alternatively, the tool also allows you to export the list if you wish to disavow them using Google's tool. (Essentially, this tool tells Google not to take these links into account when crawling your site.)
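If you go the disavow route, the exported list is submitted as a plain-text file in the format Google's disavow tool accepts – one URL or domain per line, with # for comments (the domains below are made up):

```text
# Disavow file for example.com's backlink profile.
# Disavow a single page that links to us:
http://spam.example.net/bad-link-page.html
# Disavow every link from an entire domain:
domain:spammy-links.example.org
```

The domain: prefix covers all links from that domain, which is usually safer than listing individual URLs from a site you know to be spammy.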