Ecommerce SEO


Internal Linking Optimization

Length: 11,404 words

Estimated reading time: 1 hour, 20 minutes

This e-commerce SEO guide contains almost 400 pages of advanced, actionable insights into on-page SEO for ecommerce. This is the fifth of eight chapters.

Written by an e-commerce SEO consultant with over 25 years of research and practical experience, this comprehensive SEO resource will teach you how to identify and address all SEO issues specific to e-commerce websites in one place.

The strategies and tactics described in this guide have been successfully implemented for top-10 online retailers, small and medium businesses, and mom-and-pop stores.

Please share and link to this guide if you like it.

The importance of external links for rankings is a well-documented SEO fact and part of conventional SEO wisdom. However, internal links can also impact rankings.

Links, either internal or from external websites, are the primary way for site visitors and search engines to discover content.

If a page has no incoming internal links, not only may that page be inaccessible to search engines for crawling and indexing, but even if it gets indexed, it will be deemed less valuable (unless a lot of external links point to it). Examples of pages without internal links are product detail pages accessible only after an internal site search, or entire catalogs available only to logged-in members.

On the other hand, if a page does not link out to other internal pages, you send search engine robots into a dead-end.

In between these two extremes, internal links can lead crawlers into traps or to unwanted URLs that contain “thin”, duplicate, or near-duplicate content. Internal links can also trap crawlers in circular references.

When optimizing the internal linking, remember that websites are for users, not search engines. The reason links exist in the first place is to help users navigate and find what they want quickly and easily. Therefore, consider an approach that balances links available only to users with links available to bots. Build the internal linking for users, and then accommodate the search engines.

Ecommerce websites come with an interesting advantage: a large number of pages creates many internal links. The larger the website and the more links pointing to a page, the more influential that page is. Strangely enough, although SEOs typically know that the more links you point to a page, the more authority the page receives, many SEOs still focus on getting links from external websites first.

However, why not optimize the lowest-hanging fruit first, the internal links? When you optimize your internal linking architecture, you do not need to hunt for external backlinks. You need to increase the relevance and authority of key pages on your website by creating quality content that attracts organic traffic and links and interlinking pages thematically.

Let’s see how ecommerce websites can use internal linking to boost relevance, avoid or mitigate duplicate content issues, and build long-tail anchor text to rank for natural language search queries.

Crawlable and uncrawlable links

Before we move forward, a quick and important note. Do not blindly implement any of the techniques discussed in this section. Decide which solution best suits your website based on your business needs and specific situation. If you are in doubt, get help from an experienced consultant before making changes.

A crawlable link is a link that is accessible to search engine crawlers when they request a web resource from your web server.

An uncrawlable/undiscoverable link is one that search engines cannot discover after parsing the HTML code and rendering the page. However, that uncrawlable link is still accessible to users in the browser.

Uncrawlable links can be created client-side (in the browser) using JavaScript or AJAX, or by blocking access to the resources required to generate the URLs using robots.txt. Uncrawlable links are created on purpose and are not the same as broken links, which occur accidentally. Also, uncrawlable links are not hidden links (e.g., off-screen text positioned with CSS or white text on a white background).

Because the main goal of e-commerce websites is to sell online, they must be useful and present information in an easy-to-find manner. Imagine an ecommerce website that does not allow users to sort or filter 3,000 items in a single category. Yet this sorting and filtering generates URLs with no value for search engines and, in some cases, limited value for users. Since current crawling and browsing technologies depend on clicks and links, these issues are here to stay for a while.

However, why do ecommerce websites generate overhead URLs, and why can search engines access such URLs? There are plenty of reasons:

  • URLs with tracking parameters are needed for personalization or web analysis.
  • Faceted navigation can generate many overhead URLs if it is not properly controlled.
  • A/B testing can also create overhead URLs.
  • If the order of URL parameters is not enforced, you will generate overhead URLs.

So, how do you approach overhead URLs?

A compromise for offering a great user experience while helping search engines crawl complex websites is to make the overhead links undiscoverable for robots while the links are still available to users in the browser. For example, a link that is important for users but not important for search engines can be created with uncrawlable JavaScript.

Before we look at some examples, keep the following in mind:

  • Decide whether there is an indexing or crawling issue to be addressed in the first place. Are 90%+ of your URLs indexed? If yes, you may need to build links to the other 10% of pages. Or maybe you can add more content to get those 10% pages indexed.
  • Would you hinder user experience by blocking access to content with JavaScript?
  • Hiding links from robots may qualify as cloaking, depending on the reason for the implementation. Here’s a quote from Google:

“If the reason is for spammy, malicious, or deceptive behavior—or even showing different content to users than to Googlebot—then this is high-risk”.[1]

Please note that from an SEO perspective, I advocate using uncrawlable links only for the following reasons:

  • Create better crawl paths to help search engines reach important pages on your website.
  • Preserve the crawl budget and other resources (e.g., bandwidth).
  • Avoid internal linking traps (i.e., infinite loops).

I do not endorse this tactic if you want to spam or mislead unsuspecting visitors. There are a couple of methods for keeping crawlers away from overhead URLs.


Iframes blocked with robots.txt

Let’s say you do not want any links generated by the faceted navigation to be visible to search engines. In this scenario, you embed the faceted navigation in an <iframe> and block the bots’ access to that iframe’s source using robots.txt.

The advantage of using iframes is that they are fast to implement and to remove if the results are not satisfactory. One disadvantage is that you cannot granularly control which facets get crawled; once the iframe source is blocked with robots.txt, no facet will be crawled.

Figure 117 – This screenshot highlights a classic implementation of faceted navigation (left-hand navigation). This type of navigation often creates bot traps.

Intermediary directory/file

The directory implementation requires including a directory in the URL structure and then blocking that directory in robots.txt.

Let’s say that the original facet URL is:


Instead of linking to the URL above, you will link through an intermediary directory, which is then disallowed by robots.txt. The URL contains the /facets/ directory:


Your robots.txt will disallow everything placed under this directory:

User-agent: *
# Do not crawl facet URLs
Disallow: /facets/

Instead of a directory, you can also use a file in the URL. The controlled URL will include the facets.php file, which will be blocked in robots.txt.

If this was the original facet URL:


Using the robotted file, this is how the new URL will look:


User-agent: *
# Do not crawl faceted URLs
Disallow: *facets.php*

JavaScript and AJAX

Using JavaScript or AJAX is another method to control access to internal links, to silo the website, and to avoid duplicate content issues at the source. Search engines can already execute some JavaScript statements such as document.write(). They can also render AJAX to discover content and URLs, but only to some extent,[2] and there are limitations to what they can understand. However, keep in mind that the major search engines evolve rapidly, and in a matter of months they might be able to execute complex JavaScript.

While most of the time SEOs want to make AJAX content more accessible to search engines, when you want to control internal links, you aim for the opposite. You will use JavaScript or AJAX to generate the links in the browser (client-side) rather than in the raw HTML. Depending on the implementation, those URLs may not be available to bots when they fetch the HTML and render the page.

One application of this method is to output clean URLs in the HTML and add the internal tracking parameters on demand, in the browser.

Let’s say you have three links on the homepage and all point to the same URL, but each link is in a different location on the page. The first link is in the primary navigation, the second one is on a product thumbnail image, and the last one is in the footer. Your Merchandising team wants to track where people clicked, and they ask the Analytics team to track the click locations. The Analytics and Dev teams will tag each of the three URLs with internal tracking parameters. The three tagged links may look like these:

The trackingkey parameter in the first link communicates to the web analytics tool that the click came from the home page (which is indicated by the hp string), in the watches category located in the primary navigation (primary_nav). The other two URLs are similar except that the link location is different. When the Analytics team added these tracking parameters, they created three duplicate content pages, which is not desirable.

Of course, in this scenario, you can use rel="canonical" to point to the preferred URL, or you can use the URL Parameters tool in Google Search Console to consolidate to a canonical URL. However, for our purpose, we want a solution that avoids creating the duplicate URLs in the first place.
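For reference, the consolidating canonical tag sits in the <head> of each parameterized variation and points to the clean URL (the URL below is a hypothetical placeholder):

```html
<!-- Placed on every tracking-parameter variation of the page.
     The href value is a hypothetical placeholder. -->
<link rel="canonical" href="https://www.example.com/watches/" />
```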

Here’s one way to use JavaScript to avoid generating duplicate content URLs, if you want to use internal tracking with parameters.

In the source code your anchor element will look similar to this:

<a href="" param-string="trackingkey=hp-watches-primary_nav">Watches</a>

This URL is clean of parameters, which is great for SEO.

The page featuring this link includes JavaScript code that “listens” for clicks on the tracked link. When the left mouse button is pressed, the href is updated client-side by appending the content of the param-string attribute to the URL.

This is how the anchor element will look at the mousedown event:

<a href="?trackingkey=hp-watches-primary_nav" param-string="trackingkey=hp-watches-primary_nav">Watches</a>

Now, the URL includes the internal tracking parameter trackingkey. However, the parameter was added in the browser; it was not present in the raw HTML code when the bot accessed the page (you can get the sample HTML and JavaScript code from here).
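As an illustration, here is a minimal sketch of this technique; the buildTrackedHref helper and the listener wiring are my assumptions, not the downloadable sample code:

```javascript
// Sketch of the mousedown technique described above: the raw HTML carries a
// clean href plus a custom "param-string" attribute, and the tracking
// parameter is appended only in the browser, just before the click completes.
function buildTrackedHref(href, paramString) {
  if (!paramString) return href;
  const sep = href.includes("?") ? "&" : "?";
  return href + sep + paramString;
}

// Attach the listener only in a browser environment.
if (typeof document !== "undefined") {
  document.addEventListener("mousedown", (event) => {
    const link = event.target.closest("a[param-string]");
    if (!link) return;
    // Rewrite the href client-side; crawlers fetching the raw HTML never see this.
    link.setAttribute(
      "href",
      buildTrackedHref(link.getAttribute("href"), link.getAttribute("param-string"))
    );
  });
}
```

Because the rewrite happens at mousedown, the parameter never appears in the HTML that bots fetch, yet every human click lands on the tagged URL.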

If you decide to create uncrawlable links with JavaScript, keep in mind that Google can identify links and anything that looks like a link to them, even if it is a JavaScript link. For instance, OnClick events that generate links are the most likely to be crawled. I have seen cases where Google requested and tried to crawl virtual page view URLs generated by Google Analytics.[3]

Also, note that using JavaScript to create links that are undiscoverable for bots can be a tricky web development task. Such links may also hinder the user experience for visitors who do not have JavaScript enabled. If your existing website already requires JavaScript to function, you should be fine using AJAX links. However, if your website currently degrades gracefully for non-JS users, do not sacrifice that user experience for SEO.

The user-agent delivery method

This approach is controversial because it delivers content based on the user-agent requesting the page. The principle behind it is simple: when there is a request for a URL, identify the user-agent making the request and check whether it is a search engine bot or a real browser. If it is a browser, add internal tracking parameters to the URL; if it is a bot, deliver a clean URL.
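A minimal sketch of this pattern, assuming a server-side JavaScript environment; the bot pattern and parameter names are illustrative assumptions, not any retailer's actual code:

```javascript
// User-agent delivery: serve a clean URL to known bots, a tracked URL to browsers.
// The bot list is an illustrative assumption; production code would need more care.
const BOT_PATTERN = /googlebot|bingbot|slurp|duckduckbot|baiduspider|yandex/i;

function isLikelyBot(userAgent) {
  return BOT_PATTERN.test(userAgent || "");
}

// Returns the URL to embed in the page for the given requester.
function deliverUrl(cleanUrl, trackingParam, userAgent) {
  if (isLikelyBot(userAgent)) return cleanUrl;
  const sep = cleanUrl.includes("?") ? "&" : "?";
  return cleanUrl + sep + trackingParam;
}
```

For example, deliverUrl("/deals", "ref=nav", "Googlebot/2.1") returns "/deals", while a Chrome user-agent gets "/deals?ref=nav".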

Do you think this method is too close to cloaking? Let’s see how Amazon uses it to add internal tracking parameters to URLs on the fly, based on the requesting user-agent. If you go to their Call of Duty: Ghosts – Xbox 360 page[4] while using your browser’s default user-agent and mouse over the Today’s Deals link, you will get a URL that contains the tracking parameter ref:

Figure 118 – The internal tracking parameter shows up in the URL.

Now, change the user-agent to Googlebot, reload the page, and mouse over the same link. This time, the URL does not include the tracking parameter. To change the browser user-agent, use one of the many browser extensions available for free.

Figure 119 – When Googlebot is used as user-agent, the tracking parameter is not in the URL anymore.

Below is their HTML code. The top part of this screenshot shows the code when Googlebot requests the page. The bottom part depicts the code served to the default user-agent.

Figure 120 – This “white-hat cloaking” may be OK if you are only playing with URL parameters that do not change the content.

Assessing internal linking

The first step towards internal linking optimization is the diagnosis. Analyzing if pages are linked properly can reveal technical and website taxonomy issues.

One of the fastest and easiest ways to ascertain which pages are interlinked the most (and therefore deemed more important by search engines) is to use your Google Search Console account.

Figure 121 – The Internal Links report in Google Search Console.

Look at the Internal Links report, found under the Search Traffic section in Google Search Console. Are the most important pages for your business listed at the top? For ecommerce websites, those are usually the categories listed in the primary navigation.

In the image above, notice the /shop/checkout/cart/ directory. The URL is the second most linked page on the website. This makes sense from a user standpoint because this link must be present on most pages. However, the cart link is not important for search engines, so you can disallow the entire /checkout/ directory in robots.txt, to prevent everything under it from being crawled.

Figure 122 – The shopping cart link is the only link that is followed.

Next, let’s see how each page is linked, anchor-text-wise. For this, we will use one of the best yet most underestimated on-page SEO crawling and audit tools, the IIS SEO Toolkit.[5]

Figure 123 – The IIS SEO Toolkit, an indispensable on-demand desktop crawler.

You do not hear the SEO community talk much about this tool, maybe because it is Microsoft technology. However, its flexibility and extended functionality are better than Xenu’s (free) and at least on par with Screaming Frog’s (paid).

The IIS SEO Toolkit is free, which makes it a great tool to start with. Unfortunately, the development of the IIS SEO toolkit was stopped years ago, so it cannot really compete with the new crawlers.

Once you have identified and fixed all the problems reported by the IIS SEO Toolkit, you can consider upgrading to an enterprise tool such as:

  • Botify – undisclosed pricing
  • DeepCrawl – $89 USD/month (100k URLs per month)
  • OnCrawl – $69 USD/month (100k URLs per month)

These monthly costs are estimates based on the minimum number of URLs crawled per month as of Dec 2019 (prices may have changed since).

Figure 124 – There are virtually thousands of ways to analyze your website with IIS SEO toolkit, and you can slice and dice the SEO data in almost any way you can imagine.

Install the tool on your Windows machine and run your first crawl. It is simple to set up and, contrary to what the name implies, it does not require a dedicated IIS server. The toolkit uses the default IIS server component in Windows, so you might need to activate that component. Also, Windows 10 users will have to apply an additional fix to make it run.

Now, let’s see how you can use the toolkit to identify some major internal linking issues.

Finding broken links with this toolkit is a breeze, just like it should be with any decent crawler. You can find the broken links report under the Violations section or the Content section.

Figure 125 – You can find broken links using the Violations or Content reports.

You already know that broken links are an issue that needs attention because they hinder user experience as well. So, use the toolkit to identify and take care of them.

The general SEO wisdom is that any page should be accessible in as few clicks as possible. Four or five levels deep is acceptable for users and bots, but anything more becomes problematic.

Figure 126 – The Link Depth report can uncover issues such as circular referencing, malformed URLs or infinite spaces.

Having URLs buried 24 levels deep, as depicted in this screenshot, suggests that there is a problem with the internal linking. In this example, the issue stemmed from malformed URLs that were creating circular references.

Use the Pages with Most Links report in the Links section of the tool to identify the number of outgoing links from each page on your website. Sort the data by Count to get a quick idea of where the problems are.

Figure 127 – Including 936 links on a page is a bit concerning.

In most cases, the number of links on the same page template should be similar. However, in this case, there is a quick jump from about 300 to around 900 links for the same template, the product detail page.

When you find such big differences, check the pages that seem off the charts. Investigate why there are so many links, compared to the other pages on the same template.

The Pages with Most Links report is also available in the Violations section:

Figure 128 – Check the Violations report to identify problematic pages.

Identify hubs

The Most Linked Pages report can help you identify internal hubs. In terms of website taxonomy and internal linking, a hub is a parent with lots of children linking back to it.

Usually, the largest link hubs on ecommerce websites are the home page and the category pages linked from the primary navigation. If you see other pages at the top, you might have internal linking issues. The number of products under a certain category also influences how many links a category gets.

Figure 129 – The Most Linked Page report is very similar to the Internal Links report in Google Search Console.

The numbers in the previous image highlight three issues:

  • The most linked page does not have a <title> tag.
  • The shopping cart URL seems to be getting too many internal links. Because the shopping cart URL is dynamic, bots will try to access it from multiple pages, which is not ideal.
  • A significant number of internal links point to 301 redirects. This suggests that somewhere in the primary navigation there is a link to a 301 redirect. Whenever possible, link directly to the final URL.

To dig more deeply and get additional details on how each URL is linked, right-click the URL you would like to analyze and then click on View Group Details in New Query.

Figure 130 – Finding out how each URL is linked.

Then click on Add/Remove Columns and add the Link Text column. Next, click Execute, at the top left, to update the report.

Figure 131 – You can remove/add columns to your reports.

Regarding section one in the screenshot above, if a page is linked using an image link, the IIS SEO toolkit does not report the alt text of the image. This is one of the few downsides of the toolkit.

In section two I highlighted a mismatch between the anchor text and the linked page. The highlighted page is linked using the “customer service” anchor text, which is wrong because the linked page is not the customer service page.

Look for these kinds of mismatches in your analysis.

Next, let’s aggregate anchor text:

  1. Click on Group by.
  2. Select Link Text in the Group by tab and hit Execute.
  3. You will get a count of each anchor text pointing to that particular URL.
  4. To analyze a different URL, simply change the value in the Linked-URL field.

Figure 132 – If you find that a page is linked with too many varying anchor texts, you need to evaluate how close the anchor texts are semantically and taxonomically.

Ideally, you consistently link to category pages using the category name, but a few variations in the anchor text are acceptable. For example, you can link to the Office Furniture category with the anchor text “office furniture”, but you can also link using “furniture for the office”. When you link to product detail pages (PDPs) from product listing pages (PLPs), use the product name as the anchor text. If you link to PDPs from blogs, user guides, or other content-rich pages, you can vary the anchor text. The anchor text can include details such as product attributes, brands, and manufacturers.

To get an overall picture of the site-wide anchor text distribution, you need to create custom reports, which are called “queries” in the IIS SEO Toolkit. This is where the enormous flexibility of the tool comes in handy.

To create a custom report, go to the Dashboard, click on the Query drop-down, and select the New Link Query:

Figure 133 – Adding a New Link Query.

  1. In the new tab (Links) select the field name values as depicted in the image above.
  2. In the Group By tab select Link Text from the drop-down.
  3. Click Execute.

Figure 134 – There you have the internal anchor text distribution for the entire website.

In the example above, notice a couple of things that need to be investigated further:

  • First, why does the most linked page have no anchor text?
  • Second, how about blocking bots’ access to the shopping cart link?

If you want to look at a fancy visualization of your hub pages, use the Export function of the IIS SEO Toolkit to generate the list of all URLs. Then, import that file into a data visualization tool.

Figure 135 – Sample internal linking graph.

The image above is a visualization example generated with Gephi. You can find tutorials on how to generate link graphs using Google’s Fusion Tables,[6] NodeXL[7] and Gephi[8].

Problematic redirects

Using the Redirects report will help you identify internal PageRank leaks, unnecessary 302 or 301 redirects, and undesirable header response codes. To make the analysis easier and see the issues grouped by page, you can sort by Linking URL.

Figure 136 – Sort by Linking-StatusCode to identify issues.

Regarding the two notes in this screenshot:

  1. The currency selection is kept in the URL rather than in a cookie. For this website, each currency selection generated a unique URL on almost every single page, which is bad.
  2. Instead of linking to a URL that returns a 301 (Moved Permanently), link directly to the destination.

Figure 137 – The unnecessary redirects are also available in the Violations report.

Wrong URLs blocked by robots.txt

The toolkit can also help you identify URLs robotted by mistake. Use the Links Blocked by robots.txt report to find such URLs.

Figure 138 – The Help.aspx page is blocked with robots.txt

Do not block bot access to help pages (or similar pages, e.g., FAQs or Q&As). You want people who have questions about your products or services to be able to find such pages, straight from a search engine query. The content on these pages has the potential to reduce calls to customer service.

Because the help page is located under the /Common/ directory, which is blocked with robots.txt, search engines will not be able to access it, and the help page will not be indexed.

Figure 139 – All pages under the /Common directory will be blocked.

In the Links Blocked by robots.txt report look for pages and URLs that should be indexed but are blocked by mistake.


Protocols

The Protocols report displays the various protocols used to link internally to resources on the website:

Figure 140 – Do you interlink HTTPS with HTTP pages?

If your website uses both non-secure HTTP and secure HTTPS protocols, what happens when visitors go back and forth between HTTP and HTTPS pages? Do they get warning messages in the browser? Do you link to the same URL with secure and non-secure protocols?

We know that shopping carts, logins, and checkout pages should be secure, and such pages do not need to be indexed by search engines. However, it is best to switch everything to HTTPS. Keep in mind that when you switch from non-secure HTTP to secure HTTPS, there might be a temporary drop in traffic.

Other issues

Here are some other common internal linking mistakes:

  • Inconsistent linking; this happens when you link to the same page using multiple URLs (for example, linking to the homepage with several URL variations). When you link to an internal page, be consistent and link using a single, consolidated URL only.
  • Default page dispersal; this is when you link to index files rather than to root directories. For example, many webmasters link to index.php when linking to home pages. Instead, you have to link to the root directory, which is just the slash sign, /.
  • Case sensitivity that leads to 404 Not Found errors. For instance, Apache servers are case-sensitive, so if you link to the URL Product-name.html using an upper-case “P” instead of lower-case, the server may return an error.
  • Mixed URL paths; this happens when you link to the same file using both absolute and relative paths. This is not an SEO issue per se; however, adopting standardized URL referencing helps with troubleshooting web dev issues. Also, if you use absolute paths, when content scrapers steal content, they may still leave the absolute links to your URLs intact.

When you assess your competitors’ internal linking from an SEO perspective, compare the source code generated with Googlebot used as user-agent and with JavaScript disabled, with the source code generated when you use the browser’s default user-agent. Are there any internal linking differences?

You should also analyze the internal linking differences between the cached version and the live page.

Nofollow on internal links

The nofollow microformat[9] is part of the Robots Exclusion Protocol that applies at the element level, and it prevents PageRank and anchor text signals from being passed on to the linked pages. The HTML element that nofollow applies to is the anchor (a) element.
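In markup, the attribute sits on the anchor element itself; for example (hypothetical URL and anchor text):

```html
<!-- rel="nofollow" applies per link, on the a element. -->
<a href="/checkout/cart/" rel="nofollow">View cart</a>
```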

Some SEOs use the nofollow attribute believing it will prevent the indexation of the linked-to URL. Often, we find statements similar to “nofollow the admin, account, and checkout URLs to prevent these pages from being indexed”.

Such statements are not accurate because nofollow does not prevent crawling, nor indexing.

Figure 141 – Interpretation of nofollow by the individual search engine, according to Wikipedia.

Matt Cutts, former head of the Webspam team at Google, said that Google does not crawl nofollow links. Here are his words:

“At least for Google, we have taken a very clear stance that those links are not even used for discovery”.[10]

However, Google’s Content Guidelines documentation states something different:

“How does Google handle nofollowed links? In general, we do not follow them”.[11]

Notice the “in general” mention, in the statement above.

A test I performed some time ago with internal nofollow site-wide footer links showed that although it took about a month, Googlebot, msnbot, and bingbot did crawl and index the nofollow links. Yahoo! Slurp was the only bot that didn’t request the resource.

My recommendation is not to rely on nofollow as a method for keeping search engines away from content; treat it, at most, as a hint that discourages crawling. Keep in mind that if you nofollow links that search engines previously discovered, those links may still be indexed. Also, if external links point to nofollow-ed URLs, those URLs will get indexed.

Figure 142 – You will often see the nofollow tag applied to links such as shopping carts, checkout buttons, and account logins.

A few years ago, nofollow was used to funnel PageRank to important pages, a tactic named “PageRank sculpting”. However, nowadays, the vast majority of SEOs know that PageRank sculpting with nofollow no longer pays off[12], and many ecommerce websites stopped nofollow-ing internal links.

However, some continue doing it, as you can see in this screencap:

Figure 143 – Instead of nofollow-ing links like the ones in the image above, a better approach is to consolidate links into a single page.

Consolidating links is a good approach because when you nofollow a site-wide URL like “Terms of Use”, you take that page out of the internal links graph completely.[13] This means that the page will not receive internal PageRank, but it also means it will not have internal PageRank to pass.

The previous example brings us to a more important issue: nofollow-ing links in primary or secondary navigation. Depending on what links you nofollow, you could be making a big mistake.

It is important to know that PageRank is a renewable resource: it flows back and forth between pages that link to one another. According to the original formula, PageRank uses a decay factor (AKA damping factor) of between 10% and 15% at each iteration to avoid infinite loops.[14]
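As a reference point, the original formula from the Brin and Page paper can be written as follows, where T1…Tn are the pages linking to page A, C(Ti) is the number of outgoing links on page Ti, and d is the damping factor (typically around 0.85, i.e., a 15% decay):

```latex
PR(A) = (1 - d) + d \left( \frac{PR(T_1)}{C(T_1)} + \dots + \frac{PR(T_n)}{C(T_n)} \right)
```

Each page thus splits its PageRank evenly among its outgoing links, which is why adding or removing a link from the navigation changes how much authority every other linked page receives.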

Let’s say that page A is the home page and it links to category pages B and C, from the primary navigation menu. To simplify, let’s assume that pages B and C do not have any external links pointing to them.

Figure 144 – An overly simplified PageRank flow.

The most important thing to understand from this diagram is that page B and page C each return PageRank to page A, which increases the PageRank for page A.

Let’s see what happens when you add rel="nofollow" to the link pointing to page C in the primary navigation:

Figure 145 – The nofollow attribute stops sending PageRank to page C.

When the nofollow is applied, page C stops sending internal PageRank back to page A, because Page C does not receive any internal PageRank to pass.
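To make this concrete, here is a minimal PageRank power-iteration sketch of the three-page example above. It assumes the post-2009 Google behavior, where a nofollowed link passes nothing but still counts toward the linking page’s outlink total; the graph and function names are illustrative:

```javascript
// Tiny PageRank power iteration over an explicit edge list.
// A is the home page; B and C are category pages, as in the diagram.
function pageRank(pages, edges, { d = 0.85, iterations = 50 } = {}) {
  const pr = {};
  pages.forEach((p) => (pr[p] = 1 / pages.length));
  // Outlink counts include nofollowed links (their share "evaporates").
  const outCount = {};
  pages.forEach((p) => (outCount[p] = edges.filter((e) => e.from === p).length));
  for (let i = 0; i < iterations; i++) {
    const next = {};
    pages.forEach((p) => (next[p] = (1 - d) / pages.length));
    for (const e of edges) {
      if (e.nofollow) continue; // passes no PageRank
      next[e.to] += (d * pr[e.from]) / outCount[e.from];
    }
    Object.assign(pr, next);
  }
  return pr;
}

const pages = ["A", "B", "C"];
const followed = [
  { from: "A", to: "B" },
  { from: "A", to: "C" },
  { from: "B", to: "A" },
  { from: "C", to: "A" },
];
// Same graph, but the A-to-C link is nofollowed.
const withNofollow = followed.map((e) =>
  e.from === "A" && e.to === "C" ? { ...e, nofollow: true } : e
);

const before = pageRank(pages, followed);
const after = pageRank(pages, withNofollow);
```

Running this shows that nofollow-ing the A-to-C link reduces the PageRank of both C and A: C no longer receives anything, so it has nothing to return to A, exactly as the diagram suggests.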

When I researched examples for this topic, big names such as Toyota surprised me by using nofollow in the global navigation. You can see in the screenshot below how Toyota nofollow-ed all the links pointing to car models such as Yaris and Corolla.

Figure 146 – The links in the red dotted border are nofollow-ed.

Note: at the time of the research PageRank was still publicly available. Back then, Toyota’s home page had a PageRank 7, and the Yaris page (which is a nofollow link in the primary navigation) had a PageRank 5. The PageRank 5 was mostly due to a large number of external links rather than to the internal linking flow.

Figure 147 -The Yaris page gets a lot of external backlinks from more than 2,500 domains.

However, the situation was different on another website. This time, the categories linked from the primary navigation did not get many backlinks from external sources. While its home page had a PageRank of 5, all Shop by Type pages had a “not ranked” PageRank.

Figure 148 – Because the Shop by Type pages were linked from the primary navigation, they should have received a decent amount of authority (e.g., at least PageRank 3).

The nofollow attribute on primary navigation links does not mean that those pages will not show in SERPs. As a matter of fact, they were all cached by search engines at that time. Also, using nofollow on those links does not mean that more PageRank was passed to other followed links. What it really means is that the authority of the nofollow-ed URLs in the primary navigation was significantly reduced.

If some links are not important for users, consider removing them from the navigation altogether. Not every category needs a link in the primary navigation menu.

If you want to send link juice only to specific links or pages, here are some alternatives to nofollow:

  • Have fewer links on the linking page.
  • Move important links to prominent places.
  • If you do not want to pass link juice to certain links, make them undiscoverable for bots.
  • Block search engine bots from discovering overhead links.

Keep in mind that nofollow is not a solution for duplicate content.

Because nofollow is incorrectly used to prevent indexing, it may also be incorrectly used to prevent duplicate content issues. However, adding nofollow to links is not the best approach for controlling duplicate content: since nofollow is not a fail-proof way to control crawling and indexing, it cannot reliably prevent indexation or duplicate content.

Internal linking optimization

Users navigate from one page to another by clicking on links. That is one of the core principles of the Internet, and it hasn’t changed since the Web’s inception. However, while links are simply a way for people to navigate within a website or between websites, search engines will use links as authority and relevance signals.

For search engines, though, not all links are created equal. Some links are assigned more weight based on various criteria. For example, links surrounded by text are considered more important than links in footers, as Google explains in this video[15].

Links surrounded by text are called contextual text links, while links used to structure a website (for example, links in the primary and secondary navigation or breadcrumbs) are called structural or hierarchical links.

One of the reasons contextual text links receive more search engine weight is related to the fact that users often ignore structural links to go straight to the content,[16] and because users rarely scroll to click on footer links. This is why search engines deem contextual text links more important than some structural links such as footer links.

Figure 149 – Structural links in several types of navigation such as primary, secondary, faceted navigation. The contextual text links are present in the main content area.

Large websites such as ecommerce ones have the advantage of generating an incredible number of internal links; however, most of those are structural links that do not carry the same power as contextual text links. Moreover, in some cases Google might even ignore boilerplate or structural links:

“We found that boilerplate links with duplicated anchor text are not as relevant, so we are putting less emphasis on these”.[17]

There are several ways to optimize internal linking, and there is no excuse for you not to capitalize on SEO opportunities that are under your direct control.

Theoretically, a large number of factors could influence the value of an internal link,[18] but we are going to limit the discussion to the following:

  • The position of the link in the page layout.
  • The type of link, e.g., contextual versus structural link.
  • The text used in the anchor.
  • The format of the link, as in image link versus text link. An image’s alt text seems to pass less ranking value than a text link’s anchor, as reported in this article[19].
  • The page authority and the number of outbound links on the page.

The position of the link in the page layout (e.g., in the primary navigation, in the footer, or the sidebar) influences how much PageRank flows out to the linked-to page.[20]
Microsoft has the VIPS patent (VIPS stands for A Vision-based Page Segmentation Algorithm[21]), which talks about breaking down page layouts into logical sections. Microsoft has another paper that talks about Block-Level PageRank, which suggests that PageRank passed out to other pages is dependent on the location of the link on the page.[22]

Google has a patent on “Document ranking based on semantic distance between terms in a document”[23] and another patent called “Reasonable Surfer”[24]. These two patents indicate that links placed in prominent places pass more PageRank than links in less important sections of the page.

Contextual text links are assigned more weight than primary and secondary navigation links, which in turn are deemed more important than footer links. However, the presence of keyword-rich anchor text in the primary navigation (which appears on almost every single page of the website) compensates for relevance. Therefore, primary navigation links are at least as powerful as contextual text links.

Unfortunately, you can have only a limited number of anchors in the primary or secondary navigation, which means you have to choose carefully. However, with contextual links, you can implement a large number and variety of anchors because you are not limited by design space or by strict anchor text labeling. For example, you may be restricted to using the anchor text “hotels” in your structural navigation, but on content-rich pages, you can use contextual text links such as “5-star hotels in San Francisco” or “San Francisco’s best 5-star hotels”.

Related to the link position, it is worth mentioning the concept of the First Link Rule. This rule says that when multiple links on the same page point to the same URL, only the first anchor text matters to search engines.[25]

Figure 150 – Each of these URL pairs points to the same URL twice, but with suboptimal anchor text.

Regarding the first pair, linking back to the home page with the anchor text “home” may confuse search engines. This is because the anchor text “home” conflicts with the anchor text “home & garden products”.

Regarding the second pair, Children’s Bedroom Furniture should be a category page on its own, at a separate URL.

For the third pair, the “Decorating with Metal Beds” link points to a shopping guide, which is great. However, the link using the anchor text “modern metal beds” should point to a category page (if keyword research unveils that “modern metal beds” is an important category). For example, the link could point to the Metal Beds category page, filtered by the Style=modern.

If you want to make search engines count multiple anchor texts,[26] one of the best options is to add the hash sign (#) at the end of the URLs[27].

So, if your first link uses the plain URL, then each subsequent link to the same page would use that URL with a distinct hash fragment appended.

However, if you link to the same URL with varied anchor text, you do not need to use the hash in the URL. For example, you can link to the same product page once with the product name and the second time using the product name plus the manufacturer name. Just make sure that the varied anchor text is related and relevant to the linked-to page.
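As a sketch, a template helper could generate the hash-suffixed URLs automatically. The helper name and the fragment naming scheme below are hypothetical, not a standard convention:

```python
def hashed_links(url, anchors):
    """Return (href, anchor_text) pairs for several links to the same page:
    the first link uses the plain URL, and each subsequent link appends a
    distinct hash fragment (hypothetical fragment naming scheme)."""
    pairs = []
    for i, anchor in enumerate(anchors):
        href = url if i == 0 else f"{url}#link-{i}"
        pairs.append((href, anchor))
    return pairs
```

The fragment never changes which page loads, so users are unaffected, while each anchor text lands on a technically distinct URL.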

Very often you will encounter multiple URLs pointing to the home page—once on the logo and once in the breadcrumbs.

Figure 151 – Both links (logo and breadcrumb) point to the homepage.

The alt text of the logo is “UGG Australia,” and the anchor text in the breadcrumb is “Home”. While having a “Home” link is good for usability, I am not a big fan of the “home” anchor text. I would either:

  • Use the brand name in the breadcrumb, because in this particular case the brand name (UGG) is very short. Instead of “Home”, I would use “UGG Australia” or just “UGG”.
  • Replace the anchor text “Home” in the breadcrumb with a small house icon and use the alt text “UGG Australia” for that icon.

Multiple same-page linking happens when a page contains multiple links to the same URL. On ecommerce websites, this frequently arises when links on product listing pages point to product detail page URLs. One link is on the clickable image thumbnail, and the other link is on the product name:

Figure 152 – Multiple links to the same product details page.

Figure 153 – The HTML code for the previous image.

If we look at the source code for the previous example we will find that the alt text of the thumbnail image is “black”, and the product anchor text is “Solid Ribbon Belt”. This sends confusing relevance signals and is not optimal.


There are three problems with this markup:

  1. The <A> element has an alt attribute, but it is in the wrong place because the alt attribute is not allowed on the <A> element. This alt attribute was probably intended to be a title attribute.
  2. The alt texts #1 and #2 should be switched.
  3. The alt attribute on the <A> tag (#1) should be removed.

Figure 154 – This is the product name text link.

Let’s talk about several options for addressing multiple links generated by image thumbnails and product names:

  • Repeat the product name text in the image alt text. This is the easiest way to tackle this particular type of issue. In our example, the thumbnail’s alt text will become “Solid Ribbon Belt”.
  • Wrap the image and text under a single anchor or a single link. This is not always possible, and it is not good for accessibility.
  • Deploy the URL hash if you need to use unrelated anchor text to point to the same URL.
  • Place the product name above the image (this is against usability and design conventions).
  • Code the page so that the text link is above the image link in the HTML code, while using CSS to display the anchor text below the image in the browser. This is a bit complex to implement, and it is not a very good idea.

In the case of multiple same page links, if the anchor texts are unrelated, they will send confusing relevance signals. However, PageRank will pass through both links.[28]

Now that you know that contextual text links are important, let’s see how you can create more of them with the help of user-generated content, product descriptions, brand pages, and blog posts.

User-generated content
User-generated content (UGC) is one of the best ways to feed search engine bots, to send engagement signals, and to help users make purchasing decisions.

Figure 155 – The highlighted texts are potential internal links.

In this screenshot, you can see two typical reviews displayed on a product detail page. Reviews can add to the overall main content text and help with conversions. I highlighted in yellow a couple of words that could be potential internal links.

Product reviews
Product reviews are one type of content-heavy user-generated content and represent a huge opportunity for generating contextual links. However, not many ecommerce websites are taking full advantage of product reviews for internal linking purposes.

While researching this topic, I was surprised to find that only one of the top 50 online retailers was adding contextual links on user reviews content. For whatever reason (maybe poor SEO implementation, vendor restrictions, fear of linking out from product detail pages and losing conversions, and so on) the other retailers did not. In fact, very few of the top 50 online retailers deployed SEO-friendly reviews. We will discuss how to optimize reviews in detail, in the section dedicated to product detail pages.

Figure 156 – This is a very popular product with more than one thousand reviews.

In the example above, the product has 1,221 reviews. If you were to add just one contextual link on 10% of the reviews, you would create 120 powerful contextual internal links.

Product descriptions
Many ecommerce pages contain text-rich sections. Take product detail pages for example; each product has or should have a description. These content-rich sections are great places to link up to parent categories and brand pages:

Figure 157 – The highlighted text could be a link to a brand page (Bobeau).

When you link from product descriptions, it is important to link to the parent category and, optionally, to other highly related categories.

Optimized brand pages

Figure 158 – Most of the time, brand pages are nothing but product listing pages.

There is nothing wrong with listing products on a brand page, but you must make brand pages content-rich as well.

If you want to build relevant and valuable contextual text links that send more PageRank authority to product or category pages, then brand pages must include text, media, and social signals. Be creative with the content, and link smartly. Add a paragraph about the brand’s history, and link to the brand’s top sellers. Alternatively, you can add interesting facts about the brand and a couple of useful reviews. Get the brand owners interviewed and publish the interview on their brand page. You can then ask for a link or a mention from their Press or News section.

Look at how Zappos improved the internal linking on their brand pages, and how they carefully interlink thematically related pages:[29]

Zappos’ brand page does a good job at satisfying users and search engines:

  • Zappos uses section 1 as a sitemap, to guide bots to various other related pages on their website.
  • In section 2, it implements brand-specific RSS feeds. When new products are published for that brand, search engines will be notified instantly.
  • In section 3, you can see how they use text-rich content for contextual linking.
  • In section 4, they link to the brand’s featured products.
  • In section 5, Zappos features contextual links within user reviews.

Blog posts
As mentioned in the Information Architecture section, blogs can be used to support and increase authority for the category and product pages, but very few ecommerce websites take full advantage of blogging.

Figure 159 – Contextual text links from the main content area carry significant authority. Make sure your content-rich pages link internally to PDPs and PLPs.

When you write blogs, link internally from the main content areas to pages on your website, ideally to product and category pages.

Figure 160 – This is a good implementation of internal linking from blog posts. If you do not overdo it, internal exact anchor text match is still important.

At the risk of becoming annoying, I need to stress this: if you are not blogging, you are missing a huge amount of long-tail search queries used by possible customers in the early buying stages.

Remember, you write articles not to sell or promote something but to grab long-tail traffic for informational search queries, and to support pages higher in the hierarchy. The amount of content you need to create to support category, subcategory, or product pages depends on how competitive each keyword is.

Other types of user-generated content that you can use to create contextual links are blog comments, user or customer support questions and answers, guest posts, product images with captions, user-submitted images, curated rich media, and even shop-able images.

Anchor text

The anchor text optimization principle is simple: the text used in the anchor sends relevance clues to search engines, and it must be relevant to the linked-to page. For example, if the anchor text is “suitcases” and the linked-to page includes the phrase “suitcases” along with other semantically related words, then the anchor text in the incoming link is given more weight.

However, if you were to use “click here” on internal anchor text pointing to, let’s say hotel description pages, then search engines will assign less relevance to those anchors, as they are too generic and don’t communicate anything about the linked-to page. In our previous example, when linking internally to hotel description pages, you should use the hotel names in the anchor text.

The following study was conducted on more than 3,000 ecommerce and non-ecommerce websites, analyzing more than 280,000 internal links and their corresponding anchor texts[30]. The study looked for the most common words used in internal anchor text. The screenshot below shows those terms ranked by frequency.

Figure 161 – The study looked at how 3,000 websites use anchor text in internal linking.

Seven out of 10 anchor text links could be logically consolidated into three groups, represented by the numbers in the image. This technique is called link consolidation, and it is a better alternative to link sculpting with nofollow. Keep in mind that if the links you consolidate are in the footer, then the value of doing this is minimal.

Let’s see what anchor texts you use to link pages internally.

First, let’s find out whether you use generic anchor texts such as “click here” or “here” on your website. After you run the crawl on your website, use the IIS SEO Toolkit to check whether the “The link text is not relevant” violation is reported under the Violations Summary section of the tool:

Figure 162 – If you double-click any of the violation titles in the Violations Summary section you will get more details about each error.

There are situations where it is OK to use “click here” as anchor text, for example when you link to a page that is not important for rankings, or when you use “click here” as a call to action. As a matter of fact, “click here” is one of the most powerful calls to action used in online marketing.

By default, the IIS SEO Toolkit searches for the words “here” and “click here” in text anchors. In practice, there are more generic anchors that you should pay attention to. A more comprehensive list of generic anchors is available here.

If you want to be exhaustive with this type of analysis, you need to export the list of anchors from the IIS SEO Toolkit and use Excel for a deeper analysis. Here’s how to do it.

Figure 163 – Create a new link query.

In the IIS SEO Toolkit go to Dashboard, then click on the Query drop-down, then click on New Link Query.

Figure 164 – Use the settings depicted in section (1) and (2).

Use the following settings in section (1):

  • Linked Is External Equals False.
  • Link Type Not Equal Style.
  • Link Type Not Equal Script.
  • Link Type Not Equal Image.

In the Group By section, select Link Text. Then hit Execute, sort by Count, and then click Export. This will generate the aggregated link text report. You can export the data to a .csv file.

Alternatively, once the Links tab opens, you can right-click anywhere on the gray area and select Query –> Open Query to load a previously saved query.

Figure 165 – Importing an XML query in the IIS SEO Toolkit.

Open the file generated by the IIS SEO Toolkit using Excel, and name one of the spreadsheets Anchors. Name the first column Anchor, and list all the anchor text in it. Name the second column Occurrences and list the occurrence count (the SEO toolkit generates this data.)

Add a third column (name it Presence) and leave it empty for now because this column will be filled in later using a VLOOKUP function.

Figure 166 – The count of occurrences for each anchor text.

Create a new spreadsheet and name it Generic anchors. Then, create two columns: Generic Words and Presence. Then, list all generic keywords in the Generic Words column. Fill the Presence column with number “1”:

Figure 167 – Adding the number one in the Presence column will be used to match the anchors on your website with the generic anchor text list.

Now, go back to the Anchors spreadsheet, and add the following VLOOKUP formula in cell C2:
=VLOOKUP(A2,'Generic anchors'!A:B,2,FALSE)

Figure 168 – VLOOKUP is a built-in Excel function that is designed to work with data that is organized into columns.

Copy the VLOOKUP formula all the way down column C. You can double-click the tiny dot at the bottom right of cell C2 (the fill handle) to automatically fill column C with the VLOOKUP formula.

If there is an exact match between the anchors used on the website and the generic keywords list, the column C cells will be filled with value “1”. You will get “#N/A” when there is no match. Sort or filter by “1”, and you will get the list of generic anchors on your website:

Figure 169 – The anchor text “Blog” is one of the most used internal anchor texts. Additionally, there are some other generic anchors such as “click here”, “here”, “home”, or “website”.
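If you prefer scripting this matching step over Excel, a short Python sketch can replicate the lookup. The generic-anchor list and the (anchor, count) tuple layout below are assumptions mirroring the spreadsheet workflow described above:

```python
# Illustrative generic-anchor list; extend it with your own findings.
GENERIC_ANCHORS = {"click here", "here", "home", "website", "read more", "blog"}

def flag_generic_anchors(rows):
    """rows: (anchor_text, occurrence_count) pairs, as exported from the
    crawl. Returns only the anchors matching the generic list, with their
    occurrence counts, sorted by count descending."""
    found = [(anchor, int(count)) for anchor, count in rows
             if anchor.strip().lower() in GENERIC_ANCHORS]
    return sorted(found, key=lambda pair: pair[1], reverse=True)
```

This replaces both the VLOOKUP step and the final sort/filter in one pass.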

So, we identified that the “blog” anchor text is heavily used on this website; the large occurrence count suggests that it is probably a site-wide link.

Next, we will use the IIS SEO Toolkit to see which pages link to the Blog section.

You will need to open a new query by going to Dashboard –> Query –> New Link Query;

In the Field Name section use the following settings:

Figure 170 – You can group the data by Link Text.

  • Link Type Not Equal Style.
  • Link Type Not Equal Script.
  • Link Type Not Equal Image.
  • Link Text Equals “Blog”.

In the Group By section, select Link Text (if Group By does not show up by default, you will have to click on the Group By icon just below the Links tab). Next, hit Execute. This report will show you how many times the word “blog” was used as anchor text.

Double-clicking on “Blog” will open a detailed list of Linking URLs. Repeat the process for all generic anchor texts.

You have to be more creative and replace the anchor text “blog” with something more appealing to search engines and people. Even {CompanyName} Blog is a better choice, but you could theme this anchor text even more. For example, if you sell fishing or hunting equipment, you can use {CompanyName} Fish & Hunt Blog. If you sell running shoes, you could use Mad Runner’s Blog, and so on.

When you link to category or product detail pages, use the category or the product names as anchor text. For instance, if you sell books, you will link to the product detail page with the book’s name. You can also vary the anchor text by adding brands or product attributes to the product name.

Exact internal anchor text match still matters for ecommerce websites, if you do not go overboard, for example by spamming with site-wide footer links. Usually, it is a good idea to match the search queries with your internal anchor text, as closely as possible. However, how do you know which anchors to use to link to a page that lists, let’s say, ignition systems for a 2004 Audi A3? By doing keyword research.

For example, if you sell auto parts, you can break down the keywords by years, makes, models, product types, or categories. Collect keyword data from as many sources as you can: user testing, Google Analytics, Google Ads data, your webmaster accounts, competitor research, or data from your Amazon account. Put all the keywords in a master spreadsheet and remove duplicates using Excel.

Add the metrics that you want to take into consideration, and your table may look like this:

Figure 171 – I like to add a keyword ID in the first column, just to be able to revert to the original data at any time, by sorting by ID.

As metrics, I am going to consider the average monthly searches for each keyword and the number of conversions.

Now, you need to identify search patterns. You are going to do this by replacing each word with its corresponding product attribute or category. For example, you will replace “2007” or any other year with the placeholder {year}, “Chevy” or any other make with the placeholder {make}, and “grill” or any other category name with the placeholder {category}. Replace all the words until you end up with a significant number of placeholders.

Figure 172 – You can speed up this process if your programmers can write a script to replace keywords with attributes automatically.
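A minimal sketch of such a replacement script follows. The attribute lists are illustrative; a real implementation would load years, makes, models, and category names from your product catalog:

```python
import re

# Illustrative attribute values; in practice these come from your catalog.
ATTRIBUTES = {
    "{year}": ["2004", "2007"],
    "{make}": ["audi", "chevy"],
    "{model}": ["a3", "silverado"],
    "{category}": ["grill", "ignition systems"],
}

def to_pattern(keyword):
    """Replace known attribute values in a keyword with placeholders,
    turning raw queries into comparable search patterns."""
    pattern = keyword.lower()
    for placeholder, values in ATTRIBUTES.items():
        for value in values:
            # Word boundaries keep "a3" from matching inside longer tokens.
            pattern = re.sub(r"\b" + re.escape(value) + r"\b", placeholder, pattern)
    return pattern
```

Running every keyword in the master spreadsheet through a function like this yields the pattern column used in the next step.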

Once you have replaced all the words with placeholders, identify the most used patterns by using pivot tables:

Figure 173 – You can identify the most used patterns using pivot tables.

For your pivot table settings use Keyword Pattern for your rows, and for Values use the following:

  • The sum of average monthly searches.
  • The sum of conversions.
  • The count of keyword pattern.

There you have it! The most common pattern in our example is {year}{make}{model}{category}. However, the pattern with the most searches is {make}{model}. The pattern with the most conversions is {make}{model}{category}.
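The pivot step above can also be sketched in plain Python with a dictionary aggregation. The field names are illustrative:

```python
from collections import defaultdict

def pivot_patterns(rows):
    """rows: dicts with 'pattern', 'searches', and 'conversions' keys.
    Returns per-pattern totals: summed searches, summed conversions, and
    the count of keywords matching each pattern."""
    totals = defaultdict(lambda: {"searches": 0, "conversions": 0, "count": 0})
    for row in rows:
        bucket = totals[row["pattern"]]
        bucket["searches"] += row["searches"]
        bucket["conversions"] += row["conversions"]
        bucket["count"] += 1
    return dict(totals)
```

Sorting the result by each metric reproduces the three rankings: most common pattern, most-searched pattern, and most-converting pattern.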

By mimicking user search patterns in your internal linking, you will increase the relevance of the linked-to pages.

Anchor text variation
Despite RankBrain becoming better at understanding keyword variations, it is still a good idea to vary the internal anchor text pointing to the same URL. For ecommerce websites, the category and subcategory pages will allow only some room for keyword variations. For example, when you link to the Vancouver Hotels page, you can use “hotels in Vancouver” or “Vancouver hotels”.

When you link to a product listing page (for example a page that lists all Rebel XTi cameras), you can add the brand name (“Canon Rebel XTi”) or the product line the product belongs to (e.g., “Canon EOS Rebel XTi”).

Figure 174 – When you link from content-rich areas such as blog posts or user guides, you can vary the anchor text more.

Contextual text links allow more anchor text variation than structural links. Structural links are often based on rules, such as using only the product names or product names plus product attributes. Therefore, structural links are not very flexible, while contextual text links are.

For product variants (e.g., model numbers or different colors), the anchor text on the item name can contain differentiating product attributes:

Figure 175 – These three SKUs are variants of the product “Canon Digital Rebel XTi 10.1MP”.

In this screenshot, the three SKUs are variants of the same product, “Canon Digital Rebel XTi 10.1MP”. The first SKU is just the camera body. The second SKU includes a lens too, and the anchor text includes that detail. Similarly, the third SKU includes a lens too, but in a different color.

Remember to link using text that makes sense for users without forcing keywords. Also, just a reminder that when you use plurals in the anchor text (e.g., “digital cameras”), consider linking to a listing page because search queries that contain plurals usually denote that users want to see a list of items.

Related items
Merchandising and marketing teams needed to cross-sell and upsell; that is why ecommerce websites started featuring sections such as Related Items. Related linking can be found under various names and implemented in various ways, such as people who purchased this also purchased…, you may also like…, people also viewed…, related products, or related searches. This concept was originally introduced to help increase the average order value by increasing the number of items users add to the cart. This tactic also helps users navigate to related products or categories.

Figure 176 – The You May Also Like section in this screenshot is commonly found on ecommerce websites and is a good example of related items sections.

SEOs realized that related items sections could also be used to:

  • Optimize internal linking by interconnecting deep pages (i.e., facets) that were otherwise impossible, or very difficult, to connect through other types of navigation such as breadcrumbs.
  • Flatten the website architecture.
  • Silo the website architecture by linking to siblings and parent categories. Keep in mind that siloing with related products requires very strict business rules.

Links from “Related links” sections can be used to boost the authority of any page(s) whenever needed:

  • You can boost the crawling, indexing, and eventually the rankings of newly added products by linking directly from the category listing page, or even from the home page.
  • If there are products that have very high value for your business, linking from the home page will send more authority to those products.
  • On a page that lists all houses for sale in a particular district, you can also link to houses in nearby neighborhoods.
  • You can boost hotel description pages by linking to recently reviewed hotels from city listing pages.
  • And so on.

If you have a lot of data to rely on, you can implement related products, categories, or searches with the help of recommendation engines. Such engines are used to optimize the shopping experience on the fly, but often they are implemented with uncrawlable JavaScript. One way of tackling related items implemented with JavaScript is to define and load a set of default related products that are accessible to search engines when they request a page. You can then replace or append more items with AJAX once the page loads in the browser, to improve discoverability for users. The idea is that you do not want to leave the rendering of the content to Googlebot.

Figure 177 – The related items section on the left side of the screenshot is accessible to search engines, as you can see in the cached version of the page, on the right side of the screenshot.

On a side note, while the content of the recommendation engine is indexed, the alt text of the images could be improved.

On the other hand, on the website below, the AJAX implementation prevents search engines from finding the recommended products:

Figure 178 – The You May Also Like section should show up in the cached version, just after the last product in the list, but it does not.

If the website above wants to flatten its architecture by internally linking from related products, it has to make sure search engines can access the links in the related products section. Use Fetch and Render in Google Search Console to verify whether search engines can render the items in the You May Also Like section. If it works there, it will work for search too.

Googlebot renders pages with a headless browser, that is, a web browser without a graphical user interface that can still render and “see” the content on JS-powered pages.

Also, keep in mind that what you see when you use the “cache:” operator is not the same as what Google renders at its end. The source of truth for Google is very close to what Fetch and Render provides in Google Search Console, while the cached version is just the raw HTML. Most likely, Google uses both the cached and rendered versions of a page, just to make sure people are not spamming.

Here are a few things to consider when implementing related or recommended items:

  • If you need to add tracking parameters to recommended item URLs, do so in the browser, at mouse down or click events. If you cannot use click events, canonicalize the tracking parameters using Google Search Console or using rel=”canonical”.
  • Keep the number of recommended items low and focus on quality (three to five products should be enough).
  • If you want to provide even more recommended items, use carousels.

The website below links to a sweater and sandals PDP because those products are related to the product detail page they are featured on.

Figure 179 – You can interlink related items even if they are in different silos if it makes sense for users (e.g., link from a skirt PDP to the sandals PDP that completes the look).

Popular searches sections are another type of related links that can be implemented on ecommerce websites. These popular searches (which do not necessarily come from your internal site search) can be an SEO powerhouse, especially for large ecommerce sites. You can automate the creation of internal links to categories, products, product listing pages, and facets at scale, which will help increase the number of internal links to those pages. You can also take into account various metrics such as conversion rates, search volumes, or rankings to distribute the internal links more efficiently; the more competitive a query included in the popular searches is, the more internal links it will require.
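One way to sketch that metric-driven distribution is a simple proportional allocation. The helper and the single "competitiveness score" input are hypothetical; in practice you would blend search volume, conversions, and rankings into whatever score fits your data:

```python
def allocate_link_slots(query_scores, total_slots):
    """query_scores: {query: competitiveness score}. Distributes a budget
    of internal-link slots roughly proportionally to each query's score,
    guaranteeing at least one slot per query."""
    total = sum(query_scores.values())
    return {query: max(1, round(total_slots * score / total))
            for query, score in query_scores.items()}
```

The more competitive queries receive more internal-link slots, matching the principle described above.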

Internal linking over-optimization

While internal links with exact match anchor text typically do not hurt,[31] do not overdo it. Let’s look at a few scenarios that can raise over-optimization flags.

Unnatural links to the homepage
It does not help much if you replace the anchor text “home” with your primary keyword.[32]

Figure 180 – This looks spammy.

If your domain or your business name is “online pharmacy”, then it may be fine to use keyword-rich anchor text to point to the home page, but otherwise, do not do it.

Too many contextual text links
A high ratio of internal anchor text links to content is not advisable. For example, if a category description content has 100 words and you place 15 anchors in it, that is too much.

Figure 181 – Contextual links are great, but that does not mean you have to abuse them, as depicted in the image above.

Contextual text links can be created either programmatically or added manually by copywriters or SEOs. In both cases, you need to define rules to avoid over-optimization.

Let’s exemplify with a set of rules for category descriptions:

  • Add links to other products from the parent category. The maximum number of products linked per 100 words is two.
  • Add links to related categories. The maximum number of related categories linked per 100 words is two.
  • The maximum number of consecutive anchor text links is two.
  • The maximum number of links with the same anchor text is one.
  • The minimum number of links per 100 words is two.

Use these rules just as guidelines and customize based on your circumstances.
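As a sketch, the density and duplicate-anchor rules above could be checked programmatically before a description is published. The default threshold and the regex-based HTML handling are simplifications for illustration; a real implementation would use a proper HTML parser:

```javascript
// Flag over-optimized category descriptions against the guideline
// rules above. The threshold of four links per 100 words reflects the
// two-products-plus-two-categories guideline; adjust to your rules.
function checkAnchorDensity(html, maxLinksPer100Words = 4) {
  const anchors = html.match(/<a\b[^>]*>.*?<\/a>/gi) || [];
  const text = html.replace(/<[^>]+>/g, ' ');
  const words = (text.match(/\S+/g) || []).length;
  const linksPer100 = words ? (anchors.length / words) * 100 : 0;

  // Rule: the same anchor text should appear at most once.
  const texts = anchors.map(a =>
    a.replace(/<[^>]+>/g, '').trim().toLowerCase()
  );
  const duplicateAnchors = [
    ...new Set(texts.filter((t, i) => texts.indexOf(t) !== i)),
  ];

  return {
    links: anchors.length,
    words,
    tooDense: linksPer100 > maxLinksPer100Words,
    duplicateAnchors,
  };
}
```

A check like this can run in a CMS publishing workflow or as part of a content audit script, surfacing descriptions that copywriters or automated link insertion have over-stuffed.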

The following is an example of reasonably safe internal linking:

Figure 182 – The text in this paragraph flows naturally, and the anchors seem natural as well.

Keyword-stuffed navigation and filtering
Some ecommerce websites try to enhance rankings for head terms like category or subcategory names by stuffing keyword-rich anchor text links into the primary navigation, similar to what you see in this screenshot:

Figure 183 – Did you notice how each subcategory link contains the upper category name?

There is no need to repeat keywords over and over in the main navigation. If your website architecture is properly built, search engines will understand that if the category name is Watches, all the links and products found under it belong to the Watches category.

The same applies to other forms of navigation, such as faceted navigation.

Figure 184 – These links look spammy too.

You can use properly nested list items to help search engines understand categorization so that you do not need to repeat the category name in every filter value in the left navigation.
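One way to express this is to generate the facet markup as a nested list, stating the category name once in the parent item rather than in every anchor. A minimal sketch, with an assumed data shape for the filter values:

```javascript
// Render facet values as a nested list under a single category
// heading, so "Watches" appears once instead of being repeated in
// every filter anchor ("Leather Band", not "Leather Band Watches").
function renderFacet(category, values) {
  const items = values
    .map(v => `      <li><a href="${v.url}">${v.label}</a></li>`)
    .join('\n');
  return [
    '<ul>',
    `  <li>${category}`,
    '    <ul>',
    items,
    '    </ul>',
    '  </li>',
    '</ul>',
  ].join('\n');
}
```

The nesting itself communicates the parent–child relationship, so each filter label can stay short and natural.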

Because PageRank is a renewable metric, having external links to category and subcategory pages not only provides ranking authority to the target pages but also increases the amount of PageRank that flows throughout the entire website. Moreover, because it is not economically feasible to build links to individual product pages on ecommerce websites with large inventories, link-earning efforts should be focused on category and subcategory pages. Keep in mind that link development is complex and outside the scope of this guide.

Focusing your link building efforts on just a few top-performing category pages is a good idea for new websites or websites with limited marketing budgets, but generally, you need to diversify your targets. Once you have built enough links to a category page, that page becomes a hub: it will pass link equity to pages downwards and upwards in the website hierarchy. The more hubs you build, the more natural your website will look, and the more PageRank will flow throughout it.

You can identify existing link hubs using Google Search Console and use them to your advantage. Anytime you want to boost a new page, you can tap into the power of these hubs. For example, suppose you have identified that the Women’s Apparel subcategory is a hub. If you want to boost the Women’s Sleepwear category, link to it from the hub page contextually, from the main content.
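A minimal sketch of hub identification: after exporting Google Search Console’s externally linked pages report, filter and sort by the number of externally linking sites. The data shape and the threshold are assumptions for illustration:

```javascript
// Pick hub candidates from an exported list of pages and their
// external linking-site counts. Pages above the threshold are the
// hubs you can link new target pages from.
function findHubs(pages, minLinkingSites = 10) {
  return pages
    .filter(p => p.linkingSites >= minLinkingSites)
    .sort((a, b) => b.linkingSites - a.linkingSites)
    .map(p => p.url);
}
```

Re-running a script like this periodically keeps your hub list current as new external links are earned.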

  1. Browser-specific optimizations and cloaking
  2. GET, POST, and safely surfacing more of the web
  3. Google Analytics event tracking (pageTracker._trackEvent) causing 404 crawl errors
  4. Call of Duty: Ghosts – Xbox 360
  5. Free SEO Toolkit
  6. One More Great Way to Use Fusion Tables for SEO
  7. Visualize your Site’s Link Graph with NodeXL
  8. How To Visualize Open Site Explorer Data In Gephi
  9. rel=”nofollow”, Microformats Wiki
  10. Interview with Google’s Matt Cutts at Pubcon
  11. Use rel=”nofollow” for specific links
  12. PageRank sculpting
  13. Should internal links use rel=”nofollow”?
  14. Damping factor
  15. Are links in footers treated differently than paragraph links?
  16. Is Navigation Useful?
  17. Ten recent algorithm changes
  18. Link Value Factors
  19. Image Links Vs. Text Links, Questions About PR & Anchor Text Value
  20. Are links in footers treated differently than paragraph links?
  21. VIPS: a Vision-based Page Segmentation Algorithm
  22. Block-Level Link Analysis
  23. Document ranking based on semantic distance between terms in a document, US Patent 7,716,216
  24. Google’s Reasonable Surfer: How The Value Of A Link May Differ Based Upon Link And Document Features And User Data
  25. Results of Google Experimentation – Only the First Anchor Text Counts
  26. 3 Ways to Avoid the First Link Counts Rule
  27. When Product Image Links Steal Thunder From Product Name Text Links
  28. Do multiple links from one page to another page count?
  29. Agave Denim
  30. [Study] How the Web Uses Anchor Text in Internal Linking
  31. Will multiple internal links with the same anchor text hurt a site’s ranking?
  32. Testing the Value of Anchor Text Optimized Internal Links