Ecommerce SEO


Category & Product Listing Pages (PLPs)

Length: 21,316 words

Estimated reading time: 2 hours, 25 minutes

This ecommerce SEO guide has almost 400 pages of advanced, actionable insights into on-page SEO for ecommerce. This is the seventh of eight chapters.

Written by an ecommerce SEO consultant with over 25 years of research and practical experience, this comprehensive SEO resource will teach you how to identify and address all of the SEO issues specific to ecommerce websites, in one place.

The strategies and tactics described in this guide have been successfully implemented for top-10 online retailers, small and medium businesses, and mom-and-pop stores.

Please share and link to this guide if you liked it.

Category & Product Listing Pages

Those involved in ecommerce in one way or another refer to product detail pages (also known as PDPs) as the “money pages”. This seems to imply that many view PDPs as the most important pages for ecommerce. Because of this mindset, the PDPs often get the most attention, at the expense of listing pages such as product or category listing pages.

However, listing pages are in fact the hubs for ecommerce websites, and can collect and pass the most equity to lower and upper levels in the website hierarchy. Also, link development for ecommerce usually focuses on category and subcategory pages, so listing pages deserve more attention.

Listing pages display content in a grid or list. When these pages list products, they are referred to as product listing pages or PLPs. When the pages list categories, subcategories, guides, cities, services, etc., they are referred to as category landing pages or simply, category pages; sometimes they are also called intermediary category pages.

Two types of listings

Listing pages usually display one of two types of items:

  • Products – this listing displays items belonging to the currently viewed category.
  • Subcategories – this listing displays subcategories under the currently viewed category or department.

Product listings

Product lists (or grids) display thumbnail images for all the items categorized in a certain category or subcategory. This means that all the items listed there share a common parent in the hierarchy.

Figure 261 – This screenshot shows a traditional product grid. All the items displayed in the main content area belong to the Guitars category.

The product list approach has the advantage of sending more authority directly to the products in the list, especially to those on the first page of the list. However, these listings can present too many options to users, who may have to sift through hundreds or thousands of products, as depicted in the image below:

Figure 262 – 2,839 clothing items in a single category will require pagination.

In many cases, showing the entire list of products belonging to a top-level category will not make sense to users. They need guidance in choosing a product, and listing thousands of items is too much and too generic.

Let’s talk about several recommendations for optimizing product listings.

Deploy an SEO-friendly Quick View functionality
Use this feature to provide more content and context for users and search engines. This functionality is usually implemented with modal windows to quickly provide product summary information without visiting the actual product detail page:

Figure 263 – A click on the QUICK LOOK button brings up the modal window to the right. This functionality can lead to a better shopping experience.

To make this functionality work to your advantage, implement it with SEO-friendly JavaScript, whenever possible. For example, you can deliver more crawlable content by loading the static product description in the source code but displaying it in the browser only when Quick Look is clicked. Dynamic information such as product availability, available colors, or pricing can be loaded on-demand with AJAX.
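One possible implementation is sketched below, with hypothetical class names, IDs, and endpoint paths: the static product description ships in the initial HTML so it is crawlable, CSS keeps it hidden until the button is clicked, and only volatile data is fetched on demand.

```html
<li class="item">
  <a href="/guitars/stratocaster-sunburst">Stratocaster, Sunburst</a>
  <button type="button" data-quick-view="sku-1234">Quick Look</button>

  <!-- The static description is present in the source code on page load,
       so search engines can crawl it; it stays hidden until requested. -->
  <div class="quick-view" id="qv-sku-1234" hidden>
    <p>A classic double-cutaway electric guitar with three single-coil
       pickups and a comfortable contoured body.</p>
    <!-- Volatile data (price, stock, colors) is fetched with AJAX only
         when the modal opens; the endpoint name is an assumption. -->
    <div class="qv-live" data-endpoint="/api/quick-view/sku-1234"></div>
  </div>
</li>
```

The key design choice is the split: stable, keyword-relevant copy in the static HTML; fast-changing data behind AJAX.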

Just as with any other method that displays content to users only at certain browser events, it is wise not to abuse the Quick Look implementation. This means that the content should be super-relevant and brief. Fifty to 150 words for the product description is probably more than enough.

Also, the internal linking should not go overboard; two to five links in the short product description are enough.

Additionally, you may want to consider the number of items you load in the default view, which is the view that gets cached by search engines. If you load 20 products each with 100-word descriptions, that is 2,000 words of content on that page. If you load 50 products, that is 5,000 words, which may be too much.

Create and improve internal algorithms to optimally display items in the list
SEO is about increasing profits from organic traffic by optimizing for users coming from search engines. If a user lands on a category page and the first items in the list or in the grid do not generate profits for you, then you are missing opportunities.

You need to design and use an algorithm that assigns a product rank to every item, and you need to organize the products based on this metric. The algorithm does not have to be complex. It can consider a few different metrics, for example, percentage margin, sales statistics, stock availability, proximity to user location, and even hand-picked items.
The idea is to put the profit-maximizing items first on the list.
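As a sketch, the product rank could be a weighted blend of a few normalized signals. The metric names, weights, and object shape below are illustrative assumptions, not a prescribed formula:

```javascript
// Compute an illustrative "product rank" from a few business signals.
// Inputs are assumed normalized: marginPercent 0-100, salesScore 0-1.
function productRank(p, w = { margin: 0.4, sales: 0.4, stock: 0.2 }) {
  return w.margin * (p.marginPercent / 100) +
         w.sales * p.salesScore +
         w.stock * (p.inStock ? 1 : 0);
}

// Order a listing so the most profitable items come first.
function sortListing(products) {
  return [...products].sort((a, b) => productRank(b) - productRank(a));
}
```

The weights can then become the knobs you experiment with, as suggested below.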

Most sites have the best-selling or most popular items as the default view in the product list, which is good for usability because most customers will be looking for bestsellers.[1] However, that does not mean that you should not try to optimize profits by experimenting with your rankings algorithm that displays at the top the products most important to you.

Add category-specific content
Adding content to PLPs can increase the chances of showing up higher in search result pages. This applies to categories at all levels of the hierarchy.

You are probably familiar with the “SEO content” for category descriptions; many ecommerce websites have it nowadays, usually at the bottom of the page. Take a look at the screenshot on the next page:

Figure 264 – The “SEO content” is displayed after the product grid or list to allow items to be displayed above the fold. The SEO influence of this content can be improved by adding links to several internal pages.

Do you wonder if this tactic works for Newegg?

Figure 265 – They rank #2 for “LCD monitors”, above Best Buy.

Of course, other factors helped this page rank second, but that category description will have some influence as well. Remember, SEO is about making small and incremental changes.

Some websites prefer to place this type of content above the listing, but this approach does not allow room for much copy, and it will push the listing down the page, as you can see in this screenshot:

Figure 266 – The category “SEO text” at the top of the listing pushes the products down the page.

In the above example, the category description is not long at all, but the marketing banner pushes the product grid even further down.

There is no doubt that more text content can help with SEO. However, adding too much content above the product list can push the products below the fold, which can, in turn, confuse users, and negatively affect conversion rates. On the other hand, displaying the content below the product grid is not as helpful and effective as having the content at the top of the page.

There are a few techniques to address this issue, for example, collapse/expand more content at click, or using JavaScript carousels. I find SEO-friendly tabbed navigation to be one of the most elegant solutions to fit a lot of content at the top of the page. This approach is good for both users and bots, and it can be done within a limited amount of space without being spammy.
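A sketch of SEO-friendly tabs, assuming hypothetical class names: every tab panel is present in the HTML on page load, and JavaScript only toggles visibility, so the content is available to bots regardless of which tab is active.

```html
<div class="category-tabs">
  <nav>
    <a href="#shop-by-category" class="tab active">Shop by Category</a>
    <a href="#expert-advice" class="tab">Expert Advice &amp; Activities</a>
  </nav>

  <!-- Both panels exist in the source; only one is visible at a time. -->
  <section id="shop-by-category" class="tab-panel">
    <!-- subcategory grid goes here -->
  </section>
  <section id="expert-advice" class="tab-panel" hidden>
    <h2>How to Choose a Backpack</h2>
    <p>Editorial content targeting users at earlier buying stages.</p>
  </section>
</div>
```

Because the panels are anchor-linked plain HTML rather than lazy-loaded fragments, the tabbed content is crawled along with the rest of the page.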

A quick note on content behind tabs and expandable sections: before mobile-first indexing, such content was considered less important and given less authority. However, this changed when mobile-first indexing went live.

Let’s compare “before and after” tabbed navigation screenshots. The screenshot below shows how a category page looked on REI. It displayed some content at the top of the list, but it did not use tabbed navigation. Notice how the content at the top pushes the listing down the page.

Figure 267 – This older implementation does not use tabs.

And this is how the tabbed navigation version looks:

Figure 268 – This new design uses tabs to display more above the fold.

Shop by Category is the default tab, which is great for users because it lists subcategories. The last tab, Expert Advice & Activities, holds a whole lot of SEO value:

Figure 269 – The content above is great for users and search engines.

The content in the previous image is well-written copy that focuses on users and conversions rather than SEO, and it is also great “food” for search engines. This type of content targets visitors at various buying stages and will move them further into the conversion funnel, which is awesome. It will also increase the category’s chances of ranking better in the SERPs.

A quick note here: REI could easily add one or two contextual text links to thematically related subcategories or products to push some SEO equity to them.

The lesson here is that whatever content is placed in the tabbed navigation, it should be useful for users and should not be just some boilerplate text.

I mentioned it before and will say it again, ecommerce websites have to become content publishers if they want to succeed in the long run. This is not an SEO strategy but rather a healthy marketing approach. The content you place on each page should match the user intent targeted with that page. If the query you target with a page is generic, i.e., targeting category names, try to satisfy multiple intents on that page. I call this multi-intent content.

In addition to the great content wrapped in this tab, REI added even more content at the bottom of the subcategory grid, outside the tabbed navigation:

Figure 270 – Adding more content at the bottom of the page is intended to increase the relevance of this subcategory page.

One great implementation of SEO content at the bottom of the listing grid is on The Home Depot’s website. They placed buying guides, project guides, and category-related community content there, which is great for users, and search engines will value it as well. Just a side note: it would be interesting to test the effects on conversions if this type of content were moved up in the layout, just above the product grid.

Creating the kind of content deployed by Home Depot is a win-win tactic because:

  • Users will get helpful content to assist with their needs and questions, which leads to better conversion rates.
  • Search engines will love such content, which leads to more organic traffic.

Figure 271 – A very useful section is listed at the end of the product listing.

Another option for adding more content to category pages is to present a link or a button to more content, just above the listing. You can see it exemplified in this screenshot:

Figure 272 – When users click on the View Guide button they are taken to a new page. The guide on this new page is long and good, but it does not add any value to the listing page itself.

Instead of opening the guide in a new page, a better SEO option is to open a modal window that contains an excerpt from the guide. Preload the text excerpt in the HTML code so that it is accessible to search engines. This modal window will contain a link to the HTML guide, so users can click on it if they need to read the entire guide.

Creating content is time and resource consuming, so you need to identify the top-performing or best-margin categories to start with, then gradually proceed to others.

Capitalize on user-generated content (UGC)
User-generated content is a highly valuable SEO asset, so let’s take a look at two types of UGC that you can implement on listing pages: product reviews and forum posts.

Product reviews
Adding relevant product reviews will influence conversion rates and search engine rankings:

Figure 273 – In this screenshot, you can see how the product reviews section is displayed at the bottom of the product listing page. The reviews in this section should, ideally, match some of the products in the listing.

If the listing is paginated, the reviews should be listed on the index page and should not be repeated on paginated pages. If you have enough reviews to populate pages 2-N of the series, you might be tempted to do so, but this is not a good idea.

In such cases, you may want to consider increasing the number of reviews you list on the index page. Instead of listing three reviews, increase to five or ten.
When you do so, you need to create rules to avoid duplicate content issues between listing and product detail pages. Such rules can be:

  • Do not display more than two reviews for the same product on the same page.
  • Display only five reviews on the same listing page.
  • On the listing page, do not display the same reviews you displayed on the product page.
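The rules above can be sketched as a small selection function; the review object shape and option names are assumptions for illustration.

```javascript
// Select reviews for a listing page under the rules above:
// at most `perProduct` reviews per product, at most `perPage` in total,
// and skip reviews already shown on the product detail page.
function pickListingReviews(reviews, { perProduct = 2, perPage = 5 } = {}) {
  const countByProduct = new Map();
  const picked = [];
  for (const r of reviews) {
    if (picked.length >= perPage) break;
    if (r.shownOnProductPage) continue; // avoid PDP/PLP duplication
    const n = countByProduct.get(r.productId) || 0;
    if (n >= perProduct) continue;      // cap per-product repeats
    countByProduct.set(r.productId, n + 1);
    picked.push(r);
  }
  return picked;
}
```

Running such rules server-side keeps the rendered listing page free of duplicate review text.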

Forum posts
Community content such as forum posts can be handy not only in the forum section of the website (of course, if you have one) but on category pages as well:

Figure 274 – In addition to product reviews, relevant forum posts are listed below the category grid.

Optimize for better SERP snippets
Product listing pages can get rich snippets in Google search result pages:

Figure 275 – SERP snippet enriched with list item count. Sometimes, Google displays only the number of items in the listing; other times, it displays a few item names as well.

While many ecommerce websites are interested in knowing how to get these rich snippets, Google’s official recommendations do not go into much detail:[2]

“If a search result consists mostly of a structured list, like a table or series of bullets, we will show a list of three relevant rows or items underneath the result in a bulleted format. The snippet will also show an approximate count of the total number of rows or items on the page” (for example, “40+ items”, as in the screenshot above).

Figure 276 – Clean HTML code can help with getting the item count in the rich snippet.

Google can use your plain HTML code to generate rich snippets, which means that it does not necessarily need semantic markup. This is why it is important to keep your code clean and well structured.

Keep in mind that if your listing pages get rich snippets that include item names, then the description line in the SERP snippet will be shorter than the usual ones. Instead of two or three lines of text, the description snippet may be truncated to just one line of text. You may want to check the impact on SERP CTR in such cases.

Here are some tips on how to get rich snippets for category listings:

Validate the HTML code for your list
If you open a list item element but do not close it, or if you nest elements improperly, it will be more difficult for Google to understand the page structure.

Figure 277 – Each product in the grid is wrapped in a list item element that is properly closed. Also, notice the DIV and UL class names.

Do not break the HTML tables
The rich snippet will display the number of items on the index page (e.g., “40+ items”) if the product grid contains 40+ items in a single table, but only if the table markup has no breaks. If something in between items 10 and 11 breaks the table, Google will instead display the message “10+ items”. If you list your products in multiple tables, Google will choose to display the count from only one of them.

Use suggestive HTML class names
It is reported[3] that using the class name “item” in the item’s DIV helps with getting rich snippets for category pages:

“Just to confirm, wrapped a few items in <div class=items> and the snippet has been updated. Took four days to appear in the SERPs”.

This advice seems to be working, at least to some extent, as you can see in the following screenshot:

Figure 278 – Notice the LI class name.

The DIV that wraps the product grid contains the word “products”, and this seems to be common among websites that get rich snippets without using semantic markup. Also, the list item class name contains the word “item”.
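Putting these observations together, a clean, properly closed list with suggestive class names might look like the fragment below. The class names are assumptions modeled on the reported examples, not a guaranteed recipe:

```html
<div class="products">
  <ul class="products-grid">
    <li class="item">
      <a href="/running-shoes/trail-runner-x">Trail Runner X</a>
      <span class="price">$89.99</span>
    </li>
    <li class="item">
      <a href="/running-shoes/road-racer-2">Road Racer 2</a>
      <span class="price">$119.99</span>
    </li>
    <!-- Every <li class="item"> is properly closed, with no stray
         markup between items that could break the list for parsers. -->
  </ul>
</div>
```

One consistent, unbroken list per page gives Google the best chance of counting the items correctly.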

Figure 279 – The rich snippet for the Running Shoes category includes the number of items per page and the total number of items in this category.

A large total of items in the list may attract more clicks because a large selection is one of the things consumers look at when choosing where to click. This brings us to another optimization idea.

Reconsider the number of items in the listing
If the number of products within the currently viewed category is reasonably low and easily skimmable (e.g., 50 items in a grid of five rows by ten columns), then load them all on one page. Depending on how many other links you have on the page and your overall domain authority, you can sometimes pump up this number to 100 or even more.

If you think that displaying a low number of items is better from a user experience perspective, you can load 50, 100, or 150 items in the source code in an SEO-friendly way, and use AJAX to display only 10, 15, or 20 items in the browser to avoid information overload. You can then use AJAX to update the page content based on users’ requests, such as scrolling down, sorting, or displaying all items.
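A minimal sketch of this “load many, show few” pattern, with illustrative class names: all items are server-rendered and crawlable, a class hides items beyond the first batch, and a small script reveals the rest on demand.

```html
<!-- All 50 items exist in the source HTML; only the first 20 are shown. -->
<ul class="products">
  <li class="item">Product 1</li>
  <!-- items 2 through 20, visible -->
  <li class="item is-hidden">Product 21</li>
  <!-- items 22 through 50, each carrying class="item is-hidden" -->
</ul>
<button type="button" id="show-more">Display all</button>
<script>
  // Reveal the hidden items without fetching anything new.
  document.getElementById('show-more').addEventListener('click', () => {
    document.querySelectorAll('.products .is-hidden')
      .forEach(li => li.classList.remove('is-hidden'));
  });
</script>
```

Because the hidden items are ordinary HTML rather than content fetched after a click, search engines see the full list on the initial crawl.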

If you have thousands of products under the same category, consider breaking them into more manageable subcategories. You can list the subcategories instead of products after segmenting into smaller chunks.

Tag product reviews with structured markup
This is a debatable tactic, so you want to pay attention to how you implement it. Search engines do not support product review markup on product listing pages and may consider such markup spam; be careful.

Figure 280 – The Review markup can only be safely used on PDPs. PLPs should not contain this markup.

Make sure you do not use the AggregateOffer entity in your markup because this will raise spam concerns. The safest entity to use in PLPs is the Offer.
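As a hypothetical sketch of the safer approach, a single product tile on a PLP could carry a plain Offer, with no AggregateOffer and no Review markup, per the caution above. The product name, price, and values are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Running Shoe",
  "offers": {
    "@type": "Offer",
    "price": "79.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```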

To learn more about product markup, read this article.[4]

Display category-related searches below the search field
Related searches sections have traditionally been used to link internally to other pages and to flatten the website architecture. Here’s a classic example:

Figure 281 – The Related Searches section helps with linking internally to other pages.

Related searches are there to help users with discoverability and findability, by providing highly related links to other pages on a website. Given this, why not place them closer to where users will perform a search, such as a search field? You can see this in action on Zappos’s website:

Figure 282 – The Search by links are placed in a prominent place, to push authority to the linked pages and to help users.

However, on Zappos, the links are the same on every page, and they do not make sense on the Bags section of the website:

Figure 283 – Zappos displays search options as plain HTML links right below the search field.

Size, Narrow Shoes and Wide Shoes are not useful refinements for someone looking for bags, right? Instead, you can dynamically change these links to something related to bags, maybe by linking to a page that filters bags by Style or another attribute that suits the Bags category.

If you do not want to use too much space to list 10 or more related popular searches, you can implement a modal window that opens when users click “Popular Searches”. Make sure its content is available to search engines on page load. You can list as many popular related keywords for each category as you like.

Figure 284 – The image above depicts a possible implementation of popular searches using a modal window.

As mentioned in the Home Pages section, you can use one of the following sources to identify searches helpful to users:

  • Find the top searches performed on each category page.
  • Identify the products or subcategories most visited after viewing the category page.
  • Get the top referring keywords. Remember, Google and other commercial search engines hide search queries behind the “not provided” label.

Defer loading the product thumbnail images
When you load dozens of items in a listing, chances are that many of them will be below the fold.[5] Loading all thumbnail images at once is neither necessary nor recommended. Lazy-load images only when the user scrolls down to view more products.
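One widely supported way to defer off-screen thumbnails is the browser-native `loading="lazy"` attribute; a minimal sketch with placeholder paths and dimensions follows. Explicit width and height also prevent layout shift as images load.

```html
<img src="/images/products/sku-1234-thumb.jpg"
     width="240" height="240"
     loading="lazy"
     alt="Fender Stratocaster electric guitar, sunburst finish">
```

For older browsers, the same effect can be approximated with a script that swaps in the real image source as the user scrolls.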

While image deferring does not have much to do with rankings, it will improve user experience by decreasing page load time.

Figure 285 – Notice how small the scroll slider is (highlighted in a red rectangle); this size conveys that the page is very long. The products in the screenshot are several thousand pixels “below the fold”.

A word of caution about the meaning of fold: the “fold” has a very clear meaning in print (i.e., the physical fold of the newspaper right in the middle), but with websites the meaning of fold is blurry. You will need to define and identify where the fold is for your website, considering the browser resolution and the devices used by most of your users.
Obviously, the fold will be different on mobile than desktop.

Remove or consolidate unnecessary links
Product lists often pose the issue of redundant links. For example, in this screenshot, there is an image link on the product thumbnail image and another link on the product name. Both are pointing to the same URL.

Figure 286 – The links on the product thumbnail image and the product name point to the same URL.

There are several ways to address this issue, and we have discussed this before. Please refer to the Link Position section in the Internal Linking section.

Another issue very similar to the thumbnail-product name redundancy occurs when you place a link on the review stars and the text link displaying the number of reviews for the same product:

Figure 287 – The image link on the stars and the text link on the number “6” point to the same URL.

In the example above, neither link provides strong relevance clues to search engines because neither uses descriptive anchor text, so you can keep just one of them. I would keep the link on the star images, because you can add more SEO context with the alt text, and because the clickable area of the star images is larger than that of the text numbers. The link on the number of reviews could then be replaced with JavaScript.

Removing unnecessary links or other page elements can de-clutter the design, provide white spacing between products, and can reduce the number of links that leak authority to the wrong pages.

Figure 288 – It is unnecessary to display the Special Offers link for each product. Instead, use tooltips or display small icons or stickers to highlight such offers.

Another element frequently listed in product lists is the “add to cart” button. I am not saying that you should remove it without proper analysis, but you can always A/B test to see how it influences conversion rate.

I suggest tracking “add to cart” events and analyzing whether users add to cart directly from product listings. If they do, go a step further and identify what type of users do that (e.g., returning customers, first-timers, etc.). In many cases, those who add directly from product listings are returning customers who are very familiar with your brand, your products, and your website; usually, they know exactly what they want from you. If you decide to remove “add to cart” buttons, these users will know that they can still add products to their cart from product detail pages.

The usefulness of the “add to cart” buttons on product listings must be tested: try replacing them with other CTAs, adding more product detail, or removing the buttons altogether.

Make the listing view the default view for bots (and searchers, if it makes sense)
Usually, the list view allows more room for product-related content, which is useful for users and search engines.

Figure 289 – This is the grid view. There isn’t much room to feature product info in a grid (name and price only).

Figure 290 – In a list view, there is room for more product info to be displayed.

In the example above, the list view is the default view for users and search engines, but users have the option to switch to grid view in the interface.
At the beginning of this section, I mentioned that there are two types of listings. So far, we have discussed product listings. Now, let’s talk about the second type:

Category listings

Listing categories means that instead of displaying products, you show the subcategories a category contains, each displayed with a representative thumbnail image. Category listings are implemented at the first two or three levels of a site’s category hierarchy, depending on the size of the product catalog. Because the number of subcategories to display is low, category listing pages use a grid view rather than a list view most of the time.

Let’s look at how HomeDepot implemented the subcategories grid in a user and search engine-friendly manner.

Figure 291 – This is the first level of the hierarchy.

The first level in the hierarchy (Appliances), lists several subcategory thumbnails (Refrigerators, Ranges, Washers, etc.), as well as sub subcategory links (e.g., under Refrigerators they list links to French Door Refrigerators, Top Freezer Refrigerators, Side By Side Refrigerators, etc.)
When you click on Refrigerators, a category listing loads. This time, the listing displays some of the most important sub-subcategories of the Refrigerators subcategory.

Figure 292 – This is the third level of the hierarchy.

The third level in the hierarchy (Appliances > Refrigeration > Refrigerators) still lists categories instead of products. This encourages users to take a more deliberate selection path before the page displays tens or hundreds of products.

Implementing subcategory listings in the first two levels of the ecommerce hierarchy has the advantage of sending more PageRank to subcategory pages. That is better than sending more PageRank to just a few products because your link development efforts should point to category and subcategory pages. It is not economically feasible to target product pages with link building unless you have either a large budget or only a few products in the catalog. Developing external backlinks builds equity for category and subcategory pages, which further flows to PDPs.

Implementing the first two levels of the ecommerce hierarchy as subcategory listings also makes for a better user experience. Usability tests have shown[6] that users can be encouraged to navigate deeper into the hierarchy and make better scope selections.

The choice between product and subcategory listing depends on the particularities of each website. Usually, subcategory listings are a better choice, especially for websites with large inventories. Deciding which subcategories to feature at which level of the hierarchy should be based on business rules (e.g., top five subcategories with highest margins, or top-five bestsellers).

Here are several recommendations on how to build better subcategory listing pages:

  • To send SEO authority directly to products, add a list of featured/top items at the bottom of the listing, as shown in this example:

Figure 293 – Keep in mind not to list too many products; five to ten items should be enough.

  • Keep the left sidebar navigation available to users because that is the spot we have been trained to look to for secondary navigation; this navigation pattern influences conversions[7]. Also, it is easier to scan and choose from secondary navigation links.
  • The secondary navigation will not contain filters until a user reaches the point where you list products instead of subcategories.
  • Display professional subcategory thumbnails, as exemplified here.

Figure 294 – High-quality imagery reassures users that they are dealing with a serious business.

  • Add a brief description of the category whenever possible, and eventually link to buying guides or interactive product-finder tools that may help users decide which product is right for them. This is especially important if your target market is not familiar with the items you sell, or if you sell high-ticket items.

Figure 295 – A brief description of each category may help first-time buyers understand your terminology and can provide more context for search engines.

Figure 296 – Providing guides and educational content helps increase conversions.

In the example above, the original design did not include buttons like “Find the right fridge”, “Find the right washer”, or “Find the right dryer”. However, those links can be of great value to users and might help with SEO as well. If searchers click such buttons after landing from a generic search query (i.e., “best appliances”), the clicks will help lower bounce rates and will lead to longer dwell times.

  • Use an SEO-friendly Quick View functionality to add more details about each category.

Just as this functionality works on product listings, a similar approach can be implemented for category listings.

Figure 297 – In this screenshot, I added the More Info button to the original design for illustration purposes.

Clicking on More Info will open a modal window. In this window, you can include details such as a brief explanation of the category, what users can expect to find under this category, links to more subsequent categories in the hierarchy, and even FAQs.


Breadcrumbs

Breadcrumbs are navigational elements, usually displayed between the header and the main content:

Figure 298 – Breadcrumbs provide a sense of “location” for users.

For example, a breadcrumb on a website selling home improvement products might read Home > Appliances > Cooking.

In a breadcrumb structure, Home, Appliances, and Cooking are called elements, and the > sign is called the separator.

Breadcrumbs are frequently neglected as an SEO factor, but here are a few good reasons for you to pay more attention to them:

  • Breadcrumb links are very important navigational elements that communicate the location of a page in the website hierarchy to users and help them easily navigate around the website.[8],[9]
  • Breadcrumbs are one of the best ways to create silos, by allowing search engine bots to crawl vertically upwards in the taxonomy.
  • Breadcrumb navigation makes it easier for search engines to analyze and understand your site architecture.
  • Breadcrumbs are one of the safest places to use exact anchor text keywords.

Despite their great usability and SEO benefits, many ecommerce websites fail to implement breadcrumbs correctly for users and search engines.

Figure 299 – Can you guess what the above page is about?

Take a quick look at this screenshot. Which page do you think this is? Is it the Edition page? Maybe the Gifts page? Or the Designer Sale? Or is it the Shop by Designer page? None of these. It is the Shoes & Handbags category listing page. Did you find the label yet? It is the drop-down in the left navigation.

Using a breadcrumb on this website would make it easier for users to understand where they are in the hierarchy.

If usability alone has not yet convinced you to pay more attention to breadcrumbs, then maybe it is time to remind you that properly implemented breadcrumbs show directly in Bing[10] and Google search results:[11]

Figure 300 – Breadcrumb-rich SERP snippets.

In this screenshot, you can see how the BestBuy, NewEgg, and Dell websites have a breadcrumb structure in the results snippet, but Walmart does not. Perhaps their breadcrumb HTML is not properly marked up.

On the subject of featuring breadcrumb-rich snippets in SERPs, a Google patent[12] discusses the taxonomy of the website, internal linking, primary and secondary navigation, and structured URLs among other things they consider when deciding to display breadcrumbs in SERPs. To increase the chances of breadcrumbs showing up in search engine result pages, implement them consistently across the website, and follow Google’s official guidelines by using the Breadcrumbs structured markup with microdata or RDFa.[13]
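To make the guideline concrete, here is a minimal BreadcrumbList sketch for the Home > Appliances > Cooking example, written in JSON-LD (which Google also accepts alongside microdata and RDFa). The domain and URLs are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home",
      "item": "https://www.example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Appliances",
      "item": "https://www.example.com/appliances" },
    { "@type": "ListItem", "position": 3, "name": "Cooking" }
  ]
}
</script>
```

Note that the last element names the current page and carries no URL, matching the advice later in this section not to link the active page.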

In the past, breadcrumb-rich search result listings allowed users to click not only on the underlined blue SERP title but also on the individual breadcrumb links in the listing. However, Google later stopped hyperlinking the breadcrumbs in these snippets. I believe people clicked on the intermediary category links and landed on pages that did not match their intent, so Google retired the feature.

If a product belongs to multiple categories, it is OK to list multiple breadcrumbs on the same page,[14] as long as the product is not categorized in too many different categories. However, the first breadcrumb on the page has to be the canonical path to that product, because Google picks the first breadcrumb it finds on the page.

Depth-triggered breadcrumbs

Some websites implement breadcrumbs only at a certain depth in the website hierarchy, but that is not optimal for users and search engines.

Figure 301 – When users are on the top category page for Furniture, there are no breadcrumbs.

Figure 302 – When users navigate to a Furniture subcategory (i.e., Bedroom Furniture), the breadcrumbs start being displayed. All subcategories under Bedroom Furniture will have breadcrumbs.

Depth-triggered breadcrumbs may work fine for users who start navigating from the home page, but nowadays every page on your website could serve as an entry point for users and search engines. Therefore, it is important to feature breadcrumbs from the first level of the website taxonomy. Additionally, featuring breadcrumbs only on some pages and not on others may confuse users.

Figure 303 – Many times, the category name is displayed in the breadcrumbs as well.

It is OK to repeat the category name in the breadcrumb and the heading. However, the last element in the breadcrumb should not be linked. This is because that element will contain a self-referencing link to the active page, which is very confusing to users.

Depending on how they are implemented, there are three types of breadcrumbs:[15] path-based, location-based, and attribute-based.

Path-based breadcrumbs

This type of breadcrumbs shows the path users have taken within the website to get to the current page. It acts as the “this is how you got here” clue for users. The breadcrumbs will dynamically update to reflect the user’s historical navigation path. Page view history is achieved with either URL tagging or session-based cookies.

It is not a good idea to implement this type of breadcrumb anywhere except internal site search result pages. Users landing from search engines can reach deep sections inside a website without ever needing to navigate through the website. In this case, a path-based breadcrumb becomes meaningless for users. The same applies to search engine bots, which can reach deep pages on your website from external referral sources.

Location-based breadcrumbs

This is the most popular type of breadcrumb, and it indicates the position of the current page within the website hierarchy. It is the “you are here clue” for users. This type of breadcrumbs keeps users on a fixed navigation path based on the website’s taxonomy, no matter which previous pages they visited during navigation. This is the type of breadcrumb recommended by taxonomists[16] and usability experts.[17]

On top-level category pages, the breadcrumb will be just one link to the home page, while the category name will be plain text (not a hyperlink).

Figure 304 – The category name is not hyperlinked, because this is the active page. This is the correct behavior.

The first element in the breadcrumb should always be a link to your homepage, but it does not necessarily have to use the anchor text “home page” or “home”. You can use the company’s name instead or use a small house icon with your company name in the alt text.

The subsequent levels in the breadcrumbs are the category and subcategory names used in your taxonomy. Again, do not make the current page a link because it will confuse users.
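Putting these rules together, a location-based breadcrumb for a hypothetical Bedroom Furniture page might be marked up like this (note the plain-text last element):

```html
<nav class="breadcrumb">
  <a href="https://www.example.com/">Example Store</a> &gt;
  <a href="https://www.example.com/furniture/">Furniture</a> &gt;
  <span>Bedroom Furniture</span> <!-- active page: text only, no link -->
</nav>
```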

Figure 305 – There are instances when using a keyword in the anchor text link pointing to the home page may make sense (for example when your business name is The Furniture Store). But even then, use it with caution.

Attribute-based breadcrumbs

Attribute-based breadcrumbs, as the name suggests, use product attributes or filter values (such as style, color, or brand) to create navigation that is presented in a breadcrumb-like fashion. This type of breadcrumb is the “this is how you filtered our items” clue for users:

Figure 306 – This page presents the breadcrumbs as filters.

If you were to click on Bed & Bath and then on Comforter Sets, you would see the breadcrumbs listed at the top. In the example above, the elements of the breadcrumb are not links (only the “X” signs are links).

Technically speaking, these are not breadcrumbs, but rather filter values. However, this implementation mimics the traditional breadcrumbs usage, and users will expect the filters to be clickable, just as they expect the breadcrumbs to be displayed horizontally and not vertically.

I do not usually recommend replacing categories with filters for the top-level categories and the first subcategory levels. Use a filter instead of a category when it does not make sense to create a separate category for a specific product attribute. For example, having separate categories for shoe sizes does not make sense; size is a product attribute that translates into a left-navigation filter.


You need to clearly separate each element in the breadcrumb trail; you can divide the elements using separators. The most common separator between breadcrumb elements is the “greater than” sign (>). Other good options may include right-pointing double-angle quotation marks (»), slashes (/) or arrows (→). Remember to mark the separators with correct HTML entities.[18]
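For reference, the separators mentioned above map to these HTML entities (the surrounding markup is up to you):

```html
&gt;    <!-- > greater-than sign -->
&raquo; <!-- » right-pointing double-angle quotation mark -->
/       <!-- slash (no entity needed) -->
&rarr;  <!-- → rightwards arrow -->
```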


Pagination

SEO for category pages starts to get complicated when the listings need pagination. Pagination occurs on ecommerce websites because a large number of items has to be segmented across multiple paginated pages (also known as component pages). Usually, pagination occurs on product listing pages and internal site search results pages.

Figure 307 – The pagination in the example above spreads across 113 pages, which is way too much for users to handle, and it will be tricky to optimize for bots.

If pagination occurs on pages that list subcategories instead of products, it is time to restructure so that subcategories are available to users without pagination. You can achieve that by increasing the number of subcategories you list on a page, or by breaking subcategories into sub-subcategories.

Pagination is one of the oldest issues found on websites with large sets of items, and addressing it has always been like aiming at a moving target. Currently, the most recommended approach is the rel=“prev” and rel=“next” method.

However, there were a couple of tactics to address pagination even before Google introduced these relationships at the end of 2011. Such tactics included noindexing all pages except the first page or using a view-all page.

To make pagination even more intriguing, Google says that one of the options for handling pagination is to “leave as-is”,[19] suggesting that they can identify a canonical page and handle pagination well.

However, anything you can do to help search engines better understand your website and crawl it more efficiently is advantageous. The question is not whether you need to deal with pagination, but how to deal with it.

From an SEO perspective, a “simple” functionality such as pagination can cause serious issues with search engines’ ability to crawl and index your site content. Let’s take a look at several concerns regarding pagination.

Crawling issues
A listing with thousands of items will need pagination since a huge listing like that will not help either users or search engines. However, pagination can screw up a flat website architecture like nothing else.

For instance, in this example, it may take search engines about 15 hops to reach page 50 of the series. If the only way to reach the products listed on page 50 is by going through this pagination page by page, those products will have a very low chance of being discovered. Probably those pages will be crawled less frequently, which is not ideal.

Figure 308 – We are on page 7 in the series, and this page lists an additional three pagination URLs (8, 9 and 10) compared to page 1.

In our pagination example, there are gaps in the series (pages 2 and 3 are not linked), which means that bots can jump from page 1 straight to page 4. Because of these gaps in component URLs, bots can reach page 50 in about 15 hops instead of 44 (without the gaps, a bot would jump from page 1 straight to page 7, the highest page linked from the index page, and then need another 43 clicks on Next to reach page 50).

The odds that Googlebot will “hop” through paginated content to crawl the final pages decrease with each page in the series, and drop more significantly after page 5.

The graph above is from an experiment on pagination. As you can see, Google crawls component pages 6-N far less frequently.[20]

The experiment concluded that:

“The higher the page number is, the less probability that the page will be indexed… On average, the chance that the robot will crawl to the next page of search results decreases by 1.2 to 1.3% per page”.

If you have a large number of component pages, find a way to add links to intermediary pages in the series. Instead of linking to pages 1, 2, 3 and 4 and then jumping to the last page, add links to multiple intermediary pages. In our previous example, we can break the series into four chunks by linking to every 28th page in the series (112 divided by four is 28).

The fewer component pages you have in the pagination, the fewer chunks you will need. For example, if you have 10 component pages, you will list links to all of them. If you have 50, divide by two; 100, divide by four; 200, divide by eight; and so on.

So now, the pagination may look like this:

Figure 309 – Make sure that the navigation on each page in the series makes sense for users.
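In markup, the chunked pagination for a 112-page series might be sketched like this (the URL pattern is hypothetical):

```html
<nav class="pagination">
  <a href="/duvet-covers">1</a>
  <a href="/duvet-covers?page=2">2</a>
  <a href="/duvet-covers?page=3">3</a>
  <span>&hellip;</span>
  <a href="/duvet-covers?page=28">28</a>
  <a href="/duvet-covers?page=56">56</a>
  <a href="/duvet-covers?page=84">84</a>
  <a href="/duvet-covers?page=112">112</a>
</nav>
```

With links to every 28th page present on every component page, a bot never needs more than a handful of hops to reach any page in the series.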

Once you have made changes to pagination, you can assess the impact after a week or two. Additionally, you can use your server logs to compare Googlebot’s behavior before and after the updates.

Duplicate metadata
While the products listed on pages 2 to N are different, very often each component page has the same page title and meta description across the entire series. Sometimes even the copy of the SEO content is duplicated across the pagination series, which means that the index page will compete with component pages. In many cases, this duplication is due to the default CMS configuration.

Consider the following to avoid or improve upon this:

  • Create custom, unique titles and descriptions for your index pages (page 1 in each series).
  • Write boilerplate titles and descriptions for pages 2-N. For instance, you can use the title of page 1 with some boilerplate appended at the end: “{Page One Title} – Page X/Y”.
  • Do not repeat the SEO copy (if you have any) from the index page on component pages.

Adding this uniqueness to your titles, descriptions, and copy for the entire series may not have a huge impact on rankings for pages 2 to N, but doing it helps Google consolidate relevance to the index page. Additionally, the component pages will send internal quality signals, and will not compete with the index page.
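A hypothetical boilerplate for page 2 of a duvet covers listing could look like this:

```html
<!-- <head> of /duvet-covers?page=2 (hypothetical URL and copy) -->
<title>Duvet Covers &amp; Sets – Page 2/12</title>
<meta name="description" content="Shop duvet covers and sets. Page 2 of 12." />
```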

Another duplicate content issue particular to pagination can occur when you reference the first page in the series (AKA the index page) from component pages:

Figure 310 – URLs pointing to the index page (page 1 in the series) should not include pagination parameters. Instead, these links should point to the category index URL.

Ranking signals dispersion
Sometimes, because component pages in the pagination series get linked internally (or from external sites), they may end up in the SERPs. In such cases, ranking signals are dispersed to multiple destination URLs instead of to a single, consolidated page.

If we look at how PageRank flows according to the first paper on this subject (published in 1998[21]), which notes that PageRank flows equally through each link and has a decay factor of 10 to 15%, then component pages seem to be PageRank black holes, especially those not linked from the first page in the series.

Let’s see how PageRank flows on a view-all page and several paginated series. For our purposes, we will split the PageRank only between component pages. This assumes that all the other links are the same on all pages.

Figure 311 – In the pagination scenario above, the items listed on page 2 will receive about three times less PageRank than the items listed on a view-all page.

Our scenario is for a listing with 100 items on a PageRank 4 page. Due to the decaying factor, this listing page will send only 3.4 PageRank points to other pages, 4 x (1-0.15). Each of the 100 items listed on the view-all page will receive 0.034 PageRank points.

We will split the same listing into ten pages in a paginated series, listing ten items per page. We will have links to component pages 1, 2, 3, 4, 5…10.

For the first page in the series we will have the following metrics:

  • Its PageRank is 4, which is the same as the view-all page, and the amount that can flow to all other links is 3.4 PageRank points.
  • The total number of links is 16 (10 links for items plus six links for pagination URLs).
  • Each item and pagination URL receives 0.213 PageRank points.

The ten items on the first page of pagination receive about six times more PageRank than items on the view-all page.

The second page has these metrics:

  • Its PageRank is 0.213, and the amount that can flow further to all other links is 0.181.
  • The total number of links is still 16 (10 links for items plus six links for pagination URLs).
  • Each item and pagination URL receives 0.011 PageRank points.

The ten items on the second page receive three times less PageRank than the items listed on the view-all page.

Figure 312 – Page 6 in the series is not linked from the index page of the pagination exemplified above.

If a component URL is not present on the first page in the series (e.g., page 6 shows up only when users click on page 2 of the series, as in the screenshot above), then the amount of PageRank that flows to items linked from page 6 is incredibly low.

  • This page PageRank is 0.011 (from page 2), and the amount that can flow further is 0.0096.
  • The total number of links is 16 (10 links for items plus six links for pagination URLs).
  • Each item or pagination URL receives 0.0006 PageRank points.

This means that the items on such pages will receive about 56 times less PageRank than the items listed on the view-all page.

Figure 313 – This screenshot depicts how PageRank changes when you change the number of items on each component page (i.e., listing 10, 20 or 25 items per page).

If you are interested in playing with this model, you can download the sample Excel file from here.

This PageRank flow modeling suggests that:

  • The items listed on the first page in a pagination series receive significantly more PageRank than those listed on component pages.
  • The fewer links you have on the first page, the more important they are and the more PageRank they receive—no surprise here.
  • If the link to a paginated page is not listed on the series’ index page, that page receives significantly less PageRank.
  • Items listed on pages 2 to N receive less PageRank than if they were listed on a view-all page. The exception is when they receive a lot of internal or external links.

However, in practice PageRank is a metric that flows in more complex ways. For example, PageRank flows back and forth between pages, and more PageRank is passed from contextual links than from pagination links. The amount of PageRank that gets into pagination pages is impossible to compute, except for Google.

However, this oversimplified model shows that you can either pass a lot of PageRank to a few items on the first page of pagination (and significantly less to items on component pages) or pass a medium amount of PageRank to all items via a view-all page.

In both cases, if you use pagination, it is essential to put your most important products at the top of the list on the first page.

Thin content
Listing pages usually have little to no content, or at least not enough for search engines to consider them worth indexing. Except for product names and some basic item information, there is not much text on them. Because of this, Panda filtered a lot of listing pages from Google’s index.

Questionable usefulness
Do your visitors make use of pagination? Look at your analytics data to find out. Do component pages serve as entry points, either from search engines or other referrals? If not, the SEO and user benefits of a view-all might be much greater than having pagination.

Pagination may still be a necessary evil if the site architecture has already been implemented and it is too difficult to make updates, or if a large number of items cannot be divided and grouped into multiple subcategories.

If you want to minimize pagination issues, it is probably best to start with the architecture of the website. You can avoid some challenging user-experience, IT and SEO issues by doing so. You should consider the following:

Replace product listings with subcategory listings
For example, on the Men’s Clothing category page in the next screenshot, instead of listing 2,037 products you can list subcategories such as Athletic Wear, Belts & Suspenders, Casual Shirts, and so on. You will only have product listings deeper in the website hierarchy.

Figure 314 – The Men’s Clothing category page lists products, but instead it should list subcategories, as in the next image.

Figure 315 – This is just a mock-up I came up with to demonstrate the replacement of a product listing with a subcategory listing.

Listing categories instead of products will also assist users in making better scope selections.[22]

Break into smaller subcategories
If you have a category with hundreds or thousands of items, maybe it is possible to break it down into smaller segments. That, in turn, will decrease or even eliminate the number of pages in the series.

Figure 316 – The Jeans & Pants subcategory can be split into two separate subcategories.

Segmenting into multiple subcategories may completely remove the need for pagination if you list a reasonable number of items. However, do not become overly granular; you want to avoid ending up with too many subcategories.

Increase the number of items in the listing
The idea behind this approach is simple: the more products you display on a listing page, the fewer component pages you have in the series. For example, if you list 50 items using a 5×10 grid and have 295 items to list, you will need six pages in the pagination series. If you increase the number of items per page to 100, you will need only three pages to list them all.

How many items you list on each page depends on how many other links are on the page, your web server’s ability to load pages quickly, and the type of items in the list. For example, greeting card listings may be scanned more slowly than light bulbs. Generally, 100 to 150 items is a good choice.

Link to more pagination URLs
Instead of skipping pages in the pagination series, link to as many pagination URLs as possible.

Figure 317 – This kind of pagination presents big usability issues.

The pagination in the previous screenshot requires search engines and people to click the right arrow seven times to reach the last page. That is bad for users and SEO.

Instead, you should link to all the pagination URLs, so your pagination will look like this:

Figure 318 – With this approach, it is way easier for users to go to any of the component pages.

Adding links to a manageable number of pagination URLs will ensure that crawlers will get to those pages in as few hops as possible.

If you can, interlink all the component pages. For example, if the listing results in fewer than 10 component links, you can list all the links instead of just 1, 2, 3…10. If the listing generates an unmanageable number of component URLs, list as many as possible without creating a bad user experience.

The aforementioned ideas will help you minimize the impact of pagination on SEO. However, in many cases pagination is still necessary—and you will have to handle it.

The way you approach pagination is situational, which means it depends on factors such as the current implementation, the index saturation (the number of your pages indexed by search engines), the average number of products in categories or subcategories, and other factors. There is no one-size-fits-all approach.

Apart from the “do nothing” approach, there are various SEO methods for addressing pagination:

  • The “noindex, follow” method.
  • The view-all method.
  • The pagination attributes, aka the rel=“prev”, rel=“next” method.
  • The AJAX method.

An incorrect approach to pagination is to use rel=“canonical” to point all component pages in a series to the first page. Google states that:

“Setting the canonical to the first page of a parameter-less sequence is considered improper usage”.[23]

The “noindex, follow” method

This method requires adding the meta robots “noindex, follow” in the <head> of pages 2 to N of the series, while the first page will be indexable. Additionally, pages 2 to N can contain a self-referencing rel=“canonical”.

Of these methods, this is the least complicated to implement, but it effectively removes pages 2 to N from search engines’ indices. Note that this method does not transfer any indexing signals from component pages to the primary, canonical page.

Figure 319 – Pages 2 to N are noindexed with a meta robots “noindex, follow” tag.
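On pages 2 to N, the markup might look like this sketch (the canonical URL is hypothetical):

```html
<!-- <head> of /duvet-covers?page=2 -->
<meta name="robots" content="noindex, follow" />
<link rel="canonical" href="https://www.example.com/duvet-covers?page=2" />
```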

If your goal is to keep pages out of the index – maybe because a thin content filter has hit you – then this is the best approach. Also, a good application of the noindex method is on internal site search results pages, since Google and all other top search engines do not like to return “search in search”.[24]

Blocking crawler access to component pages can be done with robots.txt or within your webmaster accounts. Neither option removes pages from search engine indices; they only prevent further crawling. It is easier to manage pagination if you block component pages in one place only, either with robots.txt or with Google Search Console. When you audit crawling and indexation issues, do not forget where you blocked content.
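If you choose robots.txt, a pattern like the following would block component pages from being crawled, assuming pagination uses a page parameter (adjust the pattern to your own URL structure):

```
User-agent: *
Disallow: /*?page=
Disallow: /*&page=
```

Remember that this only stops crawling; URLs that are already indexed, or that accumulate links, can remain in the index.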

The “view-all” page

This method seems to be Google’s preferred choice for handling pagination, because

“users generally prefer the view-all option in search results”

and because

“[Google] will make more of an effort to properly detect and serve this version to searchers”.[25]

This approach seems to be backed up by testing performed by usability professionals such as Jakob Nielsen, who found that:

“the View-all option [was] helpful to some users. More important, the View-all option did not bother users who did not use it; when it wasn’t offered, however, some users complained”.[26]

The view-all method involves two steps:
1. Create a view-all page that lists all the items in a category, as in this screencap:

Figure 320 – The view-all link lists all the items.

2. Make the view-all page the canonical URL of the paginated series by adding a rel=“canonical” pointing to the view-all URL on each component page:

Figure 321 – Every component URL points to the view-all page.

The purpose of rel=“canonical” is to consolidate all link signals into the view-all page. With a view-all approach, all component pages lose their ability to rank in SERPs. However, while the view-all page can be different from the listing page index, making the view-all the index page is also possible.
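On each component page, the canonical sketch would look like this (the view-all URL is hypothetical):

```html
<!-- <head> of /duvet-covers?page=2, /duvet-covers?page=3, etc. -->
<link rel="canonical" href="https://www.example.com/duvet-covers/view-all" />
```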

The view-all method comes with advantages such as better usability, indexing signal consolidation, and relative ease of implementation.

However, there are several challenges to consider before creating a view-all page:

  • Consolidating hundreds or thousands of products on one page can dramatically increase page load times, especially on product listing pages. A load time under four seconds is considered acceptable, but you should aim for under two seconds. Use progressive loading to make this happen.
  • A view-all page means having hundreds or thousands of links on a single page because the view-all page must display all the items from the component pages. While the compensation may be the consolidation of indexing signals from component pages to the view-all page, we do not have any official words on how search engines will assess such a large number of links on the view-all page.
  • Sometimes you do not want to remove all other component pages and push the view-all to be listed in SERPs. If you would like to surface individual pages from the pagination series, you should use the rel=“next” and rel=“prev” method.
  • Implementation is a bit more complex than for the “noindex” method. However, it is not as complex as the pagination attributes.

There are situations when you want to implement the view-all page solely for user experience purposes, and you do not want search engines to list it in SERPs. In such situations make sure that the component pages in the series do not have a rel=“canonical” pointing to the view-all page, but rather to the first page of the pagination. Also, mark the view-all page with “noindex”. Additionally, you may want to make the view-all link available only to humans using AJAX or cookie-triggered content, or other methods.

If you are concerned with page load times, there are ways to deliver a barebone version of the view-all page to search engines, while presenting the fully rendered page to humans, on-demand and without increasing load times. These implementations must take into consideration progressive enhancement[27] and mobile user experience.

However, nowadays you should be looking into building progressive web apps and accelerated mobile pages (AMP) that load super-fast, rather than complicating things by delivering different resources to bots versus humans.

While de-pagination can work for websites that have a reasonably low number of items in their listings, for websites with larger inventories it may be easier to stay with user-friendly pagination that limits the number of items. From a usability standpoint,

“typically, this upper limit should be around 100 items, though it can be more or less depending on how easy it is for users to scan items and how much a long page impacts the response time”.[28]

Pagination attributes (aka rel=”prev” and rel=”next” method)

Another method for handling pagination is to use pagination attributes (also known as the rel=“prev” and rel=“next” method). Even if Google is not using this method as an indexing signal anymore, this is probably still the best approach for URL discoverability, as it seems to generate good results without completely removing the ability of component pages to rank in search results.

In the <head> section of each component page in the series, you use either rel=“prev” or rel=“next” attributes (or both), to define a “chain” of paginated components.

The prev and next relationship attributes have been HTML standards for a long time,[29] but they only got attention after Google pushed them. Rel=“prev” and rel=”next” are just hints to suggest pagination; they are not directives.

Let’s say you have product listings paginated into the following series (for illustration, assume a four-page series): /duvet-covers/ (the category index page), /duvet-covers?page=2, /duvet-covers?page=3 and /duvet-covers?page=4.

On the first page (the category index page), you would include this line in the <head> section:

<link rel=“next” href=“/duvet-covers?page=2” />

The first page contains the rel=“next” markup, but no rel=“prev”. Typically, this is the page in the series that becomes the hub and gets listed in the SERPs.

On the second page, you will include these two lines in the <head> section:

<link rel=“prev” href=“/duvet-covers/” />
<link rel=“next” href=“/duvet-covers?page=3” />

Pages 2 to second-to-last should have both rel=“next” and rel=“prev” markup.

Note that page 2 points back to the first page in the pagination as /duvet-covers/ instead of /duvet-covers?page=1. This is the correct way to reference the first page in the series, and it does not break the chain because:

/duvet-covers/ will point to /duvet-covers?page=2 as the next page, and
/duvet-covers?page=2 will point to /duvet-covers/ as the previous page.

On the third page (/duvet-covers?page=3), you will include the following markup in the <head> section:

<link rel=“prev” href=“/duvet-covers?page=2” />
<link rel=“next” href=“/duvet-covers?page=4” />

On the last page (/duvet-covers?page=4), you will include the following link attribute:

<link rel=“prev” href=“/duvet-covers?page=3” />

Notice that the last page in the series contains only the rel=“prev” markup.

The rel=”prev” rel=”next” method has a few advantages, such as:

  • Component pages retain and share equity with all other pages in the series.
  • It addresses pagination without the need to “noindex” component pages.
  • It consolidates indexing properties such as anchor text and PageRank just as with a view-all implementation. This means that in most cases the index page of the series will show up in Google’s SERPs.
  • On-page SEO factors such as page titles, meta descriptions, and URLs may be retained for individual component pages, as opposed to being consolidated into one view-all page.
  • If the listing can be sorted in multiple ways using URL parameters, then these multiple “ordered by” views are eligible to be listed in SERPs. This is not possible with a view-all approach.

It is not a good idea to mix pagination attributes with a view-all page. If you have a view-all page, point the rel=“canonical” on all component pages to the view-all page, and do not use pagination attributes. You may also add self-referencing rel=“canonical” tags on component pages to avoid duplicate content caused by session IDs and tracking parameters.

Using rel=“canonical” at the same time with rel=“prev” and rel=“next”
Pagination attributes and rel=“canonical” are independent concepts, and both can be used on the same page to prevent duplicate content issues.

For example, page 2 of a series (requested with a hypothetical session ID, e.g., /duvet-covers?page=2&sessionid=1234) could contain a rel=“canonical”, a rel=“prev” and a rel=“next”:

<link rel=“canonical” href=“/duvet-covers?page=2” />
<link rel=“prev” href=“/duvet-covers/?sessionid=1234” />
<link rel=“next” href=“/duvet-covers?page=3&sessionid=1234” />

This setup tells Google that page 2 is part of a pagination series and that the canonical version of page 2 is the URL without the sessionID parameter. The canonical URL should point to the current component page with no sorts, filters, views or other parameters, but rel=“prev” and rel=“next” should include the parameters.

Keep in mind that rel=“canonical” should be used to deal with duplicate or near-duplicate content only. Use it on:

  • URLs with session IDs.
  • URLs with internal or referral tracking parameters.
  • Sorting that changes the display but not the content (e.g., sorting that happens on a page-by-page basis).
  • Subsets of a canonical page (e.g., a view-all page).

You can also use rel=“canonical” on product variants, i.e., on near-duplicate PDPs that share the same product description, where the only difference is a single product attribute (e.g., the same shoe in different sizes). You need to understand your target market before applying the canonical, and you also need to be able to select the canonical product from the collection of SKUs.

Rel=“prev”, rel=“next” and URL parameters
Although rel=“prev” and rel=“next” seem more advantageous than the view-all method from an SEO standpoint, they come with implementation challenges.

Regarding URL parameters, the rule on paginated pages is that pagination attributes can link together only URLs with matching parameters. The only exception is when you remove the pagination parameter for the first page in the series.

To make pagination attributes work properly, you have to ensure that all pages within a paginated rel=“prev” and rel=“next” sequence are using the same parameters.

Pagination and tracking parameters

The following URLs are not considered part of the same series, since the URL for page 3 has different parameters, and that would break the chain:

In this case, you should dynamically insert the key-value pairs based on the fetched URL.

If the requested URL contains the parameter referrer=twitter, then the pagination URLs should dynamically include the referrer parameter as well:

<link rel="prev" href="">
<link rel="next" href="">

Additionally, you can use Google Search Console to tell Google that this parameter does not change the page content, and to crawl the representative URLs only (URLs without the referrer parameter).

Pagination and viewing or sorting parameters
Another frequent scenario with pagination is the sorting and viewing of listings that span across multiple pages. Because each view option generates unique URL parameters, you will have to create a pagination set for each view.

Let’s say that the following are the URLs for “sort by newest”, displaying 20 items per page:

On page 1 you will have only the rel=“next” pagination attribute pointing to URLs with sort and view parameters:
<link rel="next" href="">

On page 2 you will have rel=“prev” and rel=“next” also pointing to URLs with sort and view parameters:

<link rel="prev" href="">
<link rel="next" href="">

On page 3 you will have only a rel=“prev” attribute, also pointing to URLs with sort and view parameters:

<link rel="prev" href="">

The above markup defines one pagination series.

However, if users can also display 100 items per page, that is a new view option, and it will create a new pagination series. The new URLs will look like the ones below; the view parameter now equals 100.

On page 1 you will have only the rel=“next” pagination attribute pointing to URLs with sort and view parameters:
<link rel="next" href="">

On page 2 you will have the rel=“prev” and rel=“next” also pointing to URLs with sort and view parameters:

<link rel="prev" href="">
<link rel="next" href="">

On page 3 you will have only a rel=“prev” attribute, also pointing to URLs with sort and view parameters:

<link rel="prev" href="">

When dealing with sorting URLs, you may want to prevent search engines from indexing bi-directional sorting options, because sorting by newest is the same as sorting by oldest, only in reverse order. Keep one default way of sorting accessible (e.g., “newest”) and block the other (“oldest”).

Also, adding logic to the URL parameters not only can prevent duplicate content issues, but it also can:

“help the searcher experience by keeping a consistent parameter order based on searcher-valuable parameters listed first (as the URL may be visible in search results) and searcher-irrelevant parameters last (e.g., session ID)”.[30]

Make sure that parameters that do not change page content (such as session IDs) are implemented as standard key-value pairs, not as directories. This is necessary for search engines to understand which parameters are useless, and which ones are useful.

Here are a couple of other best practices for pagination attributes:

  • While technically you could use relative URLs to reference pagination attributes, you should use absolute URLs to avoid cases where URLs are accidentally duplicated across directories or subdomains.
  • Do not break the chain. This means that page N should point to N-1 as the previous page and to N+1 as the next page (except for the first page, which will not have a “prev” attribute, and the last page, which will not have the “next” attribute).
  • A page cannot contain multiple rel=“next” or rel=“prev” attributes.[31]
  • Multiple pages cannot have the same rel=“next” or rel=“prev” attributes.
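The do-not-break-the-chain rule can be checked mechanically. Here is a small, hypothetical validator sketch: given the ordered component pages and the prev/next URLs each one declares, it reports any break in the N-1/N+1 chain (including a stray “prev” on the first page or “next” on the last).

```python
def validate_pagination_chain(pages):
    """pages: ordered list of (url, rel_prev, rel_next) for each component page.

    Returns a list of chain violations; an empty list means the chain is
    unbroken: page N points to N-1 and N+1, the first page has no prev,
    and the last page has no next.
    """
    errors = []
    for i, (url, prev, nxt) in enumerate(pages):
        want_prev = pages[i - 1][0] if i > 0 else None                # first page: no prev
        want_next = pages[i + 1][0] if i < len(pages) - 1 else None   # last page: no next
        if prev != want_prev:
            errors.append("%s: rel=prev should be %r, got %r" % (url, want_prev, prev))
        if nxt != want_next:
            errors.append("%s: rel=next should be %r, got %r" % (url, want_next, nxt))
    return errors
```

A crawler of your own staging site could feed this function the extracted link tags and fail the build when the list of errors is not empty.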

Probably the biggest downside of rel=“prev” and rel=“next” is that it gets tricky to implement, especially on URLs with multiple parameters. Also, keep in mind that Bing does not treat the previous and next link relationships the same way as Google. While Bing uses the markup to understand your website structure, it will not consolidate indexing signals to a single page. If pagination is a problem in Bing, consider blocking excessive pages with a Bingbot-specific robots.txt directive or noindex meta tag.

The AJAX or JavaScript links method

With this method, you create pagination links that are not accessible to search engines but are available to users, in the browser. The trade-off is that users without JavaScript will not have access to component pages. However, users can still access a view-all page.

Figure 322 – Pagination links are not plain HTML links.

In the screenshot above you can see that the interface (1) allows users to sort, as well as choose between list and grid views. It also allows access to pagination. However, the source code (2) reveals a JavaScript implementation for pagination. If the JavaScript resources needed to generate those links are blocked with robots.txt, Google will not have access to those pagination URLs (3).

This approach has the potential to avoid a lot of duplicate content complications associated with pagination, sorting, and view options. However, it can introduce URL discoverability problems.

If you prefer this approach, make sure that search engines have at least one other way to access each product in each listing— using, for example:

  • A more granular categorization that does not require more than 100 to 150 items in each list.
  • Controlled internal linking that links all products in an SEO-friendly way from other pages.
  • Well-structured HTML sitemaps, along with XML Sitemaps.
  • Other sorts of internal links.

Infinite scrolling

A frequent user interface design alternative for pagination is infinite scrolling.[32] Also known as continuous scrolling, it lets users view content as they scroll down towards the bottom of the page, without the need to click on pagination links. Visually, this alternative appears very similar to displaying all the items on the page. However, the difference between infinite scrolling and a view-all page is that with infinite scrolling the content is loaded on demand (e.g., by clicking on a “load more items” button, or when content becomes visible above the fold), while for a view-all page the content is loaded all at once.

Mobile websites use infinite scrolling more, since on small screens it is easier to swipe than to click. However, infinite scrolling relies on progressive loading with AJAX,[33] which means that you will still need to provide links to component URLs for search engines and for users without JavaScript active. You will achieve this using a progressive enhancement approach.

Regarding SEO, infinite scrolling does not solve pagination issues for large inventories, and this is one of the reasons Google suggests paginating infinite scrolls.[34]

Google’s advice is OK, but I do not believe that continuous scrolling needs pagination when there aren’t too many products in the listing; a view-all page with 200 items is preferable in many cases. However, pages that list more than 200 items and use infinite scrolling should degrade to plain HTML pagination links for non-JavaScript users, and this includes search engine bots.

Degrading to HTML links means that search engines may still get into pagination problems, so you will have to handle pagination with one of the methods described earlier.

Figure 323 – This screenshot depicts the cached version of a subcategory page that uses infinite scrolling and degrades to HTML links for pagination when users do not have JavaScript on.

On the page in the previous screenshot, there are Previous and Next page links for users without JavaScript active (or for search engines).

Figure 324 – This is a screenshot of the same page, but this time JavaScript is active. The Previous and Next links do not show up anymore. This was achieved by hiding the pagination section with CSS and JavaScript. Users can continuously scroll to see all watches.[35]

Continuous scrolling has many advantages such as better user experience on touch devices, faster browsing due to the elimination of page reloads, increased product discoverability, and external links consolidation.

However, there are disadvantages too,[36] and infinite scrolling does not perform better on all websites.

For example, on Etsy – an ecommerce marketplace for handmade and vintage items – infinite scrolling did not have the desired business outcome, so they reverted to old-fashioned pagination.[37] Infinite scrolling led to fewer clicks, as users felt lost in a sea of items and had difficulty separating the relevant from the irrelevant.

However, on other websites, infinite scrolling may work well, as reported in this study.[38] As with most ideas for your website, an A/B test will tell you whether removing pagination is helpful for users or not. If you plan to test infinite scrolling, here are a few ideas for you.

Display visual clues when more content is loading.
Not everyone’s connection is fast enough to load content in the blink of an eye. If your server cannot handle fast user scrolling, or if browsers are slow, let the user know that more content is on the way.

Figure 325 – Notice the Loading More Results message at the bottom of the list. It conveys to users that more content is loading.

Consider a hybrid solution
A hybrid approach combines infinite scrolling and pagination. With this approach, you will display a “show more results” button at the end of a preloaded list. On mobile, make this button big to combat the fat-finger syndrome[39]. When the button is clicked, it loads another batch of items:

Figure 326 – In this example, more shoes are loaded with AJAX only when users click on the “show more results” button.

Add landmarks when scrolling
Amazon uses horizontal pagination in this product widget, to give users a sense of how many pages are in the carousel:

Figure 327 – You can find the horizontal pagination for this carousel on the top right side of the screenshot.

For vertical scrolling, adding landmarks such as virtual page numbers can provide users a sense of how far they scrolled and may help to create mental references (e.g., “I saw a product I liked somewhere around page 6”).

Figure 328 – This screenshot was modified to exemplify a navigational landmark (the horizontal rule and the text “Page 2”).

Update the URL while users scroll down
This is an interesting concept worth investigating. You can automatically append a pagination parameter to the URL when users scroll down past a certain number of rows.

This concept is best explained with a video, so I made a brief screen capture to illustrate it in case the original page[40] becomes unavailable (you can download this file from here).

If you regularly have 200 or fewer items in your listings, it is better to load all the items at once. Doing so will feed everything to search engines as one big view-all page. This is Google’s preferred implementation to avoid pagination.

Of course, users will see 10 or 20 items at a time, and you will defer rendering the rest of them in the interface; the deferred items will be built from the HTML code that has already loaded.

This has the potential to save many pagination headaches. Depending on your website authority, you could go with even more than 200 items per listing.

If the list is huge, you should probably paginate. But even then, you may want to consider a view-all page.

Complement with filtered navigation
Large sets of pagination should be complemented by filtered navigation to allow users to narrow the items in the listing based on product attributes. Also, use subcategory navigation to allow users to reach deeper into the website hierarchy.

Figure 329 – Filters can reduce items in a list from hundreds to a few tens or fewer.

The previous listing page has 116 items, but if you filter by Brand=Samsung, the list shortens to 52 items. If you filter by Color or Finish Family, the list shortens to 9 items.

If infinite scrolling produces better results for your users and revenue, it is probably a good idea to keep it in place. But make it work for users without JavaScript: serve plain HTML pagination links by default, then remove the pagination client-side and implement infinite scrolling for users with JavaScript on.

Secondary navigation

On listing pages, primary navigation is always complemented by some ancillary navigation. We call that secondary navigation.

I will refer to secondary navigation as the navigation that provides access to categories, subcategories, and items located deeper in the taxonomy. On ecommerce websites, this type of navigation is usually displayed on the left sidebar.

Figure 330 – Secondary navigation can appear very close to the primary navigation (either at the top or on the left sidebar) and provides detailed information within a parent category.

In many cases, the secondary navigation lists subcategories, product attributes, and product filters.

The entire left section in the example above is considered the secondary navigation; it includes subcategories as filters, filter names and filter values.

Unlike primary navigation, the labels in secondary navigation can change from one page to another to help users navigate deeper into the website taxonomy. This change of links in the navigation menu is probably the most significant difference between primary and secondary navigation.

From an SEO point of view, it is important to create category-related navigation. By doing so you offer users more relevant information, provide siloed crawl paths, and give search engines better taxonomy clues.

Figure 331 – Take Amazon, for example. When you are in the Books department of the website, the entire navigation is only about books.

Faceted navigation (AKA filtered navigation)

Ecommerce sites are often cluttered, displaying too much information to process and too many items to choose from. This leads to information overload and induces choice paralysis.[41] It is therefore essential to offer users an easier way to navigate through large catalogs. This is where faceted navigation (or what Google calls additive filtering), comes into play.

Whether your visitors are looking for something very specific or just browsing the website, filters can be highly useful. They help users locate products without using the internal site search or the primary navigation, which in most cases shows a limited number of options.

Faceted navigation makes it easier for searchers to find what they are looking for by narrowing product listings based on predefined filters, in the form of clickable links.

Usability experts refer to faceted navigation as:

“arguably the most significant search innovation of the past decade”.[42]

Faceted navigation almost always has a positive impact on user experience and business metrics. One retailer saw a:

“76.1% increase in revenue, a 26% increase in conversions and 19.76% increase in shopping cart visits in an A/B test after implementing filtering on its listing pages”.[43]

This screenshot illustrates a usual design for faceted navigation:

Figure 332 – A sample faceted navigation interface.

It is common to present faceted navigation in the left sidebar, but it can also be displayed at the top of product listings, depending on how many filters each category has. In many instances, subcategories are also included in the faceted navigation.
Filters, filter values, and facets have different meanings.

  • Filters represent a group of product attributes. In this screenshot, Styles, Women’s Size, Women’s Width are the filters.
  • Filter values are the options under each filter. For the Styles filter, the filter values are Comfort, Pumps, Athletic, and so on.
  • Facets are views generated by selecting one or a combination of filter values. Selecting a filter value within the Women’s size filter and a filter value for the Women’s width filter creates the so-called “facet”.

Figure 333 – Selecting one or more filter values generates the so-called facet.

Faceted navigation is a boon for users and conversion rates, but it can generate a serious maze for crawlers. The major issues faceted navigation generates are:

  • duplicate or near-duplicate content.
  • crawling traps.
  • non-essential, thin content.

Figure 334 – If you received a Google Search Console message like the one in the screencap, faceted navigation is one of the possible causes.

There is no better example of how filtering can create problems than the one offered by Google itself. The faceted navigation on the site – alongside other navigation types such as sorting and viewing options – generated 380,000 URLs.[44] And keep in mind that this was a site that sold just 158 products.

If you are curious to find out how many URLs faceted navigation could generate for a product listing page, you can use the following formula, which counts the possible combinations (selection order does not matter):

C(n, r) = n! / (r! × (n − r)!)
In this formula, n is the total number of filter values that can be applied, and r is the total number of filters. For instance, let’s say you have two filters:

  • The Styles filter, with five filtering options
  • The Materials filter, with nine filtering options

In this case, n will be 14, which is the total number of filtering options and r will be 2 because we have two filters. This setup could theoretically generate 91 URLs.[45]
If you add another filter (e.g., Color), and this filter has 15 filtering options, n becomes 29 and r equals 3. This setup will generate 3,654 unique URLs.

As I mentioned, the formula above does not count the same combination twice. This means that if users select (style=comfort AND material=suede), they get the same results as for selecting (material=suede AND style=comfort), at the same URL. If you do not enforce an order for URL parameters, then the faceted navigation will generate 182 URLs for the example with two filters, and 21,924 URLs for the example where three filters have been applied.
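These counts follow directly from the standard combinations and permutations formulas, so you can sanity-check the figures above in a couple of lines (Python 3.8+):

```python
from math import comb, perm  # available since Python 3.8

# Two filters: Styles (5 values) + Materials (9 values) -> n = 14, r = 2
assert comb(14, 2) == 91      # strict parameter order enforced
assert perm(14, 2) == 182     # any parameter order allowed

# Adding Color with 15 values -> n = 29, r = 3
assert comb(29, 3) == 3654
assert perm(29, 3) == 21924
```

The gap between comb() and perm() is exactly the duplicate-content penalty you pay for not enforcing a parameter order.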

Figure 335 – The huge difference between the total number of pages indexed and the number of pages ever crawled hints at a possible crawling issue or some serious content quality issue.

Figure 336 – Notice how many URLs the price parameter generates?!

In the previous screenshot, the issue was identified and confirmed by checking the URL parameters report in Google Search Console. The large number of URLs was due to the Price filter, which generated 5.2 million URLs.

You can partially solve duplicate content issues generated by faceted navigation by forcing a strict order for URL parameters, regardless of the order in which filters have been selected. For example, Category could be the first selected filter and Price the second. If a visitor (or a crawler) chooses the Price filter first and then Category, you make it so that Category shows up first in the URL, followed by Price.

Figure 337 – In the URL above, although the cherry filter value was selected after double door, its position in the URL is based on a predefined order.

The same order is reflected in the breadcrumbs as well:

Figure 338 – If you need a breadcrumb that reflects the order of user selection, you can store the order in a session cookie rather than in a URL.
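A minimal sketch of this parameter-order enforcement (the filter names, precedence list, and helper are hypothetical, not any retailer's actual scheme):

```python
# Site-wide filter precedence; selections in any click order map to this order.
FILTER_ORDER = ["category", "style", "color", "price"]
_RANK = {name: i for i, name in enumerate(FILTER_ORDER)}

def canonical_filter_path(selected):
    """Return the selected filter values as a URL path segment,
    in the predefined order, regardless of the user's click order."""
    ordered = sorted(selected.items(), key=lambda kv: _RANK[kv[0]])
    return "/".join(value for _, value in ordered)

# Price was clicked first, Category second -- Category still leads the URL:
print(canonical_filter_path({"price": "50-100", "category": "interior-doors"}))
# -> interior-doors/50-100
```

Because every click order collapses to the same URL, each facet exists at exactly one address, which is the point of the technique.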

Another near-duplicate content issue generated by facets arises when one of the filtering options presents almost the same items as the unfiltered view. For example, the unfiltered view for Ski & Snowboard Racks has a total of 15 products, and you can narrow the results using two subcategories: Hitch Mount and Rooftops.

Figure 339 – The above is the product listing page for Ski & Snowboard Racks.

However, the subcategory Rooftop Ski Racks & Snowboard Racks includes 13 results from the unfiltered page. This means that except for two products, the filtered and the unfiltered pages are near-duplicates.

Figure 340 – The Rooftop Ski Racks & Snowboard Racks subcategory page.

Faceted navigation comes with a significant advantage over hierarchical navigation: filter combinations generate pages that could not exist in a tree-like hierarchy, because such hierarchies are rigid and cannot cover every possible combination. The hierarchy structure is still good for high-level decisions, however.

Let’s say that you sell jewelry and would like to rank for the query “square platinum pendants”. Your website hierarchy only segregates into jewelry-type categories such as pendants, bracelets, etc., and then it allows filtering based on a Material filter, with values such as platinum, gold, etc. If there is no Shape filter to list the square option, your website will have no faceted navigation page for “square platinum pendants”.

However, if you were to introduce the Shape filter on the Platinum Pendants listing page, it would allow you to generate the Square Platinum Pendants facet, which narrows down the inventory based on the square filter value. This page is relevant to users and to search engines. You can further optimize this page with custom content and visuals, to make it even more appealing to machines and humans.

Figure 341 – An additional filter – Shape – would allow the targeting of more torso and long-tail keywords.

If there is no Shape filter to generate the Square Platinum Pendants facet, and if there is no hierarchical navigation that could lead to such a page, you will have to manually create a page that targets the “square platinum pendants” query. Then you will have to link to it internally and externally, so search engines can discover it. Depending on the size of your product catalog it will be practically impossible to create thousands or even millions of such pages manually.

Essential, important and overhead filters

Before discussing how to approach faceted navigation from an SEO perspective, it is important to break down the filters and facets into three types: essential, important and overhead.

Essential filters/facets
Essential filters will generate landing pages that target competitive keywords with high search volumes, which usually are “head” or “torso terms”. If your faceted navigation lists subcategories, those facets are essential, and they are called faceted subcategories.

Figure 342 – In this example, Bags, Wallets, and the remaining subcategories are essential filters. You should always allow search engines to crawl and index such filters.

Essential facets can also be generated by a combination of filter values under Brand + Category—for example, using the filter value “Nokia” for Brand and the filter value “Cameras” for Category.

Either Category or Brand can be considered a facet, as they function as filters for larger sets of data.

You can handpick the top combinations of essential filters that are valuable for your users and your business. Turn them into standalone landing pages by adding content, and by optimizing them as you would with a regular important page. This is mostly a manual process, as it requires content creation, so it is doable for only a limited number of pages at once.

However, if you do this regularly and you commit resources for content creation, it will give you an advantage over your competitors. Start with the most important 1% of facets and gradually move on. If you do a couple per day (you need only about 100 to 150 carefully crafted words), in a year you will have optimized hundreds of filtered pages.

All essential facet pages should have unique titles and descriptions. Ideally, the titles should be custom, while the descriptions can be boilerplate.

Make sure that search engines can find the links pointing to essential filters and facets. As a matter of fact, essential filters should be linked from your content-rich pages, such as blog posts and user guides. For maximum link juice flow, link from the main content area of such pages.

The URL structure for essential facets should be clean. It should ideally reflect, either partially or exactly, the hierarchy of the website in a directory structure or a file-naming convention:

Figure 343 – The URL for the Bathroom Accessories subcategory facet is parameter-free and reflects the website’s hierarchy.

Important filters/facets
These refinements will lead users and search engines to landing pages that can drive traffic for “torso” and “long-tail keywords”.

For example, if your analytics data proves that your target market searches for “red comfort shoes”, this means that URLs generated by Color + Style selections are important facets for your business. Search engines should be able to access important facet URLs.

You will have to decide what is and what is not an important facet, preferably on a category or subcategory basis. For instance, the Color filter can be relevant and important for the Shoes subcategory, but it will be an overhead filter for the Fragrances subcategory.

A particular case you need to pay attention to is the Sales or Clearance filter. In the next example, the retailer lists all the facets for all the products on clearance.

Figure 344 – The left navigation filters in the image above do not help users much because Rugs do not have sleeves, Snowshoes do not have a shirt style, and Pullovers do not have a ski pole style.

Instead of listing products, this retailer should list only subcategories in the left navigation and the main content area. This will make it more likely that users will handle the ambiguous nature of the Clearance page by first choosing a category that interests them. Once the desired category has been selected, the retailer should display the filters that apply to that category.

Depending on how your target market searches online, it is advisable to prevent search engines from accessing URLs generated when more than two filter values have been applied. If one of the applied filters is an essential filter, you will block when a third non-essential filter value has been applied.

This works best with multiple selections on the same filter (e.g., Brand=Acorn AND Brand=Aerosoles) because users are less likely to search for patterns like “{brand1}{brand2}{category}” (e.g., “Acorn Aerosoles shoes”).

Being able to select multiple filter values is useful for users, who might select Red AND Blue Shirts, but such selections are not so useful for search engines. Therefore, they can be blocked for bots.

Figure 345 – An example of multi-selections on the Brand filter.

Note that blocking access to faceted navigation URLs by default, whenever multiple filters are applied, will prevent bots from discovering pages created by single value selections on different filters (e.g., Color=red AND Style=comfort). You will miss traffic for a large number of filter combinations (unless you manually create and optimize landing pages for all the important filters and facets, and unless you allow the bots to crawl and index those pages).
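One way to encode these crawl rules in code. This is one reading of them, with hypothetical filter names: essential filters do not count toward the two-value limit, and multi-selections on a single filter are always blocked for bots.

```python
ESSENTIAL_FILTERS = {"category", "brand"}  # hypothetical; mark these per site

def allow_crawl(applied):
    """applied: list of (filter, value) pairs currently selected.

    Bots are blocked on multi-selections of the same filter, and when
    more than two non-essential filter values are applied.
    """
    names = [f for f, _ in applied]
    if len(names) != len(set(names)):      # e.g. Brand=Acorn AND Brand=Aerosoles
        return False
    non_essential = [f for f in names if f not in ESSENTIAL_FILTERS]
    return len(non_essential) <= 2
```

The return value would drive whichever blocking mechanism you choose (a noindex meta tag, a robots.txt-blocked parameter, or JavaScript-only links).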

Let your data be the source of truth when deciding which facets you need to leave open for search engines. Gather data from various sources, then programmatically replace keywords with their filter values, when appropriate. This is very similar to the Labeling and Categorization technique described in the Website Architecture section. You need to identify patterns and see which facets or filters are used the most by your visitors. In your ecommerce platform, mark the important filters and let them be indexed.

The URL structure for important facets must be as clean as possible. It is OK to keep the important filter values in a directory or the file path structure. It is also OK to keep them in URL parameters, as long as you use no more than two or three parameters.

Figure 346 – When an important filter is applied, its value is appended to the URL, in the form of a directory. In this URL, the filter value is Kohler, under the Brand filter.

Whenever possible, avoid using non-standard URL encoding—like commas or brackets—for URL parameters.[46]

Often, search engines treat pages created by filters like subsets of the unfiltered page. To avoid being pushed into the supplemental index, you need to create unique titles, descriptions, breadcrumbs, headings, and custom content on these filtered pages. Boilerplate titles and descriptions may be fine, but do not just repeat the title of the unfiltered view on facet pages. The unfiltered view will be the view-all page or the index page, in case you implemented pagination with rel=“prev” and rel=“next”.

Additionally, the breadcrumbs have to update to reflect the user selection; so do the headings. This may sound obvious, but it is amazing how many ecommerce websites do not do it.

One technique that can be useful to increase the relevance of each filtered page, and to decrease near-duplicate content problems, is to write product descriptions that include the filter values used to generate the page. For instance, let’s say you sell diamonds. When a user selects a value under the Material filter, the product description snippet would include the value of the filter.

Figure 347 – The PLP above filters the SKUs by Material=white gold. The quick view product description for the second item cleverly includes the words “white” and “gold”.

This quick view snippet is different from the product description on the product detail page:

Figure 348 – Section (1) is the quick view snippet, section (2) the full product description.

Section (1) in the previous screenshot shows the quick view product description snippet on the product listing page. As you can see, the snippet was carefully created to include all the important filter values. Section (2) depicts the product description as found on the product detail page. These two product descriptions are different.

Writing custom product snippets for listing pages is a very effective SEO tactic even when you feature only 20 to 25 words for each product. However, it is difficult to write such snippets when you have thousands of products. A workaround is to write the detailed product descriptions to include the most important product filter values either at the beginning or the end of the detailed product description and then automatically extract and display the first/last sentence on the product listing page.

Another method used to increase the relevance of the listing pages generated by important facets is to add the selected filter values to the product listing, on the fly. However, this can transform into spam if you are not careful. If you go with this approach, make sure that you have rules in place to avoid keyword stuffing.

Overhead filters
These filters generate pages that have minimal or no search volume. All they do is waste crawl budget on irrelevant URLs. A classic example of an overhead filter is Price; in many instances, so is Size. However, keep in mind that a filter can be overhead for one business but important, or even essential, for another.

You should prevent search engines from crawling URLs generated based on overhead filters, and you should mark filters as overhead on a category basis. Whenever a combination of filters includes an overhead value, add the “noindex, follow” meta tag to the generated page, and append the crawler=no parameter to its URL. Then block the crawler parameter with robots.txt.

The directive in robots.txt will prevent wasting crawl budget, while the noindex meta tag will prevent empty snippets from showing up in the SERPs. If you have pages in the index that you need to remove, first implement the “noindex, follow” and wait for the pages to be taken out of the index. Once they are removed, append the crawl control parameter to the URLs.
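A hedged sketch of this two-step control (the crawler=no parameter comes from the text above; the filter names and helper function are hypothetical):

```python
OVERHEAD_FILTERS = {"price", "size"}  # hypothetical; flag these per category

def overhead_controls(url, applied):
    """Return (final_url, meta_robots) for a filtered listing page.

    Overhead selections get a noindex,follow meta tag plus the crawl-control
    parameter; robots.txt would then carry a rule like: Disallow: /*crawler=no*
    """
    if any(name in OVERHEAD_FILTERS for name, _ in applied):
        sep = "&" if "?" in url else "?"
        return url + sep + "crawler=no", "noindex, follow"
    return url, None  # essential/important facets stay clean and indexable
```

As the surrounding text stresses, the sequencing matters: ship the noindex first, and only start appending the blocked parameter once the pages have dropped out of the index.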

Be careful about combining robots.txt and the noindex meta tag: robots.txt blocks crawlers from fetching the page at all, so they will never see a page-level directive such as noindex. If your website does not have an index-bloat issue or crawling issues, you may consider implementing rel=“canonical” instead of robots.txt.

You can also use AJAX to generate the content for overhead facets in a way that the URLs will not change, so search engine crawlers will not request unnecessary content. In this case, you need to block the scripts (and all other resources needed for the AJAX calls), with robots.txt. This will prevent search engines from rendering the AJAX links.

If you want the page to degrade gracefully for users with JavaScript turned off, you can use URL parameters placed either after a hash mark (#) or in a URL string that is blocked with robots.txt. However, the most stringent crawling restriction comes from not making the overhead URLs available to bots at all.

Figure 349 – Notice how the URL above contains the NCNI-5 string at the end.

The NCNI-5 string is used to control crawlers because all URLs containing the NCNI-5 string are blocked with robots.txt:

Disallow: /*NCNI-5*

To summarize, this is how Home Depot defines the filtered URLs for all three types of facets:

Figure 350 – Each filter/facet is treated differently, depending on how important each facet is.

The URL for the essential facet is made of a clean category name. The important facet URL includes the category name and the filter value, Kohler. The overhead URL, while it also includes the category name and the filter value, additionally contains the crawl control string, NCNI-5.

It is a bad idea to rewrite URLs to make overhead filters look like static URLs. The following sample URL includes the overhead filter Price, with the values 50 to 100.

The URL above does not exist on Home Depot’s website; I added the /Price/50-100 part only to exemplify. Generating search engine-friendly URLs does not change the fact that there will be millions of irrelevant pages on your website.

Regarding URL discoverability, search engines do not need to find the links pointing to overhead filters or facets. In fact, you have to prevent search engines from discovering overhead facets.

If you have to allow search engines to crawl overhead facets, then keep the filters in URL parameters using standard URL encoding and key=value pairs, instead of in directories. This helps search engines differentiate between useful and useless values.

A faceted navigation case study

A search on Google for “Canon digital cameras” lists Overstock on the first page, OfficeMax on the fifth and Target on the seventh.

Overstock’s approach to filtered navigation

Figure 351 – The above is the Digital Cameras sub-subcategory page filtered by Brand=Canon. It has a unique title, customized breadcrumbs, and a relevant H1 heading. Also, this page uses a good meta description. These elements send quality signals to search engines.

When users filter the SKUs by another brand (e.g., Sony), the page elements update. If they did not, the Canon Digital Cameras page would have the same H1, title, description, and breadcrumbs as the Digital Cameras page, which is not desirable.

Additionally, Overstock allows the crawl of essential and important filters, and it does not create links for gray-end filters (filters that generate zero results).

Figure 352 – The “10 Megapixels” filter value generates zero results. Therefore, it is not hyperlinked.

Overstock’s implementation of faceted navigation is SEO friendly, because it allows crawlers to access various filtered pages, and it updates page elements based on user or crawler selection.

A note on gray-end filter values: whatever you choose to do with these filter values in the interface (i.e., not showing them at all, showing them at the bottom of the filters list, or hiding them behind a “show more” link), gray-end filters should not be hyperlinked. If you have to hyperlink them for some reason, the header response code for zero-results pages should be 404. If returning 404 is not possible, mark the pages with “noindex, follow”. Alternatively, you can use robots.txt to block URLs generating zero results.

OfficeMax’s approach to filtered navigation

Figure 353 – The image above depicts the Digital Cameras PLP on OfficeMax.

The title, the description, and the breadcrumbs do not update when the user selects a filter value under the Brand filter. This means that thousands of filtered pages will have very similar on-page SEO elements to the unfiltered page. Although the products on each filtered page will change, search engines will get a lot of near-duplicate meta tag signals.

This “staleness” might be the reason Google did not index the faceted page resulting from filtering by Canon. The page that ranks for “Canon digital cameras” on OfficeMax is the Digital Cameras category page. This is not the ideal page to rank with, because it does not match the user intent behind the search query. Filtering by Brand=Canon means that searchers have to take an additional, unnecessary step.

On the cached version of the Digital Cameras page, we notice that the faceted navigation is nowhere to be found. That’s happening because the faceted navigation is not accessible to search engines.

Figure 354 – The faceted navigation is not accessible to search engines.

A quick reminder here: if the links are missing in the cached version, Google might still be able to find them when they render the page like a browser would.

Maybe OfficeMax tried to fix some over-indexation issues or a possible Panda filter on thin content pages. However, this faceted navigation implementation is not optimal, as it completely blocks access to all filtered pages. Unless OfficeMax creates manual landing pages for all essential and important filtered pages, they have closed the doors to search engines and to the traffic those pages could bring in.

Target’s approach to faceted navigation

Figure 355 – Like OfficeMax, Target does not create relevance signals for filtered pages.

On Target’s website, the page title, breadcrumb, heading and description for their Canon Digital Cameras page are the same as on the unfiltered page, Digital Cameras. As a matter of fact, the aforementioned elements will be the same on hundreds or thousands of other possible filtered pages.

Moreover, since the page has a canonical pointing to the unfiltered page, its ability to rank is (theoretically) zero. Given this approach, I expected them to have a dedicated Canon Digital Cameras page reachable from the navigation or other pages. If such a page exists, Google was not able to identify it.

Figure 356 – Google could not find a category page (or even a facet URL) relevant to Canon Digital Cameras. All the most relevant results were PDPs.

Google’s cached version of the page shows that faceted navigation does not create links:

Figure 357 – Because Brand is an important filter, and because in our example we used only one filter value, all the filter values under Brand should be plain HTML links.

Categories in faceted navigation

Hierarchical, category-based navigation is useful as long as it is easy for users to choose between categories. For instance, it could be more helpful for users if easy-to-decide-upon subcategories are listed in the main content area, as opposed to being displayed as facet subcategories in the sidebar. Subcategory listing pages should be used:

“whenever further navigation or scope definition is needed before it makes sense to display a list of products to users. Generally, sub-category pages make the most sense in the one or two top layers of the hierarchy where the scope is often too broad to produce a meaningful product list”.[47]

Figure 358 – This category displays the next level of categories of the hierarchy in the main content area.

In the previous screenshot, faceted navigation (usually present in the left navigation as filtering options) is not yet introduced at this level of the hierarchy (the category level), and not even on the sub-subcategory level.

In the next example, you can see how the category-based navigation ends at the third level of the hierarchy; the first level is the Décor category, the second level is Blinds & Window Treatments, and the third level is Blinds & Shades.

The faceted navigation is displayed only at the third level of the hierarchy.

Figure 359 – You can see the faceted navigation displayed in the left sidebar, to help with decision making. You can also notice that the subcategory listing has been replaced with the product listing.

It is important to keep hierarchies relatively shallow, so users do not have to click through more than four layers to get to the list of products. Search engines will have the same challenges and may deem products buried deep in the hierarchy as not important.

Because faceted navigation is a granular inventory segmentation feature, it generates excess content in most implementations. It will also generate duplicate content—for instance, if you do not enforce a strict order for parameter filters in URLs.

So, what options do we have for controlling faceted navigation?

Option rel=”canonical”

Although rel=“canonical” is supposed to be used for identical or near-identical content, it may be worth experimenting with canonicals to optimize content across faceted navigation URLs.

Vanessa Fox, who worked for Google Webmaster Central, has suggested the following approach for some cases:

“If the filtered view is a subset of a single non-filtered page (perhaps the view=100 option), you can use the canonical attribute to point the filtered page to the non-filtered one. However, if the filtered view results in a paginated content, this may not be viable (as each page may not be a subset of what you would like to point to as canonical)”.[48]

Rel=“canonical” will consolidate indexing signals to the canonical page and will address some of the duplicate content issues, but search engine crawlers may still get trapped into crawling irrelevant URLs.

Rel=“canonical” is a good option for new websites, or for adding new filtering options to existing websites. However, it is not helpful if you are trying to remove existing filtered URLs from search engine indices. If you do not have indexing and crawling issues, you can use rel=“canonical”, as Vanessa suggests.

Option robots.txt

Robots.txt is the crawl control sledgehammer. Keep in mind that if you use robots.txt to block URLs, you will tamper with the flow of PageRank to and from thousands of pages. That is because while URLs listed in robots.txt can get PageRank, they do not pass PageRank.[49] Also, remember that robots.txt does not prevent pages from being indexed.

However, in some cases, this approach is necessary—e.g., when you have a new website with no authority and a very large number of items that need to be discovered, or when you have thin content or indexing issues.

If you use parameters in URLs and would like to prevent the crawling of all the URLs generated by selecting values under the Price filter, you would add something like this in your robots.txt file:

User-agent: *
Disallow: *price=

This directive means that any URL containing the string price= will not be crawled.
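To sanity-check which URLs a wildcard rule like this would catch, you can approximate Google-style pattern matching in a few lines (a simplified illustration, not a complete robots.txt parser):

```python
import re

def blocked(url: str, rule: str) -> bool:
    """Match a URL against a Google-style robots.txt rule, where '*'
    matches any run of characters and a trailing '$' anchors the end.
    Simplified illustration only."""
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.match(pattern, url) is not None

print(blocked("/cameras?price=50-100", "*price="))  # → True  (crawl blocked)
print(blocked("/cameras?brand=canon", "*price="))   # → False (crawl allowed)
```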

Robots.txt blocked URL parameter/directory

This method requires you to selectively add a URL parameter to control which filtered pages are crawlable and which are not. I described this in the Crawl Optimization section, but I will repeat it here as well.

First, decide which URLs you want to block.

Let’s say that you want to control the crawling of the faceted navigation by not allowing search engines to crawl URLs generated when applying more than one filter value within the same filter (also known as multi-select). In this case, you will add the crawler=no parameter to all URLs generated when a second filter value is selected on the same filter.

If you want to block bots when they try to crawl a URL generated by applying more than two filter values on different filters, you will add the crawler=no parameter to all URLs generated when a third filter value is selected, no matter which options were chosen, nor the order they were chosen. Here’s a scenario for this example:

The crawler is on the Battery Chargers subcategory page.

The hierarchy is: Home > Accessories > Battery Chargers
The page URL is:

Then, the crawler “checks” one of the Brands filter values, Noco. This is the first filter value, and therefore you will let the crawler fetch that page.

The URL for this selection does not contain the exclusion parameter:

Then, the crawler checks one of the Style filter values, cables. Since this is the second filter value applied, you will still let the crawler access the URL.

The URL still does not contain the exclusion parameter. It contains just the brand and style parameters:

Then, the crawler “selects” one of the Pricing filter values, the number 1. Since this is the third filter value, you will append the crawler=no to the URL.

The URL becomes:

If you want to block the URL above, the robots.txt file will contain:

User-agent: *
Disallow: /*crawler=no

The method described above prevents the crawling of facet URLs when more than two filter values have been applied, but it does not allow specific control over which filters are going to be crawled and which ones not. For example, if the crawler “checks” the Pricing options first, the URL containing the pricing parameter will be crawled.
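The scenario above can be sketched as follows; the parameter names come from the example, and the limit of two crawlable selections is the assumed rule:

```python
from urllib.parse import urlencode

MAX_CRAWLABLE_SELECTIONS = 2  # assumed rule: block at the third applied filter value

def faceted_url(base: str, selections: list) -> str:
    """Build a faceted URL from ordered (filter, value) pairs, appending
    crawler=no once more than MAX_CRAWLABLE_SELECTIONS values are applied."""
    params = list(selections)
    if len(selections) > MAX_CRAWLABLE_SELECTIONS:
        params.append(("crawler", "no"))
    return f"{base}?{urlencode(params)}"

steps = [("brand", "noco"), ("style", "cables"), ("pricing", "1")]
for i in range(1, len(steps) + 1):
    print(faceted_url("/accessories/battery-chargers", steps[:i]))
# → /accessories/battery-chargers?brand=noco
# → /accessories/battery-chargers?brand=noco&style=cables
# → /accessories/battery-chargers?brand=noco&style=cables&pricing=1&crawler=no
```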

Note that blocking filtered pages based solely on how many filter values have been applied poses some risks. For instance, if a Price filter value is applied first, the generated pages will still be indexed, since only one filter value has been selected. You should have more solid crawl control rules—e.g., if an overhead filter value has been applied, always block the generated pages.

It is also a good idea to limit the number of selections a search engine robot can discover. We will discuss this a bit later in this section, as the JavaScript/AJAX crawl control option.

Important filters or facets must be plain HTML links. You can present overhead filters as plain text to search engines (no hyperlinks), but as functional HTML to users (hyperlinks).

The blocked directory approach requires putting the unwanted URLs under a directory, then blocking that directory in robots.txt.

In our previous example, when the crawler checks one of the Pricing options place the filtering URL under the /filtered/ directory. If your regular URL looks like this:
when you control crawlers, the URL will include the /filtered/ directory:

If you want to block the URL, the robots.txt will contain:

User-agent: *
Disallow: /filtered/

Option nofollow

Some websites prefer to nofollow unnecessary filters or facets. Surprisingly, and in contradiction with the other official recommendation that tells us not to nofollow any internal links, nofollow is one of Google’s recommendations for handling faceted navigation[50]. However, nofollow does not guarantee that search engines will not crawl the unnecessary URLs or that those pages will not be indexed. Additionally, nofollow-ing internal links might send search engines the wrong signals, because nofollow translates into “do not trust these links”.

Hence, nofollow does not solve current indexing issues. This option works best with new websites.

It may be a good idea to either “back up” the nofollow option with another method that prevents URLs from being indexed (e.g., blocking URLs with robots.txt) or to canonicalize the link to a superset.

Option JavaScript/AJAX

We established that essential and important facets/filters should always be accessible to search engines as links. Preferably those will be plain HTML links. URLs for overhead filters and facets, on the other hand, can safely be blocked for search engine bots.

Theoretically, you can obfuscate the entire faceted navigation from search engines by loading it with search engine “unfriendly” JavaScript or AJAX; we have seen this deployed at OfficeMax. However, excluding the entire faceted navigation is usually a bad idea: it should be considered only if there are alternative paths for search engines to reach pages created for all essential and important facets, which in practice is rarely feasible.

One option is to allow search engines access only to essential and important facets links, while not generating overhead links. For example, you load only the important facets and filters as plain HTML, while the overhead filters or facets are loaded with JavaScript or AJAX. Users will be able to click on any of the links, as they will be generated in the browser (e.g., using “see more options” links).

Figure 360 – Some of the filter values in the faceted navigation are not hyperlinked.

In this example, users are shown just two filter values for Review Rating, with a link to Show All Review Rating (column 1). When they click on that link, they see all the filter values (column 2). However, the Show All Review Rating is not a link for search engines (column 3).

This will effectively limit the number of URLs search engines can discover, which may be good or bad depending on your situation. If your target market searches for “laminate flooring 3-star reviews” then you need to make the corresponding link available to bots.

Similarly, you can obfuscate entire filters or just some filter values. For example, eBay initially presents users with only a limited number of filters and filter values, but then at a click on “see all” or “More refinements” it opens all the filters in a modal window:

Figure 361 – This modal window contains all the links required by users to refine the list of products.

However, the content of the modal window is not accessible to search engines, as you can see in this screenshot, where the “More refinements” is not hyperlinked:

Figure 362 – The “More refinements” element looks and acts like a link, but it is not a regular href.

One advantage of selectively loading filters and facets with robotted AJAX or JavaScript is that it may help pass more PageRank to other, more important pages. This is very similar to the old PageRank sculpting concept. However, remember that this “sculpting” happens only if search engines are not able to execute AJAX on such pages, and search engines are getting better by the day at executing JavaScript and AJAX. To make sure the links are not accessible to Googlebot, block the resources necessary for the JavaScript or AJAX calls with robots.txt, and then do a fetch and render in Google Search Console.

If you know that some pages are not valuable for search engines, and if you do not want those useless pages in the index, then why allow bots access to them in the first place?

Another advantage of selectively loading URLs is that it prevents unnecessary links from being crawled.

The hash mark option

You can append parameters after a hash mark (#) to avoid the indexing of faceted URLs. This means that you can let faceted navigation create URLs for every possible combination of filters. As a note, remember that AJAX content is signaled with hashbang (#!). However, this scheme is no longer recommended by Google.
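The reason this works is that the fragment never reaches the web server, which you can see with Python's standard URL utilities (the URL below is illustrative):

```python
from urllib.parse import urldefrag

# Everything after the hash mark stays client-side; the crawler's HTTP
# request sees a single URL for all fragment variants.
url, fragment = urldefrag("/ski-boots?brand=atomic#price=50-100")
print(url)       # → /ski-boots?brand=atomic
print(fragment)  # → price=50-100
```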

If you do an “info:” search for a page that includes the hash mark in the URL you will see that Google defaults the page to the URL that excludes everything after the hash mark.

For search engines, this page:,70&sort=newest&page=1
defaults to the content on this page:

Figure 363 – Google caches the content of the page generated before the usage of hash marks.

The hash mark can potentially consolidate linking signals to, but all the pages generated using the hash mark will not be indexed; therefore, they cannot rank.

However, you can place just the overhead filters after the hash mark. Whenever an essential or important facet is selected, include it in a clean URL, before the hash mark. Multiple selection filters can also be added after the hash mark.

You can also control crawlers using the URL parameters handling tools offered by Bing and Google.

Figure 364 – This setup hints to Google that the mid parameter is used for narrowing the content. I prefer to tell Google about the effect that each parameter has on the page content, but in the end, I will let them decide which URLs to crawl.

This setup presents only a clue to Google, so you still need to address crawling and duplicate content using another method (e.g., blocking overhead facets with selective robots.txt), or with a combination of methods.

Option noindex, follow

Adding the “noindex, follow” meta tag to pages generated by overhead filters can help address “index bloat” issues, but it will not prevent spiders from getting caught in filtering traps.

A quick note about using the noindex directive at the same time as robots.txt: theoretically, “noindex, follow” can be used in conjunction with robots.txt to prevent the crawling and indexing of new websites. However, if unwanted URLs have already been crawled and indexed, you first have to add “noindex, follow” to those pages and let search engine robots crawl them. This means you will not block the URLs with robots.txt yet. Block the unwanted URLs with robots.txt only after they have been removed from the index.

Sorting items

Users must be allowed to sort listings based on various options. Some popular sort options are bestsellers, new arrivals, most rated, price (high to low or low to high), product names, and even discount percentage.

Figure 365 – Some popular sorting options.

Sorting simply changes the order the content is presented in, not the content itself. This will create duplicate or near-duplicate content problems, especially when the sorting can be bidirectional (e.g., sort by price—high to low and low to high) or when the entire listing is on a single page (view-all).

Google tells us that if the sort parameters never exist in the URLs by default, they do not even want to crawl those URLs.

Figure 366 – This screencap is from Google’s official presentation, “URL Parameters in Webmaster Tools”.[51]

The best way to approach sorting is situational, and it depends on how your listings are set up.

Use rel=“canonical”

Many times, sort parameters are kept in the URL. When users change the sort order, the sort parameters are appended to the URL, and the page reloads. In this case, you can use rel=“canonical” on sorted pages to point to a default page (e.g., sorted by bestsellers).

Figure 367 – In this screenshot, you see that while sorting generates unique URLs for ascending and descending sort options, both URLs point to the same canonical URL.

The use of rel=“canonical” is strongly advised when the sorting happens on a single page, because sorting the content will change only how it is displayed, but not the content itself. This means that the content on each page, although sortable, will not be different and the generated page will be an exact duplicate. For instance, when sorting reorders the content on a view-all page, you generate exact duplicates (given that the view-all page lists all items in the inventory). However, even when the content is sorted on a page-by-page basis rather than using the entire paginated listing, you also create near or exact duplicate content.
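A sketch of computing the canonical target for sorted URLs; the parameter names are assumptions, and the point is simply that sort-only parameters are stripped:

```python
from urllib.parse import urlparse, parse_qsl, urlencode

SORT_PARAMS = {"sort", "order"}  # assumed names for sort-only parameters

def canonical_for(url: str) -> str:
    """Drop sort-only parameters so that ascending and descending variants
    declare the same rel="canonical" target."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in SORT_PARAMS]
    query = urlencode(kept)
    return parts.path + ("?" + query if query else "")

print(canonical_for("/ski-boots?sort=price&order=asc"))    # → /ski-boots
print(canonical_for("/ski-boots?brand=atomic&sort=price"))  # → /ski-boots?brand=atomic
```

The returned path would then be emitted in the page head as the rel=“canonical” URL.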

Removing or blocking sort order URLs

This requires either adding the “noindex, follow” meta tag to sorting URLs or blocking access to them altogether, using robots.txt or Google Search Console.


Figure 368 – In this example, the Ski Boots listing can be sorted in two directions (price “high to low” and price “low to high”).

When items can be sorted in two directions, the first product on the first page sorted “high to low” becomes the last product on the last page sorted “low to high”. The second product on the first page then becomes the second-to-last product on the last page, and so on. Depending on how many items you list by default and on how many products there are in total, you may end up with exact or near duplicates. For example, let’s say you list 12 items per page, and there are 48 items in total. This means that the last page in the pagination series will display exactly 12 items. When you sort by price “high to low”, the products on the first page of the pagination will be the same as the products on the last page when sorting “low to high”.
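The 48-item example can be verified in a few lines:

```python
# 48 items at 12 per page: page 1 of "high to low" holds the same
# products as the last page of "low to high".
prices = list(range(48, 0, -1))  # items already sorted high to low
per_page = 12

high_to_low_page_1 = prices[:per_page]
low_to_high_last_page = sorted(prices)[-per_page:]

print(set(high_to_low_page_1) == set(low_to_high_last_page))  # → True
```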

One way to handle bidirectional sorting is to allow search engines to index only one sorting direction and remove or block access to the other. For example, you allow the crawling and indexing of “oldest” sort URLs and block the “newest”.

Figure 369 – Removing or blocking sort-order URLs is the easiest method to implement, and may help address pagination issues quickly until you are ready to move ahead with a more complex solution.

Use AJAX to sort

With this approach, you sort the content using AJAX, and URLs do not change when users choose a new sort option. All external links are naturally consolidated to a single URL, as there will be only one URL to link to.

Figure 370 – Sorting with AJAX does not usually change the URL.

Notice how the URL in the previous screenshot does not change when the list is sorted again by Bestsellers, in the next image below:

Figure 371 – While the content updates when users select various sort options, the URL remains the same.

Because the URL does not update when sorting, this method makes it impossible to link, share, or bookmark URLs for sorted listings. But, do people link or share sorted or paginated listings? Even if they do, how relevant will pagination or sorting be a week or a month from the moment it was linked or shared? Products are added to or removed from listings on a regular basis, frequently changing the order of products. The chances are that the products listed on any sorted page will be partially or totally different from the products listed on the same page, the next week or the next month.

So, shareability and linkability should not be concerns when you are deciding whether to implement AJAX for sorting, or not. If it is better for users, do it.

Use hash marks URLs

Using hash marks in the URL allows sharing, bookmarking, and linking to individual URLs. A rel=“canonical” pointing to the default URL (without the #) will consolidate eventual links to a single URL.

Figure 372 – In this screenshot, the default view lists items sorted by Most Relevant SKUs.

The URL above will be the canonical page. In the next screenshot, you will notice how the URL changes when the list is filtered by Price, low to high:

Figure 373 – The URL includes the hash mark and the filter value, ~priceLowToHigh.

Currently, search engines typically ignore everything after the hash mark, unless you use a hashbang (#!) to signal AJAX content (a scheme that is itself deprecated). They ignore the fragment because it does not cause any additional information to be requested from the web server.

The hash mark implementation is an elegant solution that addresses user experience and possible duplicate content issues.

View options

Just as users prefer different sort options, some users want to change the default way of displaying listings. The most popular view options are view N results per page or view as list/grid. While good for users, view options can cause problems for search engines.

Figure 374 – In the example above, users can choose to view the listing page as a compact grid or as a detailed list; they can also choose the number of items per page.

Grid and list views

Figure 375 – The grid view (left) and the list view (right).

Usually, the grid and the list view present the same SKUs, but the list-view can use far more white space. This space can be filled with additional product information and represents a big SEO opportunity as it can be used not only to increase the amount of content on the listing page but also to create relevant contextual internal links to products or parent categories.

The optimal approach for viewing options is to load the list-view content in the source code in a way that is accessible to search engines, then use JavaScript to switch between views in the browser.

You do not need to generate separate URLs for each view. If you do generate separate URLs, those pages will contain duplicate content, and the way to handle them is with rel=“canonical” pointing to a default view. The default view has to be the page that loads the content for the list view.

For example, these two URLs point the rel=“canonical” to /French-Door-Refrigerators/products:



Many ecommerce websites have the View-N-items per page feature, allowing users to select the number of items in the listing:

Figure 376 – This is a typical drop-down for the view-N-items per page option.

If possible, your default product listing will be the view-all page. If view-all is not an option, then display a default number in the list (let’s say 20) and allow users to click on a view-all link.

Figure 377 – Nike’s view-all option is displayed right in the menu.

If view-all generates an unmanageable list with thousands of items, let users choose between two numbers, where the second number is substantially bigger than the default (e.g., 60 and 180). Remember to keep users’ preferences in a session or a persistent cookie[52], not in URL parameters.

Figure 378 – The second view option is substantially larger than the first one.

From an SEO perspective, view-N-items per page URLs are traditionally handled with rel=“canonical” pointing to default listing pages (which are usually the index pages for department, category or subcategory pages). For instance, on a listing page with 464 items, the view 180 items per page option can be kept in the key=value pair itemsPerPage=180, and the URL may look like this:

The URL above lists 180 items per page and will contain a rel=“canonical” in the <head> that points to the category default URL:

However, the canonical URL lists only 60 items by default, and that is what search engines will index. This means that a larger subset (the one that lists 180 SKUs) canonicalizes to a smaller subset (the one that lists 60 SKUs). This approach can create some issues because Google will index the content on the canonical page (60 items) while ignoring the content from the rest of the view-N-items pages. In this case, you need to make sure that search engines can somehow access each of the items in the entire set (464 items). For example, you can make this work with paginated content that is handled with rel=“prev” and rel=“next”, so that Google consolidates all component pages into the canonical URL.

The use of rel=“canonical” on a view-N-items page is appropriate if the canonical points either to a view-all page or the largest subset of items. The former option is not desirable if you want another page to surface in search results (e.g., the first page in a paginated series with 20 items listed by default).

The approaches for controlling view-N-items pages are similar to those for handling sorting: a view-all page combined with AJAX/JavaScript to change the display in the browser, uncrawlable AJAX/JavaScript links, hash-marked URLs, or using the “noindex” meta tag. I mentioned these approaches in my preferred order, but keep in mind that while one approach might suit the particular conditions of one website, it may not work for another.

Eight: Product Detail Pages