URL Parameters


URL parameters tell search engines how to handle parts of your site based on patterns in your URLs, so that they can crawl your site more efficiently. A URL parameter is a key–value pair appended to a URL after a question mark, e.g. yoursite.com/shoes?colour=red or yoursite.com/shoes?sessionid=1234, where different parameter values may serve duplicate content, or content that should not appear in search results.
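To make the anatomy of a parameterised URL concrete, here is a short sketch using Python's standard urllib.parse module (the URL and parameter names are hypothetical, not taken from any real site):

```python
from urllib.parse import urlparse, parse_qs

# A hypothetical product URL carrying two query parameters
url = "https://yoursite.com/shoes?colour=red&sessionid=1234"

parsed = urlparse(url)
print(parsed.path)             # the folder/page part: /shoes
print(parse_qs(parsed.query))  # the parameters: {'colour': ['red'], 'sessionid': ['1234']}
```

Everything after the question mark is the parameter string; the path before it identifies the page itself.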

In many instances, marketers and site owners use canonical tags to tell search engines which copy of a page is the preferred one, or robots.txt files to stop crawlers reading parts of a site, in order to avoid duplicate content or prevent unwanted content from being indexed.
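For illustration, a canonical tag is a single line in the duplicate page's head (the URLs here are placeholders):

```html
<!-- On yoursite.com/folder-two/, point search engines at the preferred copy -->
<link rel="canonical" href="https://yoursite.com/folder-one/" />
```

A robots.txt rule works differently: a line such as `Disallow: /folder-two/` under `User-agent: *` asks crawlers not to fetch that folder at all, rather than consolidating it with another page.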

URL parameter settings are specifically used when sites show the same content at different URLs. A common situation where this occurs is during a customer's journey on a shopping site.

Here, customers might be shown content based on their searches and under their unique session ID. For example, URLs like the following (illustrative) would all show the same page:

yoursite.com/products/shoes
yoursite.com/products/shoes?sessionid=1234
yoursite.com/products/shoes?sessionid=5678

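One way to see why session-ID URLs like these count as duplicates is to normalise them: stripping the session parameter leaves the same base URL every time. A minimal sketch, assuming a hypothetical sessionid parameter and placeholder URLs:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def strip_param(url, param):
    """Return the URL with the given query parameter removed."""
    parts = urlparse(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
    return urlunparse(parts._replace(query=urlencode(query)))

urls = [
    "https://yoursite.com/products/shoes",
    "https://yoursite.com/products/shoes?sessionid=1234",
    "https://yoursite.com/products/shoes?sessionid=5678",
]

# All three collapse to a single URL once the session ID is removed
print({strip_param(u, "sessionid") for u in urls})
```

This is essentially what you tell a search engine when you mark a parameter as ignorable: treat every value of it as the same underlying page.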
Search engines may penalise a site that shows duplicate content without using URL parameter settings, or another signal that designates pages as duplicates.

Google offers a tool, accessed through its Search Console and called, helpfully, URL Parameters. It lets you tell Google whether or not to crawl URLs containing specific parameters on your site. Bing, too, has a feature that allows users to submit URL parameters for its crawler to ignore.

Be careful when implementing these URL parameter settings. Google warns that you should only make changes if you are familiar with how they work, as mistakes can affect which pages are shown in search results. Your marketing agency will be able to advise you on the best course of action.