
    How To Fix “Crawled – Currently Not Indexed” in GSC

    Pages that Google has crawled but chosen not to add to its search index are marked as “Crawled – currently not indexed” in the excluded status column. This implies that the URL won’t show up in search results.

    Although Google doesn’t say explicitly why a page receives this status, we do know that it frequently happens when Google determines that the page isn’t good enough to be indexed.

    Google’s Index Coverage report is extremely useful because it gives SEOs more precise information about Google’s crawling and indexing decisions. At Go Fish Digital, we have been using it nearly every day since its launch to identify technical problems for our clients at scale.

    There are numerous “statuses” in the report that tell webmasters about how Google treats the content of their websites. Although a few of the statuses offer some insight into Google’s judgments regarding crawling and indexation, one is still unclear: “Crawled — currently not indexed.”

    We’ve received inquiries from several site owners asking what the “Crawled — currently not indexed” status means. One advantage of working at an agency is seeing a lot of data: because we’ve seen this message across many accounts, we’ve started to notice patterns in the reported URLs.

    The definition provided by Google

    To begin, let us examine the official definition. Per Google’s official documentation, this status means: “Google crawled the page, but it was not indexed. This URL doesn’t need to be submitted again for crawling; it might or might not be indexed.”

    So, in essence, we know three things:

    1. The page is accessible to Google.
    2. Google took the time to crawl the page.
    3. After crawling it, Google chose not to include it in the index.

    The key to understanding this status is to consider why Google made the “conscious” decision not to index the page. We know Google has no trouble discovering the page, but it appears to believe searchers would not gain anything from finding it.

    This can be very frustrating, because you may not know why your content isn’t being indexed.

    What does this status indicate?

    If a page shows “Crawled – currently not indexed” in the Page indexing report in Google Search Console, Google has crawled it but has decided not to add it to its index.

    This indicates that this page will not appear for any query on Google’s search engine results pages.

    How to Fix Crawled, Currently Not Indexed in GSC

    The most frequent problems that lead to pages being marked as “Crawled – currently not indexed” are listed below, along with potential fixes to get Google to index them.

    The “Excluded by ‘noindex’ tag” issue, which you might also encounter in your Google Search Console Page indexing report, differs slightly from the “Crawled – currently not indexed” error. Both statuses mean the page was crawled but not indexed; the difference is that a noindex exclusion has an explicit cause, while “Crawled – currently not indexed” reflects Google’s own decision.

    Boost internal linking

    If a page on your site lacks internal links or sits in a weak internal link structure, Google may determine that the page isn’t worth indexing. A page with no internal links pointing to it is called an orphan page.

    To fix orphan pages and strengthen the internal linking structure, navigate to an existing page on your site, find a passage related to the page you want Google to index, and add a link to that page there.

    You can find internal linking opportunities by running a site: search on Google for the phrase the orphan page is targeting. The query would look something like this: site:yourdomain.com ‘orphan page target keyword’

    The search results will show pages from your website that already mention the target term and may offer internal linking opportunities.

    Remember that internal linking is essential. It only takes a few minutes to implement, and it signals to Google how relevant the page is.
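
    If you want to check for orphan pages at scale rather than one search at a time, a small script can help. Below is a minimal sketch, not a production crawler: it assumes your XML sitemap lists every page you care about and flags sitemap URLs that receive no internal links from the other sitemap pages (the sitemap location and domain are placeholders).

        # Minimal sketch: flag sitemap URLs that receive no internal links.
        # SITEMAP_URL is a placeholder; point it at your own sitemap.
        import requests
        from bs4 import BeautifulSoup
        from urllib.parse import urljoin, urlparse
        from xml.etree import ElementTree

        SITEMAP_URL = "https://yourdomain.com/sitemap.xml"

        def sitemap_urls(sitemap_url):
            """Return the <loc> URLs listed in a simple sitemap.xml."""
            xml = requests.get(sitemap_url, timeout=10).text
            root = ElementTree.fromstring(xml)
            return [el.text.strip() for el in root.iter() if el.tag.endswith("loc")]

        def internal_links(page_url, domain):
            """Return the set of same-domain links found on a page."""
            html = requests.get(page_url, timeout=10).text
            soup = BeautifulSoup(html, "html.parser")
            links = set()
            for a in soup.find_all("a", href=True):
                href = urljoin(page_url, a["href"]).split("#")[0]
                if urlparse(href).netloc == domain:
                    links.add(href.rstrip("/"))
            return links

        pages = sitemap_urls(SITEMAP_URL)
        domain = urlparse(SITEMAP_URL).netloc
        linked_to = set()
        for page in pages:
            linked_to |= internal_links(page, domain)

        orphans = [p for p in pages if p.rstrip("/") not in linked_to]
        print("Possible orphan pages:", orphans)

    Any URL the script prints is a candidate for the internal-linking fix described above.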

    Inspect the URL

    To inspect the URL, click INSPECT.

    This confirms that the page hasn’t been indexed.

    A little more scrolling reveals the cause: “No referring sitemaps detected.”

    We’ll click TEST LIVE URL to dig deeper. (Because the live test uses fresher data than the indexed result, its outcome may differ from what the report shows.)

    In our instance, we discovered that the page is “not available to Google,” which prevents it from being indexed.

    Additionally, the message “Page cannot be indexed: Excluded by ‘noindex’ tag” is visible.

    A similar notification appears when we scroll down a little further: “‘noindex’ detected in ‘robots’ meta tag.”

    Note: This notice will appear when you click REQUEST INDEXING if Google cannot access your URL.
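
    You can reproduce this part of the live test outside Search Console with a quick check of your own. The sketch below fetches a page (the URL is a placeholder) and looks for a noindex directive in both places Google reads it: the X-Robots-Tag response header and the robots meta tag.

        # Minimal sketch: look for noindex signals on a page.
        # The URL below is a placeholder for the page you are diagnosing.
        import requests
        from bs4 import BeautifulSoup

        url = "https://example.com/blog/sample-post"
        resp = requests.get(url, timeout=10)

        # 1. The X-Robots-Tag response header can carry a noindex directive.
        header = resp.headers.get("X-Robots-Tag", "")

        # 2. So can <meta name="robots" content="noindex"> in the HTML head.
        soup = BeautifulSoup(resp.text, "html.parser")
        meta = soup.find("meta", attrs={"name": "robots"})
        meta_content = meta.get("content", "") if meta else ""

        print("X-Robots-Tag:", header or "(not set)")
        print("robots meta :", meta_content or "(not set)")

        if "noindex" in header.lower() or "noindex" in meta_content.lower():
            print("noindex found: remove it before requesting indexing.")
        else:
            print("No noindex directive detected.")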

    Make any necessary corrections.

    As a result, our page cannot be indexed as it is.

    Two things need to be fixed:

    The /blog/ page needs to be included in the sitemap.

    The noindex tag must be removed from the page.

    Our example website does not use a content management system such as WordPress, so to resolve the sitemap and noindex problems we had to edit the raw code directly.
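
    Because the fixes were made directly in the raw code, it is worth sanity-checking the sitemap yourself before waiting on Google. A minimal sketch, with placeholder URLs for the sitemap and the fixed page:

        # Minimal sketch: confirm a URL is now listed in sitemap.xml.
        # Both URLs are placeholders for your own site.
        import requests
        from xml.etree import ElementTree

        sitemap_url = "https://example.com/sitemap.xml"
        page_url = "https://example.com/blog/"

        xml = requests.get(sitemap_url, timeout=10).text
        root = ElementTree.fromstring(xml)
        listed = {el.text.strip().rstrip("/") for el in root.iter() if el.tag.endswith("loc")}

        if page_url.rstrip("/") in listed:
            print("Sitemap lists the page.")
        else:
            print("Page is still missing from the sitemap.")

    If the page is listed, the “No referring sitemaps detected” note should clear once Google reprocesses the URL.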

    There is usually a lag time after a repair is implemented before Google detects that your page has been updated. Thus, be ready to wait around a day before hitting the TEST LIVE URL again.

    Retest the live URL and request indexing

    • Pick up where you left off and click TEST LIVE URL once more after a day or so.
    • You should receive a notification stating that Google can access the URL.
    • Click REQUEST INDEXING after that.
    • A success notification will confirm that your URL has been “added to a priority crawl queue.”
    • Once Google indexes the page, it will appear in search results (you can also monitor the status programmatically, as sketched below).
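
    If you would rather monitor the status without clicking through Search Console, the URL Inspection API exposes the same coverage state. The sketch below uses the google-api-python-client library and assumes a service account key file (service-account.json, a placeholder name) that has been granted access to the verified property; note that the API only reads the status, it cannot request indexing.

        # Minimal sketch: read a URL's coverage state via the URL Inspection API.
        # service-account.json and both URLs are placeholders.
        from google.oauth2 import service_account
        from googleapiclient.discovery import build

        SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
        creds = service_account.Credentials.from_service_account_file(
            "service-account.json", scopes=SCOPES
        )
        service = build("searchconsole", "v1", credentials=creds)

        body = {
            "inspectionUrl": "https://example.com/blog/sample-post",
            "siteUrl": "https://example.com/",  # must match a verified property
        }
        result = service.urlInspection().index().inspect(body=body).execute()
        status = result["inspectionResult"]["indexStatusResult"]
        print(status.get("coverageState"), "/", status.get("verdict"))

    Once indexing succeeds, coverageState should change from “Crawled - currently not indexed” to an indexed state such as “Submitted and indexed.”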

    Expired products

    A common issue I notice on e-commerce websites, when going through lists of pages with the “Crawled – currently not indexed” status, involves product display pages (PDPs). These pages typically show the product as unavailable or out of stock.

    Googlebot crawls the URL to see whether the product is available; if it isn’t, the URL is listed as “Crawled – currently not indexed.” Given Google’s emphasis on user experience and its desire to keep URLs for unavailable products out of the results, this behavior makes sense.

    If you manage an online store and see “Crawled – currently not indexed” on your product pages, check your stock feed and make sure every product that is actually available is shown as available on the page itself. Then you can explicitly request that Google re-crawl those URLs.
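
    One way to triage this at scale is to cross-reference your stock feed with the URLs exported from the report. The sketch below assumes two CSV files you would export yourself, gsc_crawled_not_indexed.csv with a url column and stock_feed.csv with url and in_stock columns; adjust the file and column names to match your own data.

        # Hypothetical sketch: find in-stock products whose pages Google
        # crawled but did not index. File and column names are assumptions.
        import csv

        def read_rows(path):
            with open(path, newline="", encoding="utf-8") as f:
                return list(csv.DictReader(f))

        excluded = {row["url"].strip() for row in read_rows("gsc_crawled_not_indexed.csv")}

        in_stock_but_excluded = [
            row["url"].strip()
            for row in read_rows("stock_feed.csv")
            if row["in_stock"].strip().lower() == "true" and row["url"].strip() in excluded
        ]

        # These pages are worth re-checking for on-page availability,
        # then re-requesting indexing in Search Console.
        print("\n".join(in_stock_but_excluded))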

    John Mueller of Google provides some helpful advice on handling out-of-stock merchandise.

    301 redirects

    This is a rarer problem, but it can happen, particularly on larger, more authoritative websites where URLs are crawled frequently and quickly.

    The destination URLs of redirected pages occasionally appear in the “Crawled – currently not indexed” report. This has less to do with misconfigured redirects and more to do with how quickly Google crawls your website: sometimes Google has crawled the destination URL but has not yet indexed it. You can check this in the SERPs.

    Creating a temporary sitemap.xml file is a popular solution to this problem. Gather all the URLs from the “Crawled – currently not indexed” report, compare them in Google Sheets or Excel with the configured redirects, build a sitemap from the affected URLs (you can use Screaming Frog for this), and submit it in your Google Search Console dashboard.
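
    If you prefer to script the temporary sitemap rather than use a crawler, the sketch below builds a bare-bones sitemap.xml from a plain list of URLs. The input file urls.txt (one URL per line) and the output filename are assumptions; upload the result to your site and submit it in Search Console.

        # Minimal sketch: build a temporary sitemap.xml from a list of URLs.
        # urls.txt and the output filename are placeholders.
        from xml.etree.ElementTree import Element, SubElement, ElementTree

        SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

        with open("urls.txt", encoding="utf-8") as f:
            urls = [line.strip() for line in f if line.strip()]

        urlset = Element("urlset", xmlns=SITEMAP_NS)
        for url in urls:
            entry = SubElement(urlset, "url")
            SubElement(entry, "loc").text = url

        ElementTree(urlset).write(
            "temporary-sitemap.xml", encoding="utf-8", xml_declaration=True
        )
        print(f"Wrote {len(urls)} URLs to temporary-sitemap.xml")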

    Problem with Site-Wide Quality

    As mentioned earlier, use the URL Inspection Tool to verify the page’s actual status. If the URL Inspection Tool also shows the status as “Crawled – currently not indexed,” then it is not just a reporting glitch.

    In that case, we advise investigating the kinds of URLs that are being reported under this status to learn more about the problem.

    If you see that most of the affected pages are feed pages, archive pages, and other miscellaneous pages with thin content, it makes sense that pages offering little value to users would not be indexed in search results. If that is the case, you can safely disregard this status.

    However, as Google’s John Mueller indicates, if you see any key pages (containing valuable/helpful information) on your website mentioned here, it is probably due to a site-wide quality issue that has prevented your essential pages from being indexed. He also suggests enhancing the website’s general quality and structure.

    Other factors

    False positive

    A false positive occurs when Google Search Console reports a page as excluded, but the URL inspection tool or a live URL test shows that the page is actually indexed.

    To test a live URL:

    1. Enter the URL of your page as the search query on Google.com. For instance, domain.com/your-blog-post;
    2. Next, search for your page URL in the results; if it shows up there, it has been indexed, even if Google Search Console indicates it has been excluded. We refer to this as a false positive.

    There is nothing you need to do in this case because the Search Console is merely reporting an error.

    URLs with pagination

    Pagination is a technique blogs and eCommerce websites use to divide up material and make it easier to navigate. Pages with a number at the end designating the page are known as paginated URLs; an example would be www.myDomain.com/blog/page/2.

    It’s possible that Google won’t index these pages. Whether you need to fix this depends on whether you think paginated URLs belong in the search results: is there anything the paginated URL would genuinely rank for?

    See this SEO guide on pagination or Google’s best practices for pagination.

    RSS feed URLs

    RSS feeds are helpful for sharing content, but these URLs are not meant to be viewed by humans and carry no formatting.

    You should not be concerned if Google crawls your site but does not index your RSS feeds.

    How Can the Site’s General Quality Be Improved?

    In other words, a page (and website) must pass Google’s quality assessments in order to be indexed. Since Google has not revealed the specific criteria it uses for indexing, assess your website against established quality factors such as the following:

    Structure of Internal Links

    If you’re trying to get an important page indexed, make sure it receives internal links from relevant pages on your website. You can use Rank Math to suggest relevant internal links from your key pages, which will help you build them.

    Duplicate Content

    Check whether your website has duplicates of the pages you’re attempting to index. If duplicate pages exist, add a canonical tag on those duplicates pointing to the original content you want indexed.
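
    To confirm the canonical tags are actually in place, a quick check like the sketch below can help; the two URLs are placeholders for your own suspected duplicates.

        # Minimal sketch: report the rel="canonical" target of suspected duplicates.
        # The URLs below are placeholders.
        import requests
        from bs4 import BeautifulSoup

        duplicates = [
            "https://example.com/product?color=red",
            "https://example.com/product?ref=homepage",
        ]

        for url in duplicates:
            html = requests.get(url, timeout=10).text
            soup = BeautifulSoup(html, "html.parser")
            link = soup.find("link", rel="canonical")
            canonical = link.get("href") if link else None
            print(url, "->", canonical or "MISSING canonical")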

    Evaluation of Content

    A website will occasionally contain out-of-date content. A content audit can help you find these low-value pages and make the necessary improvements. When it isn’t appropriate to add further material to a page, consider the following options:

    • Delete the page entirely.
    • Redirect it to a more relevant page.

    Adding the noindex meta tag

    When you add a noindex meta tag, the content remains available to your audience on your website, but search engines are told not to consider it for indexing. According to Google’s John Mueller, only pages intended for indexing are taken into account when Google evaluates site quality.

    Low-quality pages can be gradually removed from the index to raise the overall quality of the site. Nevertheless, site quality is a dynamic concept that takes time to develop. Google takes some time to recognize the signals, reprocess, and reassess the overall quality of your website.

    Written by Aayush
    Writer, editor, and marketing professional with 10 years of experience, Aayush Singh is a digital nomad. With a focus on engaging digital content and SEO campaigns for SMB and enterprise clients, he is the content creator & manager at SERP WIZARD.