Newly Added Website Not Crawled, Identified, or Analyzed

If you've recently added a website to our system and are experiencing issues with crawling, identification, or analysis, this guide will help you pinpoint potential causes and understand the limitations and supported features of Nytro Systems.

System Compatibility

Nytro Systems is engineered to seamlessly analyze a broad spectrum of websites, ensuring comprehensive insights. However, understanding the types of websites we support and the common issues that can disrupt crawling is key to resolving any challenges.

Supported Website Types

Nytro Systems currently supports the following types of websites:

  • Standard HTML-based Websites: Websites where content is served directly from the server in HTML format.

  • Server-Side Rendering (SSR) Websites: Websites that dynamically generate content on the server before sending it to the client as HTML.

  • Standard CMS Platforms: Examples include WordPress, Wix, Squarespace, and other popular content management systems.

Websites Supported Upon Request

There are certain types of websites and technologies that Nytro Systems does not currently support by default. These include:

  1. Client-Side Rendering (CSR) Websites: Websites that rely on JavaScript to render content in the browser. By default, our system cannot crawl or analyze these websites. This includes JavaScript sites that use the app-shell model, where the initial HTML does not contain the actual page content; NytroSEO would need to execute JavaScript before it can see the content that JavaScript generates.
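
For context, below is a minimal, hypothetical app-shell page as a crawler that does not execute JavaScript would receive it. The HTML carries no real content, only an empty mount point and a script reference, so there is nothing for the crawler to read. The file names and title here are placeholders, not taken from any real site.

```html
<!-- Hypothetical app-shell page: the raw HTML served to the crawler. -->
<!-- All visible content is rendered later, in the browser, by bundle.js. -->
<!DOCTYPE html>
<html>
  <head>
    <title>Example Store</title>
  </head>
  <body>
    <div id="root"></div> <!-- empty shell: no crawlable content here -->
    <script src="/static/bundle.js"></script>
  </body>
</html>
```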

Common Issues Preventing Website Crawling

If your website is not being crawled or analyzed, it may be due to one of the following issues:

  • Robots.txt Blocking: If your website's robots.txt file disallows crawling, our system will be unable to access the site. (A quick self-check for this and several of the issues below is sketched after this list.)

  • Crawling Blocked by Firewalls or Security Services: Websites protected by firewalls or services like Cloudflare may block our crawling attempts, preventing analysis.

  • Empty or Linkless Home Page: A home page without content or internal links can hinder our system from initiating the crawling process.

  • Home Page Errors:

    • 404 Not Found: The home page is missing.

    • Server Down or Not Responding: The server hosting the website is unresponsive.

    • 403 Forbidden (Access Denied): The system is blocked from accessing the home page.

    • Other HTTP Errors: Any other HTTP error codes that prevent the home page from loading.

  • Expired Domain: If the domain of the website has expired, crawling is not possible.

  • Single, Non-Canonical Page: Websites with only one page that is not marked as canonical may cause issues with crawling and identification.

  • Website Offline: If your website is unreachable or its hosting server is down, crawling cannot be initiated.

  • Unexpected Failures in Our System:

    • Server Downtime: Temporary unavailability of our system may prevent crawling.

    • Logical Errors in the Crawling Process: Errors within the system's crawling logic might cause failures.
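
Several of the conditions above can be verified from your own machine. The sketch below is a hypothetical self-check, not part of Nytro Systems; it assumes Node 18+ (for the built-in fetch API) and uses a placeholder domain. It looks for a blanket robots.txt disallow, reports the home page's HTTP status, and counts internal links on the home page.

```ts
// Hypothetical crawlability self-check (not Nytro tooling). Node 18+.
const site = "https://example.com"; // replace with your own domain

async function checkSite(): Promise<void> {
  // 1. robots.txt: a blanket "Disallow: /" blocks most crawlers outright.
  const robots = await fetch(`${site}/robots.txt`);
  if (robots.ok) {
    const body = await robots.text();
    if (/^\s*Disallow:\s*\/\s*$/m.test(body)) {
      console.warn("robots.txt contains 'Disallow: /' — crawling may be blocked");
    }
  }

  // 2. Home page status: 404, 403, 5xx, or no response all prevent crawling.
  const home = await fetch(site, { redirect: "follow" });
  console.log(`Home page returned HTTP ${home.status}`);

  // 3. Internal links: a home page with no <a href> links gives a crawler
  //    nothing to discover. (Naive regex check, adequate for a quick test.)
  const html = await home.text();
  const links = html.match(/<a\s[^>]*href=/gi) ?? [];
  console.log(`Found ${links.length} link(s) on the home page`);
}

checkSite().catch((err) => console.error("Site unreachable:", err));
```

You can run a script like this with, for example, `npx tsx check-site.ts`. A 2xx status, no blanket disallow, and at least a few links are a reasonable baseline for crawlability.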

Steps to Resolve Crawling Issues

If you encounter any of the issues listed above, we recommend the following steps:

  1. Check Your Website Configuration: Ensure that your website’s robots.txt file, security settings, and server configuration allow our system to crawl the site (see the robots.txt example after these steps).

  2. Review HTTP Status Codes: Verify that your home page is accessible and not returning error codes that would block crawling.

  3. Ensure Website Availability: Make sure that your website is online and the domain is active.
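
For step 1, a robots.txt file that does not block crawling can be as minimal as the sketch below. This article does not name the exact user-agent of Nytro's crawler, so the example simply allows all crawlers; adjust the rules to your own access policy as needed.

```
# Minimal robots.txt that permits crawling of the whole site.
# An empty Disallow rule means nothing is blocked.
User-agent: *
Disallow:
```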


Contact Support

If you’ve reviewed your settings and the issue still isn’t resolved, please reach out to our support team by clicking the "?" icon on your dashboard or by using the Submit a Ticket link below.

    Submit a Ticket

Related Articles

  • How Website pages are Crawled & Indexed and methods for submitting and indexing

  • Adding the NytroSEO Optimization snippet to your website

  • Adding Keyword search terms to a Website

  • NytroSEO – How many keywords should I target for a website?

  • What are the steps and process for optimizing a website with NytroSEO?