Question: I’m trying to save a website with Web2Disk but I’m only getting the first few pages stored on my hard drive. It doesn’t look like Web2Disk is crawling the entire site. What’s wrong?

Answer: Web2Disk determines which files are on-site and which are off-site based on the website’s domain name. If the site uses multiple domain names interchangeably, this can cause Web2Disk to download only part of the website.

For example, let’s assume that you’re trying to download the site “www.example.com”. If you enter only “example.com” as the Root URL (without the “www.” prefix), Web2Disk will treat pages whose URLs begin with “www.example.com” as off-site and will not download them.

To solve this, change the Root URL to the domain name the site actually uses. If the site uses multiple domain names, you can add Domain Aliases in the Advanced Project Settings (click the ‘Wrench’ button to access these settings). A Domain Alias tells Web2Disk to treat additional domains as on-site.
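
Web2Disk applies this on-site check internally, but if it helps to picture how the Root URL and Domain Aliases interact, here is a minimal sketch of the general idea in Python (the domain names, alias list, and URLs are made-up examples, not values taken from Web2Disk):

    from urllib.parse import urlparse

    # Hypothetical values for illustration only.
    ROOT_DOMAIN = "www.example.com"
    DOMAIN_ALIASES = {"example.com", "cdn.example.com"}

    def is_on_site(url: str) -> bool:
        """Treat a URL as on-site if its host matches the root domain or an alias."""
        host = (urlparse(url).hostname or "").lower()
        return host in {ROOT_DOMAIN, *DOMAIN_ALIASES}

    print(is_on_site("http://www.example.com/about.html"))  # True
    print(is_on_site("http://example.com/contact.html"))    # True, via alias
    print(is_on_site("http://www.othersite.com/"))           # False, off-site

Without the alias entries, only URLs on “www.example.com” would pass the check, which mirrors how an incomplete Root URL or a missing alias causes pages to be skipped.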

Additional Information:
Some websites stop returning content if they detect that a utility like Web2Disk is crawling the site. There are two options you can adjust to try to remedy this problem. Click the ‘Wrench’ icon to access the ‘Advanced Project Settings’. In that menu, first try increasing the ‘Crawler Delay’; this forces Web2Disk to crawl the site more slowly, pausing before it downloads each file.
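
As a rough illustration of what a crawl delay does, the following Python sketch pauses before each download (the delay value and URLs are illustrative assumptions; Web2Disk handles this internally through the ‘Crawler Delay’ setting):

    import time
    import urllib.request

    CRAWLER_DELAY_SECONDS = 2.0  # hypothetical delay between requests

    urls = [
        "http://www.example.com/",
        "http://www.example.com/about.html",
    ]

    for url in urls:
        time.sleep(CRAWLER_DELAY_SECONDS)  # wait before downloading each file
        with urllib.request.urlopen(url) as response:
            content = response.read()
        print(f"Downloaded {len(content)} bytes from {url}")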

If that fails, try changing the ‘User Agent’ from ‘Inspyder’ to ‘Internet Explorer’ or ‘FireFox’. The User Agent string is how Web2Disk identifies itself to the remote server.
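
To give a sense of what changing the User Agent means at the HTTP level, this sketch sends a browser-like User-Agent header with a request (the header string and URL are illustrative assumptions, not the exact strings Web2Disk sends):

    import urllib.request

    # Hypothetical browser-style User-Agent string.
    BROWSER_USER_AGENT = (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:115.0) "
        "Gecko/20100101 Firefox/115.0"
    )

    request = urllib.request.Request(
        "http://www.example.com/",
        headers={"User-Agent": BROWSER_USER_AGENT},
    )
    with urllib.request.urlopen(request) as response:
        print(response.status, response.headers.get("Content-Type"))

A server that blocks unfamiliar crawlers may respond normally once the request identifies itself as a common browser.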