We have a test question to see if you read this fully.
We need to download about 10,000 website archives from the Wayback Machine ([login to view URL]). The data needs to be stored on a local web server for future data mining. We are NOT looking for you to do the data processing. We need to set up a repeatable process that we can re-run as needed.
There are many products and frameworks to choose from. This should be mostly a framework selection and configuration project, with some coding to wrap the chosen framework. We will provide the target website URLs in a UTF-8 CSV file. The process should run in batch until it is complete.
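For bidders wondering about scope, the core loop is roughly: read URLs from the UTF-8 CSV, then hand each one to a mirroring tool in batch. A minimal sketch in Python, assuming the first CSV column holds the URL and using wget as a stand-in for whatever framework is selected (the wget flags shown are a common starting point, not our final spec):

```python
import csv


def load_urls(csv_path):
    # Read target URLs from a UTF-8 CSV; the first column is assumed
    # to hold the URL (an assumption, not part of the actual brief).
    with open(csv_path, newline="", encoding="utf-8") as f:
        return [row[0].strip() for row in csv.reader(f) if row and row[0].strip()]


def wget_command(url, out_dir="archive"):
    # Build a wget mirror command for one site. These flags fetch a
    # browsable local copy; the real project would tune or replace them.
    return [
        "wget", "--mirror", "--convert-links", "--adjust-extension",
        "--page-requisites", "--no-parent",
        "-P", out_dir, url,
    ]


if __name__ == "__main__":
    import subprocess
    for url in load_urls("targets.csv"):  # hypothetical file name
        subprocess.run(wget_command(url))
```

The actual deliverable would add retries, rate limiting, and resume support so the batch can be re-run until complete.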
Here is a sample URL: [login to view URL]://[login to view URL]
To show you have read our requirement fully, put your favorite pizza topping as the first word in your bid.
Tell us about your SPECIFIC EXPERIENCE in crawling, scraping, and downloading websites. If you post a bunch of unrelated experience and expect us to be impressed, guess again. If you chat us without following directions, we will mark your bid as spam, delete it, and block you.
If you have done this type of work before, verify that you can download our example site. Chat us about it and you will have a high chance of getting this project. Actions over words!
Since we are looking for someone who already knows how to do this, we think about $300 is a good budget.
We are an awesome employer: 4.9 rating across 300+ projects. We are always looking for exceptional freelancers that we can work with over and over. Is that you?