I need a web scraper written for the following URL:
[login to view URL]
The latest [login to view URL] file within that directory will need to be downloaded.
All information needed is available on the main page. The number of rows will vary.
The output should be a pipe (|) delimited file with the following column mappings:
origin_city --> data located in column "A"
origin_state --> data located in column "B"
ship_date --> data located in column "K", converted to the YYYY-MM-DD format;
if the date listed is in the past, use the current day's date instead, also in the YYYY-MM-DD format
destination_city --> data located in column "E"
destination_state --> data located in column "F"
receive_date --> leave blank
trailer_type --> data located in column "I"
load_size --> data located in column "O"; if the data is "TL", output the text "Full"; if it is "LTL", output the text "Partial"
weight --> data located in column "Q", if no data leave blank
length --> leave blank
width --> leave blank
height --> leave blank
trip_miles --> leave blank
pay_rate --> leave blank
contact_phone --> leave blank
contact_name --> leave blank
tarp_required --> leave blank
comment --> leave blank
load_number --> leave blank
commodity --> leave blank
The first line of the output should contain all of the column headers.
Any field that contains no data should be left blank.
Please do not write words like "null" or "blank" into empty columns.
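To make the two non-trivial transformations concrete (the ship_date normalization and the TL/LTL mapping), here is a minimal Perl sketch. The '%m/%d/%Y' input date format is an assumption, since the source file's actual layout is not shown here; the final deliverable would use Modern::Perl as required, while this sketch sticks to core modules:

```perl
use strict;
use warnings;
use Time::Piece;

# Column headers, in the order listed above.
my @headers = qw(
    origin_city origin_state ship_date destination_city destination_state
    receive_date trailer_type load_size weight length width height
    trip_miles pay_rate contact_phone contact_name tarp_required
    comment load_number commodity
);

# Normalize a ship date to YYYY-MM-DD; past dates become today's date.
# NOTE: the '%m/%d/%Y' input format is an assumption about the source data.
sub normalize_ship_date {
    my ($raw, $today) = @_;
    $today //= localtime;    # Time::Piece object
    my $date = eval { Time::Piece->strptime($raw // '', '%m/%d/%Y') };
    return $today->ymd unless $date;    # unparseable: fall back to today
    return $date < $today->truncate(to => 'day') ? $today->ymd : $date->ymd;
}

# Map the column-O value: TL -> Full, LTL -> Partial, anything else blank.
sub map_load_size {
    my ($raw) = @_;
    return 'Full'    if ($raw // '') eq 'TL';
    return 'Partial' if ($raw // '') eq 'LTL';
    return '';
}

# Emit the header row, pipe-delimited; blank fields stay empty strings.
print join('|', @headers), "\n";
```

Each data row would then be built the same way, with `join('|', @fields)` and empty strings for the fields the spec leaves blank.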
Below is a sample output of the first 5 columns using sample data:
The deliverable will be a Perl .pl file that must run on
Ubuntu Linux and must use Modern::Perl. The Perl .pl file
should be called '[login to view URL]' and the output file should be
called '[login to view URL]'.
It will be scheduled in cron to run unattended every 15 minutes.
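For reference, a crontab entry along these lines would run the script every 15 minutes; the script and log paths below are placeholders, since the real filename is given only in the URL above:

```cron
# Run the scraper every 15 minutes; paths are hypothetical placeholders.
*/15 * * * * /usr/bin/perl /path/to/scraper.pl >> /path/to/scraper.log 2>&1
```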
Please specify which language, OS, and modules you plan to use.
Also, please include the word "raccoon" in your bid so I know that
you read this description.