I need a web scraper written for the .xls file in the following directory:
[login to view URL]
The latest .xls file within that directory will need to be downloaded.
The file name changes daily, so the script must identify the most recently added file with a .xls extension.
All information needed is available on the main page. The number of rows will vary.
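As a sketch of the download step (written in Python for brevity; the actual deliverable must be the Perl script described below), one way to pick the newest .xls link out of a directory listing. The URL is a placeholder, since the real directory is behind the posting's "[login to view URL]" link, and the oldest-to-newest ordering is an assumption to verify against the live page:

```python
import re
import urllib.request

# Placeholder URL -- the real directory is behind the posting's
# "[login to view URL]" link.
BASE_URL = "https://example.com/loads/"

def latest_xls_url(index_html: str, base_url: str) -> str:
    """Return the URL of the newest .xls link in a directory listing.

    Assumes the listing is ordered oldest-to-newest; if the server
    sorts differently, compare Last-Modified headers instead.
    """
    links = re.findall(r'href="([^"]+\.xls)"', index_html)
    if not links:
        raise ValueError("no .xls files found in listing")
    return base_url + links[-1]

def download_latest(base_url: str) -> bytes:
    """Fetch the directory page, then fetch the newest .xls file."""
    with urllib.request.urlopen(base_url) as resp:
        index_html = resp.read().decode("utf-8", errors="replace")
    with urllib.request.urlopen(latest_xls_url(index_html, base_url)) as resp:
        return resp.read()
```

In Perl, the equivalent would typically use WWW::Mechanize to fetch the listing and select the last matching `href` ending in `.xls`.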
The output should be a pipe (|) delimited file with the following column mappings:
origin_city --> text in the "Where We'll Be" column before the comma (column C)
origin_state --> state abbreviation in the "Where We'll Be" column after the comma (column C)
ship_date --> data in the "Date" column, reformatted to YYYY-MM-DD (column E)
destination_city --> leave blank
destination_state --> leave blank
receive_date --> leave blank
trailer_type --> data located in the "Type" column (column A)
load_size --> leave blank
weight --> leave blank
length --> leave blank
width --> leave blank
height --> leave blank
trip_miles --> leave blank
pay_rate --> leave blank
contact_phone --> leave blank
contact_name --> data located in the "Disp." column (column B)
tarp_required --> leave blank
comment --> data located in the "Ideal Spot" column and data located in the "Driver" column (column D & F)
load_number --> leave blank
commodity --> leave blank
The first line of the output should contain all of the column headers.
Any field that contains no data should be left truly empty.
Please do not write placeholder words like "null" or "blank" in empty columns.
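The mapping above can be sketched as a small per-row transform. This is Python for illustration only (the deliverable must be Perl with Modern::Perl); the source-column keys and the MM/DD/YYYY input date format are assumptions to check against the real spreadsheet:

```python
from datetime import datetime

# Output columns, in the order given in the spec.
HEADERS = [
    "origin_city", "origin_state", "ship_date", "destination_city",
    "destination_state", "receive_date", "trailer_type", "load_size",
    "weight", "length", "width", "height", "trip_miles", "pay_rate",
    "contact_phone", "contact_name", "tarp_required", "comment",
    "load_number", "commodity",
]

def map_row(row: dict) -> str:
    """Map one scraped row (keyed by source column name) to a pipe-delimited line.

    The keys ("Where We'll Be", "Date", "Type", "Disp.", "Ideal Spot",
    "Driver") mirror the source columns; the MM/DD/YYYY input format is
    an assumption -- adjust to whatever the actual file uses.
    """
    city, _, state = row["Where We'll Be"].partition(",")
    ship_date = datetime.strptime(row["Date"].strip(), "%m/%d/%Y").strftime("%Y-%m-%d")
    out = dict.fromkeys(HEADERS, "")  # unmapped fields stay truly empty
    out["origin_city"] = city.strip()
    out["origin_state"] = state.strip()
    out["ship_date"] = ship_date
    out["trailer_type"] = row["Type"].strip()
    out["contact_name"] = row["Disp."].strip()
    out["comment"] = " ".join(
        p for p in (row["Ideal Spot"].strip(), row["Driver"].strip()) if p
    )
    return "|".join(out[h] for h in HEADERS)

def header_line() -> str:
    """First line of the output file: all column headers."""
    return "|".join(HEADERS)
```

Note that `dict.fromkeys(HEADERS, "")` guarantees the blank columns are emitted as genuinely empty fields, satisfying the no-"null"/no-"blank" rule.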
Below is a sample output of the first 5 columns using sample data:
The deliverable will be a Perl .pl file that must run on
Ubuntu Linux and must use Modern::Perl. The Perl .pl file
should be called '[login to view URL]' and the output file should be
called '[login to view URL]'
It will be scheduled in cron to run unattended every 15 minutes.
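A minimal crontab entry for the every-15-minutes schedule might look like the following. The paths and file names here are placeholders, since the real script and output names are behind the posting's "[login to view URL]" links:

```
# Hypothetical paths -- substitute the actual script/output names from the posting.
*/15 * * * * cd /home/user/scraper && /usr/bin/perl scraper.pl > /dev/null 2>&1
```

Redirecting stdout and stderr keeps cron from mailing output on every run; for unattended use, logging errors to a file instead of /dev/null may be preferable.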
Please specify what language/OS/modules you plan to use.
Also, please include the word "raccoon" in your bid so I know that
you read this description.
I can provide the same kind of Perl script as before, using WWW::Mechanize, HTML::TreeBuilder::LibXML, etc. It will get the information from [login to view URL] and save it into a pipe-separated file.
12 freelancers are bidding an average of $226 for this job
raccoon Hi, I would use Python and Windows for this scraping work. I have experience with scraping; we can discuss the details in chat. It's not a big deal. Waiting for your response. Cheers!
Hello, I have experience in web scraping with Python, using Requests, Selenium, BeautifulSoup, and Scrapy. I can help you with your project professionally. Let's work together!