I currently have a website that is generated via PHP code. The code generates/rewrites URLs that are human readable and search engine friendly. I want someone to download/scrape the site so that all pages are saved as static HTML pages (I want to get rid of the aging PHP code). I want to retain ALL existing URLs, though, so that backlinks to my site are not lost.
For example, the PHP code may generate/rewrite the URL as follows:
PHP: [url removed, login to view]
URL rewritten: [url removed, login to view]
I want each such page saved as HTML in the matching relative directory. The best way, I think, is to scrape the site. I DON'T want this done manually, as I want everything to be precise. You can figure out how to retain CSS files, etc.
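To give an idea of what I mean, something along these lines would be fine. This is only a rough Python sketch: the domain, start page, and output folder are placeholders (the real URLs were removed from this posting), and the bidder is free to use wget, HTTrack, or their own tooling instead, as long as the URL paths are preserved.

```python
# Rough sketch only -- START_URL and OUT_DIR are placeholders, not the real site.
import os
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START_URL = "https://www.example.com/"   # placeholder for the real site
OUT_DIR = "site_export"                  # where the static copy gets written


def local_path(url):
    """Map a rewritten URL like /widgets/blue-widget to widgets/blue-widget/index.html."""
    path = urlparse(url).path.strip("/")
    if not path:
        return os.path.join(OUT_DIR, "index.html")
    if path.endswith((".html", ".css", ".js", ".png", ".jpg", ".gif")):
        return os.path.join(OUT_DIR, path)          # keep assets/pages at their own paths
    return os.path.join(OUT_DIR, path, "index.html")  # keep the pretty URL as a directory


def save(url, content):
    dest = local_path(url)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with open(dest, "wb") as f:
        f.write(content)
    return dest


def crawl(start_url):
    domain = urlparse(start_url).netloc
    queue, seen = [start_url], set()
    while queue:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        resp = requests.get(url, timeout=30)
        if resp.status_code != 200:
            continue
        save(url, resp.content)
        if "text/html" not in resp.headers.get("Content-Type", ""):
            continue  # CSS/images get saved above but not parsed for links
        soup = BeautifulSoup(resp.text, "html.parser")
        # follow internal links and pull in stylesheets/images so the pages still render
        for tag, attr in (("a", "href"), ("link", "href"), ("img", "src"), ("script", "src")):
            for node in soup.find_all(tag):
                target = node.get(attr)
                if not target:
                    continue
                absolute = urljoin(url, target).split("#")[0]
                if urlparse(absolute).netloc == domain:
                    queue.append(absolute)


if __name__ == "__main__":
    crawl(START_URL)
```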
At the end, I would like a spreadsheet listing the HTML files created and the URLs they correspond to. There are approximately 1800 pages.
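Again just a sketch of what I'm after for the spreadsheet: walk the exported folder and write a URL-to-file listing (CSV is fine, it opens in Excel). The folder and domain names here are the same placeholders as above.

```python
# Sketch: build the URL / HTML-file inventory from the exported folder.
import csv
import os

OUT_DIR = "site_export"                   # placeholder export folder from the crawl sketch
SITE_ROOT = "https://www.example.com"     # placeholder domain

with open("page_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["URL", "HTML file"])
    for dirpath, _, filenames in os.walk(OUT_DIR):
        for name in filenames:
            if not name.endswith(".html"):
                continue
            file_path = os.path.join(dirpath, name)
            rel = os.path.relpath(file_path, OUT_DIR)
            # reverse the save mapping: widgets/blue-widget/index.html -> /widgets/blue-widget/
            url_path = "/" + rel.replace(os.sep, "/")
            if url_path.endswith("/index.html"):
                url_path = url_path[: -len("index.html")]
            writer.writerow([SITE_ROOT + url_path, file_path])
```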