
Closed
Posted
Paid on delivery
Retrieve fields: Owner, Last Known Address, City, State, Zip Code, Reported By, Amount, Property ID

Process (runs every 24 hours at 5 PM):
- Navigate to: [login to view URL]
- Change the display to 80 results per page
- Clear the daily PostgreSQL table
- Loop through AA to ZZ as the last name
- Collect all results, navigating through every page
- Write results to the master and daily PostgreSQL tables; if a Property ID already exists, ignore it
- Solve any captchas and rotate proxies as necessary
- Anti-join: find master rows that are not in the daily table
- Set Claimed in master to today's date where Claimed is not already set and the Property ID is not in the daily table
- Join daily and master on Property ID and check for any fields that differ between the two
- Write exceptions into the difference table with found_date set to today

Database structure:
- unclaimed_daily: owner, last_known_address, city, state, zip_code, reported_by, amount, property_id
- unclaimed_master: owner, last_known_address, city, state, zip_code, reported_by, amount, property_id, claimed
- unclaimed_difference: master_owner, master_last_known_address, master_city, master_state, master_zip_code, master_reported_by, master_amount, master_property_id, daily_owner, daily_last_known_address, daily_city, daily_state, daily_zip_code, daily_reported_by, daily_amount, daily_property_id, found_date
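The "mark as claimed" anti-join step above can be sketched as follows. This is a minimal illustration only: it uses SQLite's in-memory database in place of PostgreSQL, a reduced column set, and made-up rows, so it is a sketch of the logic rather than the production implementation.

```python
import sqlite3
from datetime import date

# Sketch of the claimed-update anti-join: any master row whose
# property_id no longer appears in the daily scrape, and which has no
# claimed date yet, gets claimed set to today.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE unclaimed_master (property_id TEXT PRIMARY KEY, owner TEXT, claimed TEXT)")
cur.execute("CREATE TABLE unclaimed_daily (property_id TEXT PRIMARY KEY, owner TEXT)")

# Master holds three properties; today's scrape only returned one of them.
cur.executemany("INSERT INTO unclaimed_master VALUES (?, ?, ?)",
                [("P1", "Alice", None), ("P2", "Bob", None), ("P3", "Carol", "2024-01-01")])
cur.execute("INSERT INTO unclaimed_daily VALUES ('P1', 'Alice')")

today = date.today().isoformat()
cur.execute("""
    UPDATE unclaimed_master
    SET claimed = ?
    WHERE claimed IS NULL
      AND property_id NOT IN (SELECT property_id FROM unclaimed_daily)
""", (today,))
conn.commit()

rows = dict(cur.execute("SELECT property_id, claimed FROM unclaimed_master"))
# P1 is still listed (claimed stays NULL), P2 disappeared (claimed today),
# P3 keeps its earlier claimed date.
print(rows)
```

The same `NOT IN` (or `NOT EXISTS`) anti-join runs unchanged on PostgreSQL.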
Project ID: 40250782
14 proposals
Remote project
Active 9 days ago
14 freelancers are bidding an average of ₹7,332 INR for this job

I read your project requirements and would be thrilled to collaborate with you. With expertise in web scraping and data extraction using Python, I specialize in navigating complex data structures and delivering efficient, scalable solutions. Let's connect to discuss further.
₹10,000 INR in 2 days
4.0

I understand that the project involves retrieving unclaimed property data from the Maryland Comptroller’s website, specifically focusing on fields like owner, last known address, and property ID. The process requires daily automation to
₹1,650 INR in 7 days
2.0

✅ I will build a Python-based automated scraper with proxy rotation, CAPTCHA handling, and full A–Z traversal. ✅ Implement PostgreSQL workflows: daily refresh, master deduplication, anti-join, and claimed status updates. ✅ Create robust comparison logic to log differences into exception tables with timestamps. ✅ Schedule via cron (5 PM daily) with reliable, scalable, and well-documented execution.
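The 5 PM daily schedule this bid mentions is typically a single crontab entry; the script path and log location below are illustrative assumptions, not part of the posting.

```cron
# Run the scraper every day at 17:00 local time (paths are illustrative).
0 17 * * * /usr/bin/python3 /opt/unclaimed/scraper.py >> /var/log/unclaimed.log 2>&1
```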
₹8,500 INR in 2 days
2.1

With an ample skill set that includes Python, Data Extraction, and Web Scraping, I am confident that I can meet your needs for retrieving Maryland unclaimed property data. For starters, I am experienced in performing daily automated data extractions from websites using Python. By drawing on my extensive web scraping experience, I can guarantee the efficient collection and organization of the required fields such as Owner, Last Known Address, City, State, Zip Code, Reported By, Amount, and Property ID, crucial details that are central to your project. Lastly, what sets me apart is my unfaltering commitment to accuracy and attention to detail. With 100% client satisfaction as my priority, I offer reliable communication and fast turnaround while uncovering new insights in the given data. I look forward to demonstrating how my skills and familiarity with your requested technologies can positively impact your project!
₹5,000 INR in 7 days
1.0

Hello,
Most scrapers break. Scalable data systems don't. You need automation that runs daily, stays clean, and updates your database safely. Your goal is clear:
- Automated AA-ZZ collection
- Daily → Master sync
- Smart claimed updates
- Field-level difference tracking
- Captcha + proxy handling
This isn't scraping; it's structured data engineering. I build:
- Reliable bot-resistant scrapers
- Clean PostgreSQL pipelines
- Safe deduplication + anti-join logic
- Automated scheduled systems with logging
My approach: design → build modular → validate → automate → monitor. You'll get a stable, self-running system, not a fragile script.
Andrew
₹10,000 INR in 7 days
0.0

Hey — saw your post about automating the Maryland unclaimed property data scrape and syncing it into Postgres every day. The tricky part here is keeping the master table perfectly in sync despite captchas, pagination, and field changes over time. Quick question before I suggest an approach: Do you already have a server/environment in place to run this daily at 5pm (cron, Docker, etc.), or would you want me to set that up as well? I’ve built similar scheduled scrapers with anti-captcha, proxy rotation, and Postgres diff tables, so the flow you outlined (daily load, master sync, differences table) is very familiar. If you can send your current database connection details or any existing scripts you’ve started, I’ll review and tell you exactly how I’d wire this up end-to-end.
₹10,000 INR in 7 days
0.0

Hello, I am a professional Web Developer with over 15 years of hands-on experience in website and web application development. Over the years, I have successfully delivered high-quality solutions for small businesses, startups, and enterprise-level clients.
My Core Skills:
- Custom Website Development
- PHP, MySQL, HTML, CSS, JavaScript
- WordPress (Theme & Plugin Development)
- Laravel / Core PHP Applications
- Responsive & Mobile-Friendly Design
- Website Speed Optimization
- API Integration
- Website Maintenance & Bug Fixing
₹7,000 INR in 3 days
0.0

Hello, I can develop a reliable Python-based scraping and ETL solution to automate the Maryland Unclaimed Property data process. The script will run daily at 5 PM, navigate through the official portal, set display to 80 records, loop from AA to ZZ by last name, paginate through all results, and store data into PostgreSQL daily and master tables. I will implement proxy rotation and CAPTCHA handling as required. The system will ignore existing property IDs, perform anti-join logic to update claimed records, compare daily vs master data, and log differences into the exception table with the current date. I have strong experience in Python, PostgreSQL, automation, and large-scale data extraction workflows. Ready to start immediately.
₹2,000 INR in 2 days
0.0

Hi, I can build this Maryland unclaimed property scraper exactly as described. My approach: 1) Navigate to the Maryland Comptroller unclaimed property search page. 2) Set display to 80 results per page. 3) Loop through AA to ZZ as last name search. 4) For each search, paginate through all result pages, extract all 8 fields (Owner, Last Known Address, City, State, Zip Code, Reported By, Amount, Property ID). 5) Store results in PostgreSQL with two tables: unclaimed_master (daily snapshot) and unclaimed_difference (changes detected). 6) Schedule via cron to run daily at 5pm. Tech stack: Python plus Selenium for scraping, psycopg2 for PostgreSQL, cron for scheduling. I specialize in web scraping and data extraction. I can deliver a working script within 7 days, including documentation. Happy to discuss details.
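The AA-to-ZZ last-name loop in step 3 of this bid can be sketched in a few lines; each two-letter prefix would become one search term submitted to the site (the surrounding scrape-and-paginate logic is omitted here).

```python
from itertools import product
from string import ascii_uppercase

# Generate every two-letter last-name prefix: AA, AB, ..., ZZ.
search_terms = ["".join(pair) for pair in product(ascii_uppercase, repeat=2)]

print(search_terms[0], search_terms[-1], len(search_terms))  # AA ZZ 676
```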
₹5,000 INR in 7 days
0.0

Hi, I can build this automation and database workflow cleanly. I'll set up the daily run, collect the fields, store them in the daily and master tables, and handle duplicates, comparisons, and difference tracking so the data stays reliable and structured. Ready to start and set this up step by step with you.
₹12,500 INR in 1 day
0.0

Your workflow is clear and well-structured, and I can build a reliable automation that collects the Maryland unclaimed-property data and maintains your PostgreSQL tables exactly as specified. The script will run daily at 5:00 PM, clear the unclaimed_daily table, loop through AA-ZZ last-name searches, paginate through all results, and store records in both the daily and master tables while ignoring existing Property IDs. I'll implement stable scraping with proxy rotation and captcha handling so the process runs consistently without manual intervention. After each run, the system will:
- Perform the anti-join to mark records as claimed in unclaimed_master
- Compare daily vs master and log field changes into unclaimed_difference
- Maintain clean and consistent PostgreSQL updates
- Produce structured, traceable results
The automation will be designed to run unattended (cron or scheduler), with clear logging so you always know the status of each run. I've built similar structured scraping pipelines with PostgreSQL syncing and exception tracking, so accuracy and stability will be the priority. I can start immediately and deliver a working version quickly for testing.
₹7,000 INR in 3 days
0.0

Maryland’s property portal is a fortress of dynamic roadblocks; you don't need a simple script, you need a high-integrity data pipeline. I specialize in building "Invisible Scrapers" designed to treat captchas and rotating proxies as a handshake rather than a wall. I will architect your system using Python (Playwright) and PostgreSQL to handle the exhaustive AA-ZZ iteration with surgical precision. The complexity here isn't just the scrape—it’s the database logic. I’ll implement the Anti-join and Difference-tracking architecture you’ve outlined to ensure your Master table acts as a "single source of truth," accurately flagging claimed properties and field variations without data loss. I am currently scaling my profile reputation, which means your project receives my absolute focus and production-grade engineering at a starter rate. I treat every row of data as mission-critical because I am determined to earn your 5-star review. I can have the core extraction engine and PostgreSQL schema ready for a 5 PM test run by 30/02. Shall we discuss your preferred proxy provider to ensure zero-detection during the 80-result pagination? Best regards, Kalana
₹12,500 INR in 7 days
0.0

I will build your automated Python/Selenium ETL pipeline to scrape the Maryland unclaimed property portal daily at 5 PM, rotating proxies/captchas and executing the exact PostgreSQL anti-joins to update your master and difference tables. As a Data Analytics specialist, I build robust data pipelines and write complex SQL daily. I love how clearly you have scoped this project. I am completely comfortable setting up the AA to ZZ search loop, handling pagination, and writing the specific PostgreSQL logic to populate the unclaimed_daily table, check for existing IDs in unclaimed_master, and log any updates to the unclaimed_difference table with the found_date. I will ensure the script is fully scheduled (via CRON or Windows Task Scheduler) to run autonomously every 24 hours. Do you already have a preferred proxy provider and captcha-solving API (like 2Captcha or Anti-Captcha) you want me to integrate, or would you like me to set those up for you as part of the script?
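The field-level difference logging this bid describes (join daily and master on Property ID, record any changed fields with found_date) can be sketched as below. The dict-based rows and the helper name diff_rows are illustrative assumptions; in production the comparison would run over the joined PostgreSQL rows, and this assumes the difference table holds master_* and daily_* column pairs.

```python
from datetime import date

FIELDS = ["owner", "last_known_address", "city", "state",
          "zip_code", "reported_by", "amount"]

def diff_rows(master_row, daily_row):
    """Return a difference-table record if any field differs between the
    master and daily versions of the same property, else None."""
    if all(master_row[f] == daily_row[f] for f in FIELDS):
        return None
    record = {f"master_{f}": master_row[f] for f in FIELDS}
    record.update({f"daily_{f}": daily_row[f] for f in FIELDS})
    record["master_property_id"] = master_row["property_id"]
    record["daily_property_id"] = daily_row["property_id"]
    record["found_date"] = date.today().isoformat()
    return record

# Example: the amount changed between runs for property P1 (made-up data).
master = {"property_id": "P1", "owner": "Alice", "last_known_address": "1 Main St",
          "city": "Baltimore", "state": "MD", "zip_code": "21201",
          "reported_by": "Acme Bank", "amount": "100.00"}
daily = dict(master, amount="150.00")
print(diff_rows(master, daily)["daily_amount"])  # 150.00
```

Identical rows produce no record, so only genuine changes land in the difference table.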
₹7,500 INR in 5 days
0.0

Delhi, India
Payment method verified
Member since Nov 29, 2023