SeleniumJobs
This is an ongoing project, and it will most likely turn into a long-term business relationship if things play out accordingly. Must have: Python, Selenium/Scrapy, Django/Flask, HTML/CSS/Bootstrap, JS, MySQL, Postgres, SQLite, Redis, MongoDB, AWS, Heroku, Git + ZenHub, TDD, SOLID. Nice to have: -React/Redux -CI/CD -GraphQL -Elasticsearch. Please show your past experience in your proposal. PS: state what you are passionate about in freelance work, in fewer than 10 words. All details will be provided to the few selected via DM. Best Regards!!
...nationality, position, current club, height/weight (where listed), and any notable career highlights that appear on the player’s own site. Because these pages vary in structure, the code should be resilient: graceful error handling, user-agent rotation, and clear selectors or XPath rules that are easy for me to extend later. I’m comfortable running Python, so libraries like Requests, BeautifulSoup, Selenium, or Scrapy are welcome; please choose the stack that gives the best balance of speed and maintainability. Deliverable • A runnable script (with a brief README) • The resulting CSV generated from a short test run (5–10 players is fine for proof) • Comments in the code explaining each major step Acceptance • Script executes from the c...
...Entrance, Plate, Drink, Dessert, Coffee) but it can be only 2 stages out of 5 (example: Plate+Drink) Please also tag each record with its district and municipality so the file can be filtered regionally. Deliverables 1. A single CSV or Excel file containing one row per restaurant with all fields above clearly labelled. 2. The script or notebook you use (Python with BeautifulSoup, Scrapy, Selenium, or any other tool you prefer) so I can rerun the scrape later. Acceptance criteria • No duplicate restaurants. • All mandatory columns populated where data exists on Google. • At least 95 % of entries correctly classified for fixed-price menu status and pricing. Keep the approach respectful of Google’s terms of service. If you need to enrich the d...
I want an automation script running on Windows that can: • Open a TradingView chart that I specify • Take high-quality screenshots (image format) at regular intervals, or later on whenever conditions I define are met • Send those screenshots straight to my WhatsApp account. Feel free to choose the most efficient programming language or tool; Python + Selenium, Playwright, or an equivalent option is entirely fine as long as it is stable and easy for me to rerun. What I need as the final result: 1. A ready-to-use script file plus supporting libraries 2. Step-by-step installation instructions for Windows 3. How to add or change the interval or condition triggers later 4. A short guide to connecting the script to WhatsApp (official API...
...(Customer): • Android (Merchants): • iOS (Customer): Scope of work I need reliable automation that covers UI changes, UX-flow integrity and server-side performance. You are free to choose the stack—popular options such as Selenium + Appium for functional flows and JMeter, k6 or Locust for load tests are welcome—but explain your reasoning and keep the solution maintainable by our in-house developers after hand-over. Deliverables 1. End-to-end functional scripts that launch on Web, iOS and Android from a single command or CI trigger. 2. Visual regression setup (baseline screenshots and diff reporting). 3. Load-test
I need a high-speed automation tool for managing B2B leads. Requirements: - Detect new leads in real-time (within seconds) - Filter leads by: • Keywords (pharma products like ivermectin) • Buyer location (international preferred) • Quantity (bulk inquiries) - Perform instant automated actions on matching leads - Ensure very fast execution (within seconds of lead arrival) - Optional: automated response/message support Technical: - Python / Selenium / browser automation experience required Important: - The system should be efficient, fast, and reliable - Must simulate normal user behavior Please share your appro...
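As a sketch of how the matching step might look, here is a minimal filter function; the lead structure and the field names ("text", "country", "quantity") are assumptions for illustration, not from the posting:

```python
# Hypothetical filtering rules for the matching step; field names and
# thresholds are assumptions, to be replaced with the real lead schema.
KEYWORDS = {"ivermectin"}      # pharma product keywords to watch for
MIN_BULK_QTY = 1000            # treat inquiries at or above this as bulk
HOME_COUNTRY = "IN"            # example home market to de-prioritise

def is_match(lead: dict) -> bool:
    """True when a lead passes the keyword, location and quantity filters."""
    text = lead.get("text", "").lower()
    keyword_hit = any(kw in text for kw in KEYWORDS)
    international = lead.get("country") != HOME_COUNTRY
    bulk = lead.get("quantity", 0) >= MIN_BULK_QTY
    return keyword_hit and international and bulk

leads = [
    {"text": "Need ivermectin 12mg, urgent", "country": "US", "quantity": 5000},
    {"text": "Need paracetamol", "country": "US", "quantity": 5000},
    {"text": "Ivermectin sample", "country": "IN", "quantity": 10},
]
matches = [lead for lead in leads if is_match(lead)]
print(len(matches))  # only the first lead passes all three filters
```

A real deployment would feed this function from whatever polling or push mechanism detects new leads.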
...all pages and pagination is essential. Please put each field in its own column so we can analyze the data more easily. Deliverable • One .xlsx file containing a single sheet with the cleaned dataset As soon as the file opens without errors and every listed tractor is represented with the five fields correctly typed, the job is finished. If you already have experience with Python (BeautifulSoup, Selenium, Scrapy) or similar scraping tools and can turn this around quickly, let me know your timeframe....
...from a specific website and drops it into a neat, well-structured Excel workbook each month. The data points are the standard e-commerce essentials—name, price, SKU, description, availability and any variants that appear on the page. If the site nests details behind dynamic elements, please factor that in; I still expect a complete dataset. Your script can run in Python (BeautifulSoup, Scrapy or Selenium are all fine) or any language you prefer, as long as the final output is a tidy .xlsx file ready for analysis. I’ll trigger the run once a month, so the process should be repeatable with minimal manual tweaking—ideally a single command or scheduled task. Acceptance criteria • A working scraper that navigates pagination and captures every live product li...
...Project Scope: Scrape data from specified websites (details will be provided) Extract relevant fields (e.g., names, emails, prices, listings, etc.) Clean and structure the data (CSV, Excel, or database format) Ensure data accuracy and avoid duplicates Handle pagination, dynamic content, or login (if required) Requirements: Strong experience with web scraping tools (Python, BeautifulSoup, Scrapy, Selenium, etc.) Ability to handle anti-bot protections if needed Experience with data cleaning and formatting Attention to detail and reliability Nice to Have: Experience with automation and scheduling scripts Knowledge of APIs (if available instead of scraping) Deliverables: Clean, well-structured dataset Scraping script (optional but preferred) Documentation on how the data was c...
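The "avoid duplicates" requirement usually comes down to normalising a key before writing each record; a minimal stdlib-only sketch, assuming hypothetical `name`/`email`/`price` fields:

```python
import csv
import io

# Dedup-and-write sketch; the key fields ("name", "email") are placeholders
# for whatever uniquely identifies a record on the real target site.
def write_unique(records, fileobj, key_fields=("name", "email")):
    seen = set()
    writer = csv.DictWriter(fileobj, fieldnames=["name", "email", "price"])
    writer.writeheader()
    kept = 0
    for rec in records:
        # normalise before comparing so "ACME " and "acme" collapse together
        key = tuple(rec.get(f, "").strip().lower() for f in key_fields)
        if key in seen:
            continue              # skip duplicates
        seen.add(key)
        writer.writerow(rec)
        kept += 1
    return kept

rows = [
    {"name": "Acme", "email": "a@acme.com", "price": "10"},
    {"name": "ACME ", "email": "A@acme.com", "price": "10"},  # dup after normalising
    {"name": "Beta", "email": "b@beta.com", "price": "12"},
]
buf = io.StringIO()
print(write_unique(rows, buf))  # 2 unique records written
```

The same pattern works when writing to Excel or a database: compute the normalised key first, insert only on a miss.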
I need an experienced QA professional to put my web application through a rigorous round of Software Testing...behaviour, accessibility touch-points, and common security pitfalls (e.g., injection, authentication, session handling). • A clearly structured report listing each issue with severity, reproducible steps, environment details, and any relevant screenshots or short screen-captures. • Retesting of resolved defects to confirm fixes. I have no preference for a specific toolset; feel free to work with Selenium, Cypress, Postman, OWASP ZAP, or another stack you trust, so long as the findings are transparent and actionable. If you’re confident you can deliver an insightful test plan, thorough execution, and a report stakeholders can act on immediately, I’...
...CDN URLs * System should also optionally push data directly to WordPress via API --- ### 6. WordPress API Integration * Upload media and posters * Set alt tags automatically (format: title + “ - passthrough AR VR porn video - ”) * Create/schedule posts * Associate metadata, images, and tags correctly --- ## Technical Requirements * Python * Web scraping frameworks (Playwright, Selenium, or similar) * Media downloading (yt-dlp or similar) * API integration: BunnyCDN + WordPress REST API * Image processing (Pillow or similar) * AI/ML integration (OpenAI, local CLIP models, or similar) * Automation pipelines for multi-source input --- ## Deliverables * Fully working automation system * Clear documentation for setup and operation * Scalable to ~100 videos per mon...
...through an SMTP-level verifier (ZeroBounce, NeverBounce, or an in-house Python verifier—whichever you prefer, as long as it returns status codes for valid, invalid, catch-all, disposable, and role accounts). 4. Output only “valid” or “catch-all” emails in a downloadable CSV along with their metadata. • Technical notes - I’m comfortable with Python (Scrapy, Requests, BeautifulSoup, Selenium) or Node (Puppeteer, Cheerio); choose whichever stack you can scale and maintain. - Respect Google’s ToS with rotating residential proxies or a paid SERP API to avoid blocking and captchas. - The job should run via a daily cron or cloud function and log results to a lightweight dashboard (even a simple Flask/Express UI or Goo...
...using Python (preferably with experience in **Firecrawl** or similar tools) - Use of AI models (OpenAI or Anthropic Claude) for: -- structured information extraction -- data cleaning and normalisation - Storage and management of the data in a Supabase database Core requirements: - Strong Python scraping experience (BeautifulSoup, Playwright, Selenium or equivalent) - Experience with Firecrawl (or a strong ability to learn it quickly) - AI API integration (OpenAI / Claude) - Experience with Supabase (PostgreSQL, API, storage) - Ability to design robust, scalable data pipelines Nice to have: - Experience scraping complex sites (JavaScript-heavy, anti-bot) - Knowledge of automation...
Hi, I need a data scraper who can scrape data from provided sources. Skills: Core Technical Skills The freelancer should know Python (the most common scraping language) with libraries like Scrapy, BeautifulSoup, or Playwright. They should also be comfortable with browser automation tools like Selenium or Puppeteer (JavaScript-based), since sites like TipRanks are JavaScript-heavy and need a real browser to render. Anti-Bot Bypass Experience This is the most critical skill for your specific case. Look for someone experienced with handling CAPTCHAs (2Captcha, Anti-Captcha services), rotating proxies and residential IPs, spoofing browser headers and fingerprints, and bypassing Cloudflare or similar bot protection. TipRanks specifically uses these protections, so this experience 
...automate data scraping from a series of publicly accessible web pages. The script should accept a list of URLs, navigate through any paginated content, extract the specified fields, and save the results to CSV and JSON. The task suits someone with an intermediate grasp of Python who is comfortable working with libraries such as requests, BeautifulSoup, pandas, or, when a site relies on JavaScript, Selenium or Playwright. Clear, well-commented code and concise setup instructions are essential so the script can be dropped into an existing workflow without modification. Acceptance criteria and deliverables: • Fully functional .py script that runs from the command line. • Configuration section (or .env file) for URL list and field selectors. • Output in both ...
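A possible skeleton for such a script, with the fetch layer injected so requests, Selenium, or Playwright can be swapped in without touching the output code; the `<title>` extraction below is a placeholder for real per-site selectors:

```python
import csv
import io
import json

# Field list would normally come from the configuration section / .env file.
FIELDS = ["url", "title"]

def extract(url, html):
    # Placeholder parser; real code would use BeautifulSoup or XPath selectors.
    title = html.split("<title>")[1].split("</title>")[0] if "<title>" in html else ""
    return {"url": url, "title": title}

def scrape(urls, fetch):
    # `fetch` is injected: requests.get(...).text, a Selenium page_source, etc.
    return [extract(u, fetch(u)) for u in urls]

def save(rows, csv_file, json_file):
    writer = csv.DictWriter(csv_file, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    json.dump(rows, json_file, indent=2)

# Demo with a fake fetcher so the skeleton runs without network access.
fake_fetch = lambda u: f"<html><title>Page for {u}</title></html>"
rows = scrape(["https://example.com/a"], fake_fetch)
csv_buf, json_buf = io.StringIO(), io.StringIO()
save(rows, csv_buf, json_buf)
print(rows[0]["title"])
```

Pagination would slot into `scrape` as a loop that keeps calling `fetch` on the "next page" URL until none remains, appending to the same row list.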
...challenges like page load delays and CAPTCHA (either automated or with manual intervention). Additionally, optional features such as bulk upload via Excel, filing status classification (filed/not filed), dashboard view, and possible API integration (if available) can be included. The final deliverable should be a functional automation tool or script (preferably using technologies like Python and Selenium), along with proper documentation and sample output, and the developer should clarify feasibility aspects including scraping limitations, CAPTCHA handling, development timeline, technology stack, bulk processing capability, and expected processing speed per company....
I need a request-based TikTok automation script written in Python that can run at scale without ever touching a browser engine. The core of the job is to deliver clean, well-documented source code that relies on the Requests library (or an equally lightwei...source code with a quick-start README • Simple CLI entry point that lets me set thread count, action type, and proxy list • Proof the script can run at least 20 accounts in parallel without failing a request batch If any dependency is required beyond the Python standard library (e.g., httpx, aiohttp, or a fast queue manager), pin the exact version in requirements.txt. That’s it—no GUI, no Selenium, just fast, invisible API calls. Let me know how you plan to tackle TikTok’s private endpoints and...
... and submit the details I pass in via JSON or a simple CSV. The first three targets are non-negotiable—DMCA, Google, and Cloudflare—because I file most of my takedown notices there. I also want the same script to recognise when a domain’s WHOIS form is required and, if no form exists, fall back to sending a templated abuse email to the registrant or hosting provider. Headless automation with Selenium, Playwright, or another stable browser driver is fine as long as it reliably handles captchas, hidden fields, and dynamic JavaScript. The core logic should be cleanly separated so I can add more providers later without touching the form-filling engine. Deliverables • A fully commented Python script or small package ready to run on Ubuntu 22.04. • A c...
...accounts in bulk. Because the target site’s protective measures were not disclosed, I need code that can adapt to common defences—CAPTCHA challenges, behavioural fingerprinting, rotating-IP detection, and similar hurdles—using headless browsers, human-like interaction timing, proxy rotation and programmatic CAPTCHA solving where required. You are free to choose the stack (Python + Playwright/Selenium or Node + Puppeteer are both fine) as long as it runs on a standard Linux VPS and I receive clean, well-commented source code. Deliverables • A reusable script or small set of scripts that: – logs new accounts, confirms email/SMS where needed, then stores the credentials; – navigates forms end-to-end with dynamic field handling...
I need a lightweight, computer-vision bot that relies on on-screen image recognition rather than Selenium or similar browser-automation libraries. The workflow is straightforward: the bot launches a local browser window, navigates to a specific site I will provide, and—through visual cues—automatically clicks the required elements, fills in form fields, and finally extracts the resulting text, images, and tabular data that the page returns. The process must loop continuously so it can watch for page changes and react in real time. A small delay between cycles is fine as long as the interaction remains smooth and does not overload the system or the target site. Robust error handling is important; if a button is missing or the page layout shifts, the bot should retry gr...
...scraping — controlled through a web-based dashboard, with no Selenium or browser drivers. We need a vision-based UI automation bot capable of navigating real browsers (Chrome, Firefox, Brave, Opera) installed on the user's machine. The bot will fill web forms and scrape report data using image-based template matching — no browser drivers or Selenium. A local web UI will serve as the control dashboard for operators to input data, monitor progress, and handle errors manually when needed. Core requirements 1. Cross-platform support Must run natively on Windows, Linux, and macOS without platform-specific hacks. 2. Native browser control Open and control the locally installed browser (Chrome, Firefox, Brave, or Opera) at the UI level. Selenium and Web...
...is straightforward: the bot finds a list of channels, fetches the email field inside each description, checks that it hasn’t mailed that address before, and then fires off the tailored message. It should pause or queue itself so we never exceed 50 sends in a 24-hour window and respect reasonable delays between messages to stay under Gmail / SMTP limits. I’m comfortable if you code it in Python—Selenium, BeautifulSoup, or the official YouTube Data API are all fine—as long as the final script runs on my Windows laptop or a lightweight VPS. Configuration for my SMTP credentials must be simple and secure (preferably loaded from an environment file). A small log file or dashboard that shows which channels were processed, whether an email was found, and the sen...
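The "never exceed 50 sends in a 24-hour window" constraint can be enforced with a small sliding-window limiter. A sketch with the clock passed in so the logic is testable without waiting; limit and window values are configurable:

```python
from collections import deque

# Sliding-window cap: at most `limit` sends in any `window` seconds.
class SendLimiter:
    def __init__(self, limit=50, window=24 * 3600):
        self.limit, self.window = limit, window
        self.sent = deque()                 # timestamps of recent sends

    def allow(self, now: float) -> bool:
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) < self.limit:
            self.sent.append(now)
            return True
        return False                        # caller should queue and retry later

# Demo with a tiny limit/window so the behaviour is easy to see.
limiter = SendLimiter(limit=2, window=10)
print(limiter.allow(0), limiter.allow(1), limiter.allow(2), limiter.allow(11))
```

In the real script, `now` would be `time.time()` and a `False` result would push the channel back onto the queue instead of sending.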
I need... 2. Automatic daily updates, with the option to switch to other intervals later. 3. Output in CSV or JSON, organised exactly like the reference spreadsheet. 4. Commented, documented code, ready to run in a Linux environment. 5. A brief report explaining the architecture, the libraries used, and deployment instructions. Feel free to use Python, Scrapy, Selenium, Playwright, or AI techniques that speed up the identification of patterns or blocks. What matters is ensuring stability, the ability to handle CAPTCHAs or complex pagination, and ease of maintenance. If you alr...
...fix the existing scraping script Identify and resolve issues causing failures or incomplete data Ensure stable and consistent data extraction Handle pagination, dynamic content, or anti-bot protections if needed Improve performance and efficiency of the scraper Deliver clean, structured output (CSV / JSON / database) Technical Requirements Strong experience with Python (BeautifulSoup, Scrapy, Selenium, or similar) Experience handling JavaScript-rendered websites Familiarity with proxies, headers, and anti-bot bypass techniques Ability to troubleshoot and optimize existing code Experience with data formatting and storage Deliverables Fully working and stable scraping script Clean and structured dataset output Documentation on how to run and maintain the script Optional: suggestion...
...every product pulled into a single, well-structured Excel workbook. Once inside, the script will have to move through every category and pagination level, capture the product name, listed price and full product description, then export everything to .xlsx with clean columns and no duplicates. I will supply working credentials as soon as we start. Please handle the sign-in process programmatically—Selenium, Playwright or another headless solution is fine so long as it is reliable against possible anti-bot checks. If the site loads content via JavaScript, make sure the scraper waits for the prices to render before grabbing the data. Deliverables • Python or Node script (with / ) that I can rerun at any time • A first run of the script producing the complete E...
I need a reliable browser-based bot that logs in to my Amazon A to Z account, continuously monitors upcoming shifts and instantly books the ones that match criteria I will later define (location, start time, hours). The tool must operate inside a standard web browser—headless Chrome or a Selenium/Puppeteer script is fine—as long as it books faster than manual clicking and survives Amazon’s usual page refreshes, captchas, and timers. I will grant the bot full access to my account solely for booking purposes, so handling login securely (2FA, encrypted credentials, cookie reuse) is essential. A simple configuration file or UI where I can tweak preferred warehouses, shift lengths, and daily booking windows would be helpful, but speed and reliability come first. Deliver...
...number and/or email exactly as it appears in the ad. • Note the ad’s URL, title, price and posting date so my client can reference it. • Deliver everything in a simple CSV or Google Sheet that I can forward directly. I am not asking for a one-size-fits-all scraper; the job is a personalised contact-finding service. If you prefer to automate parts of the process with Python, BeautifulSoup, Selenium, Apify, etc., that is fine as long as the results stay accurate and Leboncoin’s terms of use and rate limits are respected. Turnaround is important to me: once I drop you a batch of links or search criteria, I need the contact file back the same day (or within the hour for smaller batches). Fluency in French and an eye for detail are essential because mislead...
...available Acceptance criteria The script runs headless, completes each action within Amazon’s normal response time, and logs every transaction to a local file or database so I can audit what happened and when. A simple JSON or YAML config should let me set preferred warehouse, dates, hours, and notification endpoints without touching the code. Tech preferences Python with Playwright or Selenium is ideal, but I’m open to Node, Go, or any language that handles the OAuth flow cleanly and can expose a small REST API so I can trigger actions from other tools. Once the bot books, cancels, or alerts, I want a succinct status message returned by the API so it can plug into my existing dashboard. All code must be well-commented and packaged so I can deploy it on a sma...
I currently have a Python-based data scraper built with Selenium/Requests, but it is running too slow and crashing because of high memory usage while processing large datasets. It is not scalable. Requirements: Optimize memory footprint for 5,000+ records. Refactor sync loops to Asyncio/Aiohttp. Implement a robust error-handling and retry mechanism. Need an expert who can handle high-performance Python code. No beginners please.
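One way the requested asyncio refactor could be structured: a semaphore bounds the number of in-flight requests (and therefore memory), and each URL is retried with exponential backoff. The stubbed `fetch` below stands in for a real aiohttp call, and its fail-once behaviour is purely illustrative:

```python
import asyncio

attempts = {}   # per-URL attempt counter, used here only by the stub

async def fetch(url: str) -> str:
    # Stub for `async with session.get(url)` in aiohttp; simulates one
    # transient failure for a particular URL to exercise the retry path.
    attempts[url] = attempts.get(url, 0) + 1
    if attempts[url] == 1 and url.endswith("3"):
        raise ConnectionError(url)
    return f"ok:{url}"

async def fetch_with_retry(url, sem, retries=3, backoff=0.001):
    async with sem:                        # cap concurrent requests
        for attempt in range(retries):
            try:
                return await fetch(url)
            except ConnectionError:
                await asyncio.sleep(backoff * 2 ** attempt)  # exponential backoff
        return None                        # exhausted retries; log and move on

async def main(urls, concurrency=5):
    sem = asyncio.Semaphore(concurrency)
    return await asyncio.gather(*(fetch_with_retry(u, sem) for u in urls))

urls = [f"https://example.com/item/{i}" for i in range(10)]
results = asyncio.run(main(urls))
print(sum(r is not None for r in results))
```

For the memory side, results would normally be streamed to disk or a database inside `fetch_with_retry` rather than accumulated, so the 5,000+ record runs never hold the full dataset in RAM.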
...microservice, API, and data pipeline. – User Experience: walkthroughs for end users, our own internal staff, and external third-party auditors, noting friction points, confusing flows, and accessibility gaps. – Security: a focused penetration and vulnerability sweep that mimics real-world attack vectors relevant to financial-grade SaaS. Key tools are up to you—whether that’s JMeter, Postman, Selenium, Cypress, Burp Suite, or your own preferred stack—so long as the results are reproducible and traceable back to code or configuration. Deliverables I must see • A concise test plan mapping the scenarios above to platform modules • Automated test scripts or source files where relevant • Log output and raw reports for performance...
...security—so you will be writing and executing a broad suite of tests rather than focusing on a single layer. Right now only high-level user journeys exist. I will share those, and you will expand them into a detailed test plan, automate or semi-automate wherever sensible, run everything in the staging environment, and produce a clear report of findings. Functional coverage may involve Postman, Selenium, or similar tools to drive the UI and hit the REST endpoints. For performance validation you can lean on JMeter, k6, or another load framework to capture throughput, latency, and breakpoints. Security checks must at minimum map to the OWASP Top-10; OWASP ZAP, Burp Suite, or your preferred scanner are fine as long as results are well documented. Deliverables • Test pla...
...exposes it Flow I have in mind 1. Front-end search box (simple HTML/JS is fine). 2. Back-end service that grabs the requested place page, extracts the data above, and returns JSON. 3. Results rendered in a clean table or card layout directly in the browser. No extra analytics or reports are needed right now. Technical notes • Runs on my existing Linux VPS. A Python (BeautifulSoup + Selenium/Playwright) or Node (Puppeteer) stack is perfectly acceptable—choose whichever can handle dynamic content and the popular-times chart reliably. • Please include an easy setup script or Dockerfile so I can deploy in one step, plus concise instructions on how to add my own Google session cookies or proxies if required to avoid blocking. • I’m fine wit...
...equal measure—nothing less than a 360° review will do. The system handles sensitive financial data and real-time transactions, so every user flow, API call and background job has to be airtight. What I expect from you • A structured test plan that covers user-facing journeys, back-office workflows and all integrations. • Automated and manual test cases using industry-standard tools (think Selenium/WebDriver, Postman, JMeter or equivalents) with clear pass/fail criteria. • A security sweep that goes beyond surface checks—please include vulnerability scanning, attempted exploit reproduction and a concise risk assessment. • Load and stress benchmarks that show how the platform behaves under peak usage, with recommendations for optimisatio...
I have a Python-based data scraper that is currently running too slow and crashing due to high memory usage when processing large datasets. It’s built with Selenium/Requests but isn't scalable. Requirements: -Refactor sync loops to Asyncio/Aiohttp. -Optimize memory footprint for 5,000+ records. -Implement a robust error-handling and retry mechanism. Need an expert who can handle high-performance Python code. No beginners please.
...qualifies for funding according to the criteria already laid out on that site. The page itself decides the pass/fail outcome, so your script simply has to submit the address, read the result field, and record it—nothing more complex than interpreting the website’s own ruling. Speed and stability matter. Ideally, the run completes in days, not months. I am comfortable with solutions built in Python (Selenium, Playwright, or similar), but if you prefer another stack that can deliver the same throughput I’m open to it. Headless browsing, smart throttling, and graceful error handling are essential so the job can run unattended on a cloud VM. Deliverables: • A fully commented script or small application that accepts a CSV of addresses and outputs a CSV of ...
...functioning correctly. I need someone who can quickly identify and resolve the issue. **Project Details:** * Built in Python * Automates ticket purchasing on Skiddle * Was fully functional before (so likely a small fix or update needed) * Issue started recently (possibly due to website changes or API changes) **Requirements:** * Strong experience with Python * Experience with web automation (Selenium, requests, scraping, etc.) * Ability to debug and fix existing code quickly * Familiarity with anti-bot protections is a plus **What I’ll Provide:** * Full source code * Explanation of how it used to work * Details about the current issue **Goal:** Get the bot back to working condition as soon as possible. Please include: * Your experience with similar bots or automati...
...0xabd7d97b958595a8). Until recently I relied on an undocumented endpoint, but that call now comes back empty unless a logged-in cookie is present. Rather than shifting to the official Google Maps API, I want to continue scraping over plain HTTP requests so I can later fan this out to hundreds of thousands—or even millions—of IDs per day. Here is what I need from you: • A requests-based script or module (no Selenium, Playwright, or headless browsers) that successfully returns review data in JSON or Python objects. • A method for overcoming the new authentication hurdle. My preferred route is to rotate proxies to mask requests, but I’m open to any lightweight header or token strategy that keeps the workflow purely requests-driven. • Clear inst...
My goal is to stop touching stock sheets manually and have one reliable workflow that pushes the right figures to several food-delivery storefronts—Talabat, Instashop, Deliveroo and Hungerstation—once every day. Here is what I need built: • A script or small service (Python, NodeJS or simila...scheduled job triggers the complete update cycle daily. 2. Stock levels visible on all four storefronts match the master sheet after the run (tolerance 0). 3. Detailed success / failure log stored locally and optionally pushed to Slack or email. 4. Clean, well-commented code plus a README so future devs can extend it (for example to Shopify or Magento later). If you prefer particular libraries—Selenium, Puppeteer, Requests, etc.—let me know; I’m flex...
...Facebook, Instagram and Twitter from one place. The core flow is simple: I set the message (text, image or video), choose a date-time or immediate send, click once, and the tool handles login and posting in the background. A lightweight dashboard or command-line interface is fine as long as it is easy to configure accounts and see basic status logs. Tech stack is open—feel free to suggest Python + Selenium, Node with official APIs, or any framework you know will survive platform policy changes. Whatever you choose, I will need: • Source code and brief setup notes • A short test run showing one post pushed to all three platforms • Clear instructions on how to add additional accounts later When you respond, show related past work only; videos, screenshot...
I have a nearly finished website that now needs a final quality sweep and some polished content. First, I want every page put through rigorous responsive testing on Mobile, Tablet, and Desktop break-points. Use the tools you feel most confident with—BrowserStack, Chrome DevTools, Lighthouse, Selenium, or similar—just be sure the coverage is thorough and repeatable. From those sessions I need three clear report sets: • Bug reports that pinpoint steps to reproduce, affected URLs, screenshots or short clips, and suggested fixes. • A responsive-layout summary that highlights any visual or functional inconsistencies across the three device classes. • Performance metrics including load times, Core Web Vitals, and any blocking assets. Once the technical s...
...full duration, tuition fees and all stated entry-requirement details. Compile the results in one tidy Excel workbook, single sheet, with each row representing a unique program and each column matching the fields above. Accuracy matters: no duplicates, consistent currency symbols, and clean text with no hidden HTML tags. If you gather the data with an automated script (Python + BeautifulSoup/Selenium or similar) that you can hand over alongside the spreadsheet, even better—it will let me refresh the list later. On delivery I will quickly verify random samples against the live site, so the sheet should be ready to pass that check without edits. Deliverables: • .xlsx file containing all scraped programs in one sheet • (Optional but appreciated) the scraping ...
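For the "clean text with no hidden HTML tags" requirement, a stdlib-only stripper applied to every scraped cell is often enough; a sketch (the sample strings are invented):

```python
from html.parser import HTMLParser

# Collects only text nodes, discarding every tag; entities such as &nbsp;
# are converted to characters automatically by HTMLParser.
class TagStripper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def strip_tags(raw: str) -> str:
    stripper = TagStripper()
    stripper.feed(raw)
    # Collapse runs of whitespace (including non-breaking spaces) to one space.
    return " ".join("".join(stripper.chunks).split())

print(strip_tags("<p>3 years&nbsp;full-time</p> <b>entry: IELTS 6.5</b>"))
```

Running each field through `strip_tags` before writing the workbook keeps the spreadsheet free of the hidden markup that often survives naive scraping.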
...all relevant directory pages, including pagination, until no new records remain. • Extract only the visible text fields mentioned above; skip profile photos and other media files. • Produce the results as a clean, duplicate-free CSV file (one row per lawyer/firm). • Provide well-commented Python code: Scrapy, BeautifulSoup or Selenium are acceptable as long as the final script runs headless on Linux. • Respect the delays or implement rate limiting to avoid overloading the site. Acceptance criteria 1. Running the script reproduces the full dataset without manual tweaks. 2. Column order and headers match the sam...
...calculate projected NOI, cap rate, cash-on-cash and simple IRR, then package the results in both a downloadable spreadsheet and a JSON payload so I can drop the file straight into my CRM. Editable assumption cells for vacancy, expense ratio, financing terms and exit cap are essential so I can fine-tune scenarios on the fly. I am language-agnostic, though Python (Pandas, Requests, BeautifulSoup or Selenium for scraping, plus Jupyter for quick tweaks) or JavaScript with Node and relevant SDKs would be easiest for me to maintain. If an official API is required I will provide keys; otherwise build a scraper that respects rate limits and captcha challenges. Deliverables: • Full source code with clear in-line comments • Setup guide and / • Example CSV, JSON and u...
...(for instance, minor punctuation or spacing differences should still count as a match). When a record fails the rule you set, flag it in a separate column and paste the live values you retrieved so I can see exactly where the discrepancy lies. Because the portal uses an automated captcha, the solution must run end-to-end without me having to sit and solve captchas manually. A headless Python/Selenium routine with an OCR-based captcha solver is perfectly acceptable, but I am open to any stack you are comfortable with as long as it is robust and easy for me to reuse in the future. Deliverable • Updated XLSX containing the original data, the three extracted fields, and a “Match / Mismatch” indicator based on our custom rule. Once delivered I will spot-che...
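One plausible reading of the custom rule (minor punctuation or spacing differences still count as a match) is normalise-then-compare; the exact normalisation steps below are an assumption to be confirmed before the run:

```python
import re

# Assumed rule: lowercase, drop punctuation, collapse whitespace, then
# compare. Adjust the regexes if the client's rule differs.
def normalise(value: str) -> str:
    value = value.lower()
    value = re.sub(r"[^\w\s]", "", value)     # drop punctuation
    return re.sub(r"\s+", " ", value).strip() # collapse spacing

def values_match(sheet_value: str, portal_value: str) -> bool:
    return normalise(sheet_value) == normalise(portal_value)

print(values_match("ACME Traders, Pvt. Ltd.", "acme traders pvt ltd"))  # True
print(values_match("ACME Traders", "ACME Trading"))                     # False
```

Records where `values_match` returns `False` get flagged in the extra column, with the live portal values pasted alongside for review.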
...sub-folder for every product, named with the product title or SKU • Within that folder: – a text file (UTF-8) containing the product description exactly as it appears online – all product images saved as JPG or PNG, keeping their original quality and naming them sequentially (image-1, image-2, …) Feel free to use the web-scraping stack you’re most comfortable with—Python, BeautifulSoup, Selenium, Scrapy, or equivalent—as long as the final structure matches what’s outlined above and every product on the site is included. Before delivery, spot-check at least 10 random products to confirm that descriptions are complete, images are viewable, and folder names are accurate. The products should match the collections...
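The folder layout can be sketched with the standard library alone; sanitising the title for use as a folder name and the sequential image naming are the only fiddly parts (the download step is out of scope here, so image bytes arrive pre-fetched as `image_blobs`):

```python
import re
from pathlib import Path

def save_product(base_dir, title, description, image_blobs):
    """Create <base_dir>/<safe title>/ with description.txt + image-N files.

    image_blobs is a list of (bytes, extension) pairs already downloaded
    elsewhere; this function only handles the on-disk structure.
    """
    # Strip characters that are illegal in folder names on common filesystems
    safe = re.sub(r'[\\/:*?"<>|]+', "-", title).strip() or "untitled"
    folder = Path(base_dir) / safe
    folder.mkdir(parents=True, exist_ok=True)
    # Description exactly as it appears online, UTF-8 encoded
    (folder / "description.txt").write_text(description, encoding="utf-8")
    # Images keep their original bytes, named sequentially: image-1, image-2, ...
    for i, (blob, ext) in enumerate(image_blobs, start=1):
        (folder / f"image-{i}.{ext}").write_bytes(blob)
    return folder
```

Using the SKU instead of the title for `safe` avoids collisions when two products share a name.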
...candidate should have a strong understanding of full-stack development and be capable of both manual and automated testing. Scope of Work: * End-to-end testing of web application functionality * Identify bugs, UI/UX issues, and performance bottlenecks * Write and execute automated test scripts using Selenium * Cross-browser testing (primarily on Google Chrome) * Validate APIs and backend integrations * Ensure responsiveness and usability across devices Required Skills: * Experience with Selenium and automation testing * Strong knowledge of React, Node.js, Python, and MySQL * Familiarity with debugging full-stack applications * Attention to detail and structured reporting Nice to Have: * Experience with CI/CD pipelines * Knowledge of testing frameworks and tools Look...
...scraper that reliably pulls pure text data from the official Supreme Court of India website as well as each High Court site. The crawler must respect , handle pagination, and normalise judgments, orders and daily cause-lists into a clean structure before inserting them straight into our existing database (PostgreSQL). If you prefer Python, feel free to combine requests, BeautifulSoup, Selenium or Playwright—whatever keeps it stable and headless-friendly. 2) Trace and fix the broken “Research” feature inside the AI tool. It currently fails when it tries to query the new tables; the bug appears in the indexing layer, not the LLM itself. Once the scraper has seeded fresh data, the search endpoint should return relevant citations in under two seconds. I’ll pr...
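A thin normalisation layer between the scraper and PostgreSQL keeps the three document types in one clean structure before insertion; the column names and `doc_type` values below are assumptions, and the actual write would go through a parameterised query (e.g. psycopg2) rather than the plain dict shown:

```python
import re
from datetime import date

DOC_TYPES = {"judgment", "order", "cause_list"}

def normalise_document(court, doc_type, raw_text, decided_on):
    """Collapse scraped text into the row shape the PostgreSQL table expects.

    Target columns (court, doc_type, body, decided_on) are hypothetical;
    insertion itself would use a parameterised query, e.g.
    INSERT INTO documents (court, doc_type, body, decided_on)
    VALUES (%s, %s, %s, %s)
    """
    if doc_type not in DOC_TYPES:
        raise ValueError(f"unknown doc_type: {doc_type}")
    body = re.sub(r"\s+", " ", raw_text).strip()  # pure text, whitespace collapsed
    return {
        "court": court.strip(),
        "doc_type": doc_type,
        "body": body,
        "decided_on": date.fromisoformat(decided_on),
    }
```

Rejecting unknown `doc_type` values early keeps bad rows out of the tables the Research feature queries, which simplifies debugging the indexing layer.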
Playwright skills: required. Selenium: nice to have. In need of an expert developer to build a Python (Playwright preferred) scraper that pulls ~500 filtered car listings daily from CarGurus and stores them in a structured dataset (CSV). The script should extract key fields (price, mileage, distance, dealer, etc.), handle pagination, and append daily snapshots for tracking changes over time. It must be reliable, use basic anti-blocking practices, and include clean, maintainable code with setup instructions. Bonus if you add simple analytics like price-change tracking or value scoring.
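The daily-snapshot requirement boils down to appending rows keyed by (listing id, date) and diffing across days; the column names are assumptions, and the Playwright extraction itself is out of scope in this sketch:

```python
import csv
from pathlib import Path

FIELDS = ["snapshot_date", "listing_id", "price", "mileage", "distance", "dealer"]

def append_snapshot(path, snapshot_date, listings):
    """Append one day's listings, skipping (listing_id, date) pairs already stored."""
    path = Path(path)
    seen = set()
    if path.exists():
        with path.open(newline="", encoding="utf-8") as f:
            seen = {(r["snapshot_date"], r["listing_id"]) for r in csv.DictReader(f)}
    with path.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if not seen and path.stat().st_size == 0:
            writer.writeheader()   # first run only
        for row in listings:
            key = (snapshot_date, row["listing_id"])
            if key not in seen:    # re-running the same day must not double-append
                writer.writerow({"snapshot_date": snapshot_date, **row})
                seen.add(key)

def price_changes(path):
    """Simple analytics: price delta per listing from first to latest snapshot."""
    history = {}
    with Path(path).open(newline="", encoding="utf-8") as f:
        for r in csv.DictReader(f):
            history.setdefault(r["listing_id"], []).append(int(r["price"]))
    return {lid: prices[-1] - prices[0] for lid, prices in history.items()}
```

Making the append idempotent per day means a crashed-and-restarted run cannot corrupt the time series.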
...Instagram Reel data based on a given input. Requirements: - The tool should accept a single Instagram Reel URL as input - It should identify the account and extract all reels posted after that specific reel - For each reel, extract: - Reel URL - View count - (Optional) Upload date - Export all data into a clean Excel (.xlsx) file Key Expectations: - Use a reliable scraping method (Playwright/Selenium preferred) - Handle Instagram limitations (rate limits, blocking, login/session management) - Ensure accuracy (no duplicate or missing data) - Code should be clean, scalable, and well-structured Optional (Preferred): - Telegram bot integration (user sends link → receives Excel file) - Ability to handle multiple users concurrently Deliverables: - Fully working Pytho...
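Once the raw reel records are in hand (however they are fetched), the "everything posted after this reel" selection plus de-duplication is plain filtering; the field names and timestamps below are hypothetical:

```python
def reels_after(records, anchor_url):
    """Keep reels uploaded after the anchor reel, de-duplicated by URL.

    records: list of dicts with 'url', 'views', 'uploaded' (sortable timestamp).
    The result is ordered oldest-first, ready to dump row-by-row into .xlsx
    (e.g. with openpyxl, not shown here).
    """
    anchor = next(r for r in records if r["url"] == anchor_url)
    seen, out = set(), []
    for r in sorted(records, key=lambda r: r["uploaded"]):
        if r["uploaded"] > anchor["uploaded"] and r["url"] not in seen:
            seen.add(r["url"])     # no duplicate or missing rows in the export
            out.append(r)
    return out
```

Deduplicating by URL rather than by row contents guards against the same reel being scraped twice with a slightly different view count.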