The ultimate guide to hiring a web developer in 2021
If you want to stay competitive in 2021, you need a high quality website. Learn how to hire the best possible web developer for your business fast.
Selenium is an automation tool that helps test web applications, letting testers quickly author functional tests without needing to learn a test scripting language. Once the tests are written, they can be executed on different platforms, browsers, and mobile devices, showing how the application behaves across a complete cycle. Selenium Experts are hands-on IT professionals who can develop and execute a series of automated tests and maintain existing automation frameworks to ensure fast, quality delivery of web and mobile applications.
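To make the idea concrete, here is a minimal sketch of a Selenium functional check in Python. The URL, expected title text, and use of a local Chrome driver are illustrative assumptions, not part of any specific project; the helper is kept separate from the driver setup so the logic is easy to reuse.

```python
# Minimal Selenium functional-test sketch (assumes `pip install selenium`
# and a matching local Chrome driver; URL and title text are illustrative).

def check_title(driver, url, expected_fragment):
    """Load a page and report whether its title contains the expected text."""
    driver.get(url)
    return expected_fragment.lower() in driver.title.lower()

if __name__ == "__main__":
    # Imported lazily so the helper above can be reused without a browser.
    from selenium import webdriver
    driver = webdriver.Chrome()
    try:
        ok = check_title(driver, "https://www.selenium.dev", "Selenium")
        print("PASS" if ok else "FAIL")
    finally:
        driver.quit()
```

The same `check_title` helper can be pointed at any page/fragment pair, which is the basic shape most Selenium smoke tests start from.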
A Selenium Expert can be instrumental in helping a client maximize their application’s quality assurance coverage by saving time and resources on manual testing. An Expert can develop test scripts with the popular open-source automation framework and implement continuous integration for the applications, decreasing functional defects and freeing up time for other tasks or development efforts.
Here are some projects that our Selenium Experts have made real:
Selenium is an invaluable addition to any development team that needs to test applications on multiple platforms simultaneously. By hiring a Selenium Expert on Freelancer.com, clients can sharply reduce the time spent tracking down software and device bugs. They will spend less time combing through log files and more time getting their applications market-ready. Post your project today and hire a Selenium Expert on Freelancer.com!
Based on 22,820 reviews, clients rate our Selenium Experts 4.9 out of 5 stars.
Hey! I’m looking to hire an experienced developer to build a universal product-detail scraping pipeline that takes a product URL (any website) and returns a complete structured product record. This is not a “simple HTML parse.” Many target sites are React/Next/Vue, load content via XHR/GraphQL, hide details behind tabs/accordions/modals, and lazy-load images/PDFs. The solution needs to reliably extract everything a human can see on the page, plus the underlying data used to render it. What the scraper must do (high level): given a product URL, the pipeline should load the page like a real user (handling cookies/overlays), capture all content from multiple sources (DOM + network + interactions), and use the GPT API strategically to increase accuracy (field mapping, variant ext...
Project Description I am looking for an experienced developer who can implement a solution to send POST API requests directly from an active browser session, maintaining full session integrity and security context. The requirement is to replicate browser network requests exactly as they occur in DevTools — including all dynamic headers, cookies, tokens, and Akamai-related security parameters — without triggering bot protection or security blocks. Core Requirements: API request must originate from the same opened browser session Must reuse: All request headers Session cookies CSRF tokens Authorization tokens Dynamic security tokens Akamai-related parameters Must work with protected endpoints (Akamai / Bot Manager enabled) Should avoid 403, 401, 504, or security validation fail...
I need end-to-end testing for my web application. The project code is already written, so you will be focusing solely on testing. Requirements: - Thoroughly test on Chrome - Ensure all workflows function as intended - Identify and document any bugs or issues Ideal Skills: - Experience with end-to-end testing tools (e.g., Selenium, Cypress) - Strong attention to detail - Familiarity with web applications and browser testing Please provide a brief overview of your testing experience and any relevant tools you plan to use.
I need a reliable solution that can pull data from LinkedIn and insert it straight into a database I specify. The core requirement is the automated transfer—once the tool finishes scraping, every captured field should already be sitting in the database ready for queries and reporting, no manual copy-paste. You’ll advise me on the best approach to authenticate, respect rate limits, and minimise the risk of blocks while still collecting the typical profile-level details (name, headline, company, location, experience, education, skills and anything else you can legally obtain). I will confirm the final field list before you begin. Key objectives • Build or configure a scraper / API wrapper that logs in, navigates to each target profile and captures the agreed-upon fields ...
I want to take my current, intermediate knowledge of Dot Net C# and turn it into solid, real-world expertise in automation. The goal is to design and implement a clean, maintainable test framework focused on integration testing, then practise troubleshooting the kinds of issues that appear in day-to-day work. Here’s what I need from you: • Live, screen-sharing sessions where we build the framework together, explaining each architectural choice and its trade-offs. • Practical guidance for structuring test projects, organising test data, and wiring the framework into a CI pipeline. • Walk-throughs of flaky-test triage, mocking external dependencies, and debugging failures that only show up in complex environments. • Short take-home exercises or sample reposi...
Job Description: QA Engineer – Cybersecurity Product (Reverse Proxy Server) Position Title: QA Engineer Location: Mumbai, India (Hybrid/Remote options available) Domain: Cybersecurity – reverse proxy server development and enterprise data security for cloud services Framework: Agile/Scrum Role Overview We are seeking a detail-oriented and security-aware QA Engineer to ensure the quality, reliability, and resilience of a cloud-native reverse proxy server. You will be responsible for designing and executing test plans, automating test cases, and validating security features in a fast-paced Agile environment. Your work will directly contribute to the robustness of a critical cybersecurity product. Key Responsibilities • Design, develop, and execute manual...
Job Title: Playwright Automation Engineer (Java) – Setup & Framework Implementation Job Description: We are looking for an experienced Playwright Automation Engineer (Java) to set up and implement an automation testing framework for our product. The ideal candidate should have strong hands-on experience in Playwright with Java and be capable of building a scalable, maintainable automation framework from scratch. Responsibilities: set up the Playwright automation framework using Java and implement a robust test automation architecture with Maven/Gradle and CI/CD pipelines; apply the Page Object Model (POM) or a similar design pattern; integrate reporting (Allure/Extent Reports or similar); write reusable and maintainable test scripts; provide documentation for setup and usage. Required Skills: experience in Playw...
Project Description: Find school districts and charter schools who use a specific vendor for a large list of domains. I am seeking an experienced web scraping specialist to improve our Python script to analyze a large list of school district websites (approximately 4000+ URLs) and identify the ones who show a specific link on any page found in their sitemap. The primary method of identification must be to scan the website's for specific, known vendor links. Deliverables Required 1. A Production-Ready Python Script (.py file): The script must be commented, easily configurable, and capable of reading the provided CSV list, performing the scan, and generating the output CSV. It should handle timeouts and basic error handling gracefully. 2. The Final Results (CSV/Excel File): A c...
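The core of the vendor scan above is a simple page-level check. A minimal sketch, assuming the vendor is identified by its domain appearing in any `href` (the vendor domain below is a placeholder); a real run would fetch each sitemap URL with `requests` and feed the HTML through these helpers:

```python
import re

def find_vendor_links(html: str, vendor_domain: str) -> list[str]:
    """Return every href on the page that points at the vendor's domain."""
    hrefs = re.findall(r'href=["\']([^"\']+)["\']', html, flags=re.I)
    return [h for h in hrefs if vendor_domain.lower() in h.lower()]

def district_uses_vendor(pages_html: list[str], vendor_domain: str) -> bool:
    """True if any page pulled from the district's sitemap links to the vendor."""
    return any(find_vendor_links(p, vendor_domain) for p in pages_html)
```

Wrapping `district_uses_vendor` in a loop over the 4000+ URLs, with per-request timeouts and a try/except around each fetch, gives the graceful error handling the brief asks for.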
I need a Selenium-based solution that runs reliably on Windows and opens Google Chrome to simulate human visits to LinkedIn (and occasionally other) profile URLs listed in a Google Sheet. For each URL the program should: • Pull the next unused link from the sheet • Load the page in Chrome, wait a random time between 20 seconds and 3 minutes • Apply truly randomized scrolling patterns while the profile is open so behaviour looks organic • Fire a webhook the moment the visit completes, passing back any ID or payload I define so our CRM reflects the touch instantly Configuration items such as Google Sheet ID, webhook endpoint, minimum/maximum dwell time, and daily visit caps should live in a simple file I can edit without touching code. A short README on installi...
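The randomized dwell-and-scroll behaviour described above can be planned up front, independently of the browser. A minimal sketch of that timing logic (the bounds match the brief; the Selenium driver, Google Sheet read, and webhook POST are assumed to wrap around these helpers):

```python
import random

def dwell_seconds(minimum: int = 20, maximum: int = 180) -> int:
    """Pick a random dwell time between the configured bounds (inclusive)."""
    return random.randint(minimum, maximum)

def scroll_plan(total_seconds: int, max_step_px: int = 600) -> list[tuple[float, int]]:
    """Build (pause_seconds, scroll_pixels) pairs that fill the dwell window."""
    plan, elapsed = [], 0.0
    while elapsed < total_seconds:
        pause = random.uniform(1.0, 5.0)
        # Negative steps give occasional scroll-ups, so movement looks organic.
        step = random.randint(-max_step_px // 3, max_step_px)
        plan.append((pause, step))
        elapsed += pause
    return plan
```

Each `(pause, step)` pair would then be executed as a `time.sleep(pause)` followed by `driver.execute_script("window.scrollBy(0, arguments[0]);", step)`, and the webhook fired once the plan is exhausted.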
We are building a full internal marketplace analytics web system, not just a reporting script. The system is designed to combine competitive intelligence with internal sales and stock analytics in a single interface. Functional Requirements The system must provide the following capabilities: 1. Product and SKU structure - Each product must be split into individual SKUs based on flavor and volume. - All analytics and reports are built at the SKU level. 2. Our product analytics (primary focus) - Current stock levels (total and per SKU). - Sales volume for selected periods (daily / weekly / monthly). - Reorder recommendations based on stock thresholds and sales dynamics. - Revenue calculations per product and per SKU with period filtering. 3. Competitive analytics - Automated collection o...
I need a lightweight Windows-based application that can interact with a specific website entirely in the background—no browser window or UI should ever be visible. The software must: • Log in with a stored username and password • Navigate through the site, click the necessary elements, submit forms, and collect the returned data • Solve any CAPTCHA the site presents automatically (an API such as 2Captcha, Anti-Captcha, or a comparable service is acceptable) • Return the scraped information in JSON or CSV so it can be consumed by another process A simple tray icon, CLI, or service is fine; the key requirement is headless operation with reliable error handling. Source code and a compiled executable are both expected so I can run the tool on multiple machines...
I need a reliable script that logs into Placementindia and pushes my vacancy details without any manual input. The tool should store template data, fill every required field, submit the form, and then confirm success so I can see at a glance what went live. Inside a small, browser-based dashboard I want two key abilities: • schedule automation so I can decide the exact days and times each role goes out • one-click “auto post” that instantly publishes a job when I hit the button The posting frequency isn’t fixed; some days I may blast several openings, other weeks none at all, so the scheduler has to respect whatever plan I set. Use whichever method makes the process most stable—headless browser automation (Puppeteer, Playwright, Selenium) or direc...
Complete Lottery Prediction and Betting Automation System (Focused on Loterías y Apuestas del Estado - Spain) 2. System Features 2.1. Historical Data Collection and Update The system must automatically download complete historical results (drawn numbers, draw dates, prize breakdowns by category, accumulated jackpots) from the first draw of each lottery, directly from the official site or reliable associated sources. Specific sources: Euromillones: (since Feb 13, 2004) La Primitiva: (since Oct 17, 1985 – modern version) El Gordo de la Primitiva: (since Oct 31, 1993) Updates run automatically at exactly 00:02 the day after each draw, using ethical scraping (BeautifulSoup/Scrapy) with proper user-agent headers to mimic human behavior. Store data in PostgreSQL (structured) or MongoDB (flex...
I need a small, Windows-friendly Python script that will open a real browser with Selenium and wipe large batches of content from my X (Twitter), Facebook, and Instagram accounts. Because my X account sits on the free API tier I keep running into 403 errors, so this project must rely solely on browser automation—no official APIs or paid third-party tools. Here’s what I’m after: the script launches from the command prompt, asks for (or reads from a .env) my login credentials, signs in, and then iterates through all visible posts, tweets, and reels, deleting each one until none remain or until it hits an optional stop condition such as a date or a post count I can set. A simple console printout like “Deleted tweet #42” is enough for logging; I don’t need ...
I'm trying to run the attached Jupyter notebook script to get info from a website, but I can't understand why it doesn't work. I need this script fixed, plus pagination added so it can fetch around 2,400 records from YellowPages. I only use Jupyter.
I’m looking for a data engineer who can take full ownership of a daily web-scraping workflow aimed at ongoing market research. The job centers on extracting selected data points from public web pages, transforming them into a clean, structured format, and making them available for analysis every 24 hours. Here’s what I need you to handle from end to end: • Source acquisition – fetch HTML from the URLs I provide, even when content is hidden behind JavaScript (a headless browser such as Playwright or Selenium is fine). • Parsing & cleansing – pull the specific fields I’ll list (product name, price, SKU, availability, and a time-stamp), remove duplicates, and standardize values. • Storage & delivery – load the daily output into ...
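The parsing-and-cleansing stage above reduces to a small amount of pure logic. A minimal sketch, using the exact field list from the brief (product name, price, SKU, availability, timestamp); the raw-record shape and price format are assumptions for illustration:

```python
from datetime import datetime, timezone

def clean_record(raw: dict) -> dict:
    """Standardize one scraped record: trim text, normalize price, stamp time."""
    return {
        "name": raw.get("name", "").strip(),
        "price": round(float(str(raw.get("price", "0")).replace("$", "").replace(",", "")), 2),
        "sku": raw.get("sku", "").strip().upper(),
        "availability": raw.get("availability", "").strip().lower() in ("in stock", "available"),
        "scraped_at": datetime.now(timezone.utc).isoformat(),
    }

def dedupe(records: list[dict]) -> list[dict]:
    """Drop later records that repeat an earlier SKU."""
    seen, out = set(), []
    for r in records:
        if r["sku"] not in seen:
            seen.add(r["sku"])
            out.append(r)
    return out
```

Running the cleaned, deduplicated rows through a CSV writer or database insert then completes the daily 24-hour cycle.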
I am looking for a Python developer to create a simple and focused scraper script for Facebook Marketplace. Project Idea: The script will open a single Facebook Marketplace seller page and: • Extract all product links belonging to that seller only • Ignore any other data (no names, no prices, no images) • The final output should be a list of links only • Each product link on a separate line (link under link) Exact Requirements: • Input: Facebook Marketplace seller page URL • Output: • A file containing all product URLs for that seller • File format: TXT or CSV • Handle infinite scrolling to load all products Technical Requirements: • Python • Selenium or Playwright • Experience with dynamic websites • Clean, ...
I need a small automation script that periodically checks item availability on the Bigbasket website and pings me on Telegram the moment any of the tracked products come back in stock. You are free to choose the underlying tech stack (Python + Requests/BeautifulSoup, Selenium, Playwright, or a headless browser of your choice) as long as it works reliably with Bigbasket’s current site layout and protects my account from rate-limit blocks or captchas. The flow I have in mind is straightforward: I feed the bot a list of product URLs (or SKUs). It runs on a schedule I can change—every few minutes during peak shortages, maybe every hour otherwise—grabs the stock status, and fires a concise Telegram message whenever the status flips from “Out of Stock” to “Av...
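The heart of the flow above is the flip detection: alert only on the transition, not on every "in stock" poll. A minimal sketch of that decision plus the message builder (the Bigbasket scrape itself and the Telegram bot token/chat ID are assumed):

```python
def flipped_in_stock(previous: str, current: str) -> bool:
    """True only when a product moves from out-of-stock to available."""
    return previous.lower() == "out of stock" and current.lower() == "in stock"

def build_alert(product: str, url: str) -> str:
    """Concise Telegram message for a restock event."""
    return f"Back in stock: {product}\n{url}"
```

Sending is then a single HTTP POST to the Telegram Bot API's `sendMessage` method with the bot token and chat ID; keeping the last-seen status per URL in a small JSON or SQLite file lets the scheduler restart without duplicate alerts.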
I need every public phone number that appears on gathered into a single, well-structured Excel workbook. Please crawl the entire site, not just a few sections, and return each number alongside the key profile details that make the data usable at a glance—name, profile URL, and any other easily captured identifiers shown next to the number. A clean .xlsx with one row per profile, no duplicates, and clearly labelled columns is the only deliverable I’m expecting. If you prefer Python, Scrapy, Selenium, Beautiful Soup or a comparable stack, go ahead; I’m interested in results, not the specific toolset, as long as the script can be rerun later should the site content change. Before delivery, double-check that: • every row contains a valid phone number and url • n...
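Before the rows reach the workbook, the numbers themselves need extracting and de-duplicating. A minimal sketch of that step, using a generic international-format matcher (not tuned to the target site, whose URL the brief omits):

```python
import re

# Matches an optional +, then 9+ characters of digits/spaces/()/dots/dashes.
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def extract_phones(text: str) -> list[str]:
    """Return normalized (digits plus leading + only), de-duplicated numbers."""
    seen, out = set(), []
    for match in PHONE_RE.findall(text):
        normalized = ("+" if match.strip().startswith("+") else "") + re.sub(r"\D", "", match)
        if normalized not in seen:
            seen.add(normalized)
            out.append(normalized)
    return out
```

Normalizing before de-duplication is what keeps "+1 (555) 010-9999" and "+15550109999" from producing two rows; writing one profile per row with `openpyxl` then gives the requested `.xlsx`.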
I need a reliable scraper that monitors every basketball league listed on Bet365 (). The script must do two separate pulls for each game: Objective 1 • Run #1 – as soon as Bet365 publishes the starting lineup. • Run #2 – again on game day, no later than one hour before tip-off. For each run, capture Teams and scores, all published lineups and odds, plus the Q1 Total, full Quarter and Half statistics as soon as they appear. The goal is to analyse how the line and odds move between the first and second snapshot, feeding a broader betting-strategy model, so accuracy and time-stamping are essential. Store everything in a structured database of your choice (PostgreSQL or MySQL are fine). The tables must let me query: • first-pull values • second-pull val...
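The two-pull requirement above maps naturally onto one row per game per pull, with movement computed by a self-join. A minimal sketch of that storage model, using SQLite as a stand-in for the PostgreSQL/MySQL schema (table and column names are illustrative, not Bet365's):

```python
import sqlite3

SCHEMA = """
CREATE TABLE odds_snapshot (
    game_id   TEXT NOT NULL,
    pull_no   INTEGER NOT NULL CHECK (pull_no IN (1, 2)),  -- 1=lineup, 2=pre-tipoff
    pulled_at TEXT NOT NULL,                               -- ISO-8601 timestamp
    q1_total  REAL,
    home_odds REAL,
    away_odds REAL,
    PRIMARY KEY (game_id, pull_no)
);
"""

# Line movement between the first and second snapshot, per game.
MOVEMENT_SQL = """
SELECT a.game_id,
       b.q1_total - a.q1_total AS q1_total_move
FROM odds_snapshot a
JOIN odds_snapshot b ON b.game_id = a.game_id AND b.pull_no = 2
WHERE a.pull_no = 1;
"""

def open_db(path: str = ":memory:") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn
```

Because `(game_id, pull_no)` is the primary key, a re-run of either pull can safely upsert rather than duplicate, and the timestamp column preserves the exact capture time the analysis depends on.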
I need a senior-level specialist to harvest product data from several e-commerce sites and deliver it in a single, well-structured CSV file. The task demands production-ready techniques—think Scrapy spiders hardened with rotating proxies, Selenium or Playwright for dynamic content, and solid anti-bot countermeasures. The information I’m after is very specific: product names, prices, pictures, and SKU. Nothing less, nothing more. Your solution must run reliably at scale, cope with frequent layout changes, and leave no trace that could trigger blocks. Python is the preferred stack, but if you have a proven alternative that meets the same bar, I’m open to hearing it. To be considered, include in your proposal: • At least one example of a comparable e-commerce scrapi...
I need a web-based automation that can reliably scan VFS Global’s calendar, pick the earliest available slot that matches preset criteria, complete the booking flow end-to-end, and confirm the reservation—no manual clicks from our side. Core scope • Appointment scheduling is the heart of the build. The script or service must log in with rotating credentials, pass the usual captcha / queue hurdles, search by mission and visa type, then lock the chosen slot before it disappears. • A notification system is also essential. As soon as an appointment is secured (or fails), the system should push an email and, if possible, a Telegram or SMS alert to our team. Access & roles Only Admins—our internal staff—will use the interface. A straightforward d...
I want a clean, well-commented Python 3 script that I can run locally whenever I need fresh information. The program should visit the target site (I’ll share the URL once we start) and pull Product Details exactly as they appear online. That means every time I point the script at a category or search page it should work through all pagination, capture the data, and save it to CSV or Excel so I can sort and analyse it later. Key points to cover • Use reliable, open-source libraries such as requests, BeautifulSoup, or Selenium—whichever gives the most stable results for the site once you see it. • Build in simple settings (URL, output file name, optional delay between requests) near the top of the file so I can tweak them without touching the core logic. • H...
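The "settings near the top of the file" layout requested above might look like the following minimal sketch; the URL and page parameter are placeholders to be filled in once the real site is shared:

```python
# --- settings (edit these, not the logic below) ---
BASE_URL = "https://example.com/category"   # placeholder target
PAGE_PARAM = "page"                         # placeholder query parameter
OUTPUT_FILE = "products.csv"
DELAY_SECONDS = 1.0
# --------------------------------------------------

def page_url(base: str, param: str, number: int) -> str:
    """Build the URL for one results page, respecting an existing query string."""
    sep = "&" if "?" in base else "?"
    return f"{base}{sep}{param}={number}"

def paginate(base: str, param: str, last_page: int) -> list[str]:
    """All page URLs from 1..last_page, in crawl order."""
    return [page_url(base, param, n) for n in range(1, last_page + 1)]
```

The main loop then just iterates over `paginate(...)`, sleeps `DELAY_SECONDS` between requests, parses each page, and appends rows to `OUTPUT_FILE`.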
I’m looking for a well-structured Python solution, built around BeautifulSoup (BS4) and any supportive libraries you deem essential, that reliably pulls both product details and customer reviews from Lazada on a daily schedule. The data will fuel ongoing competitor research, so consistency and clarity of the output are critical. I am looking specifically to get the data using BS4 while bypassing the captcha. Here’s how I picture the flow: • Input: category URL(s) or product list I supply in a CSV/JSON. • Scrape: title, price, promos, specs, images, ratings, full review texts, review dates, and reviewer scores. • Output: clean CSV or JSON dropped into a dated folder after each run. Make the script easy to tweak if Lazada changes its markup. Acceptance criteria 1. S...
I need to build a reliable, well-structured lead list and I already know exactly what it should contain. The task is to extract contact information—email addresses, phone numbers and full mailing addresses—from three sources: company and organisation websites, their public social-media profiles, and well-known online directories. I expect the data to be gathered with a solid scraping workflow (Python, Scrapy, BeautifulSoup, Selenium or an equivalent stack is fine) and then verified so that bounced emails and dead numbers are kept to an absolute minimum. Deliverables • One CSV or Excel file with separate columns for name, company, job title, email, phone, street address, city, state, ZIP/postcode, country, source URL and date collected. • No duplicates; every...
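The verification-and-dedup pass described above starts with cheap local checks before any paid validation. A minimal sketch (the regex is a pragmatic syntax filter, not deliverability verification; real bounce screening would add an MX lookup or a verification API):

```python
import re

EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def plausible_email(addr: str) -> bool:
    """Syntax-level check that weeds out obviously malformed addresses."""
    return bool(EMAIL_RE.match(addr.strip()))

def dedupe_leads(rows: list[dict]) -> list[dict]:
    """Keep the first row seen for each lowercased email address."""
    seen, out = set(), []
    for row in rows:
        key = row.get("email", "").strip().lower()
        if key and key not in seen:
            seen.add(key)
            out.append(row)
    return out
```

Filtering with `plausible_email` and then `dedupe_leads` before export keeps the final CSV/Excel file free of the duplicates and dead addresses the brief rules out.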
I have a data-analysis pipeline that relies on a steady flow of fresh product images from a well-known e-commerce site. What I need is a robust scraper that can navigate the catalog, collect every product’s main and variant images, and deliver them to me neatly organized. Key points you should know: • Target: a single e-commerce platform (URL supplied after award). • Payload: high-resolution image files plus a CSV/JSON map linking each file to product ID, title, price, and category text that you extract during the same run. • Scale: thousands of products per crawl; a resumable approach is essential so partial failures don’t force a full restart. • Frequency: I’ll trigger the crawl weekly, so reusable code is a must. I’m happy with Pytho...
Hello, I am looking for a professional translator who can accurately and naturally translate Japanese content into English. The ideal candidate will have experience in translating business, technical, or creative content and can maintain the original tone and meaning while producing fluent, high-quality English text. Project Requirements: Translate Japanese text into clear, accurate, and natural English Maintain the original tone, style, and nuance of the Japanese content Ensure proper grammar, punctuation, and formatting Deliver translations on time and communicate proactively if there are any questions Qualifications: Native or near-native English proficiency Proven translation experience with samples or portfolio preferred Attention to detail and commitment to high-quality work Addi...
I need a reliable way to pull data from Facebook Marketplace seller pages at scale. The target platform is Facebook; other marketplaces such as eBay, Amazon or Etsy are irrelevant for this job. Here’s what I’m after: when I paste one or many seller profile URLs into your script or small desktop app, it should crawl every public listing on those pages and export the results to CSV or Google Sheets. I mainly care about item title, price, description, photos (image URLs are fine), posting date, item location and the seller’s profile link so I can trace each record back to its source. If you can collect additional fields that Facebook exposes, even better—just keep everything neatly labelled. No hard requirement on the stack: Python with BeautifulSoup / Selenium, ...
I am looking for an experienced developer with strong expertise in Python and web automation to build a smart system for monitoring ticket availability and event updates on the Webook platform. The system should focus on automation, notifications, and usability while following best technical and compliance practices. Scope of Work • Develop a Python-based automation system to monitor events and ticket availability. • Send real-time notifications when: • New events are published • New ticket batches become available • Build a clean and user-friendly dashboard to: • Manage monitoring settings • Control alerts and configurations • Implement structured and scalable automation logic. • Ensure the solution is maintainable and adaptable to f...
For an upcoming market research study, I need a fully-automated workflow that gathers and enriches data from well over 500 LinkedIn profiles. The automation should locate the profiles that match criteria I will provide, pull the key public details, then append reliable off-platform contact information so I can reach those professionals directly. Please design the script or low-code sequence with any reliable stack you prefer—Python, Selenium, PhantomBuster, Sales Navigator API, or comparable tools are fine as long as the method is repeatable and respects rate limits. Deliverables • CSV/Excel file containing one row per person with: – Current job title – Company name – Verified email (and phone, when available) • Source code or workflow fi...