    3,427 scrapy projects found
    Scrapy website spider
    Ended

    Hello, I would like a script that requests a list of 500 or more names at a time from a website and writes each result to a CSV or XLS file with the address and phone number.

    $237 Average bid
    10 bids

    I need a developer to collect data from multip...repeatable solution (script or small app) that I can run on demand. Basic documentation: how to run it, how to adjust settings, where outputs go.
    Quality requirements: reliable scraping with error handling and retries; respectful request rate / throttling to avoid overloading sites; clear logging (success/fail, pages processed); ability to adapt if page structure changes. Experience with Python (Scrapy/BeautifulSoup/Selenium/Playwright) or Node.js; proxy / rotating user-agents experience (only if needed); scheduling/automation (cron, Docker, or cloud run).
    Deliverables: working scraper + instructions; sample output file(s); final dataset from agreed sources (initial run).
    To apply, please include examples of similar scraping work you...

    $142 Average bid
    151 bids
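The "error handling and retries" plus "respectful request rate / throttling" requirements that recur in these postings can be sketched in a few lines of Python. This is a minimal illustration, not any poster's actual spec: the `fetch` callable and the delay values are assumptions, and injecting `fetch` and `sleep` just keeps the retry logic testable without a network.

```python
import random
import time

def backoff_delays(retries=3, base=1.0, jitter=0.0):
    # Exponential backoff schedule: base, 2*base, 4*base, ... plus optional jitter.
    return [base * (2 ** i) + random.uniform(0, jitter) for i in range(retries)]

def fetch_with_retries(fetch, url, retries=3, base_delay=1.0, sleep=time.sleep):
    # `fetch` is any callable that returns a page or raises on failure.
    last_err = None
    for delay in [0.0] + backoff_delays(retries, base_delay):
        if delay:
            sleep(delay)  # throttle between attempts
        try:
            return fetch(url)
        except Exception as err:
            last_err = err
    raise last_err
```

In practice the same pattern is what Scrapy's built-in `RETRY_TIMES` and `DOWNLOAD_DELAY` settings give you for free.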
    Scrape Zillow Agent Data
    6 days left
    Verified account

    I have an urgent need for a clean, well-structured dataset containing the listing agent’s first name, last name, mailing address, and phone number for well over 500 active Zillow listings. Speed is critical, but accuracy matters just as much; the final file should be ready for immediate import into my CRM. You are free to use whichever stack you prefer—Python with BeautifulSoup or Scrapy, Selenium, residential proxies, even the unofficial Zillow API—so long as rate-limits are respected and the data is complete. I don’t need property details or price history; the focus is strictly on the agent contact fields. Deliverables • CSV or XLSX with a separate column for each required field • A short read-me explaining the script or method so I can reru...

    $25 Average bid
    65 bids

    ...visible textual content I specify, and returning it in a machine-readable format. I’m flexible on the final file type; CSV, Excel, or JSON all work as long as the fields are clearly labeled and easy for me to manipulate later. A small sample first will help confirm we’re on the same page before you run the full extraction. Please use whatever stack you prefer—Python with BeautifulSoup or Scrapy, JavaScript with Puppeteer, or a tool that suits the task best—just be sure to respect and provide the code so I can rerun the process when the site updates. Deliverables: • Re-usable script or notebook with clear comments • Complete dataset containing all extracted text, delivered in my chosen format • Brief read-me explaining setup, ...

    $67 Average bid
    14 bids

    ...public websites
    * Parse HTML, JSON, CSV, and PDF files
    * Clean and normalize messy real-world data
    * Write clear, maintainable utility scripts
    * Deliver working code (not just prototypes)

    ### Required Skills
    * Strong Python fundamentals
    * Real experience with web scraping
    * Data parsing and data cleaning
    * Comfortable working independently and async

    ### Nice to Have
    * BeautifulSoup, Scrapy, Playwright, or Selenium
    * pandas / numpy
    * Experience scraping government or legacy websites
    * Experience handling PDFs (text extraction, OCR)

    ### How We Evaluate
    * This role includes a **paid trial task (1–3 days)**
    * We care about **output and correctness**, not resumes
    * Clean, working code matters more than clever abstractions

    ### Important
    * Please includ...

    $11 / hr Average bid
    107 bids

    I need a reliable scraping solution that collects every open position from ten job-board and company-career sites in one specific country. I already have the full URL list and will share it right after kickoff. Scope • Write and schedule a separate scr...postings, basic keyword search in the frontend, and an export button for CSV or Excel, but these are optional. Deliverables 1. Source code for all scrapers and the data pipeline. 2. Database schema or JSON structure. 3. Front-end webview ready to run locally. 4. README covering installation, configuration, and update routine. I’m happy to discuss your preferred stack—Python with BeautifulSoup/Scrapy or Node with Cheerio/Puppeteer are both fine—as long as the final result is stable and well documented....

    $98 Average bid
    65 bids

    ...LinkedIn, Indeed and HelloWork. • Captures, at minimum, the job title, full description, company name and location. • Stores everything in a structured database I can easily query or export. • Retrieves complete CVs from LinkedIn and, when possible, other social platforms, then links each profile to the same database scheme. Feel free to choose the most stable stack you trust—Python with Scrapy or Selenium, Node with Puppeteer, direct GraphQL or REST endpoints, etc.—as long as it runs unattended, copes gracefully with rate limits / captchas, and offers a simple way for me to schedule or trigger updates. Acceptance will be based on: 1. A repeatable script or service I can host (Docker image or cloud function are fine). 2. A concise setup guid...

    $1268 Average bid
    154 bids

    ...build a reliable, well-structured lead list and I already know exactly what it should contain. The task is to extract contact information—email addresses, phone numbers and full mailing addresses—from three sources: company and organisation websites, their public social-media profiles, and well-known online directories. I expect the data to be gathered with a solid scraping workflow (Python, Scrapy, BeautifulSoup, Selenium or an equivalent stack is fine) and then verified so that bounced emails and dead numbers are kept to an absolute minimum. Deliverables • One CSV or Excel file with separate columns for name, company, job title, email, phone, street address, city, state, ZIP/postcode, country, source URL and date collected. • No duplicates; every...

    $2 / hr Average bid
    19 bids

    ...following fields: • Job title and full description • Company name plus location (city, state/region, country) • Employment type and any salary or rate information available Your scraper should store results in a clean, normalized CSV (or optionally a relational DB if you prefer) and be easy for me to rerun on demand. I’m comfortable with Python, so a script leveraging requests/BeautifulSoup, Scrapy, or Playwright makes sense, but if another stack delivers better reliability feel free to suggest it. Key expectations • Site recommendations presented first for my approval before you start coding • Respect , add configurable request delays, and build basic anti-block measures (user-agent rotation, retries) • Clear documentation ex...

    $275 Average bid
    63 bids
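The "user-agent rotation" anti-block measure mentioned above is usually just a header cycle. A minimal sketch, with placeholder user-agent strings (real runs should use current, full UA strings):

```python
import itertools

# Example user-agent strings (placeholders, not real current browser UAs).
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]

_ua_cycle = itertools.cycle(USER_AGENTS)

def next_headers(extra=None):
    # Return request headers carrying the next user-agent in the rotation.
    headers = {"User-Agent": next(_ua_cycle)}
    if extra:
        headers.update(extra)
    return headers
```

Each outgoing request then calls `next_headers()` so consecutive hits do not all present the same fingerprint.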

    I ...first line of address, state, city, postcode • Format: every column saved as plain text (no numeric or date formatting) Delivery schedule • First 5,000 fully cleaned rows required within the first 6 hours • Remainder on a rolling basis until the full 15,000 are complete I will supply a surname list to guide the searches. A straightforward Python (requests / BeautifulSoup or Selenium) or Scrapy workflow is fine as long as the final output arrives in a single Excel file (.xlsx) that opens error-free in Microsoft Excel. Accuracy matters more than speed—random spot checks will be run. Any duplicates, blanks, or malformed addresses will be sent back for correction. Once the first 5,000 pass review, I’ll green-light the rest of the scrape so we ca...

    $23 Average bid
    47 bids
    Data Scraping Specialist
    3 days left
    Verified account

    Description:
    - We are looking for an experienced Data Scraping / Web Scraping expert.
    - We will share the industry name, and the...
    - Suggest suitable websites/sources to scrape
    - Suggest countries/regions that can be covered
    - Share estimated data volume & approach
    - After approval, the freelancer will scrape and deliver clean, structured data.
    Data Required (example):
    - Company name
    - Location
    - Contact details (email/phone/website – if available)
    Requirements:
    - Proven experience in data scraping
    - Knowledge of Python, Scrapy, Selenium, APIs, etc.
    - Ability to scrape multi-country data (based on feasibility)
    Deliverables:
    - Data in Excel / CSV / Google Sheets
    - Basic info of sources used
    To Apply, share:
    - Similar scraping work
    - Tools you use
    - Your approach after ...

    $67 Average bid
    16 bids

    I need all publicly available customer-facing email addresses extracted from a list of e-commerce websites that I will supply once the project begins. Please crawl only the domains I provide, respect where possible, and avoid triggering any rate limits or security blocks—rotating proxies or headless browsing with tools such as Python, Scrapy, BeautifulSoup, Selenium, or similar is fine as long as the result is reliable. Deliverable • One clean, de-duplicated CSV file containing the harvested email addresses, ready for direct import into my CRM. Acceptance criteria • Every email must originate from the target e-commerce domains. • No duplicates, placeholders, or obviously invalid addresses. • File encodes as UTF-8 and opens without warnings in Exc...

    $16 Average bid
    45 bids
    Portal Scraper & WooCommerce Sync
    2 days left
    Verified account

    ...associated images—then converts and calculates the raw values exactly as we define before pushing them straight into WooCommerce. My customers must only ever see the WooCommerce front end, so the sync has to feel native and instant. The portal changes frequently, so please code the extractor so that selectors and credentials can be updated without touching the core logic. I am open to Python (Scrapy, BeautifulSoup, Selenium), PHP or Node as long as the finished solution talks cleanly to the WooCommerce REST API and leaves no manual steps. Deliverables • Scraper that logs in and captures product details, stock, prices and images in real time or on a schedule we agree on • Conversion layer that performs the unit/price calculations before data enters WooCommerce ...

    $1292 Average bid
    236 bids

    I need a developer to collect data from multip...repeatable solution (script or small app) that I can run on demand. Basic documentation: how to run it, how to adjust settings, where outputs go.
    Quality requirements: reliable scraping with error handling and retries; respectful request rate / throttling to avoid overloading sites; clear logging (success/fail, pages processed); ability to adapt if page structure changes. Experience with Python (Scrapy/BeautifulSoup/Selenium/Playwright) or Node.js; proxy / rotating user-agents experience (only if needed); scheduling/automation (cron, Docker, or cloud run).
    Deliverables: working scraper + instructions; sample output file(s); final dataset from agreed sources (initial run).
    To apply, please include examples of similar scraping work you...

    $165 Average bid
    169 bids
    UK Vinyl Shops Data Scrape
    Ended

    ...directory • → Record Stores tab • → search term “record shops” For each shop, capture these fields exactly: – Business name – Email address – Phone number All three data sets should be merged into one unified file; no source labels or separate sheets are required. Please scrape or crawl the sites directly—automated methods such as Python, BeautifulSoup, Scrapy, or similar tools are fine so long as the final output arrives de-duplicated and ready to open in any spreadsheet application that supports CSV. Accuracy matters more than speed, so feel free to build in basic checks (e.g., email format validation, obvious duplicate removal). Once complete, send the single CSV plus a brief note on how you gather...

    $134 Average bid
    135 bids

    I need an automated scraping solution that reliably collects product data from targeted websites and delivers it in a clean, structured file I can plug straight into my workflow. You’re free to use Python (BeautifulSoup, Scrapy, Selenium, Playwright) or a simple cloud instance, and the output lands in CSV or JSON.

    $88 Average bid
    91 bids
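Product extraction into CSV, as requested above, can be sketched with only the standard library. The `product-name` / `product-price` class names below are hypothetical placeholders for whatever selectors the real target site uses; production work would typically reach for BeautifulSoup or Scrapy instead of the bare `HTMLParser`:

```python
import csv
import io
from html.parser import HTMLParser

class ProductParser(HTMLParser):
    """Collect (name, price) pairs from elements whose class names we assume."""
    def __init__(self):
        super().__init__()
        self._field = None
        self._current = {}
        self.rows = []

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class") or ""
        if "product-name" in cls:
            self._field = "name"
        elif "product-price" in cls:
            self._field = "price"

    def handle_data(self, data):
        if self._field:
            self._current[self._field] = data.strip()
            self._field = None
            if {"name", "price"} <= self._current.keys():
                self.rows.append(self._current)
                self._current = {}

def to_csv(rows):
    # Serialize the scraped rows as CSV text with a header line.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["name", "price"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Feeding the parser a page and writing `to_csv(parser.rows)` to disk yields the "CSV output" deliverable these postings describe.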

    ...directories you can legally access. For each product, capture at minimum the product name, its full description or spec blurb, and the page URL so I can verify the entry later. If additional details such as model numbers or imagery links are readily available while you scrape, feel free to include them as extra columns, but the name and description are non-negotiable. A Python-based workflow using Scrapy, BeautifulSoup, or a comparable toolset is fine by me so long as the end result arrives as a single Excel workbook, neatly separated by sheet or filterable fields. Please ensure your methods comply with site terms of service and New Zealand data-privacy requirements. I will consider the job complete when: • Every known NZ supplier type (storefront, manufacturer, d...

    $121 Average bid
    52 bids

    I need the brochure catalogue of a JavaScript-heavy e-commerce site captured and delivered as a clean CSV. My focus is on accurate prices and every available variant, pulled from each category the site offers. Python is the language of choice and I’m flexible on tooling—Scrapy + Playwright, straight Playwright, Selenium, or another robust approach—provided the code is modular, well-documented, and easy for me to rerun when the store layout shifts. If you already have proxy rotation or rate-limit handling baked into your pipeline, that will be an advantage. What has to happen • Crawl through every category filter so no product slips through the cracks. • Render dynamic content fully to capture price and variant data, along with URL, SKU, net price an...

    $98 Average bid
    31 bids

    ...SOMEONE WHOSE PROFILE PICTURE MATCHES WHO THEY ACTUALLY ARE. A VIDEO CALL WILL BE NEEDED TO DISCUSS DETAILS PRIOR TO HIRING. **THIS IS A LONG-TERM PROJECT THAT WILL BE HEAVILY FRONT-LOADED, HOPEFULLY MAKING YOUR LIFE EASIER OVER THE DURATION OF THE PROJECT. WITH THIS I PLAN ON PROVIDING A MONTHLY STIPEND THAT WILL EVEN OUT PAY OVER THAT TERM. Your toolkit is up to you, but Python with Scrapy or Selenium, a tidy Pandas workflow, and solid MySQL / PostgreSQL skills fit naturally with what we already run on AWS. Clean code, deduping, error logging, and clear documentation are non-negotiable; everything has to slip straight into the pipelines that feed our mobile app. ** I HAVE EXISTING SCRIPTS FROM THE PREVIOUS FREELANCER THAT WILL HELP YOU EXPEDITE THE WORK YOU DO. De...

    $487 Average bid
    207 bids
    NRL Data Scraping Pipeline
    Ended

    ...then continues to run weekly. Scrapy is strongly preferred because I want to spin the whole thing up inside a GitHub Codespaces Docker container and keep deployment friction to a minimum. Scope of data • Players page first: full player-level stats are the initial priority. • Draw page next: I specifically need match dates, kick-off times and the team & player stats embedded in each fixture. • Ladder and any other public stats pages can follow once the core player and draw feeds are solid. Data model & output All raw HTML should be parsed into tidy, flat tables (CSV or Parquet). Please create sensible surrogate keys so that tables for players, matches, teams and ladder positions can be joined cleanly in a downstream warehouse. Deliverables &bull...

    $113 Average bid
    67 bids
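The surrogate-key request above (so player, match, team, and ladder tables join cleanly downstream) amounts to assigning small integer ids in first-seen order. A minimal sketch; the record shape and field names are illustrative assumptions, not the NRL site's actual schema:

```python
def assign_surrogate_keys(records, natural_key):
    # Map each distinct natural key (e.g. a player name) to an integer id,
    # assigned in first-seen order, and stamp it onto every record.
    key_map = {}
    keyed = []
    for rec in records:
        nk = rec[natural_key]
        sid = key_map.setdefault(nk, len(key_map) + 1)
        keyed.append({**rec, "id": sid})
    return keyed, key_map
```

The returned `key_map` doubles as the dimension table, while `keyed` becomes the flat fact table to write out as CSV or Parquet.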

    ...horse’s career starts, total earnings and current trainer. What I’m missing is the data itself. Two different public websites publish this information; I’m happy for you to pull from whichever source (or a mix of both) gives the most complete and accurate results. Your task is to automate the extraction—ideally with a clean, well-commented Python script that uses requests/BeautifulSoup, Selenium, Scrapy or any other library you prefer—and then populate my spreadsheet with the fetched records. When the job is done I need: • the updated Excel file, fully filled out and spot-checked for accuracy • the script and quick usage notes so I can rerun it later if the sites update That’s it. If you can turn this around quickly and the n...

    $17 Average bid
    28 bids

    I need a single, clean pull of all product information from an e-commerce site. The scope is limited to product names, full descriptions, and the corresponding images; no price or contact data is required. A fresh scrape is needed only once, so scheduling or cron work is unnecessary. You may use Python with BeautifulSoup, Scrapy, Selenium, or any comparable stack—what matters is that the final dataset is complete and easy for me to consume. Deliverables • CSV or XLSX listing every product with its name and full description • A folder (or ZIP) containing every product image, with filenames mapped to the rows in the data file • A brief README outlining the scrape process and any setup steps I would need to reproduce it locally Acceptance criteria ...

    $31 Average bid
    92 bids

    ... The page also includes the serving city, state, and ZIP code. I need all these elements pulled out, cleaned (no stray markup or line breaks), and placed into a single, well-structured Excel workbook that’s ready for immediate analysis; the data will be used to build a dashboard analyzing negative and positive reviews. You are free to choose the scraping approach—Python with BeautifulSoup or Scrapy, a headless browser like Playwright, or any tool you trust—as long as the end result is an accurate Excel file with clear columns for Review_Text, Review_Date (YYYY-MM-DD), and Zip_Code. Deliverables: • One .xlsx file containing every review currently live on the site, fully deduplicated and formatted. • One dashboard for easy analysis. • Sentiment analysis...

    $34 Average bid
    49 bids
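The Review_Date (YYYY-MM-DD) column above implies normalizing whatever date format the site renders. A small sketch using only the standard library; the `KNOWN_FORMATS` list is an assumption about formats a review site might use and would need adjusting to the real pages:

```python
from datetime import datetime

# Input formats we assume the site might render; extend as needed.
KNOWN_FORMATS = ["%B %d, %Y", "%d/%m/%Y", "%Y-%m-%d"]

def normalize_date(raw):
    # Return the review date as YYYY-MM-DD, or None if no known format matches.
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    return None
```

Rows where `normalize_date` returns `None` can be flagged for manual review rather than silently dropped.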

    ...social-media link that appears on the page. • Storage: write everything to our MySQL database. • Data quality: run only basic validation—remove obvious duplicates, trim whitespace, keep field formats consistent. • Export: one-click download to XLS or CSV. • Deployment: install on our Linux server and document the steps so I can redeploy if needed. Preferred stack Python with BeautifulSoup, Scrapy, or Selenium is fine, as long as it’s cleanly modular. Use pandas or a similar library for the export. Deliverables 1. Source code and (or equivalent). 2. SQL script to create/update the necessary tables. 3. README covering setup, cron scheduling, and how to adjust throttling. 4. Working front-end files matching the mock-up. Keep it s...

    $102 Average bid
    72 bids

    ...compact, AI-assisted web-scraping module that plugs straight into my existing stack and pulls live product information—specifically price and availability—from several retailer websites. The scraper should detect layout changes automatically, respect where permitted, and expose a simple JSON or CSV feed so I can drop the data into my pricing engine without extra parsing. Python with Scrapy, Playwright, or a similar headless-browser approach is ideal, but I’m open to alternative tools if you can show they handle anti-bot measures reliably. Deliverables • Clean, well-commented source code for the scraper • A lightweight API or CLI wrapper that returns price and availability in structured form • Basic setup guide and a quick demo run showi...

    $19 / hr Average bid
    78 bids

    ...extract phone number from all those businesses • Business name • Full street address (including city, state and ZIP) • Primary phone number The scrape must span all states and territories so the end result reflects a true national snapshot. An Excel or CSV file is fine for delivery; please keep one row per location and label columns clearly. If you intend to automate with Python, Selenium, Scrapy, the Google Maps API or a similar approach, note that accuracy and duplicate handling matter more to me than the tool you choose. Captcha-handling and rate-limiting should be built in so the run completes cleanly without blocks. Acceptance criteria 1. At least 95 % of returned rows contain the four core fields above. 2. No obvious duplicates (same name, phone...

    $16 Average bid
    16 bids
    GeM Portal Data Scraper
    Ended

    ...Deliverables
    The Executable/Script: a Python script (preferred) or a standalone desktop tool.
    Excel Output: a clean .xlsx file with headers for all the data points mentioned above.
    Documentation: a brief "How-to" guide on running the tool and updating search parameters (e.g., how to change the Category URL or Keywords).
    5. Preferred Freelancer Skills
    Expertise in Python (Selenium, BeautifulSoup, or Scrapy).
    Experience with e-commerce scraping and bypassing bot detection.
    Previous experience with Indian Government Portals (GeM, CPP, or Tenders) is a major plus.
    Ability to deliver a tool that does not get our IP blocked (rate-limiting features).
    Important Note on Compliance: since GeM is a government portal, ensure your developer follows ethical scraping practices (...

    $66 Average bid
    17 bids

    ...of websites from which I need specific text captured—product descriptions, article titles, meta-data, and similar fields—and placed into a clean, well-structured CSV file. Whenever automated collection isn’t possible, the same information will have to be entered manually, so accuracy in both scraping and data entry is essential. I don’t mind which stack you prefer—Python with BeautifulSoup or Scrapy, browser automation with Selenium, or a different approach—so long as the final CSV follows the column order I provide and you can replicate the steps for future runs. A short note explaining your method and any scripts you write will be part of the hand-off. Turnaround is flexible as long as we agree on it before each batch, and there will be...

    $402 Average bid
    199 bids

    ...business you can find in Mumbai, Pune, Nashik, and Surat. Your sources should be Google Maps plus the listings on Justdial and Yellow Pages. For each company, capture exactly three fields—business name, full address, and phone number—keeping the phone exactly “as listed” on the source page (no re-formatting into international or local styles). I do not mind which stack you prefer—Python with Scrapy or BeautifulSoup, browser automation with Selenium, or a different approach—so long as the final spreadsheet is comprehensive, de-duplicated across the three sources, and ready for me to open in Excel without further cleaning. Before delivery, please spot-check for obvious errors (e.g., mismatched phone numbers, partial addresses) and remove...

    $17 Average bid
    6 bids

    ...into an AI agent, and immediately turns the results into usable insights and automated replies. Here’s the flow I’m aiming for: the scraper pulls fresh information from the sources I’ll share with you, filters and structures it on the fly, then hands it to the agent. The agent should run on-the-spot data analysis and generate context-aware responses without human intervention. Think Python with Scrapy or Selenium for the collection layer, fast in-memory handling (Redis, Kafka, or similar) to keep everything real-time, and an LLM framework such as LangChain or a custom GPT wrapper for the reasoning layer. If you have a better tech stack in mind, I’m open to it as long as it remains fast and easy to scale. Deliverables I need to sign off on: • A wor...

    $75 Average bid
    63 bids
    Amazon Book Data Scraper
    Ended

    ...product fields (title, price, images, description, etc.)
    Requirements: proven experience scraping Amazon & similar websites; ability to bypass anti-bot systems safely; high accuracy & clean data; automation for bulk scraping preferred; experience with WordPress product import is a plus. Final backup file in Excel/CSV also required.
    Please share: similar past projects, your approach & tools (Python, Scrapy, API, etc.), timeline, total cost.
    Looking forward to working together.

    $85 Average bid
    43 bids
    Daily car.gr Data Scraper
    Ended

    I need a reliable, fully-automated pipeline that pulls fresh information from every listing on once per day, stores it in a query-friendly repository, and highlights what changed since the previous run. Core requirements • Daily...cron-ready execution script and README. 4. Sample CSV/JSON export demonstrating daily deltas and summary stats. Acceptance criteria • A 24-hour test run proves 100 % listing coverage with no duplicate rows. • Second run correctly labels adds/removals and updates analytic tables. • Installation from a clean server takes under 15 minutes using only the supplied documentation. If you’ve built Scrapy spiders, headless-browser collectors, or data pipelines on AWS/Lambda/GCP before, I’m keen to see your approach an...

    $197 Average bid
    72 bids
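The "highlights what changed since the previous run" requirement above is a snapshot diff: given two daily pulls keyed by listing id, label adds, removals, and updates. A minimal sketch; the listing ids and field dicts are illustrative, not car.gr's actual schema:

```python
def diff_snapshots(old, new):
    # old/new: dict mapping listing_id -> field dict from consecutive daily runs.
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(k for k in set(old) & set(new) if old[k] != new[k])
    return {"added": added, "removed": removed, "changed": changed}
```

Persisting each day's snapshot (even as JSON) is enough to drive the delta export and summary stats the posting asks for.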

    ...code. Structure of the data matters to me: each record should include the dish name, price, brief description (when it exists), restaurant name, cuisine tag, and the page URL it was taken from, also the images of the dish. Please deliver the final dataset in a single, clean Excel workbook so I can filter and pivot the information easily. Python is preferred, and I’m comfortable if you rely on Scrapy, BeautifulSoup, Selenium, or a mix—whatever guarantees accurate extraction and keeps the scraper robust against layout changes and basic anti-bot measures. Acceptance criteria • Excel file contains every scraped item with the fields listed above, one row per menu item • Separate worksheet (or a clear column) notes which source the data came from (restaur...

    $11 Average bid
    11 bids

    ...numbers pulled from multiple pages of a single website. You will build or run a scraper, capture every unique contact detail, and deliver a clean, de-duplicated data file—CSV or Excel is fine—along with the script you used so I can rerun it later if the site updates. The target site structure is consistent across pages, so once you handle one page the rest should follow the same pattern. Use Python, Scrapy, BeautifulSoup, Selenium, or a comparable stack—whatever you are quickest with—as long as the final script is well-commented and easy for me to execute on my own machine. Deliverables • Scraper script with clear setup instructions • CSV/Excel containing all email addresses and phone numbers found on every specified page • Brief not...

    $12 Average bid
    19 bids

    I need a comprehensive, fully-verified database of retail jewellers operating in Delhi, Jaipur, Jodhpur and Chandigarh. For every store the dataset must inc...in a single Excel workbook; I’ll spot-check phone numbers and addresses to confirm authenticity. Any entry that fails verification will be rejected, so your process should already cover deduplication, address standardisation and phone validation. Once the sample is cleared, I’ll expect the complete, cleaned file plus a brief note on the sources and tools you used. Feel free to rely on Python, Scrapy, Selenium, Google Maps API—or any stack you trust—so long as the final data is accurate, up-to-date and refreshable later if needed. Drop me a quick outline of your scraping approach and timeline, and we c...

    $74 Average bid
    58 bids

    I need every contact-type email that appears in Google (Maps, etc.) for pet breeders in Spain (approx. 2,000) gathered and placed into a Google Sheet I can immediately share with my team. The scraper should crawl all Google results and capture each unique email address. I attach the search term in Spanish along with the 52 most prominent cities in Spain where the search should be done... Python with BeautifulSoup, Scrapy, or Selenium is fine—whatever achieves a clean, de-duplicated result without hitting the site too aggressively. Deliverables • A Google Sheet containing two columns: Email | with no duplicates. Accuracy and speed are my priorities; once I confirm the sheet is complete and the emails are valid, I’ll m...

    $41 Average bid
    29 bids
    Lead Generation Web Scraper
    Ended

    I need contact emails from a defined set of websites and popular online directories so I can fast-track my lead-generation efforts. The job is straightforward: build and run an automated scraper—Python, Scrapy, BeautifulSoup, Selenium, or another proven stack works for me—that moves through each page, respects , skips obvious traps, and produces a clean, ready-to-import data file. What matters most is accuracy and speed. Every record should include, at minimum, name (when present), email, and the source URL. De-dupe results to keep my CRM tidy, and validate the contacts so bounce rates stay low. Deliverables • Sample output (small CSV) for quick review before the final run • Final CSV or Excel file containing all collected leads Acceptance criteria ...

    $67 Average bid
    Guaranteed

    ...Postcode • Phone Number • Email (where publicly shown) • Business Category • Short Description / Tagline Because the file will drive quantitative analysis, coverage and consistency are critical. Please remove obvious duplicates, keep empty cells where information is missing, and preserve any special characters in company names or addresses. You are free to choose your own stack—Python with Scrapy or BeautifulSoup, Selenium for dynamic pages, or a comparable approach—as long as it handles pagination, category browsing, and any anti-bot measures without degrading accuracy. Deliverables 1. Excel workbook (.xlsx) with all records, ready for pivoting or filtering 2. A brief read-me describing your scraping approach and any limitations e...

    $101 Average bid
    119 bids

    ...from an online catalogue and now want the entire process fully automated. The goal is a repeatable script that captures every piece of relevant content on the site—text, product images, and any internal or external links—then saves it in a tidy, structured format I can work with straight away. Here’s what I need you to build and hand over: • A scraping script (Python preferred—BeautifulSoup, Scrapy, or Selenium if the pages are dynamic) that logs in if required, navigates through all catalogue sections, and pulls text, images, and links without missing hidden or paginated items. • Clean output: text and links in CSV or JSON, images downloaded into organised folders with filenames that reference their corresponding records. • A simple c...

    $61 Average bid
    22 bids
    UK Industry Email Scraping
    Ended

    I need a clean, well-structured list of fresh leads gathered exclusively from UK-based, industry-specific websites. No social...role/title, direct email address, phone (if listed), and website URL. • Deliver the data in an Excel or Google Sheets file, clearly de-duplicated and ready for outreach. Quality criteria • All emails must be live (no bounces on a quick MailTester check). • Minimum 90 % accuracy on contact-to-company matching. • Each entry must show its source URL for validation. Feel free to use Python, Scrapy, BeautifulSoup, Selenium, or any reliable scraping stack you prefer—as long as the final sheet meets the accuracy standards above and respects UK data compliance. Let me know your estimated turnaround time and any clarifying questions...

    $31 Average bid
    101 bids

    I need every contact-type email that appears anywhere on an openly accessible website gathered and placed into a Google Sheet I can immediately share with my team. The scraper should crawl all public pages, capture each unique address, and note the exact URL where it was found so we can verify provenance later. Python with BeautifulSoup, Scrapy, or Selenium is fine—whatever achieves a clean, de-duplicated result without hitting the site too aggressively. Deliverables • A Google Sheet containing two columns: Email | Source URL, with no duplicates Accuracy and speed are my priorities; once I confirm the sheet is complete and the emails are valid, I’ll mark the project done.
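    A minimal sketch of the dedupe-and-provenance logic this brief describes, run here on inline sample pages rather than a live crawl (the URLs and addresses are illustrative):

    ```python
    import re

    # Hypothetical page contents; in practice each entry would come from an
    # HTTP fetch of one public page on the target site.
    PAGES = {
        "https://example.com/contact": "Reach us at info@example.com or sales@example.com.",
        "https://example.com/about": "Questions? Email info@example.com anytime.",
    }

    EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

    def collect_emails(pages):
        """Return {email: source URL of first sighting} with duplicates removed."""
        found = {}
        for url, html in pages.items():
            for email in EMAIL_RE.findall(html):
                found.setdefault(email.lower(), url)  # keep first sighting only
        return found

    # Each tuple maps to one row of the "Email | Source URL" sheet.
    rows = [(email, url) for email, url in sorted(collect_emails(PAGES).items())]
    ```

    Pushing `rows` into the shared Google Sheet could then be done with the Sheets API or a `gspread`-style client, throttled so the site is never hit aggressively.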

    $28 Average bid
    49 bids

    ... and pushes the results straight into MySQL database tables I can query. The data set must cover the full catalogue on the site and include, for every tour: the day-by-day itinerary text, available dates, all stated inclusions or exclusions, and the advertised price and details in Australian dollars for each departure date. You are free to use Python with BeautifulSoup, Scrapy (preferred), Selenium or another stack you trust, as long as the final result is a repeatable, well-commented script plus clear connection settings so I can run it myself on a schedule. Deliverables • Script loaded to server, tested and working. • Ongoing support (paid as separate tasks on hourly rate or fixed fee) to tune the script to deal with site changes. • Schedule weekl...

    $333 Average bid
    155 bids

    ...file directly into Wix for an email-marketing campaign and to run LinkedIn outreach in parallel, so data hygiene matters—no duplicates, no obvious spam traps, and the email field must pass a quick syntax check. Please structure the CSV with one row per contact and clear column headers (Name, Business Address, Email, Phone, LinkedIn URL). Feel free to work in Python with Selenium, BeautifulSoup, Scrapy, or another stack you prefer so long as it runs headless and I can launch it from Windows. Include the source code, a short read-me, and one test CSV that proves the workflow end-to-end on at least a small sample set....
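    A minimal sketch of the "quick syntax check" on the email field that this brief calls for. The regex is an assumed pattern, not a deliverability test—catching bounces or spam traps would need an MX/SMTP verification service on top:

    ```python
    import re

    # Syntax-only check: one "@", no whitespace, a dotted TLD of 2+ letters.
    SYNTAX = re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$")

    def passes_syntax(email):
        """True if the address is plausibly well-formed; says nothing about deliverability."""
        return bool(SYNTAX.match(email))

    checks = [passes_syntax(e) for e in ["jane@firm.co.uk", "bad@@mail", "no-at-sign"]]
    # checks → [True, False, False]
    ```

    Rows failing this check would be dropped before the CSV is imported into Wix.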

    $141 Average bid
    80 bids

    ...and stock status I also want the full product description, brand name, SKU code, pack or size information, and any promotional wording that appears on the page (discount banners, “limited-edition” tags, gift-with-purchase notes, etc.). Because this project centres on Sephora only, you can tune your approach to whatever will stay stable against their layout changes—Python with BeautifulSoup or Scrapy, Node with Puppeteer, or another stack you’re comfortable with—as long as the final script is documented and can be rerun on my side without a steep learning curve. Deliverables • A well-annotated script or notebook that scrapes the fields listed above from every product URL in a category I specify. • Output saved to CSV or JSON with c...

    $68 Average bid
    37 bids

    Pl...give me a clean CSV file. For each record I only need 4 fields: the licensee’s full name, their cell-phone number, and their city. If the site paginates or hides the phone behind an extra click, the script still has to collect it. I’m happy with a one-off delivery right now, yet I’d like the code kept tidy so we can run it every month without rewriting anything. Feel free to use Python with Scrapy, BeautifulSoup, Selenium—or any stack you trust—as long as: • The final CSV opens without errors in Excel. • Every live record on the site is captured once, no duplicates. • The script, a short README, and any environment notes are included. Let me know how quickly you can turn around the first scrape and what you’d charg...

    $95 Average bid
    78 bids
    Product Data Scraping
    Ended

    I need a skilled web scraper to extract specific product details from a site. The data should be delivered in a CSV file. Requirements: - Scrape product descriptions and images - Provide data in CSV format Ideal Skills: - Experience with web scraping tools (e.g., Python, BeautifulSoup, Scrapy) - Ability to handle image downloads - Attention to detail and accuracy

    $20 Average bid
    28 bids

    I need a small script that will visit a specific site (I will share the URL and login details after award), locate every file that meets my criteria, and download those files to a local folder. The job is a single run only; once the files are saved, the task is complete. A lightweight Python solu...Crawls the relevant pages and identifies the target files (PDFs and ZIPs in my case) • Saves each file locally, keeping the original filename intact • Produces a short README so I can rerun the scraper later if needed Please keep the code clean, commented, and fully self-contained—no proprietary frameworks. Let me know at the start if the site’s structure suggests a more efficient tool such as Scrapy; I’m flexible as long as the deliverable runs on a stan...
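    A minimal sketch of the link-filtering step this brief describes—resolving relative links and keeping only PDFs and ZIPs, with the original filename preserved for saving locally. The base URL and hrefs are illustrative:

    ```python
    import posixpath
    from urllib.parse import urljoin, urlparse

    # Hypothetical anchor hrefs harvested while crawling the logged-in pages.
    hrefs = ["/docs/report.pdf", "archive.zip", "/about.html", "img/logo.png"]
    BASE = "https://example.com/files/"

    def target_files(base, links, exts=(".pdf", ".zip")):
        """Resolve relative links, keep only wanted file types, and pair each
        absolute URL with its original filename for the local save."""
        out = []
        for href in links:
            url = urljoin(base, href)
            name = posixpath.basename(urlparse(url).path)
            if name.lower().endswith(exts):
                out.append((url, name))
        return out

    downloads = target_files(BASE, hrefs)
    ```

    A single download loop over `downloads` (e.g. `urllib.request.urlretrieve(url, name)`) then completes the one-off run; only standard-library modules are used, keeping the script fully self-contained.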

    $61 Average bid
    30 bids

    ...putting together a system that will regularly scrape multiple online sources for car-parts information and push the results into my product-testing database. The scraper has to collect every key data point I rely on—price, full specifications, product name, category, and any product codes—so I can run internal tests without manual look-ups. Here’s how I picture the workflow: the script (Python with Scrapy, BeautifulSoup or a similarly reliable stack) runs on a weekly schedule, reaches the target sites, extracts the fields above, cleans obvious duplicates, and stores everything in a structured format that my current database can ingest. If an API is available for a site, feel free to use it; otherwise, a robust HTML scraper is fine as long as it respects and ke...
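    A minimal sketch of the duplicate-cleaning step before database ingest, keyed on the product code since several sources may list the same part. The rows and field names are illustrative:

    ```python
    # Hypothetical scraped car-part rows; the same product code can appear
    # more than once when multiple sources carry the same part.
    scraped = [
        {"code": "BP-100", "name": "Brake Pad", "category": "Brakes", "price": 45.0},
        {"code": "OF-200", "name": "Oil Filter", "category": "Engine", "price": 12.5},
        {"code": "BP-100", "name": "Brake Pad", "category": "Brakes", "price": 45.0},
    ]

    def dedupe_by_code(rows):
        """Keep one row per product code so the database ingest stays clean."""
        by_code = {}
        for row in rows:
            by_code.setdefault(row["code"], row)  # first occurrence wins
        return list(by_code.values())

    clean = dedupe_by_code(scraped)
    # clean holds one row each for BP-100 and OF-200
    ```

    The weekly cadence itself would sit outside the script—a cron entry or scheduled container run invoking it once a week, as the brief suggests.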

    $136 Average bid
    150 bids

    I need a reliable script that visits every relevant listing on...vital. Key points • Only the product description field is required along with pricing data— no stock data. • The scraper must cover all pages under the Indian section, following pagination or any internal links that reveal additional products. • I should be able to re-run the solution later, so please supply both the raw dataset and the executable code (a Python script using BeautifulSoup, Scrapy, Selenium, or any comparable tool you prefer). • Output must be clean, UTF-8 encoded, and free of duplicates. I will validate the work by running the script on my machine and comparing the generated file to the template. Let me know your preferred tooling and estimated turnaround, and we c...
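    A minimal sketch of the pagination-following and dedupe logic this brief requires, run against an inline stand-in for the site (the URLs, products, and "next" links are illustrative—a real run would fetch and parse live HTML):

    ```python
    # Hypothetical site map mimicking "next" pagination links under the Indian section.
    SITE = {
        "/in/products?page=1": {"items": [("Tea", "Loose leaf", 120)],
                                "next": "/in/products?page=2"},
        "/in/products?page=2": {"items": [("Rice", "Basmati", 90),
                                          ("Tea", "Loose leaf", 120)],
                                "next": None},
    }

    def crawl(start):
        """Follow pagination to the end, collecting (name, description, price)
        once per product so the output is duplicate-free."""
        seen, rows, page = set(), [], start
        while page:
            data = SITE[page]
            for item in data["items"]:
                if item not in seen:      # drop duplicate listings
                    seen.add(item)
                    rows.append(item)
            page = data["next"]           # follow pagination until exhausted
        return rows

    rows = crawl("/in/products?page=1")
    ```

    Writing `rows` out with `csv.writer` on a file opened with `encoding="utf-8"` satisfies the clean, UTF-8, duplicate-free output requirement.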

    $220 Average bid
    19 bids

    Top scrapy Community Articles