Web Scraping is the process of extracting data or information from an online source such as a website, database, application, etc. Web Scraping Specialists have the skill that helps people collect valuable digital data and quickly find the useful information they need from websites, mobile apps, and APIs. The experts usually use web scraping tools and advanced technologies to collect large amounts of targeted data without any manual work for the client.
With web scraping, tasks that otherwise may require a lot of time can be automated and done faster. Our experienced Web Scraping Specialists use their expertise to develop scripts that continuously target structured and unstructured data sources.
Here are some projects that our expert Web Scraping Specialists have made real:
Web Scraping Specialists are skilled professionals who know how to help businesses optimize processes while collecting the rich, structured data they need for their specific purposes. Our experts speed up the process and return accurate results in less time, so that customers can make better decisions more quickly without any manual labour. If you are looking for a talented professional to build a web scraping project for you, you have come to the right place. Here on Freelancer.com you can find talented professionals who will get the job done with top-quality results! Post your project now and see what our Web Scraping professionals can do for you!
Based on 361,850 reviews, clients rate Web Scraping Specialists 4.9 out of 5 stars.
Headline: Looking for a professional Telegram Automation Expert for a high-intensity marketing campaign for a Solana Meme project ($PIPPIN). Task Overview: I need a technical team/freelancer to execute a Targeted Mass DM campaign. You must have your own infrastructure (Software, Proxies, and aged Telegram Accounts/Sessions). Key Requirements (Non-Negotiable): Targeted Scraping: You must be able to scrape members from specific competitor Telegram groups (links will be provided). Advanced Filtering: You MUST filter the scraped list to target ONLY "Active Users" (those seen online within the last 24 hours). I do not want to waste messages on bots or inactive accounts. High Deliverability: Use your own premium Session/Json accounts. You must manage delays and rotations to ensure...
I work exclusively in Spanish. I need support to go through the portals of Spanish universities, download the PDFs of their syllabi and, at the same time, populate a matrix I already have prepared. Each row of that matrix must contain, as basic data, the Course information and the Course duration. In addition, for each subject you must record: Course code, Number of ECTS, Name, Course type (compulsory or elective), and generate a small .txt with the bibliography listed in the PDF itself. Once downloaded, the syllabi must be stored following a folder structure that I will provide myself; the workflow is already fully defined, you will only have to drag each...
Need to scrape videos from a website. If this is something you are good at and familiar with, please respond to this project. I have multiple projects that need similar help, so I am looking for a long-term relationship so I don't have to keep posting the same ad. Let me know if you have any questions.
I need an organized Excel spreadsheet that lists every private clinic located in Madison, WI and anywhere within a 30-mile radius. For each clinic, the sheet must include: • Full street address • Name of every practicing doctor (first and last) • Main phone number • Fax number • Email address, when it can be found Please place each clinic on its own row and keep the columns clearly labeled so the data can be filtered or sorted later. No preset categorization is required; I just need the information complete and accurate, and I can handle any further grouping once I have the file. The finished .xlsx file is the only deliverable. Budget: $50. You can easily find these from insurer websites such as Aetna and BlueCross by doing a physician search. Do n...
I need a robust Python function that can log in to a password-protected site, navigate to a given page, locate the primary table, and convert it into a clean Pandas DataFrame before writing the result to CSV. The same function must work on each URL I provide, and ideally on any future page built on the same template, so please keep the approach modular and scalable. Because the pages sit behind authentication, the username and password can be hard-coded directly in the script; no interactive prompts or external files are necessary this time. Anti-blocking tactics (session persistence, realistic headers, controlled request pacing, etc.) are mandatory—I want to be able to run the notebook repeatedly without getting shut out. Deliverables • A Jupyter notebook (.ipynb) containin...
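A minimal sketch of how such a function might look, assuming a standard form-based login and a plain HTML table on the target page; the login URL, form field names, and credentials below are placeholders, not details from the actual site:
```
import pandas as pd
import requests
from bs4 import BeautifulSoup
from io import StringIO

# Placeholder values - the real login URL, form field names and credentials
# would come from the actual site.
LOGIN_URL = "https://example.com/login"
USERNAME = "user"
PASSWORD = "pass"

def table_to_csv(page_url: str, csv_path: str) -> pd.DataFrame:
    """Log in, fetch a page, parse its primary table into a DataFrame, save CSV."""
    session = requests.Session()  # session persistence keeps cookies across requests
    session.headers.update({"User-Agent": "Mozilla/5.0"})  # realistic header
    session.post(LOGIN_URL, data={"username": USERNAME, "password": PASSWORD})

    response = session.get(page_url, timeout=30)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "html.parser")
    table = soup.find("table")  # assumes the primary table is the first <table>
    df = pd.read_html(StringIO(str(table)))[0]
    df.to_csv(csv_path, index=False)
    return df
```
The same function can then be called once per provided URL, which keeps the approach modular for future pages built on the same template.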
extract pdf catalogue from
I’m looking to replace a tedious manual copy-paste process with a reliable workflow that pulls data directly from a set of public websites and feeds it into my spreadsheet every day. I already know the exact pages and the fields I need; what I need from you is an automated solution—ideally a clean, well-commented script in Python using libraries you feel are best suited (Scrapy, BeautifulSoup, Selenium, or a combination). The final output should be a structured CSV (or Google Sheet via API) that mirrors my current column layout so I can slot it straight into existing reports without any reformatting. The script must run unattended on Windows, handle occasional site layout changes gracefully, and log any rows it can’t capture so I can review them later. A quick README exp...
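One possible shape for such a script, assuming static HTML that BeautifulSoup can parse directly; the page URL, CSS selectors, and column names are invented placeholders that would be swapped for the real layout:
```
import csv
import logging
import requests
from bs4 import BeautifulSoup

# Placeholder configuration - swap in the real pages, selectors and columns.
PAGES = ["https://example.com/listing"]
COLUMNS = ["name", "price", "updated"]

logging.basicConfig(filename="scrape.log", level=logging.WARNING)

def scrape_row(url: str) -> dict:
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return {
        "name": soup.select_one(".product-name").get_text(strip=True),
        "price": soup.select_one(".price").get_text(strip=True),
        "updated": soup.select_one(".last-updated").get_text(strip=True),
    }

with open("output.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    for page in PAGES:
        try:
            writer.writerow(scrape_row(page))
        except Exception:
            # Log rows that could not be captured for later review,
            # instead of crashing the unattended run.
            logging.warning("Failed to capture %s", page, exc_info=True)
```
Scheduling the script with Windows Task Scheduler would give the unattended daily run described above.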
Our Pain Point: We are currently receiving industry update emails (DIARY'S LATEST NEWS - DIARY directory) which include important information for our business outreach strategy, and we wish to be able to organise and utilise this information for our client outreach. Our current process is incredibly manual: it involves setting aside time for an employee to review the emails we have received, click through the links to the articles provided, and pull the relevant information from each article; they then try to find the person mentioned or referenced in the article on LinkedIn and connect with them on the platform. However, reading and scanning all emails on a regular basis is very time-consuming and not possible for our team to perform on a weekly basis. Hence, our goal would be...
I have to pull many thousands of PDF files from a publicly available but poorly structured online database. The pages are slow, there are no clear download links, and navigation relies on clunky JavaScript forms, so a straightforward “save as” approach will take far too long. You will receive a text file that contains the exact filenames for every document I need. Those filenames appear in the HTML once the record is loaded, so they can be used as reliable anchors for the scrape. The order in which the files arrive does not matter; accuracy and completeness do. I expect an automated approach—Python with Selenium, Playwright, Scrapy, or any comparable tool is fine—as long as it can work around the site’s fragile structure and occasional timeouts. If headles...
I need a Python-based trading bot that executes a clean trend-following strategy and feeds its output to a lightweight web dashboard. The trading logic should automatically detect and ride upward or downward trends, handle position sizing, manage risk with configurable stop-loss / take-profit rules, and run with as few external dependencies as practical (NumPy, Pandas, TA-Lib are fine). Exchange connectivity is flexible: as long as live orders and historical price data can flow reliably, I’m happy to integrate through CCXT or a direct API of a major venue such as Binance, Coinbase, or Kraken—let me know which you can implement fastest. The dashboard is just as important as the core bot. Through it I want to see: • Real-time performance metrics (open PnL, equity curve, ...
I’m building an automated, end-to-end pipeline that pulls results from a fast-changing election website every few minutes, cleans and enriches the feed with AI, then pushes out clear charts and graphs that highlight vote counts per party, regional voting trends, overall voter turnout and a concise statistical summary of the outcomes. There is an existing Google Data Studio (Looker) template available. The scope breaks down into three tightly-linked steps: • Data capture – a headless, resilient scraper (Selenium, Playwright or a similar tool) must track roughly 3 500 individual race entries across 17 political parties in six regions, coping smoothly with AJAX calls, pagination and any CAPTCHA or session refreshes. • AI-powered processing – once ingested, th...
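A rough illustration of the data-capture step with Playwright, assuming the results table is rendered client-side once the AJAX calls settle; the URL and selectors are invented for the example:
```
from playwright.sync_api import sync_playwright

# Hypothetical target and selectors - the real election site would differ.
RESULTS_URL = "https://example.org/results"

def capture_races() -> list[dict]:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(RESULTS_URL, wait_until="networkidle")  # wait for AJAX to settle
        rows = page.query_selector_all("table.races tbody tr")
        races = []
        for row in rows:
            cells = [c.inner_text().strip() for c in row.query_selector_all("td")]
            races.append({"race": cells[0], "party": cells[1], "votes": cells[2]})
        browser.close()
        return races

if __name__ == "__main__":
    for race in capture_races():
        print(race)
```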
Thumbnail Design Specialist • High-CTR Designs: Expert in creating eye-catching, high-click-through-rate (CTR) thumbnails that drive views and grow channels. • Visual Storytelling: Skilled at distilling complex video topics into a single, compelling image using vibrant colors and bold typography. • A/B Testing Knowledge: Deep understanding of YouTube/social media trends, ensuring your content stands out against competitors. • Quick Turnaround: Ability to deliver high-quality, professional assets within tight deadlines (often under 24 hours). Data Entry & Administrative Expert • 99% Accuracy Rate: Committed to delivering error-free data management, lead generation, and spreadsheet organization. • Software Proficiency: Advanced skills in Microsoft Excel (VLO...
I need a reliable script that pulls fresh product details and current prices from eBay every 24 hours and drops the results into a clean Excel workbook. The data points I absolutely need are: • Full product title • Item ID / listing URL • Current price (and currency) • Shipping cost if shown • Seller name and feedback score • Listing time-stamp so I can track changes day-to-day WORKFLOW OVERVIEW: You will receive: Images from Singapore card vendors showing buyback prices Use AI tools to: Identify the card from the image Search eBay: For recent SOLD listings of the same PSA card Extract: Actual accepted Best Offer prices Calculate: Total landed cost in SGD Compare: Vendor Buyback Price vs Total Cost Flag: Arbitrage opportunity if pro...
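A hedged sketch of the daily Excel step, assuming the listing fields have already been gathered into dicts; the fetch function here is a stub to be replaced by the actual eBay scraping or API calls, and the workbook path is an example:
```
from datetime import datetime
from pathlib import Path
import pandas as pd

WORKBOOK = Path("ebay_prices.xlsx")

def fetch_listings() -> list[dict]:
    # Stub: replace with the real eBay scrape / API call returning one dict
    # per listing with the required fields.
    return [{
        "title": "Example card", "item_id": "1234567890",
        "url": "https://www.ebay.com/itm/1234567890", "price": 19.99,
        "currency": "USD", "shipping": 4.0, "seller": "example_seller",
        "feedback_score": 1250,
    }]

def append_daily_snapshot() -> None:
    rows = fetch_listings()
    for row in rows:
        row["timestamp"] = datetime.now().isoformat(timespec="seconds")
    new = pd.DataFrame(rows)
    if WORKBOOK.exists():
        old = pd.read_excel(WORKBOOK)
        new = pd.concat([old, new], ignore_index=True)  # keep day-to-day history
    new.to_excel(WORKBOOK, index=False)

if __name__ == "__main__":
    append_daily_snapshot()  # schedule via cron / Task Scheduler every 24 hours
```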
I need help with manual data entry. I will provide the data source as documents or structured files; your task is simply to transfer each row of information carefully into Google Sheets. Scope: • Manual data entry, not scraping or automation. • The Google Sheets table is already prepared; you just fill in the columns according to the instructions. • Ensure 100% accuracy: the number of rows, spelling, and numeric formatting must exactly match the source. I value speed, tidiness, and confidentiality. Let me know how many rows you can complete per day and share examples of similar projects you have worked on.
Google Search Trends Analysis (Python) ## Project Overview This project analyzes **Google Search Trends data** using Python to understand how a keyword's popularity changes over time and across regions. The analysis covers **15 countries**, compares **time-wise interest**, and explores **related keywords** to uncover search behavior patterns. ## Objectives * Analyze time-wise search interest of a keyword * Compare keyword popularity across 15 countries * Identify and analyze related search keywords * Visualize trends for better insights ## Tools & Technologies * Python * Pytrends (Google Trends API Wrapper) * Pandas * NumPy * Matplotlib * Seaborn * Jupyter Notebook ## Project Structure ``` ├── google data analysis # Main notebook ├── # Proj...
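As a rough starting point, the cross-country comparison could be approached with Pytrends along these lines; the keyword and country codes below are examples only, not the project's actual inputs:
```
from pytrends.request import TrendReq
import pandas as pd

pytrends = TrendReq(hl="en-US", tz=0)
keyword = "data science"                      # example keyword
countries = ["US", "IN", "GB", "DE", "BR"]    # example subset of the 15 countries

frames = []
for geo in countries:
    pytrends.build_payload([keyword], timeframe="today 12-m", geo=geo)
    df = pytrends.interest_over_time()
    if not df.empty:
        frames.append(df[keyword].rename(geo))

# One column per country, indexed by date - ready for Matplotlib/Seaborn plots.
trends = pd.concat(frames, axis=1)
print(trends.tail())
```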
1. Add basic Captcha to - Will you be using a WordPress plug-in or some other application? If so, which plug-in will you use and how will it work? How long will it take to install it? 2. Stop website scraping. What application do you propose using that will stop website scraping? How effective is it? Will it stop all illegal bot scraping? Will it still allow legitimate search bots such as Google, Yahoo, etc. to search the website? 3. Create a new page titled 142-page Private Placement Memorandum example document. This Private Placement consists of 5 distinct documents that make up the Private Placement Memorandum (PPM). Offering Document – 60 Pages This document is given to the investor and does not need to be returned. Business Plan – 39 pages Addendum - A This doc...
We need a Python expert to build an end-to-end automated web-scraping and data-processing system for German B2B directories. The goal is to eliminate all manual steps — scraping, cleaning, deduplication, enrichment, and export — within 15 days. You’ll design and implement a high-speed, fault-tolerant pipeline that scrapes ~10 directories, normalizes data (including Umlauts and formatting), removes duplicates, detects updates, and exports clean structured data automatically. ⸻ Core Responsibilities • Develop asynchronous scraping system (Python 3.11+, aiohttp/httpx/Scrapy/Playwright). • Build deduplication & change-detection logic using hash comparison and timestamps. • Design and connect central database (PostgreSQL + SQLite) to store unique compa...
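A compressed sketch of the asynchronous fetch plus hash-based change detection described above, using aiohttp and SQLite; the directory URLs and the parsing step are placeholders:
```
import asyncio
import hashlib
import sqlite3
import aiohttp

URLS = ["https://example.de/firmen?page=1"]  # placeholder directory pages

db = sqlite3.connect("seen.db")
db.execute("CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, hash TEXT)")

async def fetch(session: aiohttp.ClientSession, url: str) -> None:
    async with session.get(url) as resp:
        body = await resp.text()
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    row = db.execute("SELECT hash FROM pages WHERE url = ?", (url,)).fetchone()
    if row and row[0] == digest:
        return  # unchanged since last run - skip re-parsing
    db.execute("INSERT OR REPLACE INTO pages (url, hash) VALUES (?, ?)", (url, digest))
    db.commit()
    # ...parse `body`, normalize Umlauts, deduplicate and export records here...

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(fetch(session, u) for u in URLS))

asyncio.run(main())
```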
This project is a Python scraper that scrapes dynamically changing prices from different websites every day, automatically, via a cron job. A couple of the websites use tokens that have to be initiated and that keep changing. All of the sites are getting scraped except for one that has a problem. You will fix that. This scraper is also inputting the incorrect price into our database, and you will need to fix that as well. Here's how the system works: Prices are scraped from several theme park websites. Then they are run through a calculation algorithm to determine what our price should be. All the sites are inputting the correct prices except for Universal. You will fix that (currently it's putting the scraped price in our database and not the calculated price like it should). All the calculations are ther...
Data Collection – Forex Cash Rates for 140 City-Destination Pairs 1. Objective We need to gather pricing data for Foreign Currency Notes (Cash) from the top 10 forex vendors in various Indian cities. You will simulate a transaction on their websites to capture the final landing cost (including hidden fees, taxes, and service charges). Scope: Product: Foreign Currency Notes (CASH only). Do NOT collect data for Forex Cards. Transaction Value: Approximately ₹40,000 INR per transaction. Total Pairs: 14 Indian Cities × 11 Foreign Destinations = 154 Pairs. Volume: Top 10 vendors per pair (approx. 1,500+ data points). 2. Input Data A. The 14 Indian Cities (Source Locations) Bangalore Delhi Mumbai Chennai Hyderabad Pune Kolkata Ahmedabad Kochi Lucknow Jaipur Surat Indore Ludhiana B. T...
Hello, I have an Excel file with 5000 rows. This file contains the names and websites of 5000 companies. What I need are the personal email addresses for the import, export departments. You can use apollo, rocketreach, hunter etc. Can you help me? Thank you.
I need a robust, repeatable scraper that gathers every English-language review for roughly one million hotel listings and stores them in a clean JSON database. The first milestone covers the initial 10 000 reviews: once those entries are parsed accurately I will release payment and we can scale the same solution across the remaining content. Database requirements • JSON output only, one record per review • Mandatory fields: hotel_id, reviewer_name, review_date, rating_score, full_review_text Source sites The exact platform will be shared privately after award, so structure the code to accept multiple endpoints with minimal change. Deliverables for each milestone 1. Fully documented scraping/parsing script (Python, Node, or another well-supported language) 2. Corres...
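The record format could be as simple as one JSON object per review containing the five mandatory fields, for example written as JSON Lines so a crawl over a million listings can stream to disk; the field values below are dummies for illustration:
```
import json

def write_review(fh, hotel_id, reviewer_name, review_date, rating_score, full_review_text):
    """Append one review as a single JSON record (JSON Lines style)."""
    record = {
        "hotel_id": hotel_id,
        "reviewer_name": reviewer_name,
        "review_date": review_date,
        "rating_score": rating_score,
        "full_review_text": full_review_text,
    }
    fh.write(json.dumps(record, ensure_ascii=False) + "\n")

with open("reviews.jsonl", "a", encoding="utf-8") as fh:
    # Dummy values for illustration only.
    write_review(fh, "H-001", "Jane D.", "2024-05-01", 4.5, "Clean rooms, friendly staff.")
```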
I already have a Zapier automation that hands a submitted Tally-form URL to a ChatGPT step, but right now GPT is not able to see everything on the website. I need the workflow upgraded so the ChatGPT step can "see" absolutely everything on that URL (headers, body copy, layout structure, CTAs, images, and any other visual element) before it starts writing the internal report that follows. Here is what I'm after: once a new Tally response triggers the Zap, an additional step (or steps) should grab the complete HTML of the provided link, chunk it so it fits within GPT's token limits, and then pass the full context into my existing ChatGPT action. The end result is richer, more accurate analysis that I'll feed straight into our internal reports. Feel...
Experienced Data Scraper Needed – C-Level Contacts in Jewelry Retail (KSA & Qatar) Description: We are looking for a highly experienced and reliable data scraping specialist to collect accurate contact details of C-level executives and key decision makers from the jewelry retail industry in Saudi Arabia (KSA) and Qatar. If you only know basic scraping, this job is not for you. We need someone who understands data accuracy, validation, and structured delivery. Scope of Work: • Identify jewelry retail companies in KSA and Qatar • Extract verified details of: • CEO / Founder / Owner • Managing Director / General Manager • Head of Operations / Procurement / Retail • Provide the following data fields: • Full Name • Job Title • C...
I have a collection of websites that hold the textual information I need consolidated into a single, well-structured dataset. Rather than copying the material manually, I want the process handled through reliable web-scraping tools so the capture is fast, consistent, and repeatable. Your task is straightforward: • Build (or adapt) a scraper that targets the pages I specify, pulls only the relevant text, and skips ads, navigation links, and other noise. • Deliver the harvested content in a clean CSV or Excel file with clear column headings; if you prefer a database export, let me know and we can adjust. • Include the finished script or notebook so I can rerun the extraction later. Accuracy and formatting matter more to me than sheer speed, so please allow time for basic...
I’m ready to turn an idea into a working Chrome extension that makes it easier to land roles listed on Amazon Jobs. At this stage I know the core requirement: the add-on must run smoothly in Google Chrome and integrate directly with the Amazon Jobs site. Beyond that, I’m open to building in whichever features will give candidates the biggest advantage—whether that ends up being live job alerts, streamlined application auto-fill, advanced filtering or a thoughtful mix of all three. Here’s what I’m looking for you to do: • Help define the exact feature set and workflow after a quick discovery chat. • Design a clean, intuitive UI that feels native inside Chrome. • Build the extension with best-practice front-end tooling (Manifest V3, JavaScript/T...
I need help developing and optimizing a real-time detection and segmentation model based on YOLO11n that processes medium-resolution ultrasound video. The main focus is locating the needle with high precision, although veins and arteries must also be marked to give the operator context. Scope of work • Training and fine-tuning of the YOLO11n model with an ultrasound dataset that I will provide. • Implementation of frame-by-frame segmentation and tracking to display overlaid contours live. • Optimization of inference speed at the GPU/CPU level to keep latency minimal. • Delivery of clean Python code (PyTorch or an equivalent framework), weights ...
I’ve created a Google Sheet that will act as a central “lead collector,” and I need it filled with fresh, accurate contact data pulled from publicly available sites. The focus is simple yet crucial: for every company you find, capture the homepage URL and a working email address. (ask for details in the sheet) A completely ethical approach is non-negotiable: no gated content, no third-party lists, and no automated harvesting that violates site terms. I’m happy for you to use tools you’re comfortable with (Python, Scrapy, BeautifulSoup, Selenium, Google Apps Script, etc.) as long as you respect rate limits. Email addresses must appear in plain text within the sheet; please avoid hyperlinks or HTML encoding. Deliverables • A Google Sheet ...
Project Title: Automated Web Scraper & AI Summarizer for Legal Documents (Italian Administrative Justice Portal) Project Overview: I need an automated workflow to monitor judicial rulings from the Italian Administrative Justice website (). The system should perform weekly searches based on dynamic keywords (e.g., "appalti"). Requirements: Web Scraping: Create a scraper (using Python/Playwright or Browse AI) that can bypass anti-bot protections, input keywords, and extract PDF links and metadata for rulings published in the last 7 days. Cloud Integration: Use (Integromat) to organize the results. For each keyword, create/update a specific Google Drive folder and upload the new PDFs. AI Processing: Integrate Google Gemini API to: Analyze the rulings within each folder. ...
I’m assembling a nationwide database of barbers and beauty-salon owners in the United States and need your help compiling it. Every record must be fully verified and include three data points: • Owner’s full name with current phone number and email address • Business name with complete street address (city, state, ZIP) • Key service specialties the shop promotes (e.g., fades, coloring, micro-blading, etc.) Accuracy is critical, so I will spot-check contact details and discard any duplicates or outdated entries. Please provide the finished list in a clean spreadsheet or CSV that I can filter and sort easily. If you already have a recent, reliable dataset, let me know; otherwise, outline how you’ll source and verify each contact before I award the p...
I want a Python script that monitors in real time all the stocks available on Revolut and sends me a Telegram alert when it detects an imminent 5% rise within a 5-minute window. Key requirements • Coverage: the monitor must cover every stock listed on the platform, with no exclusions. • Frequency: continuous checking or, at most, a latency of a few seconds; I don't want to miss any fast moves. • Very high precision: the calculation logic must minimize false positives and deliver the alert only when the price differential truly represents +5% in under 5 minutes. • Notification: the alert must arrive via Telegram (you can use bots from the official API). ...
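To make the detection rule concrete: a price qualifies when it is at least 5% above a price seen for the same ticker within the previous 5 minutes. A hedged sketch of that logic and the Telegram notification follows; the price feed is a stub (Revolut does not expose a public quotes API, so the real source is an assumption), and the bot token and chat ID are placeholders:
```
import time
from collections import deque
import requests

BOT_TOKEN = "123456:ABC..."   # placeholder Telegram bot token
CHAT_ID = "987654321"         # placeholder chat id
WINDOW_SECONDS = 5 * 60
THRESHOLD = 0.05              # +5 %

history: dict[str, deque] = {}  # ticker -> deque of (timestamp, price)

def get_price(ticker: str) -> float:
    # Stub: replace with whatever quote source mirrors Revolut's listed stocks.
    raise NotImplementedError

def send_alert(text: str) -> None:
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        data={"chat_id": CHAT_ID, "text": text},
        timeout=10,
    )

def check(ticker: str) -> None:
    now = time.time()
    price = get_price(ticker)
    window = history.setdefault(ticker, deque())
    window.append((now, price))
    while window and now - window[0][0] > WINDOW_SECONDS:
        window.popleft()  # drop prices older than 5 minutes
    lowest = min(p for _, p in window)
    if lowest > 0 and (price - lowest) / lowest >= THRESHOLD:
        send_alert(f"{ticker}: +{(price - lowest) / lowest:.1%} in under 5 min (now {price})")
```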
I want to build an exhaustive directory of all water-sports centers in metropolitan France and the overseas territories (DOM-TOM) offering jet ski, towed tube rides, flyboard, water skiing, wakeboarding, parasailing, or boat rental. To get there, I need an automated workflow based exclusively on Google Maps (the chosen source) capable of collecting, deduplicating, and then cleaning the data before formatting it into a CSV directly importable into the Wix CMS. Coordinates must be provided in decimal degrees. Expected deliverables • A reusable script (Python + Selenium, Scrapy, or equivalent) that queries Google Maps, handles the r...
Project: API / product search system across multiple stores (Text + Photo, Lens-style). I am looking for a freelance developer with experience in web scraping, API development, and image-based product recognition, to build a system that can search for a product across multiple online stores at the same time. Objective: Develop an API (or backend system) where I can search for products in two ways: 1) Text search, e.g. "iPhone 15 Pro 256GB" / "White Nike Air Force sneakers" 2) Image search (Google Lens style): upload a photo of the product and have the system try to identify it (brand/model) and then look for matches in the stores. The system must automatically query a list...
PROJECT OVERVIEW I am looking for a detail-oriented researcher to perform a competitive analysis of the Restoration & Mitigation industry (Water, Fire, and Environmental services) in specific geographic regions. SCOPE OF WORK 1) Target Identification: Identify active service providers/contractors currently utilizing Google Local Service Ads (LSAs) within our target markets. 2) Data Extraction: For each qualifying business, compile a clean spreadsheet including: A - Company Name, Physical Address, website if applicable B - Primary Office Line. C - Key Decision Maker (Owner/Principal): Identify the primary point of contact. D - Direct Contact Information: Provide verified Direct Dials or business contact lines for the identified decision-maker. Requiremen...
Hello, I am building a small data collection tool for a medical AI research project. The goal is to collect doctor-patient role-play audio data and prepare it for AI training. I need a simple system with the following features: 1) Web-based interface (works on mobile and desktop) 2) Upload audio files (WAV/MP3) 3) Automatic speech-to-text using Whisper 4) Editable transcript (dialect correction) 5) Field for medical Arabic normalization 6) Term notes input 7) Automatic folder/package generation per session 8) Cloud storage (Google Drive / S3 or similar) This is Phase 1: Data collection only (no diagnosis, no reports). Tech preference: - Frontend: Web (React or simple HTML) - Backend: Python (Flask/FastAPI) - ASR: Whisper - Storage: Cloud + Local Deliverable: A working MVP that allow...
I need investment performance data scraped from a research website and organized into an Excel file. It needs to be set up for scraping on a regular basis. Requirements: - Experience with web scraping tools and techniques - Ability to extract and format data accurately - Proficiency in Excel - Attention to detail and reliability Ideal Skills: - Familiarity with Python, BeautifulSoup, or similar scraping libraries - Prior experience with financial data - Strong data handling and manipulation skills Please provide a sample of your previous work related to web scraping.
I have a collection of websites that contain the text I need organised in a single Excel spreadsheet. Your task is straightforward: visit each assigned site, copy the required text exactly as it appears, and paste it into the correct columns and rows of the workbook I supply. Accuracy and consistency matter more than speed. Please keep original spellings, line breaks, and capitalisation, and double-check that no unseen characters or extra spaces slip in. The spreadsheet already has headers; all you do is populate the empty cells beneath them. I will share: • The list of URLs • A short field-by-field guide so you know which snippet of text belongs in which column • The blank .xlsx file You return: • The completed Excel file, ready for me to import into our system ...
I need a compact Python crawler that pulls public content from Twitter, Instagram and LinkedIn, covering text, image and video posts for any handle I feed it. Here’s the flow I have in mind. The script collects the raw post data (caption, hashtags, basic engagement numbers and, where accessible, image/video URLs) through whichever mix of libraries makes sense—Tweepy or Twitter API v2 for Twitter, Instaloader or Selenium for Instagram, and the official or unofficial LinkedIn API for LinkedIn. After normalising everything into a common JSON schema, the crawler should pass that dataset to an LLM endpoint (OpenAI or similar) and receive back a concise, structured report that includes: • Brand sentiment (positive / neutral / negative trends) • Key thematic buckets t...
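The normalisation step might look roughly like this; the per-platform fetchers are assumed to exist elsewhere, and the LLM call is shown with the OpenAI client as one possible endpoint (the model name is an example, not a requirement):
```
import json
from openai import OpenAI  # one possible LLM endpoint; any comparable API would do

def normalize(platform: str, raw: dict) -> dict:
    """Map a platform-specific post dict onto one common schema."""
    return {
        "platform": platform,
        "post_id": raw.get("id"),
        "caption": raw.get("caption") or raw.get("text", ""),
        "hashtags": raw.get("hashtags", []),
        "likes": raw.get("likes", 0),
        "comments": raw.get("comments", 0),
        "media_urls": raw.get("media_urls", []),
    }

def summarize(posts: list[dict]) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        "Given these social posts as JSON, report brand sentiment trends and "
        "key thematic buckets:\n" + json.dumps(posts, ensure_ascii=False)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```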
Scope of Work: Automated Data Extraction and Alert System from Etimad Tenders Portal Project Title: Automated Data Scraping, Google Sheets Integration, and Email Alerts for Etimad Pre-Planning Opportunities 1. Background The Etimad platform ( ) publishes pre-planning and upcoming tenders from Saudi government entities. The goal is to develop a fully automated system within two days that extracts tender data, stores it in a structured Google Sheet, and sends email alerts when new opportunities appear that match specific keywords (to be provided). 2. Objectives Automatically extract all relevant tender data from the Etimad Pre-Planning portal. Store and update the extracted data in Google Sheets on a scheduled basis. Detect and highlight new tenders. Send email alerts for new tenders...
Scrape Google For Emails For Cheap
I want a straightforward, low-cost way to pull every unique email address from well over 5,000 messages in my Gmail account. No names, phone numbers, or other fields—just the raw sender and recipient addresses that appear in the header or body of each email. You are free to tackle this with the Gmail API, IMAP, Google Apps Script, Python’s gmail-api-client, or any other method you trust, as long as it stays within Google’s usage limits and my account remains secure. Speed is important but data accuracy matters more; I need a clean list with duplicates removed and obvious bounces, “no-reply”, and Google system addresses filtered out. Deliverable • CSV or XLSX file containing every unique, valid email address extracted from the full mailbox. Acceptance...
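One low-cost route is IMAP with an app password; a rough sketch of the extraction and filtering, assuming header-only fetches are enough (the mailbox name, regex, and skip list follow the requirements above, and the credentials are placeholders):
```
import imaplib
import email
import re

ADDRESS_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SKIP = ("no-reply", "noreply", "mailer-daemon", "google.com")  # bounces / system mail

def collect_addresses(user: str, app_password: str) -> set[str]:
    found: set[str] = set()
    imap = imaplib.IMAP4_SSL("imap.gmail.com")
    imap.login(user, app_password)
    imap.select('"[Gmail]/All Mail"', readonly=True)
    _, data = imap.search(None, "ALL")
    for num in data[0].split():
        # Fetch only the headers to stay fast and within usage limits.
        _, msg_data = imap.fetch(num, "(BODY.PEEK[HEADER])")
        headers = email.message_from_bytes(msg_data[0][1])
        for field in ("From", "To", "Cc"):
            for addr in ADDRESS_RE.findall(headers.get(field, "")):
                addr = addr.lower()
                if not any(s in addr for s in SKIP):
                    found.add(addr)
    imap.logout()
    return found
```
Writing the resulting set to CSV or XLSX (one address per row, duplicates already removed by the set) would cover the deliverable.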
Transferring this project from another freelancer. First thing I will need is my files uploaded to Github, and connected to the current hosting build on Railway server. Here is what I will need next: Show discovery (Looking for host) 1. Search overhaul. Database currently has around 250 shows pulled through YouTube API. Right now, I am in the process of getting a higher quota, however, I want to optimize the search, by using different keywords based on user's profile through AI so it does not try to analyze channels already searched. This section is for looking for Youtube channels AND Spotify shows that interview guest. 2. Open API integration to find the email contact of each channel. Users can also manually put an email in if there is none. If there is a website with a form, ...
CONTACT ENRICHMENT PROJECT: We need contact information (CEO/President, Plant Manager, Purchasing Manager) for automotive supplier companies. The "To Enrich" sheet contains companies that need contact data. The "Already Done" sheet shows completed examples. WHAT TO FILL IN (Yellow columns in "To Enrich" sheet): Column G: CEO/President Name Full name of CEO, President, Managing Director, or General Manager Column H: CEO/President Title Their exact title (e.g., CEO, President, Managing Director) Column I: CEO/President Email Professional email if available (not required) Column J: Purchasing Contact Name Name of Purchasing Manager, Procurement Director, or Head of Purchasing Column K: Purchasing Contact Title Their exact title Column L: Purchasing Contact ...
I need help extracting text data from CSV files and inputting it into a specified format or system. Requirements: - Experience with data extraction and manipulation - Proficiency in handling CSV files - Attention to detail to ensure accuracy - Ability to work within the specified budget Ideal Skills: - Data entry - Familiarity with spreadsheet software (e.g., Excel) - Basic understanding of data organization principles Looking for freelancers who can complete the task efficiently and accurately. This is the full requirement for the project: Your Operational Requirements (Summary) 1) Quote & Job Intake Create quotes (Q-files) and job packs (J-files) as you do today Save them into SharePoint folders System automatically reads them and creates: Quote records Assets Tasks per as...
I need a robust scraping solution that continuously pulls price, full product descriptions, and customer reviews for electronics across roughly ten different e-commerce sites. Each time the script runs, the results should append to a master Excel workbook so that every entry is stored chronologically—allowing me to compare today’s prices with yesterday’s and build long-term trend charts without any manual work. Key expectations • The scraper must visit all target URLs, handle pagination or lazy-loaded content where it exists, and respect each site’s structure. • Collected fields: date/time stamp, site name, product name, current price, full description text, average rating, review count, and a link to the product page. • Excel output: one sheet...
I already have a curated list of LinkedIn profile URLs and need the key networking details moved into a single Google Sheet. For every profile, capture each person’s stated interests and list the five types of people they say they want to meet. Those “meet-up” types should be tagged under the three clear categories I care about—Industry experts, Potential clients and Collaborators—so that I can later filter the sheet by networking goal. Please place one profile per row in the Google Sheet and create separate columns for: • Profile URL • Name (as it appears) • Interests (comma-separated) • Type 1 through Type 5 (verbatim wording) • Category tag (Industry experts / Potential clients / Collaborators) Accuracy of the text you...
I need a small utility that sits in the background, pings a single public web page every second, and alerts me the moment it detects any difference in either the visible text or the images. The page updates unpredictably, so true real-time tracking is essential; a one-minute polling interval is already too slow for my use-case. A lightweight approach that respects the site’s bandwidth and avoids triggering blocks or captchas will be valued. I am open to whatever stack you favour—Python with BeautifulSoup or Selenium, Node.js with Puppeteer, or a compiled solution—so long as it is stable on a Windows environment and easy for me to tweak the target URL later. Notification method is flexible: an email is fine, but if you have a smarter suggestion (desktop toast, webhoo...
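A minimal sketch of the polling loop, comparing a hash of the visible text plus the image URLs between checks; the target URL is a placeholder and the notification step is left as a print, with the anti-blocking measures mentioned above still to be added:
```
import hashlib
import time
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/page"   # placeholder target

def snapshot() -> str:
    html = requests.get(URL, headers={"User-Agent": "Mozilla/5.0"}, timeout=5).text
    soup = BeautifulSoup(html, "html.parser")
    text = soup.get_text(" ", strip=True)
    images = ",".join(img.get("src", "") for img in soup.find_all("img"))
    return hashlib.sha256((text + images).encode("utf-8")).hexdigest()

last = snapshot()
while True:
    time.sleep(1)  # one-second polling interval
    try:
        current = snapshot()
    except requests.RequestException:
        continue  # transient network error - try again next second
    if current != last:
        print("Change detected!")  # replace with email / desktop toast / webhook
        last = current
```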
I have an Excel document of roughly 15,000 B2B companies operating all over the world. For every company on that list, I need one working, business-grade email address. Preferred research channels are clear: • Company websites • LinkedIn profiles • Reputable public databases (Apollo) Accuracy is more important than speed. If an address cannot be found after reasonable effort, note “No email found” in the sheet so I can audit the attempt. Deliverables: 1. Completed CSV or Excel file containing – Company name (as provided) – Discovered email address – Source link or brief source note 2. A short summary of any recurring issues you encountered (e.g., companies with only web forms). Please keep formatting consistent and a...
I need an end-to-end AI agent that automatically scouts freelancing websites, general job boards, and specialised training platforms for roles or courses that involve artificial-intelligence work. The agent must: • Crawl and scrape the relevant pages in real time or on a frequent schedule. • Apply NLP or other classification techniques to decide whether a posting is truly AI-related, then tag it by sub-domain (e.g. vision, NLP, MLOps, prompt-engineering). • Deliver concise, deduplicated listings to me through an in-app notification feed—no email or SMS required. For the deployment side I’m open to Python (Scrapy, BeautifulSoup, Selenium), Node, or any stack you are comfortable with so long as it is containerised and can run unattended on a small cloud insta...