...niche industry, and using them in appropriate density; √ Using title, header, and meta description tags correctly, with the right number of words, so that the spider's next crawl and pass result in better ranking value. √ Creating a URL structure that is acceptable to the search engines and easy for users to remember. √ Creating an internal link
For research purposes, we would like to hire a specialist in healthcare analytics in the US: - To crawl data from hospital websites. - Hospitals within 2 major states in the United States. - Collecting data such as each hospital CEO's name, career path, and majors. - A list of hospitals will be given. Payment can be negotiated based on the amount of work.
We collect information for a major international social project. The essence of the project is family hospitality and the exchange of knowledge and experience between different peoples and cultures. The project, among other things, will help families from different countries to earn additional income by hosting tourists from all over the world. The project will help to bring the existing family business...
Gathering quick details, including prices, of specific skin cream products (named brands) from internet retailers in 20 countries - about 500 prices in total. I will supply a list of 20 countries; for each country, I will send the leading 5 brand names. You need to find 5-7 of their facial skin creams at a major online retailer and copy the name, price, volume, etc. See attached...
Hi... we are looking for a small search engine that can crawl a few targeted sites for us, drop the garbage tags, and store data in a loosely structured (not unstructured) format. We would need a simple UI framework where we can select a site name, review cached content on a per-site basis, and drag/drop specific fields of content (title, description, price, etc.)
A crawler application with a PHP backend using Laravel and a JS frontend using Vue.js, that finds email addresses on the internet. Install this application on a domain I provide: [log in to view the URL]
...the user authentication and then pull the following data into an array of arrays. Include the ability for the authentication to be stored locally (or even on the server). The top-level array is per website. The inner array is for the following data: - pages indexed (valid, excluded, error) - crawl errors (web, smartphone) - # manual actions - mobile
To gather data in an Excel template: a tally of European vs. African battle casualties, 1500 AD to 1900 AD. You will be provided with a template. Search the Internet. Find stats for European vs. African conflicts: soldiers, cavalry, ships, killed, injured, etc.
I have 15 exams that send data to my email. I want an app to gather all that data. It will have the following: 1. All the data for each student (which exams they have taken). 2. The app will have users, and these users will manage students per area (we have 5 areas, which means we will have 5 admins). 3. Data reading
Hi, we would like a freelancer to visit several webpages & gather data about the creator of each set. You have to fill in the second column of the attached Excel sheet. The first column indicates the link which you have to visit to see the creator name, and you fill in the second column accordingly. For example, if you visit the page below, you can note that the creator name
...or data capture, just opening all links from a search, one at a time. The app already has settings for pausing between each page. The process is broken and needs to be fixed. Here is a video example of the broken Google process: [log in to view the URL]. In this case, there is no 4th page and the error is not trapped. The crawl inside
I need the top 5 bars in Florida, USA. I need the bar name, address, and phone number. I am looking for a super-fast virtual assistant who is able to do this task perfectly and accurately. The web search project will be continuous if he/she does well. No spam bidders will be accepted. Please read this carefully. Write "orange" at the top of your bid; otherwise it will be ignored. I need this list in the morning. ...
I need a way to crawl a site which is hosted behind Cloudflare. It is protected against automated access but open to access from a real web browser. I suppose they have velocity checks, etc., but I am not sure. I need to receive the data in a PHP application, so the crawler part can be either a PHP component, which I can call from my program, or a web
Please write a detailed proposal for a Node.js service that does the following: - Can read a JSON object of URLs and ping each one to make sure it is online - Can crawl through the links on each supplied URL to verify that none are broken - Can record the site speed and packet loss to each supplied URL to determine the quality of each website's connection
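The posting above asks for a Node.js proposal; as a rough illustration of the first two bullets (liveness ping plus link extraction for broken-link checking), here is a minimal stdlib-only Python sketch. The function names and thresholds are assumptions, not part of the posting.

```python
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href targets of anchor tags, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def extract_links(html, base_url):
    """Return every link found in an HTML page, as absolute URLs."""
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links

def is_online(url, timeout=5):
    """Ping a URL; True if it answers with a non-error HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False
```

Measuring packet loss (the third bullet) would need ICMP or repeated timed probes, which is deliberately left out of this sketch.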
I need a website crawler to crawl the following websites for "For Sale By Owner" and "Make Me Move" listings in the locations "Staten Island, NY", "Brooklyn, NY" and "Manhattan, NY": - Zillow - [log in to view the URL] - ForSaleByOwner.com - Trulia. The output must be in Excel. The Excel file must have the following columns...
...com/walking-tour-monaco-monte-carlo/ Nice Walking Tour: [log in to view the URL] Nice Bar Crawl: [log in to view the URL] Cannes Bar Crawl: [log in to view the URL] The content must be original, NO COPY-PASTE, and Yoast SEO OK; a native English speaker will be appreciated
...Create a unique static page based on their profile when a tutor signs up. [log in to view the URL] Static profile page. Google must be able to crawl it, and add it to the XML sitemap. 2) Unique title and description based on their profile: Title="Tuition Subang Jaya by Ben - "Subjects" <meta name="description" content="Ben Nelson
I was at the top of page 1 of the Google ranking for certain keywords for 4 ye...myself and worked on SEO descriptions and titles using relevant keywords. I have been linking content on social media. I have requested that Google re-crawl links and increase the crawl rate from my site. I am looking for someone else to help get my ranking back and repair the damage done.
Hello, I have a list of 22,000 websites. I would like a crawl done to determine whether each site is alive, what (written) language it uses, what country the website's company is from, and what currency their products are priced in.
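One of the four checks, currency detection, can be approximated by scanning scraped price strings for ISO codes or currency symbols. A hedged Python sketch: the symbol table below is a small assumed sample, a bare "$" is ambiguously mapped to USD, and a real job would need per-country disambiguation.

```python
import re

# Assumed sample of symbol -> ISO code mappings; extend for real use.
# Note "$" alone is ambiguous (USD/CAD/AUD/...); we default it to USD here.
CURRENCY_SYMBOLS = {
    "€": "EUR", "£": "GBP", "¥": "JPY", "$": "USD",
    "kr": "SEK", "zł": "PLN",
}

def guess_currency(price_text):
    """Return an ISO code guessed from a scraped price string, or None."""
    # An explicit ISO code (e.g. "USD 19.99") wins over a bare symbol.
    m = re.search(r"\b(USD|EUR|GBP|JPY|SEK|PLN|CAD|AUD)\b", price_text)
    if m:
        return m.group(1)
    for symbol, code in CURRENCY_SYMBOLS.items():
        if symbol in price_text:
            return code
    return None
```

Liveness is a simple HTTP probe, while written-language and country detection would need a language-identification library and WHOIS/address heuristics, so they are omitted from the sketch.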
...expect that you will deliver? - A PHP class which we can use statically. - Uses the Guzzle library for scraping. - The crawl function takes 4 arguments: postalcode, housenumber, housenumber_addon, ean_type (electricity/gas). - The crawler submits the data on the desired website. - We want to receive FALSE (bool) when no results are found. - We want to
I would like a list of campervan sales, conversion and hire companies in the UK. The list needs to contain an email address, the company name, the type (hire or conversion), and a postcode. I think the best way to do it will be by using Google Maps, searching "campervan conversion", zooming in and scouring. Or just going through Google search. I am not interested in places that do new motorhomes onl...
We need an open-source web scraping tool which can scrape data for a given category and set of information to collect, taking care of the [log in to view the URL] of sites to be crawled.
Project: a script to crawl smartphone information and return it in JSON format. About myself: I am an engineer and work on algorithms and data structures. Purpose: to build a script which parses and returns data in JSON format from 2 websites. This is one of the small parts of my project. Goals to achieve: 1) To retrieve and return basic information
I need someone to write me a script to crawl an ASP.NET website and extract all the links. The website is a news website and I need all the links in their archive pages. The problem is that the website uses AJAX for paging and also uses a CAPTCHA to prevent crawling.
Need a script (ideally Python) that takes a list of uncrawled Facebook URLs, finds the links of all friends, and adds those to the end of the list of uncrawled URLs. Get the URLs of all images from photo galleries (and the profile pic) and save the URLs. Then move to the next uncrawled URL. Same for LinkedIn.
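The queue discipline this posting describes (append newly found friend links to the end of the uncrawled list, never revisit a URL) is a plain breadth-first frontier. A sketch with the page fetching abstracted behind a caller-supplied function, since the Facebook/LinkedIn-specific scraping (and its terms-of-service constraints) is a separate concern:

```python
from collections import deque

def crawl_frontier(seed_urls, get_friend_links, max_pages=100):
    """Breadth-first crawl frontier.

    Each crawled profile's friend links are appended to the end of the
    queue, and a `seen` set guarantees no URL is visited twice.
    `get_friend_links(url)` is a caller-supplied fetcher (hypothetical
    here) returning the friend-profile URLs found on that page.
    Returns the URLs in the order they were visited.
    """
    queue = deque(seed_urls)
    seen = set(seed_urls)
    visited = []
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        visited.append(url)
        for link in get_friend_links(url):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited
```

The image-URL collection per profile would slot into the loop body, alongside `visited.append`.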
...for an individual who can help to gather data from companies' websites on currently open student/graduate vacancies and career events, to populate a database on our website. Data collection will involve careful manual research, not automated data crawling. English proficiency is essential. Some websites may ask for registration to access
Hello, I installed Scrapy on my PC and could crawl a URL without problems. Now I have installed it on my VPS but I'm having problems with this specific URL; tests with [log in to view the URL] work. The error is: [<[log in to view the URL] [log in to view the URL]: Connection to the other side was lost in a non-clean fash...
Hello, I'm looking for a script/cmd for an Azure environment that will crawl any page, no matter the file extension, and create a printout with the directory path. E.g., if I wanted to find all pages that contain 'health', I can run the script and create a txt or Excel file so I can view
I...and when. I also need the bot to export all data to Excel and CSV. I need the bot to be able to capture HTML elements on a page using XPath. I need the bot to be able to crawl JS-heavy pages. I need the bot to be able to throttle up/down depending on my preferences. I need the bot to be able to crawl at least 100M pages without any issues.
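The "throttle up/down" requirement can be met with a simple adjustable inter-request delay, as in this minimal sketch; a production bot aiming at 100M pages would want a proper distributed rate limiter rather than this single-process version.

```python
import time

class Throttle:
    """Enforce a minimum interval between requests.

    'Throttle up/down' at run time by assigning a new value to
    `requests_per_second`; the next wait() uses the new rate.
    """
    def __init__(self, requests_per_second):
        self.requests_per_second = requests_per_second
        self._last = 0.0

    def wait(self):
        """Sleep just long enough to respect the configured rate."""
        interval = 1.0 / self.requests_per_second
        now = time.monotonic()
        sleep_for = self._last + interval - now
        if sleep_for > 0:
            time.sleep(sleep_for)
        self._last = time.monotonic()
```

Typical use is one `wait()` call immediately before each page fetch.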
...clue=gyms&locationClue=&lat=&lon= 2. Visit their website and see if they have the FB pixel installed. If they don't, collect the website address. 3. Get their Facebook page. Deliver to me, in a spreadsheet, the list of websites that don't have the FB pixel, along with their FB page. Bid right now, as I'm choosing someone fast :) Guaranteed great feedback for you if completed.
I need a crawler that can crawl content from Instagram, Facebook and Reddit following some specific rules attached in the file below, and can then automatically post to Twitter. The bot should have some functions like: - Replace some specific text with another. - Automatically add text. - Upload crawled data to Twitter. The tool should be able to run multi-tab and have a friendly UI.
I need the development of a custom plugin that can be installed on the current and previous releases of OpenCart. The plugin needs to hook the events and post them to a URL that I have provided in the file. Do NOT bid if you don't have experience with OpenCart event handling. Of course, I will also need error-handling capabilities and documented code.
Take a look at this site; I need to download 5000 pictures. [log in to view the URL] To do it manually, I click on the left panel for Album, then click on the girl model, click on each thumbnail, and save them. Save them in the order [Studio Name]_[Model Name]_[File Name]. Quote me per 5000 pictures. I may need to download their entire database.
...check image. We only need "yes" when there is a green check icon and "no" when the product has a red cross icon. We can provide a list of SKUs that can be searched on the site, or crawling all pages is also possible! You can decide. We need a CSV or XML export with each product's SKU and stock status every day. Our servers run on Linux, so please don't offer Windows
We have more than 4000 documents in Word or PDF. We need consulting on how we can derive usable and tangible insights from the text (proposals, cover letters, etc.).
I have the URL...page has one table of data that I need extracted. The end result should be in a .CSV file. This is a simple task for a person with expertise in scraping & parsing. Show that you have read this post by putting the name of your favorite scraping tool as the first word in your bid. I will give example URLs so you can review the target data.
Collect all cross-border shops along the Norwegian border with Sweden. Collect a contact person and address for each of them.