I'm really impressed by Playwright. It feels like it has learned all of the lessons from systems like Selenium that came before it - it's very well designed and easy to apply to problems.
I wrote my own CLI scraping tool on top of Playwright a few months ago, which has been a fun way to explore Playwright's capabilities: https://simonwillison.net/2022/Mar/14/scraping-web-pages-sho...
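For anyone who hasn't tried it, the Python API is refreshingly small. A minimal sketch (the URL and selector here are just placeholders):

    # pip install playwright && playwright install chromium
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/")    # placeholder URL
        print(page.title())
        print(page.inner_text("h1"))         # placeholder selector
        browser.close()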
[1] https://github.com/altilunium/wistalk (Scrape Wikipedia to analyze a user's activity)
[2] https://github.com/altilunium/psedex (Scrape a government website to get a list of all registered online services in Indonesia)
[3] https://github.com/altilunium/makalahIF (Scrape a university lecturer's web page to get a list of papers)
[4] https://github.com/altilunium/wi-page (Scrape Wikipedia to get the most active contributors to a certain article)
[5] https://github.com/altilunium/arachnid (Web scraper, optimized for WordPress and Blogger)
People love it for its ease of use: you can record actions via point-and-click rather than having to manually come up with CSS selectors. It intelligently handles lists, infinite scrolling, pagination, etc., and can run both on your desktop and in the cloud.
Grateful for how much love it received when it launched on HN 8 months ago: https://news.ycombinator.com/item?id=29254147
Try it out and let me know what you think!
The only thing I wish were present is better regex support. Bash and most Unix tools don't support PCRE, which can be severely limiting. Plus, sometimes you want to process text as a whole rather than line by line.
I would also recommend Python's sh[4] module if shell scripting isn't your cup of tea. You get the best of both worlds: the speed of working with Bash utilities, and a saner syntax (see the short sketch after the links below).
[1]: https://github.com/ericchiang/pup
[2]: https://csvkit.readthedocs.io/en/latest/
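To illustrate the sh[4] point above, a rough sketch mixing shell utilities with Python's own regex engine (the log file name and the pattern are placeholders):

    # pip install sh
    import re
    import sh

    # Shell utilities become plain Python functions (grep exits 1 on "no match",
    # so tell sh that's an acceptable exit code instead of raising)
    errors = sh.grep("ERROR", "app.log", _ok_code=[0, 1])   # "app.log" is a placeholder
    print(errors)

    # For PCRE-style work, process the whole text at once with Python's re module
    text = str(sh.cat("app.log"))
    ids = re.findall(r"request_id=(\w+)", text)
    print(len(ids))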
We invented in this industry what you're referring to as "data type specific APIs": APIs that abstract away all the proxy issues, captcha solving, support for various layouts, even scraping-related legal issues, and much more, into a clean JSON response on every single call. It was a lot of work, but our success rate and response times now rival non-scraping commercial APIs: https://serpapi.com/status
I think the next battle will still be legal, despite all the wins in favor of scraping public pages and the common-sense understanding that this is the way to go. The EFF has been doing amazing work in this space, and we are proud to be a significant yearly contributor.
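For context, consuming it really is just one HTTP call; a rough sketch against the search endpoint (the query and API key are placeholders, and the result field names below are for the Google engine, so check the docs):

    import requests

    resp = requests.get("https://serpapi.com/search", params={
        "engine": "google",
        "q": "web scraping frameworks",   # placeholder query
        "api_key": "YOUR_API_KEY",        # placeholder key
    })
    results = resp.json()                  # structured JSON back, no HTML parsing
    for item in results.get("organic_results", []):
        print(item.get("position"), item.get("title"), item.get("link"))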
Scrapers are very simple, effective and probably one of the least fun things to build.
    from helium import *
    start_chrome('github.com/login')
    write('user', into='Username')
    write('password', into='Password')
    click('Sign in')
To get started: pip install helium
Also, you need to download the latest ChromeDriver and put it in your PATH. Have fun :-)
The worst thing about Puppeteer is Chrome and its bad memory management, so I'm going to give Playwright a spin soon.
It is a modern alternative to the few OSS projects available for such needs, like scrapyd and gerapy. estela aims to help web scraping teams and individuals who are considering moving away from proprietary scraping clouds, or who are designing their own on-premise scraping architecture, so they don't needlessly reinvent the wheel and can benefit from the get-go from features such as built-in scalability and elasticity.
estela has been recently published as OSS under the MIT license:
https://github.com/bitmakerla/estela
More details about it can be found in the release blog post and the official documentation:
https://bitmaker.la/blog/2022/06/24/estela-oss-release.html
https://estela.bitmaker.la/docs/
estela supports Scrapy spiders for the time being, but additional frameworks/languages are on the roadmap.
All kinds of feedback and contributions are welcome!
Disclaimer: I'm part of the development team behind estela :-)
For sites that are "difficult" I remote control a real browser, GUI and all. I don't use Chrome headless because if there's e.g. a captcha I want to be able to fill it in manually.
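If you drive the browser with Playwright, headed mode is a single flag; a minimal sketch (the URL is a placeholder), pausing so you can deal with a captcha by hand:

    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        # headless=False gives you a real, visible browser window
        browser = p.chromium.launch(headless=False)
        page = browser.new_page()
        page.goto("https://example.com/login")   # placeholder URL
        input("Solve any captcha in the window, then press Enter...")
        # ...continue the scripted steps against the same page/session
        browser.close()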
[1] https://github.com/brutuscat/medusa-crawler
Which I maintain as a fork of the unmaintained Anemone gem.
Obviously sometimes you have to go that route.
[0] - https://cheerio.js.org/
The main purpose was to submit HTML forms. You just say which input fields should be filled in with what, and it handles the rest (i.e. downloads the page, finds all the other fields and their default values, builds an HTTP request from all of them, and sends it).
I spent the last 5 years updating the XPath implementation to XPath/XQuery 3.1. The W3C has put a lot of new stuff into the new XPath versions, like JSON support and higher-order functions; for some reason they decided to turn XPath into a Turing-complete functional programming language.
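The form handling described above boils down to a flow like this; a rough sketch using requests + BeautifulSoup (not that tool, just the same idea; it only handles input fields for brevity, and the URL and field names are placeholders):

    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    def submit_form(url, overrides):
        html = requests.get(url).text
        soup = BeautifulSoup(html, "html.parser")
        form = soup.find("form")
        # Start from every field's default value, then apply the caller's overrides
        data = {inp["name"]: inp.get("value", "")
                for inp in form.find_all("input") if inp.get("name")}
        data.update(overrides)
        target = urljoin(url, form.get("action") or url)
        if (form.get("method") or "get").lower() == "post":
            return requests.post(target, data=data)
        return requests.get(target, params=data)

    # e.g. submit_form("https://example.com/search", {"q": "xpath"})  # placeholders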
Of course, if you don't need a full JavaScript-enabled browser, consider the alternatives first: simple HTTP requests, APIs, RSS, etc.
https://github.com/WebReflection/linkedom
When the content is complex or involves clicking, Playwright is probably the best tool for the job.
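On the "consider alternatives first" point: checking for a feed before reaching for a browser is often just a couple of lines; a small sketch with feedparser (the URLs are placeholders):

    # pip install feedparser requests
    import feedparser
    import requests

    feed = feedparser.parse("https://example.com/feed.xml")   # placeholder URL
    if feed.entries:
        for entry in feed.entries:
            print(entry.title, entry.link)
    else:
        # No feed? A plain HTTP request still often beats spinning up a browser
        html = requests.get("https://example.com/").text      # placeholder URL
        print(len(html))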
One is signature/fingerprint emulation. It helps to run the bot in a real browser and export the fingerprint (e.g. UA, canvas, geolocation, etc.) into a JS object. Add noise to the data too.
Simulate residential IPs by routing through a residential proxy. If you run bots from the cloud, you will get blocked.
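A hedged sketch of what that looks like with Playwright's Python API (the proxy server, credentials, UA string, and location are all placeholders):

    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(proxy={
            "server": "http://proxy.example.com:8000",   # placeholder residential proxy
            "username": "user",
            "password": "pass",
        })
        context = browser.new_context(
            user_agent="Mozilla/5.0 ...",                # placeholder, exported from a real browser
            locale="en-US",
            timezone_id="America/New_York",
            geolocation={"latitude": 40.7, "longitude": -74.0},
            permissions=["geolocation"],
        )
        page = context.new_page()
        page.goto("https://example.com/")                # placeholder URL
        browser.close()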
Scrapy is still king for me (scrapy.org). There are even packages to use headless browsers for those awful JavaScript-heavy sites.
However, APIs and RSS are still in play, and they don't require a heavy scraper. I am building vertical industry portals, and many of my data rollups consume APIs and structured XML/RSS feeds from social and other sites.
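For anyone who hasn't tried Scrapy, a minimal spider is very little code; this sketch targets Scrapy's official demo site and follows pagination:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = ["https://quotes.toscrape.com/"]

        def parse(self, response):
            for quote in response.css("div.quote"):
                yield {
                    "text": quote.css("span.text::text").get(),
                    "author": quote.css("small.author::text").get(),
                }
            # Follow the "next" link until the pagination runs out
            next_page = response.css("li.next a::attr(href)").get()
            if next_page:
                yield response.follow(next_page, callback=self.parse)

Run it with something like "scrapy runspider quotes_spider.py -o quotes.json".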
The purpose was to enable "live interactive" scraping of forms/JS/AJAX sites, with a web frontend controlling maybe 10 scrapers for each user. When that project fell through, I stopped maintaining it, and the SpiderMonkey API has long since moved on.
It works for simple sites that don't require the DOM to actually do anything (for example, triggering images to load with some magic URL). But many simple DOM behaviours can be implemented.
Puppeteer + JSDOM is what I used to build https://www.getscrape.com, which is a high-level web scraping API. Basically, you tell the API whether you want links, images, text, headings, numbers, etc., and the API gets all that for you without the need to pass selectors or parsing instructions.
In case anyone here wants something straightforward: it works well for building generic scraping operations.
* Apify (https://apify.com/) is a great, comprehensive system if you need to get fairly low-level. Everything is hosted there, they've got their own proxy service (or you can roll your own), and their open source framework (https://github.com/apify/crawlee) is excellent.
* I've also experimented with running both their SDK (crawlee) and Playwright directly on Google Cloud Run, and that also works well and is an order-of-magnitude less expensive than running directly on their platform.
* Bright Data (née Luminati) is excellent for cheap data center proxies ($0.65/GB pay as you go), but prices get several orders of magnitude more expensive if you need anything more thorough than data center proxies.
* For some direct API crawls that I do, all of the scraping stuff is unnecessary and I just ping the APIs directly.
* If the site you're scraping uses any sort of anti-bot protection, I've found that ScrapingBee (https://www.scrapingbee.com/) is by far the easiest solution (rough sketch below). I spent many, many hours fighting anti-bot protection myself with some combination of Bright Data, Apify and Playwright, and in the end I kinda stopped battling and just decided to let ScrapingBee deal with it for me. I may be lucky in that the sites I'm scraping don't really use JS heavily, so the plain vanilla, no-JS ScrapingBee service works almost all of the time for those. Otherwise it can get quite expensive if you need JS rendering, premium proxies, etc. But a big thumbs up to them for making it really easy.
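For context on how little client code these services need, here's a rough sketch against ScrapingBee's HTML API (the endpoint and parameter names are from memory, so double-check their docs; the key and target URL are placeholders):

    import requests

    resp = requests.get("https://app.scrapingbee.com/api/v1/", params={
        "api_key": "YOUR_API_KEY",           # placeholder
        "url": "https://example.com/page",   # placeholder target
        "render_js": "false",                # the cheap, no-JS tier mentioned above
    })
    html = resp.text
    print(resp.status_code, len(html))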
Always looking for new techniques and tools, so I'll monitor this thread closely.
It lets you train a bot in 2 minutes. The bot will then open the site with rotating geolocated ip addresses, solve captchas, click on buttons and scroll and fill out forms, to get you the data you need.
It’s integrated with Google Sheets, Airtable, Zapier, and more.
We have a Google Sheets addon too which lets you run robots and get their results all in a spreadsheet.
We have close to 10,000 users with 1,000+ signing up every week these days. That made us raise a bit of funding from Zapier and others to be able to scale quicker and build the next version.
Would be cool to reverse engineer it and probably plug it into some JS rendering testing solution (say Puppeteer, etc.)
[1] https://chrome.google.com/webstore/detail/instant-data-scrap...
Web scraping is fun, but in production it’s an absolute joke.
Personally, I use Indexed (https://www.indexedinc.com) because they are technical and reliable, although there are many other providers out there.
>Thanks for the links. I read through them too. I see a lot of useful stuff that I will use for my site https://los-angeles-plumbers.com/
[1] Scrapy is a well-documented framework, so any Python programmer can start using it after 1 month of training. There are a lot of guides for beginners.
[2] Lots of features are already implemented and open-source, you won’t have to waste time & money on them.
[3] There is a strong community that can help with most of the questions (I don't think any other alternative has that).
[4] Scrapy developers are cheap. You will only need junior-to-mid-level software engineers to pull off most projects. It's not rocket science.
[5] Recruiting is easier:
- there are hundreds of freelancers with relevant expertise
- if you search on LinkedIn, there are hundreds of software developers who have worked with Scrapy in the past, and you don't need that many
- you can grow expertise in your own team quickly
- developers are easily replaceable, even on larger projects
- you can use the same developers on backend tasks
[6] You don’t need a DevOps expertise in your web scraping team because Scrapy Cloud (https://www.zyte.com/scrapy-cloud/) is good and cheap enough for 99% of the projects.
[7] If you decide to have your own infrastructure, you can use https://github.com/scrapy/scrapyd.
[8] The entire ecosystem is well-maintained and steadily growing. You can integrate a lot of third-party services into your project within hours: proxies, captcha solving, headless browsers, HTML parsing APIs.
[9] It’s easy to integrate your own AI/ML models into the scraping workflow.
[10] With some work, you can use Scrapy for distributed projects that scrape thousands (or millions) of domains. We are using https://github.com/rmax/scrapy-redis (a configuration sketch follows this list).
[11] Commercial support is available. There are several companies that can develop an entire project for you, or take over an existing one, if you don't have the time or don't want to do it on your own.
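Here's the scrapy-redis wiring mentioned in [10], as a hedged sketch based on that project's README (the Redis URL is a placeholder):

    # settings.py
    SCHEDULER = "scrapy_redis.scheduler.Scheduler"
    DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
    REDIS_URL = "redis://localhost:6379"      # placeholder

    # spider: start URLs are pushed to a Redis list and shared by all workers
    from scrapy_redis.spiders import RedisSpider

    class DistributedSpider(RedisSpider):
        name = "distributed"
        redis_key = "distributed:start_urls"

        def parse(self, response):
            yield {"url": response.url, "title": response.css("title::text").get()}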
We have built dozens of projects in multiple industries:
- news monitoring
- job aggregators
- real estate aggregators
- ecommerce (anything from 1 website, to monitoring prices on 100k+ domains)
- lead generation
- search engines in a specific niche (SEO, pdf files, ecommerce, chemical retail)
- macroeconomic research & indicators
- social media, NFT marketplaces, etc
So, most of the projects can be finished using these tools.
Great thing is, it has support for Zapier, webhooks, and API access too!