By parsing the HTML from Google Scholar, I wrote a Python script that does this. It was very fast and worked perfectly, but after using it for a couple of minutes (maybe 10-15 requests), I can no longer query Google Scholar with Python requests: the returned HTML is a CAPTCHA challenge. It appears that Google disallows any programmatic use of Google Scholar, even though this was not spammy at all (the user has to manually click on a paper to send a request to Google Scholar).
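For context, the script was doing something along these lines (a minimal sketch, not the actual code; the exact query URL and the CAPTCHA check are assumptions):

    import requests
    from urllib.parse import quote_plus

    def scholar_search_html(title: str) -> str:
        # Fetch the Google Scholar results page for a paper title.
        url = "https://scholar.google.com/scholar?q=" + quote_plus(title)
        resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
        resp.raise_for_status()
        html = resp.text
        # After roughly 10-15 requests Google starts returning a CAPTCHA
        # page instead of results, so detect that rather than parsing it.
        if "captcha" in html.lower():
            raise RuntimeError("Google Scholar returned a CAPTCHA page")
        return html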
Anyway, I was wondering if there is any decent, free API to get the URL of a paper given its title. I have found a couple of paid ones, but they are way too expensive.
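One free option is Crossref's public REST API, which maps a title to a DOI (and therefore a https://doi.org/ URL, though not necessarily a direct PDF link). A minimal sketch, assuming the top search hit is the paper you want:

    import requests

    def paper_url_from_title(title: str):
        # Query Crossref's free works-search API for the best title match.
        # Assumption: the top-ranked hit is the right paper, which can
        # fail for short or ambiguous titles.
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.title": title, "rows": 1},
            timeout=10,
        )
        resp.raise_for_status()
        items = resp.json()["message"]["items"]
        if not items:
            return None
        # A DOI always resolves via doi.org, even if there is no open PDF.
        return "https://doi.org/" + items[0]["DOI"]

    print(paper_url_from_title("Attention Is All You Need"))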
You could always download the Sci-Hub backup torrents from Library Genesis and host them yourself somewhere. It's probably around 100 TB of data by now, though, so this isn't really a cheap approach.
Example: the user opens a paper. Page 1 has citations to 3 other papers, and your tool instantly begins downloading those 3 papers. The user goes on to page 2, which cites 1 other paper, so you begin downloading that one as well. When the user then clicks on a citation, your tool already has the linked paper downloaded and ready to open.
Might be a bit of an imposition on sci-hub though.
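The prefetching described above boils down to a thread pool plus a cache keyed by URL. A rough sketch of the pattern (fetch_pdf and the cache layout are placeholder assumptions, not the tool's actual code):

    import concurrent.futures
    import requests

    _pool = concurrent.futures.ThreadPoolExecutor(max_workers=3)
    _cache: dict[str, concurrent.futures.Future] = {}

    def fetch_pdf(url: str) -> bytes:
        # Placeholder download; a real tool would resolve the citation
        # to a PDF source first.
        return requests.get(url, timeout=30).content

    def prefetch(urls: list[str]) -> None:
        # Start background downloads for every citation on the current
        # page, so the PDF is (hopefully) local by the time it's clicked.
        for url in urls:
            if url not in _cache:
                _cache[url] = _pool.submit(fetch_pdf, url)

    def open_citation(url: str) -> bytes:
        # Use the prefetched copy if available; otherwise download now.
        future = _cache.setdefault(url, _pool.submit(fetch_pdf, url))
        return future.result()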
https://openapc.github.io/general/openapc/2018/01/29/doi-rev...