A LOT of that becomes tedious. Mailchimp can auto-post to FB/Twitter/Instagram etc., but even converting the blog post into Mailchimp emails gets tiring, as you need to choose a featured photo, the specific mailing list, etc.
I'm a Selenium guy, so I've built some scripts to minimise some of it, but the rush of finding a deal each time is still dampened when I think of the tedium ahead of publishing it.
Ideally, my goal would be to submit [to][from][when] to a script and the whole process would be automatic. I can see that selenium could do it, but my goodness, it'd be slow and potentially flaky all via the web UI for those sites.
I'd love suggestions, examples of automation you've done, and tools used?
Others tell me if my server is getting swamped, or forward important emails: a cert needs renewing, a deploy failed, a bug in prod, or something else that often requires immediate attention. Chat widgets from sites in production also go directly to Slack so I can reply quickly if needed. (Gmail, Sentry, Freshchat, Ploi, etc.; integrate them all and filter out what's critical.)
Everything generally operates on a 15 min interval or so, so it's not noisy, and I've pretty much tuned it to only receive the specific stuff I want to see. Having a central, assistant-like hub to go to is very useful and time-saving.
I wrote this https://github.com/buzzlawless/ynab-live-import to import credit card transactions instantly without giving up my bank credentials.
The whole stack runs on Amazon Web Services. Simple Email Service receives a notification email from the bank that I've made a purchase, saves it to S3, and triggers a particular lambda function tailored to whichever bank the notification came from. The lambda function retrieves the email from S3, parses it for transaction data (account, payee, amount, date), and writes that data to a DynamoDB table. The table has a stream enabled, which triggers another lambda function when the table is updated. The function reads the transaction data from the stream and posts the transaction to YNAB using their API.
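A minimal sketch of the first Lambda in that chain, assuming a hypothetical bucket/table name and notification format (the real parsing is bank-specific; the repo has the actual implementation):

```python
import os
import re

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "transactions"))  # placeholder name

def handler(event, context):
    """Triggered by SES: read the stored email from S3 and record the transaction."""
    # The SES receipt rule stored the raw email in S3 under its message ID.
    message_id = event["Records"][0]["ses"]["mail"]["messageId"]
    obj = s3.get_object(Bucket=os.environ["EMAIL_BUCKET"], Key=message_id)
    body = obj["Body"].read().decode("utf-8", errors="replace")

    # Bank-specific parsing; this regex is only a stand-in for the real logic.
    match = re.search(
        r"purchase of \$(?P<amount>[\d.]+) at (?P<payee>.+) on (?P<date>\d{2}/\d{2}/\d{4})", body
    )
    if not match:
        return {"status": "no transaction found"}

    # The table's stream then triggers the second Lambda, which posts to YNAB.
    table.put_item(Item={
        "messageId": message_id,
        "amount": match.group("amount"),
        "payee": match.group("payee"),
        "date": match.group("date"),
    })
    return {"status": "ok"}
```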
I've mentioned this on HN once or twice before and got some positive interest with people even submitting pull requests, which is awesome :) Going to find the time soon to review those and maybe add more features
One of our primary CAD suites was Rhino, which is very nice and has a great Python API. So I wrote a full-fledged 2.5D CAM processor for it. This allowed us to batch-process hundreds of these parts with a single click.
Disclaimer: I work with the Playwright team at Microsoft.
Generally, I've had better luck using undocumented APIs for this kind of stuff. I was a heavy user of Selenium to automate many a task (I work in wholesale construction goods, tons of automations needed everywhere).
But I then discovered that using internal APIs (which surprisingly don't change much) is far easier than trying to exception-manage changing UIs.
Open up dev tools in your browser and watch the GETs/POSTs as you complete your daily tedium. I use Python (usually just requests and BeautifulSoup is enough) to mimic the calls, and I have yet to find a use case where I wasn't able to automate something online.
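As a rough illustration of that workflow, here's a minimal sketch; the endpoint, form fields and selectors are placeholders you'd lift from the dev tools network tab, not any real site's API:

```python
import requests
from bs4 import BeautifulSoup

session = requests.Session()

# 1. Replay the login POST exactly as seen in the network tab (placeholder URL and fields).
session.post(
    "https://example-vendor.com/login",
    data={"username": "me@example.com", "password": "secret"},
    headers={"User-Agent": "Mozilla/5.0"},  # some sites check for a browser-ish UA
)

# 2. Hit the internal JSON endpoint the page itself calls, instead of scraping the UI.
orders = session.get("https://example-vendor.com/api/orders?status=open").json()
print(len(orders), "open orders")

# 3. Fall back to BeautifulSoup only when there is no JSON endpoint to mimic.
html = session.get("https://example-vendor.com/orders/12345").text
soup = BeautifulSoup(html, "html.parser")
for row in soup.select("table.order-lines tr"):
    print([cell.get_text(strip=True) for cell in row.find_all("td")])
```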
I'm currently developing an SDK for our 18th-century ERP system, which has no API. I've also automated getting shipping container tracking data from various shipping line and railroad company websites, many with complicated login processes.
Happy to chat, email in profile.
- I mostly automate my bookkeeping with a set of recurring & dependent taskwarrior tasks and scripts as annotations that I run with taskopen[1]. That's creating a bunch of folders, turning some emails in mutt into PDFs, gathering PDFs from emails, fetching bills with selenium, moving files from $DOWNLOADS into the appropriate bookkeeping folder, putting a date on some files, turning the whole thing into a zip file, and sending it to the bookkeeper with mailx.
- I automated the email send of my daily summary to my clients with mailx (so I can send it directly from vim)
- I automated turning screen recordings into thumbnails+mp4 link (since GitHub only supports gifs)
- I automated making before/after screen recordings for when I do noticeable performance improvements (page load/animations)
- I automated booting/killing my development servers
- I automated making PRs with `hub pr` (finding the origin/upstream, putting labels, etc.)
- I bound to a key combo switching to the logs of specific development servers
- I turned my client's time tracking (Tempo) into a CLI because I got tired of using the UI to say I worked X hours on that ticket and 7.5 - X on the other. Now I only do `tempo log $ticket1 2h $ticket2 3h $supportTicket rest` (a rough sketch of the idea follows below)
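The `rest` handling is mostly arithmetic over a fixed workday. A minimal sketch of that parsing, with a placeholder endpoint and payload rather than the real Tempo API:

```python
import sys

import requests

WORKDAY_HOURS = 7.5  # assumed standard day

def parse_args(args):
    """Turn ['TICKET-1', '2h', 'TICKET-2', 'rest'] into {'TICKET-1': 2.0, 'TICKET-2': 5.5}."""
    pairs = list(zip(args[::2], args[1::2]))
    hours = {ticket: float(dur.rstrip("h")) for ticket, dur in pairs if dur != "rest"}
    rest = [ticket for ticket, dur in pairs if dur == "rest"]
    if rest:
        hours[rest[0]] = WORKDAY_HOURS - sum(hours.values())
    return hours

def log_worklogs(hours):
    for ticket, h in hours.items():
        # Placeholder call; the real Tempo/Jira worklog endpoint and payload differ.
        requests.post(
            "https://tempo.example.com/worklogs",
            json={"issue": ticket, "hours": h},
            headers={"Authorization": "Bearer <token>"},
        )

if __name__ == "__main__":
    print(parse_args(sys.argv[1:]))  # e.g. tempo_log.py TICKET-1 2h TICKET-2 3h SUP-9 rest
```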
- switch the lights on when I get home and the sun has already set
- switch the lights on when I'm already home and the sun sets
- switch them on in the morning, full brightness, coldest colour, to help with waking up (really helps me a lot and is not as stressful as an alarm clock)
- automatically turn the lights off when I (my smartphone) leave home (not visible on the WLAN for 10 minutes)
Will look into more automations there. Perhaps something about turning the heat up in the bathroom in the morning when the windows are closed (there are sensors for that).
I ran Node-RED on a Raspberry Pi too for some time, but I found it only helped me a little. I mainly did logfile analysis through it and did not look into other use cases for me.
For work I've got the de-facto standard for the printing industry, I'm doing everything possible with that.
I was already looking for an excuse to learn Python and this was perfect.
Instead of recreating the slides, I created templates (from just the last set of presentations) and update them with info queried directly from the database, using python-pptx and pyodbc.
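A minimal sketch of that pattern, with a made-up query, DSN and template layout (the real decks and fields will differ):

```python
import pyodbc
from pptx import Presentation

# Placeholder connection string and query.
conn = pyodbc.connect("DSN=reporting;UID=reader;PWD=secret")
row = conn.cursor().execute(
    "SELECT units_sold, revenue FROM monthly_summary WHERE month = ?", "2020-01"
).fetchone()

prs = Presentation("monthly_template.pptx")  # last month's deck, reused as the template
for slide in prs.slides:
    for shape in slide.shapes:
        if not shape.has_text_frame:
            continue
        # Replace simple {placeholders} left in the template text.
        # (Note: assigning .text collapses the runs, so keep the formatting on the shape/theme.)
        text = shape.text_frame.text
        shape.text_frame.text = (
            text.replace("{units}", str(row.units_sold))
                .replace("{revenue}", f"{row.revenue:,.0f}")
        )

prs.save("monthly_report_2020-01.pptx")
```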
This isn’t exactly an example of a “tool”, but I think the most important part isn’t really the tool, but to be able to identify what you can automate. How you do it might range from DIY to paying someone to code for you, but the feeling after getting rid of manual processes is great!
Especially for public library catalogues. The webpage shows when the borrowed books are due, and you need to check it every week or pay late fees. And you cannot just set a bookmark, since you need to log in first, which is tedious. So I automated it to renew all books and show me a warning.
And because that needs to run silently every day, I decided to make it as fast as possible (and because I only had a single-core CPU with less than 1 GB of RAM). No browser, no JavaScript, no Selenium, no Python, no Java. The only viable languages were C or Pascal, and for memory safety I implemented everything in Pascal.
Because it is also tedious to build HTTP requests in native code and recompile it all the time, I then wrote a scripting language for it; all the webscraping is done in the script, but it is still fast because the processing is done in native code.
To read data from the HTML, I use pattern matching, e.g. `<a>{.}</a>+` would read all links, and `<a>foo</a>+` would read all links with text foo.
Nowadays I do not even visit public libraries anymore, but the automation tool has become an open-source project at http://www.videlibri.de/xidel.html .
I also started playing some browser games, but they are boring, so I stopped. But I want to stay on the highscore list, so I wrote bots for them. Because they have anti-cheat detection, you cannot send the HTTP requests directly. For JavaScript-based games, I wrote a bot with Greasemonkey. For Flash games that does not work, so I decompiled the Flash file, added the automation functions to the Flash, and then recompiled it.
It's a browser plugin that allows you to record and play back anything that you do in your browser.
https://selenium.dev/selenium-ide/
There is a right-click action to record hovering over a section if it is hidden by CSS; it's a well-known gotcha in WordPress admin listings and other sites that love to hide things until you hover over a section.
I've still got to look into whether the file format saved from Selenium IDE could be used within a headless browser like PhantomJS or the like.
For direct scraping, I tend to go straight to XPath with the DOM.
It's far more reliable than trying to get stuff out via regular expressions, which I only use when I need to get a certain part of the value from a text node.
I used to use PHP for that, but I wrote my own programming language that uses a library called htmlquery (it's a Go library):
https://github.com/antchfx/htmlquery
If I have to log in, I fake each request with the correct headers. Plenty of libraries around that help with that for every language.
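In the same spirit, a minimal Python version of the XPath-over-regex approach (lxml instead of htmlquery, with a placeholder URL and cookie standing in for the faked login headers):

```python
import requests
from lxml import html

# Fake a logged-in browser request with whatever headers/cookies the site expects (placeholders).
resp = requests.get(
    "https://example.com/account/orders",
    headers={"User-Agent": "Mozilla/5.0", "Accept-Language": "en"},
    cookies={"session": "<copied-from-browser>"},
)

tree = html.fromstring(resp.content)

# XPath against the DOM is far more robust than regexes over raw HTML.
for row in tree.xpath('//table[@id="orders"]//tr[td]'):
    order_id = row.xpath("string(td[1])").strip()
    total = row.xpath("string(td[last()])").strip()
    print(order_id, total)
```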
It's nice to know what movies are out, and when we can book, without having to remember to check or wait for pushy emails from the cinemas themselves.
I'm planning on doing the same to email us the latest menus from our favourite restaurants/cocktail bars so we can see when they change.
- FileJuggler on Windows, to automatically clean Desktop, Download folder and archive everything into a date-based folder structure in Drive.
- AutoHotKey to insert text templates into everything (e.g. current date, formatted Jira issues, annoying things to type)
The problem with automation is that you want to break even quickly [time saved > (time spent finding/setting up/maintaining)], otherwise automation is just a glorified delayed action. Time saving is not the only reason for automating, but I assume it's the number one reason when it comes to personal use.
I use it mainly for when I can't do things in IFTTT or have sensitive things I'd prefer to keep in a self-hosted system.
It crawls through monthly CSV files provided by my payment gateways, finds sales and returns by a specific affiliate, calculates what I owe them, and returns a dollar amount, a sales breakdown for each course, how to send it (PayPal, wire, etc.) and who to send it to.
It's nice because it takes around 2 minutes to pay out everyone each month with a very high confidence level in the results. Prior to the script, it was a stressful workflow with many chances of human error. I only did it once before automating it.
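The core of that kind of script is usually just grouping and summing rows; a minimal sketch, with made-up CSV column names and commission rate:

```python
import csv
import glob
from collections import defaultdict

COMMISSION = 0.30  # assumed affiliate share

def affiliate_payout(affiliate_id, month_glob="exports/2020-01-*.csv"):
    """Sum sales minus refunds per course for one affiliate across the month's CSVs."""
    per_course = defaultdict(float)
    for path in glob.glob(month_glob):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row["affiliate_id"] != affiliate_id:
                    continue
                amount = float(row["amount"])
                # Refund rows count against the payout.
                per_course[row["course"]] += -amount if row["type"] == "refund" else amount

    breakdown = {course: round(total * COMMISSION, 2) for course, total in per_course.items()}
    return sum(breakdown.values()), breakdown

if __name__ == "__main__":
    owed, breakdown = affiliate_payout("aff_42")
    print(f"Owed: ${owed:.2f}", breakdown)
```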
Another script I use helps me invoice my freelance clients each month. It's a Bash script and is open source https://github.com/nickjj/invoice. It saves me a ton of time every month since I can calculate how much I need to invoice folks in a few seconds with it.
Summarizing long text-based articles! Found this on HN. Now when my family/ friends send me stuff they haven't actually read ... at least I can reply somewhat intelligently.
I wrote a blog post about how to monitor a competitor website using Python / Lambda and serverless framework for deployment: https://dzone.com/articles/monitor-your-competitors-with-aws...
For lots of workflows we also use No-code/Low-code tools like Zapier / Integromat.
1 - Notify me when movie tickets are available to book. I used Google scripts that run every minute and hit a URL to check if tickets have become available (a rough sketch of the polling idea follows after this list).
2 - Book a class when it becomes available. I joined a gym where you have to book a class before going, and the Zumba class is very popular: the moment booking opens it fills up in 30 seconds, and there are limited spots, so if you are late it's gone. I wrote a script that checks and books the slot for me and then adds it to my calendar. I extended it using Google Forms for when I want to book another class at a specific time that is currently full: if somebody cancels, it becomes available. The script keeps checking until the class start time and, if a spot opens up, books it and notifies me.
3 - In my team everyone plays foosball, and after lunch it's an everyday discussion about who will play first and with whom. I wrote a script which decides the matches and the players on each team. I deployed it on Google App Engine, where it is still running, and just hitting the API sorts things out for us.
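For the first two, the core is just a polling loop plus a notification. A rough Python sketch of the idea (the originals were Google scripts; the URL, availability check and notification here are placeholders):

```python
import time

import requests

CHECK_URL = "https://example-cinema.com/api/showtimes?movie=1234"  # placeholder endpoint

def tickets_available():
    """Placeholder check: the real script inspects whatever the booking page actually returns."""
    data = requests.get(CHECK_URL, timeout=10).json()
    return any(show.get("bookable") for show in data.get("shows", []))

def notify(message):
    # Stand-in for email/push; a Google Apps Script version would use MailApp instead.
    print(message)

while True:
    if tickets_available():
        notify("Tickets are open for booking!")
        break
    time.sleep(60)  # the original runs on a one-minute trigger
```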
There are a few other automations I did using IFTTT.
I use a few services to automatically invest money into a few different financial vehicles on a regular basis:
* Vanguard funds (bi-weekly transfer into various ETFs)
* Titan Invest (robo-advisor, bi-weekly)
* IRA and 529 (monthly)
I use Personal Capital to see my cash flow and returns across all my accounts.
I have been doing this for the past 3 years and my returns have increased significantly.
I am 34 and I wish I had started doing this when I was younger.
I've semi-automated setting up a new laptop using a private repo with my dotfiles and some bash scripts. It started with Linux and now works with macOS (but won't handle Linux well any more, and I haven't gone back enough to make it worth handling case by case).
I still have to generate a public/private key pair, and some things on Macs aren't perfect, but I keep a todo list of the non-automated things in the repo. I set up new Macs less often nowadays since the keyboards are such crap (fingers crossed this year will bring better 13-inch keyboards), but every time I do it I add something.
Next I use Tableau Public to read data from Google Sheets and publish a dashboard to Tableau Public Online. Tableau Public automatically refreshes data on the server side from Google Sheets every few hours.
The good part is that all of this can run automatically without any manual intervention whatsoever and I get a beautiful near real-time dashboard for my daily expenses that I can access from anywhere (dashboard is publicly accessible but hidden).
For Windows automation AutoIt is good. VBScript for MSOffice automation. Python/shell scripts for mostly everything.
I’ve been working on a “mail to cloud storage” project on the side for fun, but also as an excuse to learn Rust.
As part of this effort, I decided to try out Ansible. Man, it is one great tool!
For development, I have all of my server “groups” pointing to the same host, so my Ansible playbooks install and setup everything on a single machine. Makes testing much easier tbh.
1. TyperTask (https://typertask.en.uptodown.com/windows) to automate common tasks like adding different email signatures for different addresses etc etc.
2. ClipX (http://bluemars.org/clipx/) for managing a huge clipboard history file. This saves me SO much time it's unbelievable.
Disclaimer, it’s a project of mine, but if the websites you’re watching are compatible that will save you some hassle trying to extract data.
I use Google Sheets a lot as well; it's super helpful to make sense and use of the data you scrape.
You can check it out here [0], and feel free to ask if you need something.
[0]: https://monitoro.xyz
From pandoc pre-processing with a great tool called 'pp', to generating my CV in different languages and paper sizes, to ensuring my repositories have the latest .gitignore files.
I feel it requires a significant effort to set up, but once you're there, it eases your development flow and enhances reproducible outcomes, in my opinion.
A fairly simple Python script can automate the fiddly things like creating XML for a new podcast and uploading things to wherever you're hosting. (I used to use a script to add intro and outro audio too but I now do that as part of the audio leveling process with Auphonic.)
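The episode-publishing part can be as small as appending an `<item>` to the feed. A rough sketch with placeholder feed and episode details (a real podcast feed needs more iTunes-specific tags):

```python
import email.utils
import xml.etree.ElementTree as ET

def add_episode(feed_path, title, mp3_url, size_bytes):
    """Append one <item> to an existing RSS feed file (placeholder field set)."""
    tree = ET.parse(feed_path)
    channel = tree.getroot().find("channel")

    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "pubDate").text = email.utils.formatdate(usegmt=True)
    ET.SubElement(item, "enclosure", url=mp3_url, length=str(size_bytes), type="audio/mpeg")
    ET.SubElement(item, "guid").text = mp3_url

    tree.write(feed_path, encoding="utf-8", xml_declaration=True)

add_episode("feed.xml", "Episode 42", "https://example.com/ep42.mp3", 31457280)
```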
It was simple enough to just automate it and control my laptop: scrolling to certain web pages, XPathing the elements, and copy-pasting the data into a CSV file.
Nowadays, I use bash scripts to automate some scraping. Cron scripts in Travis CI are amazing, this https://visalogy.com web site practically runs by itself thanks to them.
I have always been annoyed by creating directories for new projects. Always the same procedure. Always the same commands. Always the same source files. So I wrote a small shell function which created C++ and Python projects for me. But after a few months I started to learn golang. And there I was again. Creating directories and files myself. But at this point the function could hardly be extended.
So I started to transform my shell function into a powerful and expandable go application. Learning go by starting a new project and solving a personal problem at the same time? Perfect!
Now a few months have passed and proji has become much more powerful and diverse. The templates it uses, which are called classes, are not bound to languages or anything like that, you can create a class for really everything. No matter how complex the class is, proji creates the project in seconds. Class configurations can be imported and exported, making it easy to share them with other users.
With the latest version of proji there is a new feature that takes proji to the next level. Proji can copy the structure of repositories on GitHub and GitLab and import it as a class which you can use locally to create your own projects.
Additional features: Classes support shell scripts, template files to minimize the boilerplate code you have to write, ...
Zap: https://zapier.com/apps/google-calendar/integrations/google-...
* an extensive collection of custom made /usr/local/bin scripts which automate things such as video recording and encoding for youtube (recordmydesktop + avconv), backups, and certain time-taking and repetitive operations on websites (like downloading invoices; through selenium-webdriver - github.com/SeleniumHQ/selenium/wiki/Ruby-Bindings)
* zaps on zapier.com for pushing data between multiple cloud services that would otherwise have to be moved by hand (e.g., a Gmail email with a matching subject becomes a task in kanbantool.com)
* autokey on Ubuntu, which allows me to type phrases like "!thx" and have them automatically expanded to, e.g., "Thank you for your email!"
Using Google's RSS/e-mail notifications for keywords I am interested in (e.g. my name, stock ticker, company name, etc.).
I use www.hnreplies.com for alerts when someone responds to my HN comments.
• my OS packages
• my OS-overlay packages (e.g. Homebrew, Nix)
• the versions of any standalone SDKs I use, that aren't maintained as OS packages (e.g. Rust from rustup, the Google Cloud SDK)
• any globally-installed packages for the package ecosystem of each language I install such stuff from (not libraries, but standalone utilities, e.g. youtube-dl in pip, or Foreman in Rubygems)
Here's the script: https://gist.github.com/tsutsu/270e09c68690ec85c51dbd054e22b...
I think automating this might help a lot of people, because people tend to forget that they even can update a lot of this stuff, or they don't know there's a command to get certain things to happen at all. (E.g. did you know that `brew cask upgrade` will switch out your application-bundles installed via casks with newer versions?)
I never polished it up, though, because it's still frail in some ways. (I still don't exactly trust the way it does pip package updates, for example; sometimes pip decides to upgrade packages into an inconsistent state where a package's deps get too new for it to use.)
But the idea is that you run this interactively, first thing in the morning when you sit down, in its own fresh terminal window, right after maybe letting your computer restart to update to a new OS version. It's like putting your workspace in order in the morning. It's doing the grunt work, but you're still making the decisions and fixing the exceptional conditions.
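The shape of such a script is roughly a loop over per-ecosystem commands with a confirmation step. A toy sketch of the idea (the linked gist is the real thing; these commands are just examples):

```python
import subprocess

# Example update commands per ecosystem; the real list depends on what you have installed.
UPDATERS = {
    "Homebrew": "brew update && brew upgrade && brew cask upgrade",
    "Rust": "rustup update",
    "pip (user)": "python3 -m pip install --upgrade --user pip",
}

for name, cmd in UPDATERS.items():
    answer = input(f"Update {name}? [Y/n] ").strip().lower()
    if answer in ("", "y", "yes"):
        # shell=True so the && chains work; you stay in the loop to handle any failures yourself.
        subprocess.run(cmd, shell=True, check=False)
```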
-----
I feel like this could be polished into a universal tool that'd literally update everything for everybody with one well-known command.
But, better yet, the problem could be reversed: a standard could be created for registering installed package ecosystems and their respective update/clean/etc. commands. Installed ecosystems could register with it by placing files in a directory (like e.g. pkg-config or bash-completion does), so that this command could outsource its smarts to the ecosystem creators themselves.
Manually, I'd have to receive each spreadsheet or email of choices from each instructor and then add it to an Excel spreadsheet of some sort. That wouldn't have been live at all.
So I came up with a system.
I made an Excel proposal template that each instructor fills out with their name, school ID and the courses they want to teach, as well as the hours they'd have available for each course. They then send it to my email, which has a rule on it to forward Excel attachments to a Gmail account I own. This is because I can't access Outlook's API, so I need Outlook to send this stuff to my Gmail account. My Gmail account has a watch on it, so whenever an email comes in I get a push request on my server. My server reads the contents of the Excel file and sends it to a database.
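A compressed sketch of the server side of that pipeline, assuming a Flask endpoint for the Gmail push notification and openpyxl/psycopg2 for the Excel and Postgres parts (the template layout, table and column names are made up, and the Gmail attachment fetching itself is elided):

```python
import io

import openpyxl
import psycopg2
from flask import Flask, request

app = Flask(__name__)
db = psycopg2.connect("dbname=courses user=app")

def fetch_latest_xlsx(notification):
    """Elided: use the Gmail API history/messages endpoints to pull the newly arrived attachment."""
    raise NotImplementedError

@app.route("/gmail-push", methods=["POST"])
def gmail_push():
    # Gmail's watch() delivers a Pub/Sub push; the body only says "something changed".
    xlsx_bytes = fetch_latest_xlsx(request.get_json())

    sheet = openpyxl.load_workbook(io.BytesIO(xlsx_bytes)).active
    teacher = sheet["B1"].value  # made-up layout: name in B1, course rows from row 3 down
    with db, db.cursor() as cur:
        for row in sheet.iter_rows(min_row=3, values_only=True):
            course, hours = row[0], row[1]
            if course:
                cur.execute(
                    "INSERT INTO proposals (teacher, course, hours) VALUES (%s, %s, %s)",
                    (teacher, course, hours),
                )
    return "", 204
```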
From the user side, if a teacher wants to see the full list of what courses each instructor has chosen, I've made a website in React with an Excel-like row and column layout where the columns are the teachers and the rows are the courses. Where the two meet, we have the hours the teacher wants to teach for that course. When this page is visited, it pulls from my database and populates the whole thing with the latest data. The site can also export all the data to an Excel file to be implemented once all the teachers have made their choices.
I learned a lot with this project. Learned React, setting up HTTPS, basic authentication, Postgres, running a server and routing, and a whole lotta other stuff. Super valuable to me in my learning even if I could have spent less time doing it manually!
At one point I decided that it would be nice to add header images to every issue. I decided to do it through a quick and dirty python script that adds the text for me.
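Something along those lines, assuming Pillow and placeholder file/font names:

```python
from PIL import Image, ImageDraw, ImageFont

def make_header(issue_number, title, template="header_template.png"):
    """Stamp the issue number and title onto the base header image."""
    img = Image.open(template).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype("OpenSans-Bold.ttf", 48)  # any .ttf available locally

    draw.text((40, 30), f"Issue #{issue_number}", font=font, fill="white")
    draw.text((40, 90), title, font=font, fill="white")

    out = f"header_issue_{issue_number}.png"
    img.save(out)
    return out

make_header(42, "Automation special")
```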
Another thing I've just recently automated is a Python script calling the Mailchimp API to show me the most popular link together with unique opens. I'm currently planning to embed this info in every issue.
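Roughly like this; the report field names reflect my reading of the Mailchimp v3 reports endpoints, so double-check them against the docs:

```python
import requests

API_KEY = "xxxxxxxx-us1"       # Mailchimp keys end with the datacenter suffix
DC = API_KEY.split("-")[-1]
BASE = f"https://{DC}.api.mailchimp.com/3.0"
AUTH = ("anystring", API_KEY)  # basic auth; the username is ignored

def campaign_stats(campaign_id):
    report = requests.get(f"{BASE}/reports/{campaign_id}", auth=AUTH).json()
    clicks = requests.get(f"{BASE}/reports/{campaign_id}/click-details", auth=AUTH).json()

    top_link = max(
        clicks.get("urls_clicked", []),
        key=lambda u: u.get("unique_clicks", 0),
        default=None,
    )
    return {
        "unique_opens": report.get("opens", {}).get("unique_opens"),
        "top_link": top_link and top_link["url"],
    }

print(campaign_stats("abc123def4"))
```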
The next step that I'd like to automate is the campaign creation. With the API I don't suppose this will be a huge deal; I just need to put some time into it. When I have it, I'd like to create a commit hook that will do all of these things automatically after pushing changes to the repo.
There's a bunch of stuff in there that saves you seconds each time you have to work with a user story or cut a branch or merge etc... Saved me a bunch of time over the years!
Selenium takes care of it now.
* At least on one of the three locations my cv lives... the next step is to point all the other domains at my new CI/CD home...
At my last job, I used Zapier & Zoho CRM to automate creating leads from email requests which saved me hours of time.