[0] https://news.ycombinator.com/item?id=21916949
#!/usr/bin/perl
$| = 1; $i = 0;
if ($#ARGV < 0) { $number = 1000; } else { $number = shift(@ARGV); }
while ($i < $number) {    # loop was truncated in the original post; reconstructed minimally
    print $i++, " ";
}
print "\n";
I have another one that splits AAX files from Audible by chapter (after stripping the DRM): https://github.com/captn3m0/Scripts/blob/master/split-audio-...
ansi2html converts terminal output to HTML: https://github.com/captn3m0/Scripts/blob/master/ansi2html.sh
dydns.sh sets up a dynamic DNS entry on CloudFlare against my current local IP address. https://github.com/captn3m0/Scripts/blob/master/dydns.sh
emojify converts emoji shortcodes to emoji on the command line: https://github.com/captn3m0/Scripts/blob/master/emojify
vtt2srt: convert vtt subtitle files to SRT https://github.com/captn3m0/Scripts/blob/master/vtt2srt
https://gist.github.com/davidmoreno/c049e922e41aaa94e18955b9...
https://gist.github.com/nl5887/56912b70b782baa4bd580ae22bde6...
So, this will automatically file away your screenshots to your $HOME/Documents/screenshot/ folder, organized by year/year_month/file.png.
Where file.png is in the format yyyy_mmdd_hhmmss.png.
I use it to take an area screenshot of all my research notes, useful comments, gold nuggets, etc. The automatic folder organization files it away nicely, and keeps it organized as the years go by.
Create it and set the execute bit:
sudo vi /usr/bin/area_screenshot
chmod ugo+x /usr/bin/area_screenshot
Then copy the contents below:

#!/bin/bash
screenshot_dir="$HOME/Documents/screenshot"
current_year_dir="$screenshot_dir/$(date +%Y)"
current_month_dir="$current_year_dir/$(date +%Y_%m)"
fileout="$current_month_dir/$(date +%Y_%m%d_%H%M%S).png"
# Step 1: Check for screenshot directory
[ -d "$screenshot_dir" ] || mkdir "$screenshot_dir"
# Step 2: Check year and month directory
[ -d "$current_year_dir" ] || mkdir "$current_year_dir"
[ -d "$current_month_dir" ] || mkdir "$current_month_dir"
# Step 3: Take area screenshot to the current month
[ -d "$current_month_dir" ] && /usr/bin/gnome-screenshot -a -f "$fileout" "$@"
Then map it to the printscreen key.
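The three directory-existence checks above can be collapsed, since mkdir -p creates the whole chain in one call and is a no-op when it already exists. A condensed variant (same paths and behavior, just shorter):

```shell
#!/bin/bash
# Condensed variant of the script above: mkdir -p builds the
# year/month directory chain in one call.
screenshot_dir="$HOME/Documents/screenshot"
fileout="$screenshot_dir/$(date +%Y)/$(date +%Y_%m)/$(date +%Y_%m%d_%H%M%S).png"
mkdir -p "$(dirname "$fileout")"
# take the area screenshot (guarded so the script degrades cleanly
# where gnome-screenshot isn't installed)
if command -v gnome-screenshot >/dev/null; then
    gnome-screenshot -a -f "$fileout" "$@"
fi
```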
(This goes with an idea I also try to encourage: any repeated process should first be documented in some rough form, such as a personal note-base or team wiki, then improved over time by refining the doc, scripting it, and moving toward full automation, balancing cost/benefit, YAGNI, and technical debt along the way. It also draws on The Checklist Manifesto: even smart people can forget important things, drop the ball sometimes, or leave.)
Edit: This also lets me script away differences between platforms, so I can just remember my usual little 1-3 letter command and it takes care of the rest, while the script records the details for reference.
find . -name '*.html' -print0 | xargs -0 sed -i "" "s/replaceme/withthis/g"
The same but with fd: fd -e html --print0 | xargs -0 sed -i "" "s/replaceme/withthis/g"
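Note that the `sed -i ""` spelling in these one-liners is BSD/macOS syntax; GNU sed takes -i with no argument and would treat the empty string as a filename. A small wrapper that picks the right form (the --version probe is a heuristic: GNU sed supports it, BSD sed doesn't):

```shell
#!/bin/sh
# Choose the right in-place flag for the local sed.
sed_inplace() {
    if sed --version >/dev/null 2>&1; then
        sed -i "$@"      # GNU sed: -i takes no argument
    else
        sed -i '' "$@"   # BSD sed: empty backup suffix required
    fi
}

# demo on a throwaway file, mirroring the one-liner above
mkdir -p demo
printf 'replaceme\n' > demo/page.html
sed_inplace 's/replaceme/withthis/g' demo/page.html
```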
I have a tiny Sinatra app that I use in testing some parts of my Rails app. I either start it in the background or in a separate terminal window (that I close after some time). I have a script that kills a process given its port number (4567): kill -9 $(lsof -ti tcp:4567)
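A slightly gentler take on that one-liner is to try SIGTERM first and only escalate to SIGKILL if the process survives. The `killport` name is mine; the original hardcodes Sinatra's default port 4567:

```shell
#!/bin/bash
# Kill whatever is listening on a TCP port: polite TERM first, KILL after.
killport() {
    local port="${1:?usage: killport PORT}"
    local pids
    pids=$(lsof -ti "tcp:$port") || return 0   # nothing listening
    kill $pids 2>/dev/null                     # polite first
    sleep 1
    kill -0 $pids 2>/dev/null && kill -9 $pids 2>/dev/null
    return 0
}
```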
There's a bit of scaffolding around things going into a docs folder, some JS and CSS tweaks for the various output formats (ePub, HTML & PDF), but there is a very simple script I use to reduce some friction whilst writing.
It's nice to have a lot of small files. However, you may often want to:
1. Rearrange those files
2. Insert a new section between two other files
So, I hacked together this incredibly tiny script that means anytime I want to add anything, I can immediately get to wherever I'm up to:
next="$(ag TODO docs | awk -F':' '{print $1}' | sort -u | head -n1)"
if [ -z "$next" ]; then
    n="$(basename "$(find docs -type f | sort | tail -n1)" .md)"
    n2="$(echo "$n + 10" | bc -l)"
    next=docs/"$(printf "%05d" "$n2")".md
fi
echo "$next"
nano -s 'aspell -c' "$next"
Bonus: it outputs the current filename to the console, so if I want to add stuff after where I'm currently at, I have the starting number. The entire scaffold for my books is three scripts, some standard CSS, JS, and YAML, which makes setting up a new one to match my sensibilities quick and simple.
The script then runs "devices", looks for an alias "niss", and substitutes in the corresponding address. I use expect in Python to script it all together.
https://github.com/ivanmaeder/vimv
Basically it loads up the output of `ls` into an editor, then runs a `mv` command for each line.
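The same idea can be sketched in a few lines of shell. This is not the real vimv, just a minimal stand-in: it assumes filenames contain no newlines or tabs and that the edited file keeps one line per original file.

```shell
#!/bin/sh
# Snapshot the file list, let $EDITOR rewrite it, then mv old -> new
# line by line. The function only renames lines that actually changed.
bulkmv() {
    before=$(mktemp)
    after=$(mktemp)
    ls -1 > "$before"
    cp "$before" "$after"
    ${EDITOR:-vi} "$after"    # unquoted so EDITOR may carry flags
    # paste joins the two lists tab-separated, pairing old with new
    paste "$before" "$after" | while IFS="$(printf '\t')" read -r old new; do
        [ "$old" = "$new" ] || mv -- "$old" "$new"
    done
    rm -f "$before" "$after"
}
```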
Not something I use daily, but still handy.
#!/bin/bash --
# With no arguments, list bookmarks as "title = url"; with one argument,
# open the bookmark whose title matches in Firefox.
f0() {
echo 'select moz_bookmarks.title || '"'"' = '"'"' || url from moz_places join moz_bookmarks on moz_places.id = moz_bookmarks.fk where parent = 2;' | sqlite3 /home/user/.mozilla/firefox/twht79zd.default/places.sqlite
}
f1() {
firefox "$(echo 'select url from moz_places join moz_bookmarks on moz_places.id = moz_bookmarks.fk where moz_bookmarks.title = '"'$1'"';' | sqlite3 /home/user/.mozilla/firefox/twht79zd.default/places.sqlite)"
}
f$# "$1"   # dispatch on argument count: f0 (list) or f1 (open)
Another one:

#!/bin/bash --
curl 'http://icanhazip.com/'
This is for a side project, and being able to launch the dev environment so quickly has made it much easier to start working instead of drifting to YouTube or Reddit. Your brain craves the easiest source of dopamine, so anything you can do to make the habits you want to build (like working on a side project) easier will help you immensely!
https://github.com/robgibbons/highlander/
These days I use Ubuntu MATE which has a Mac-like dock with the same functionality for free.
- SSH wrappers/eval-ed Aliases, which do: exec ssh -l user host "$@"
- AWS wrappers, that allow me to: exec aws --profile blah "$@"
- DB wrappers, that allow me to hit a target DB
- cred: Allows me to pick creds from a password manager / keychain independent of OS; Usually a polyfill.
- subtract, intersect, union, sample: for handling one liner data, incl. CSV files
- csvcut: Python wrapper though that does -f, -d, -c but on CSV files
- j2y, y2j: Python wrappers for JSON to YAML, YAML to JSON
- envsubst, watch: polyfills for some environments
- vdimount, imgmount: Allow loopback mounting for partitions / images for VirtualBox/Qemu
- nanny: Nanny for starting processes, passing listen sockets on restart, change NS properties; Usually a polyfill
lw which does ls -l "$(which -a "$1")"
ew which does "$EDITOR" "$(which "$1")"
newsh which does touch ~/bin/"$1" && chmod +x ~/bin/"$1" && "$EDITOR" ~/bin/"$1"
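The newsh idea can be sketched as a function, extended to seed a shebang line (the template line is my addition, not part of the original one-liner):

```shell
#!/bin/sh
# Create an executable script stub in ~/bin and open it in $EDITOR.
newsh() {
    name="${1:?usage: newsh NAME}"
    target="$HOME/bin/$name"
    mkdir -p "$HOME/bin"
    # seed a shebang only if the file doesn't exist yet
    [ -e "$target" ] || printf '#!/bin/sh\n\n' > "$target"
    chmod +x "$target"
    ${EDITOR:-vi} "$target"    # unquoted so EDITOR can include flags
}
```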
And habitat (hab) and arch pkgbuild, which use shell scripts as their package DSL... the former I’ve hacked up to screen-scrape package versions (due to the dearth of RDF/metalink usage in release artifact publication) and check gpg keys.
If VPN stops working, I can always check to see if the address changed. May not be the best solution, but it works for me.
Script itself: https://sirtoffski.github.io/docs/my-pub-ip
btex - interactively compile LaTeX to PDF or PS or DVI, highlighting warnings and errors ( https://github.com/ppurka/btex )
searchforfile - search for files interactively with results "as-you-type" ( https://github.com/ppurka/searchforfile )
displaymessage - wrapper around a bunch of gui dialogs ( https://github.com/ppurka/displaymessage )
A couple more in my GitHub account.
I have a project that has been going on since 2015, so I ran the above command to see how much code we have. I was expecting our models to be really large; it turns out we have ~(404 * 3) REST endpoints. The scariest thing was the vendor dir.
LOC per dir
Models: 15053
HTTP Endpoints: 28877
migrations: 49757
Laravel: 81728 (Framework)
Vendor: 1158299 (third party php code)
Total: 1797417
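The command being referred to isn't quoted in the thread; a rough per-directory PHP line count looks like this (the example directory names follow Laravel conventions and are my assumption):

```shell
#!/bin/sh
# Print "<dir>: <total PHP lines>" for each existing directory given.
loc_per_dir() {
    for d in "$@"; do
        [ -d "$d" ] || continue
        n=$(find "$d" -name '*.php' -exec cat {} + | wc -l | tr -d ' ')
        printf '%s: %s\n' "$d" "$n"
    done
}

# e.g. loc_per_dir app/Models app/Http database/migrations vendor
```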
sub [2] is a simple shell-based template processor.
Boom - histogram!
gocomics () { eom $(curl "$@" | grep item-comic-image | grep -Eo 'https://assets.amuniversal.com/\w*') &> /dev/null; }
This takes advantage of Eye of MATE's ability to fetch remote files; I assume 'eog' (Eye of GNOME) would work too.