HACKER Q&A
📣 andygrunwald

Hey Podcasters, do you use AI to optimize your workflow?


I am the host of a German-speaking software engineering podcast (https://engineeringkiosk.dev/). This is a side project, and publishing one episode every week is challenging, especially if you follow an evergreen strategy rather than a news format.

Tasks like preparing a topic, recording, audio editing, writing show notes/links/chapters, marketing (social media posts), and maybe editing reels/shorts can quickly consume ~8h per week.

We looked into using AI with the goal of reducing the time spent.

Right now we are using https://www.assemblyai.com/ to generate transcripts. This helps us a lot because we often ask ourselves, "Didn't we talk about X in some episode already?" - with the transcripts, we can easily search for it.
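
A minimal sketch of that step with AssemblyAI's Python SDK (file names are placeholders; the German language code is the only non-default setting):

    import assemblyai as aai

    # API key from the AssemblyAI dashboard
    aai.settings.api_key = "YOUR_API_KEY"

    # We record in German, so the language code has to be set explicitly
    config = aai.TranscriptionConfig(language_code="de")

    transcriber = aai.Transcriber(config=config)
    transcript = transcriber.transcribe("episode-042.mp3")  # placeholder file name

    # Dump the plain text so old episodes can be searched later
    with open("episode-042.txt", "w", encoding="utf-8") as f:
        f.write(transcript.text)

With that, searching old episodes is basically just grep over the text files.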

We are experimenting with https://podsqueeze.com/ to auto-generate chapters and show notes, auto-extract links, etc. - In the end, this seems to be a service that generates transcripts and sends them to ChatGPT with a few custom prompts. We don't use the full service and might end up building our own workflow that does the same, but it is a good source of inspiration for now.
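
If you want to rebuild that workflow yourself, the core is a sketch like this (model name and prompt are placeholders, and this is certainly not what Podsqueeze does internally):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("episode-042.txt", encoding="utf-8") as f:
        transcript = f.read()

    prompt = (
        "You get the transcript of a podcast episode. "
        "Generate show notes, chapter marks with timestamps, "
        "and a list of all mentioned links."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, pick whatever fits your budget
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": transcript},
        ],
    )

    print(response.choices[0].message.content)

Long episodes probably need to be chunked to fit the context window.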

When a recording goes wrong and we only end up with one audio track (vs. one track per speaker), we use https://auphonic.com/ to normalize the audio loudness, etc. The service does a pretty good job.
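
For comparison, plain ffmpeg can do basic loudness normalization with its EBU R128 loudnorm filter; a minimal sketch (the target values are assumptions, not Auphonic's settings):

    import subprocess

    # One-pass EBU R128 loudness normalization; tweak the targets to taste
    subprocess.run([
        "ffmpeg", "-i", "episode-042-raw.wav",
        "-af", "loudnorm=I=-16:TP=-1.5:LRA=11",
        "episode-042-normalized.wav",
    ], check=True)

That only covers the loudness part of what Auphonic does for us, though.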

Right now, we are playing around with the automatic removal of filler words (like "ehms") from the audio file with the help of a transcript. The challenge here is that we record in German, and many AI models don't support German as well as English. Time will tell.
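
The rough idea, sketched with AssemblyAI's word-level timestamps and pydub (the filler list and file names are assumptions):

    import assemblyai as aai
    from pydub import AudioSegment

    aai.settings.api_key = "YOUR_API_KEY"
    transcript = aai.Transcriber(
        config=aai.TranscriptionConfig(language_code="de")
    ).transcribe("episode-042.mp3")

    FILLERS = {"ehm", "ähm", "äh", "hm"}  # assumed filler list, extend as needed

    audio = AudioSegment.from_file("episode-042.mp3")

    cleaned = AudioSegment.empty()
    cursor = 0
    for word in transcript.words:  # word.start / word.end are in milliseconds
        if word.text.lower().strip(".,!?") in FILLERS:
            cleaned += audio[cursor:word.start]  # keep audio up to the filler
            cursor = word.end                    # then jump past it
    cleaned += audio[cursor:]

    cleaned.export("episode-042-clean.mp3", format="mp3")

In practice you would probably add a little padding around each cut so the edits don't sound choppy.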

However, we can say the transcripts brought the biggest improvement to our workflow - they speed up writing the show notes a lot!

How do you use AI to speed up / get support for your podcasting workflow?


  👤 nonrandomstring Accepted Answer ✓
Yes and no

For Cybershow [0] we love to use AI images, from a variety of generators, because it's nice to have quick, catchy iconic pictures that help listeners remember/search the episodes visually.

Also, most of the main podcast channels now do great transcription, and our audio is clear, so that's one more job that's automatically done for us these days by "AI".

In the production and audio domain there's very little that "AI" can do better than an experienced writer and editor, imho as a half-decent writer/editor :).

A lot of editing is moment by moment decision making that just requires a good ear. "AI" can't tell me if something a guest says is ambiguous, or sounds a bit poorly evidenced etc.

One tool I'd maybe use if it existed is a "fluff remover" for "ummmm, like, well... y'know, ummmmm, literally... like...."

But I wouldn't trust it not to butcher the audio and leave things in an irrecoverable (no undo path) state that only gets spotted in the mastering stage.

Otherwise, many "intelligent" plugins already have built in clever DSP features these days, for things like noise reduction, silence removal, level matching.

Scripting things is the key... and Audacity is a dream for this!
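
For the curious: Audacity's optional mod-script-pipe module accepts text commands over a named pipe, so a batch job can be a small script like this (a rough sketch, assuming the Linux/macOS pipe paths and stock commands; file names are placeholders):

    import os

    # Pipe paths used by Audacity's mod-script-pipe on Linux/macOS
    # (Windows uses \\.\pipe\ToSrvPipe and \\.\pipe\FromSrvPipe instead)
    uid = os.getuid()
    to_pipe = open(f"/tmp/audacity_script_pipe.to.{uid}", "w")
    from_pipe = open(f"/tmp/audacity_script_pipe.from.{uid}", "r")

    def do(command: str) -> str:
        """Send one scripting command and read Audacity's reply."""
        to_pipe.write(command + "\n")
        to_pipe.flush()
        reply = ""
        while True:
            line = from_pipe.readline()
            if line == "\n" and reply:  # replies end with a blank line
                return reply
            reply += line

    do('Import2: Filename="/tmp/episode-042-raw.wav"')
    do("SelectAll:")
    do("Normalize: PeakLevel=-1")
    do('Export2: Filename="/tmp/episode-042-normalized.wav"')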

For planning/organisation I use emacs org-mode which is fine for capturing notes, linking resources and pulling it all together into a script or talking-points list for recording.

For an amateur show we don't need much more quality than this.

[0] https://cybershow.uk/


👤 jprete
I’m confused as to why you need to pay an AI service to normalize audio volume. Isn’t this something basic audio tools would do for free?