Why don't podcast apps fix speech impediments (e.g., lisps) using AI?
This could be optional, so I doubt anyone would mind.
Why favor conformance to the mean expectation over reality?
Because it turns out that "rub some AI on it, then MAGIC HAPPENS" doesn't work anywhere besides TED talks and marketing pitches.
Why would they? Apart from the time spent, listening to someone with a speech disorder or impairment has no effect on me.
Maybe because audio programming is just that hard? The question itself hints at this; otherwise, why reach for AI as a silver bullet?