Say that instead of an all-out shadow ban, a moderator (or an automated script, for that matter) tweaked some algorithmic parameters instructing the platform's ranking system to limit someone's speech in very specific ways.
Just as a person can be precisely targeted by ads based on their PII and content, they can be precisely singled out and subtly limited in the amount of influence they have on the platform.
Unlike a shadow ban, this may never be noticed. A person's non-controversial opinions pass through normally, while their political statements are routed to a small echo chamber of like-minded people and never reach the audience they were intended for.
The possibilities for shadow suppression are limitless. Without algorithmic transparency, how would we even know how widely such practices are applied?
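To make the idea concrete, here is a minimal sketch of how such targeted, topic-specific suppression could sit inside a ranking pipeline. All names and parameters here are hypothetical; this is an illustration of the mechanism, not any real platform's implementation.

```python
# Hypothetical sketch: per-user, per-topic "reach multipliers" applied at
# ranking time. None of this reflects any real platform's code.

from dataclasses import dataclass, field


@dataclass
class SuppressionProfile:
    # Multiplier applied to the ranking score of a user's posts, keyed by
    # classified topic. 1.0 = untouched, 0.0 = effectively invisible.
    topic_multipliers: dict[str, float] = field(default_factory=dict)

    def multiplier(self, topic: str) -> float:
        return self.topic_multipliers.get(topic, 1.0)


def rank_score(base_score: float, author_id: str, topic: str,
               profiles: dict[str, SuppressionProfile]) -> float:
    """Scale a post's base ranking score by the author's per-topic multiplier."""
    profile = profiles.get(author_id)
    if profile is None:
        return base_score
    return base_score * profile.multiplier(topic)


# One user's political posts are quietly throttled while everything else
# ranks normally -- invisible to the user and to casual observers.
profiles = {"user_123": SuppressionProfile({"politics": 0.05})}

print(rank_score(10.0, "user_123", "cooking", profiles))   # 10.0
print(rank_score(10.0, "user_123", "politics", profiles))  # 0.5
```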
Generally, what is perceived as shadow banning is just the ranking algorithm learning that a poster gets low engagement, so it ranks their future content (comments, posts) lower.
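As an illustration of that point, a generic engagement-weighted ranking (a hypothetical formula, not any specific platform's) produces the same visible effect without any explicit ban flag:

```python
# Hypothetical sketch: engagement-based ranking that looks like a shadow ban.
# The author's historical engagement rate scales every new post's score, so a
# low-engagement poster sinks without any moderator action or ban flag.

def author_engagement_rate(interactions: int, impressions: int) -> float:
    # Smoothed engagement rate so brand-new authors aren't zeroed out.
    return (interactions + 1) / (impressions + 10)


def post_score(base_relevance: float, interactions: int, impressions: int) -> float:
    return base_relevance * author_engagement_rate(interactions, impressions)


# Same post quality, very different visibility:
print(post_score(1.0, interactions=500, impressions=5_000))  # ~0.1
print(post_score(1.0, interactions=5, impressions=5_000))    # ~0.0012
```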