What would ideal explainability even look like? For example, in deep learning, if we knew what individual neurons were doing, would that solve any problems? Even if we could pinpoint exactly why the model makes a particular error, the fix is usually still to throw more training data at it. Maybe increased explainability could help with more rational design of model architectures and training methods, but that has never been such a huge bottleneck in my experience.
SHAP, LIME, saliency maps, and the like have always felt pretty hand-wavy and inaccurate to me, and they don't provide much of the insight developers actually need. I feel like they exist mainly to placate AI/ML skeptics, so we can claim we sort of know what is going on inside our models whenever someone cares to ask.
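To be concrete about what these tools actually hand you, here is roughly the standard SHAP workflow (a minimal sketch assuming the shap and scikit-learn packages; the toy dataset and model are illustrative, not anything from this thread):

    # Minimal SHAP workflow sketch (illustrative only).
    import numpy as np
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Toy data and a tree ensemble, since TreeExplainer targets tree models.
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # What you get back is one attribution score per feature per sample:
    # a ranking of which inputs pushed each prediction up or down,
    # not an account of what the model is actually doing internally.
    print(np.array(shap_values).shape)

The output is a per-feature, per-prediction attribution score, which is exactly the part that feels hand-wavy when what you actually need is to debug the model.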
Am I missing something here? What specific efficiencies or insights would a developer benefit from if we had more explainable AI/ML models?
This is the reason there is little to no application of AI models in the most serious use cases, especially in high-risk environments like transportation, finance, medical systems, and legal applications. Where they are used, they are either extremely limited or forced to comply with very strict regulations because of their untrustworthiness. A lot of money and lives are at risk if it goes wrong.
Tesla FSD is a great example of this, and I guarantee that not many here trust the system to drive them safely, hands off the wheel, end to end every day, since it can get confused with no explanation of why it hallucinated. Not even Wall Street trusts AI black boxes like ChatGPT, which also regularly hallucinates its answers [0]. Once again, even where they do use it, it is in very limited, non-serious applications.
As you can see, this especially affects 'deep neural networks', which are what FSD and ChatGPT use; their black-box nature makes their outputs highly suspect and untrustworthy for anything serious. Hence the influx of 'chatbots', BS generators, etc. on the internet.
[0] https://news.bloomberglaw.com/banking-law/wall-street-banks-...
Well, none, but only because you can code: you understand what's being generated (probably because it has already been "sampled" from your private GH repos) and you're reading it as you use it.
On the other hand, remember that people want to be able to "prompt" for working code without an engineer present. Those people might be in for a major shock when code they never scrutinized starts doing things it shouldn't.
Something that might become interesting: because of "AI", the rate of code changes and the amount of code we're generating may increase exponentially, which means the models themselves might have trouble keeping up with the changes. That will be an interesting time. I also think ChatGPT has inherited pretty good, well-structured code bases and examples to train on, so when you receive a solution, you can easily understand what's happening.
The more copy-pasta, poorly structured code that ends up in the system, the more I think code quality will degrade and become harder to scrutinize. So it might be important that whatever bot you're using can explain what the code is doing...
Like the human brain, the "knowledge" is stored in those weights. There are billions, if not trillions, of them. To understand how any given output results from any given input, we would need to follow the paths and calculations through all of them. Some of the underlying math is very advanced, and there are feedback, feedforward, etc. paths on top of that.
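To make that concrete, here is a toy forward pass (a sketch with made-up sizes; real models have billions of parameters plus attention, recurrence, and other paths this ignores):

    # Toy two-layer network: the "knowledge" is nothing but these matrices.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(512, 64)), np.zeros(64)
    W2, b2 = rng.normal(size=(64, 10)), np.zeros(10)

    def forward(x):
        h = np.maximum(x @ W1 + b1, 0.0)  # nonlinearity mixes every input feature
        return h @ W2 + b2                # every output depends on every weight

    y = forward(rng.normal(size=512))
    # Explaining why y[3] took the value it did means accounting for all of
    # these parameters -- and this toy has no feedback or attention paths at all.
    print(W1.size + b1.size + W2.size + b2.size, "parameters behind 10 outputs")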
Without a doubt, a competently designed stack of layers, trained on quality data and used as intended, will produce good outputs. But with the current state of the art, only the most experienced folks are in any position to judge whether a given black box functions as claimed.
A black box AI algorithm that generates shirts in a video game? Sure, sign me up. Worst case, the shirt looks weird or the game crashes. I’ll live on the edge a little and file some bug reports.
A black box AI algorithm that handles critical safety systems? This requires more thoughtful scrutiny.
AI: John and Peter are good, I'm sending the money now. Firing Tom...
Human: Why did you decide on those?
AI: I don't know, it seemed like a good idea at the moment.