What exactly does it mean? I often hear people say it to try to prove a point about AI safety, but isn't that the whole point of AI? Neural networks find features via gradient descent so humans don't have to encode all of the features.
Yes, but what are those features? What did the model decide was important? Often we aren't really sure, and we can't ask the model to justify its decision.
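To make that concrete, here is a minimal sketch in plain NumPy (the tiny XOR network is a made-up illustration, not any real system): gradient descent finds hidden-layer weights that solve the task, but reading those weights gives no human-interpretable justification.

```python
import numpy as np

# A one-hidden-layer network learns XOR by gradient descent. The
# "features" it finds are just the hidden-layer weights, and inspecting
# them does not tell you what the model decided was important.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
b2 = np.zeros(1)

sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent update
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # predictions should be ~[0, 1, 1, 0]
print(W1)             # the learned "features": just opaque numbers
```

The predictions come out right, but nothing in W1 tells you why, and that gap is exactly the complaint.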
This is why it is very difficult for anyone to completely trust these AI systems in high-risk situations. You would not board an Airbus jumbo jet with no human pilots, controlled end-to-end by an AI from A to B. (That is not the same thing as autopilot.)
Just ask the Turing Award-winning AI experts (Hinton, LeCun, and Bengio): given the opacity of these models, not even they know why these systems hallucinate.
No. The feature of artificial neural networks is automated function approximation. Not understanding the output, that is, the learned function and its approximation, is not a feature but an issue.
It has been called "the problem of transparency" by some authors.
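A minimal sketch of that distinction, again in plain NumPy (the random-feature model is an assumed stand-in for a hidden layer, not any published method): gradient descent approximates sin(x) from samples alone, and we can verify the approximation empirically, but the fitted weights do not explain the function they approximate.

```python
import numpy as np

# Automated function approximation: fit sin(x) from samples by
# gradient descent. The error is checkable; the weights are not
# an explanation of sine.
rng = np.random.default_rng(1)
x = np.linspace(-np.pi, np.pi, 200)
y = np.sin(x)

# random-feature expansion (a stand-in for a hidden layer)
freqs = rng.normal(0, 1.5, 50)
phases = rng.uniform(0, 2 * np.pi, 50)
Phi = np.cos(np.outer(x, freqs) + phases)   # shape (200, 50)

w = np.zeros(50)
for _ in range(5000):
    err = Phi @ w - y
    w -= 0.1 * Phi.T @ err / len(x)         # gradient of mean squared error

print(np.abs(Phi @ w - y).max())  # approximation error: should be small
print(w[:5])                      # the weights themselves: not illuminating
```

We can trust the approximation where we have tested it; what we cannot do is read the function back out of the weights.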
Artificial intelligence, in general, is the construction of automated problem solvers, and those solvers can be (and historically have largely been) deterministic.
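For contrast, here is a sketch of that classical style (the toy maze is a hypothetical example): a breadth-first search whose every step can be inspected and whose output is identical on every run.

```python
from collections import deque

# Classical, deterministic AI: breadth-first state-space search.
# Every step is inspectable, the same input always yields the same
# output, and the returned path is an explicit, human-readable trace.
MAZE = ["S.#",
        ".##",
        "..G"]

def solve(maze):
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if maze[r][c] == "S")
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if maze[r][c] == "G":
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None

print(solve(MAZE))  # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```

Here the returned path is the explanation, which is precisely what the learned weights in the earlier sketches lack.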