We already have to deal with Tesla FSD hallucinations and the challenge of regulating them in serious, safety-critical applications like autonomous transportation.
I don't think we need an LLM behind the wheel hallucinating directions, leading the driver off a cliff or through a stop sign it got confused over.
> Seems like LLMs are very good at generalizing to random tasks that they're not necessarily trained for.
Some tools do generalize usefully to applications they weren't built for, but safety-critical systems are a different matter. LLMs are *absolutely* not suited to this use case.
Where they could add value is in dreaming up test scenarios, e.g. prompting:

give me some unusual and/or weird use cases a driver or autonomous vehicle might come across while driving (on the highway, at a crosswalk, etc.)
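
To make that concrete, here's a minimal sketch of what that could look like as an offline brainstorming step, well away from any control loop. It assumes the OpenAI Python client; the model name and prompt wording are illustrative, not a recommendation.

```python
# Minimal sketch: using an LLM offline to brainstorm edge-case driving
# scenarios for a test suite -- NOT in the vehicle's control loop.
# Assumes the OpenAI Python client; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Give me 10 unusual or weird situations a driver or autonomous "
    "vehicle might come across while driving (on the highway, at a "
    "crosswalk, etc.). One scenario per line."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice
    messages=[{"role": "user", "content": prompt}],
)

# Each non-empty line becomes a candidate scenario for human review
# before it is turned into an actual simulation test case.
scenarios = [
    line.strip()
    for line in response.choices[0].message.content.splitlines()
    if line.strip()
]
for s in scenarios:
    print(s)
```

The key design point is that the LLM's output never touches the driving stack: it only feeds a human-reviewed backlog of scenarios to simulate, so a hallucinated scenario costs you a wasted test at worst.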