What happens after ChatGPT gets access to real world APIs?
I was just playing with the idea of ChatGPT replacing the backend of Alexa, Siri, etc., and I came up with some very dystopian scenarios.
Do APIs need a more restrictive permissions model? How do we protect the real world from hallucinating technologies that can interact with our physical infrastructure?
It already has that. There is nothing stopping you from getting an API key from OpenAI and using your own app to feed the model information from another API. They call these plug-ins. In fact, that's a big part of the company's business model.
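The wiring is simple enough to sketch. This is a toy version, not OpenAI's actual plug-in flow: `call_weather_api` is a made-up backend, and `model_reply` is a hand-written stand-in for a real function-calling response. The point it illustrates is that your code, not the model, is the thing that actually touches the outside world:

```python
import json

# Hypothetical external API the model is allowed to invoke.
# In a real plug-in this would be an HTTP call; stubbed here.
def call_weather_api(city):
    return {"city": city, "forecast": "sunny"}

# Registry of tools the model may request by name.
TOOLS = {"get_weather": call_weather_api}

# Stand-in for a model response asking to use a tool, shaped
# loosely like a function-calling reply (tool name + JSON args).
model_reply = {
    "tool": "get_weather",
    "arguments": json.dumps({"city": "Berlin"}),
}

def dispatch(reply):
    # The model never calls the API directly -- your glue code does,
    # which is exactly where a permissions check could live.
    fn = TOOLS[reply["tool"]]
    args = json.loads(reply["arguments"])
    return fn(**args)

print(dispatch(model_reply))
```

Any restrictive permissions model would have to live in that `dispatch` layer, because that's the only place a human-controlled gate exists.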
Imagine you had a team of experts at your disposal in most fields, all of them morally upstanding people without temperaments or emotions you had to manage, who could just answer fairly complicated questions for you. That's the end product of ChatGPT. It doesn't have a mind of its own; it just has some neat internal state machines that act to decompress data based on your specification. It's not solving problems, it's just pulling out the relevant data it's been trained on.
That's certainly an interesting question, and I don't have a direct answer to it. But I'm curious about your thoughts on this:
> Do APIs need a more restrictive permissions model?
How would you make a distinction between my Python code making a request to your API endpoint, and a GPT-controlled Python program making the same request?
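To make that concrete: from the server's side, the two requests can be byte-for-byte identical. Here's a minimal sketch (the URL and bearer token are placeholders) where one request is "hand-written" and the other is "generated", and nothing distinguishes them:

```python
from urllib.request import Request

# A request a human-written script might send...
human_req = Request(
    "https://api.example.com/v1/data",
    headers={"Authorization": "Bearer PLACEHOLDER_KEY"},
)

# ...and the same request as assembled by code an LLM generated.
llm_req = Request(
    "https://api.example.com/v1/data",
    headers={"Authorization": "Bearer PLACEHOLDER_KEY"},
)

# Same method, same URL, same headers -- the endpoint has no way
# to tell who (or what) authored the calling code.
print(human_req.get_method() == llm_req.get_method())
print(human_req.full_url == llm_req.full_url)
print(sorted(human_req.header_items()) == sorted(llm_req.header_items()))
```

Which suggests any "permissions model" would have to gate the credential holder, not the nature of the caller.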
If you've ever talked to Google Assistant, Siri, or Alexa... you realize just how dumb they all are... it would be amazing to have ChatGPT-level conversations with them.