Has anyone taken this to the extreme yet? Like feeding real-world input into it, then using the responses to effect real-world actions? There's a limit to the amount of state it can hold, but you can still fit a lot of data in there even with the current version.
For example, you could pipe in real-world conversation transcripts and ask it for suggestions of music to match the tone of the conversation, then take the response and play it through loudspeakers.
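A rough sketch of that loop in Python — the LLM call is stubbed out with a placeholder function (everything here, including the `mpv` player command, is an assumption, not a reference to any particular product):

```python
import subprocess

def suggest_track(transcript: str, llm) -> str:
    # Ask the model for one song title matching the conversation's tone.
    prompt = (
        "Here is a conversation transcript:\n"
        f"{transcript}\n"
        "Suggest one song whose mood matches the tone of this conversation. "
        "Reply with only the song title."
    )
    return llm(prompt).strip()

def play(track: str) -> None:
    # Hand the suggestion off to a local player -- swap in whatever you use.
    subprocess.run(["mpv", f"ytdl://ytsearch1:{track}"])

# Stub standing in for a real LLM API call, so the sketch runs
# without credentials; replace with your provider's client.
def fake_llm(prompt: str) -> str:
    return "Clair de Lune"

track = suggest_track("A: Long day... B: Yeah, let's just relax.", fake_llm)
print(track)
```

The interesting part is just the glue: transcript in, structured suggestion out, side effect in the real world.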
There's a lot of money to be made in hooking these "brains" up to apps. I hope people get on it, since it will be quite convenient.
See:
Act-1: https://news.ycombinator.com/item?id=32842860
Open-Assistant: https://github.com/LAION-AI/Open-Assistant
Toolformer: https://arxiv.org/abs/2302.04761
Understanding HTML with LLMs: https://arxiv.org/abs/2210.03945