Is the experience chaotic? Somewhat like letting the co-driver control the car simultaneously?
There were countless times when we were discussing a bug while working in the same notebook: we'd have a code snippet that reproduced the bug, propose a fix right there in the same notebook, and test that it worked. I love that recursion of using the tool to improve the tool itself.
So that's the meta, unintended use of the capability: improving the very tool that provides it.
The principal reason we added that feature was to pair-program on the same notebook to refine, optimize, or troubleshoot ML code for our work for clients.
Soon, we'll use it in our hiring process, given that the questions for some of our roles revolve around a machine learning problem. This way we'll see the applicant's experiments, models, iterations, and so on, and they'll be able to deploy their models, monitor them, and use them seamlessly. It will reduce friction for people with a more academic background, which is one of the reasons we built this in the first place. We'll also be able to go back and forth with the candidate and see how they work through the problem.
It also works well when diagnosing outages, though in that case you aren't writing code so much as trying to understand it.