Some potential scenarios:
- Agents from reputable big tech companies (e.g., Google, Microsoft): Would you trust code produced or executed by AI from such sources? Do these brands inspire confidence?
- Fully open agents: How do you feel about running code generated by fully open agents, where the agent shows the code and the user executes it? Does the ability to inspect and verify the code give you peace of mind?
- Audits and transparency: Does knowing that the AI agents and their outputs are audited by third-party experts change your comfort level? How important is transparency into how the model operates?
Where do you set the threshold for yourself, and how often do you cross it? Would love to hear everyone's thoughts and experiences!
We are fine with generated code being executed immediately (it has to be, really, to stay productive), but once we are satisfied with it, we won't let the PR get approved without review.
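For anyone who wants to enforce that last rule mechanically rather than by convention: one way (my assumption, not necessarily what the parent comment does) is GitHub branch protection, which can require an approving review before anything merges. A minimal sketch against the REST API, with hypothetical owner/repo names:

```python
import os

import requests

# Hypothetical repo coordinates; substitute your own.
OWNER = "example-org"
REPO = "example-repo"
BRANCH = "main"
TOKEN = os.environ["GITHUB_TOKEN"]  # needs admin access to the repo

# GitHub's branch protection endpoint: require at least one
# approving review before a PR can merge into the branch.
resp = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "enforce_admins": True,  # admins can't bypass the review gate either
        "required_status_checks": None,  # required fields; null disables them
        "restrictions": None,
    },
    timeout=30,
)
resp.raise_for_status()
print(f"Branch protection enabled on {OWNER}/{REPO}:{BRANCH}")
```

The nice property of doing it this way is that the "generate and run freely, but review before merge" policy stops being a team habit and becomes something the platform enforces.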
As a freelancer specializing in fixing and maintaining software, I welcome LLM-generated code. I look forward to a prosperous future with an even larger pile of bad code to work on. I especially like the subtle security errors that seem to plague LLMs; those mean premium emergency rates for me.