Personally, I've found that the paradigm of "sharp knives in the drawer" can lead to some pretty nasty output from LLMs; it's felt to me like the higher the ambiguity the higher the variance in quality.
This has shifted my approach to A&D:
* Always enforce strict contracts, i.e. the ONLY way to do X is through Y.
* Fail loudly and fail often; assumptions and fallbacks only encourage the AI to make larger assumptions.
* Boring is better; the less magic you implement, the easier it is for LLMs to understand and extend.
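To make the first two bullets concrete, here's a minimal sketch of what "strict contracts" plus "fail loudly" can look like (all names here are illustrative, not from any real codebase): one blessed entry point for building a config, with no silent defaults.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    api_url: str
    timeout_s: int

def load_config(raw: dict) -> Config:
    # Strict contract: this function is the ONLY way to get a Config.
    # No fallbacks: a missing or mistyped key is an immediate, loud error,
    # so an LLM (or human) extending this can't build on a silent default.
    missing = {"api_url", "timeout_s"} - raw.keys()
    if missing:
        raise ValueError(f"config missing keys: {sorted(missing)}")
    if not isinstance(raw["timeout_s"], int):
        raise TypeError("timeout_s must be an int, got "
                        f"{type(raw['timeout_s']).__name__}")
    return Config(api_url=raw["api_url"], timeout_s=raw["timeout_s"])
```

Contrast with the fallback style (`raw.get("timeout_s", 30)`): the fallback version "works" on bad input, which is exactly what lets a model quietly compound a wrong assumption.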
Anyone else have some nuggets of truth they'd want to share as it pertains to A&D + AI?
I'm far more willing to try new libraries or new frameworks.
>Personally, I've found that the paradigm of "sharp knives in the drawer" can lead to some pretty nasty output from LLMs; it's felt to me like the higher the ambiguity the higher the variance in quality.
I learnt this one when trying to do video game programming. The models just aren't as competent in more niche programming domains.
Something to add to your list: context management pushes me to break code out into utility files. Keep your files small, and an agentic LLM will only ever be looking at small files, using less context overall. Rarely does an LLM need the entire project in context; better yet, the smaller you keep files, the more reliable it becomes, since accuracy drops as context grows and the model starts forgetting earlier parts of it.