If you do, what workflows have worked best for you? What models do you trust? Do you believe that it may be a problem offloading thought you would otherwise do yourself to a model?
I also use them for completing specific functions/components to spec, as a kind of pairing. It lets me stay at a high level of cognitive engagement without getting bogged down.
A lot of my work is local; Qwen and Mistral models are my go-tos.
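Roughly, the spec-completion part looks like the sketch below. This is a minimal example, assuming an Ollama server exposing its OpenAI-compatible endpoint on localhost and some pulled coder model; the model tag and the spec prompt are just illustrative, not my exact setup.

    # Minimal sketch: spec-completion against a local model.
    # Assumes an Ollama server on localhost:11434 (OpenAI-compatible /v1
    # endpoint) and a pulled coder model tag; swap in whatever you run.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    spec = (
        "Write a Python function slugify(title: str) -> str that lowercases, "
        "strips non-alphanumeric characters, and joins words with hyphens. "
        "Return only the code."
    )

    resp = client.chat.completions.create(
        model="qwen2.5-coder",  # illustrative tag; any local Qwen/Mistral works
        messages=[{"role": "user", "content": spec}],
    )
    print(resp.choices[0].message.content)

The point is that the spec stays small and concrete, so I can review the output quickly and keep my own mental model of the codebase intact.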
I'm a senior web developer and help maintain hundreds of PHP repositories for work. I avoid using AI as best I can. When I do ask an LLM a question, I feel that I'm partaking in a dirty habit that I should quit. I don't have anything like that installed on my machine or integrated into my IDE. I feel that using LLMs to understand and solve problems is unreliable, a barrier to my personal development, a threat to the future of the industry I work in, unfair to those who wrote the content they are trained on, and bad for the environment.
I also largely feel the same way about other AI products, such as image generation.
I recently tackled a huge project solo, something large enough, and with high enough performance requirements, that I wouldn't have attempted it without a team to build it.
I have a working prototype in production thanks to AI; it's nearly entirely coded with AI. The project is big enough that I'm watching Gemini Pro take five minutes and still fail at simple string replacements, which pisses me off because it's burning my limit on its own failures. The limits are also pretty vague; I'm always surprised when I finally hit them.
Now, what I've achieved would have been utterly impossible for a non-dev to 'vibe code', no matter the quality of the AI model.
>If you do, what workflows have worked best for you? What models do you trust? Do you believe that it may be a problem offloading thought you would otherwise do yourself to a model?
I use many different models, and frankly I trust them all. GPT 20B is ultra fast and can be trusted. Different models suit different purposes; for big refactors, obviously, it's better to go with a big cloud pro model.