HACKER Q&A
📣 SeanAnderson

Is it possible to effectively discuss code architecture with AI yet?


I'm reasonably happy with the value I get out of talking with ChatGPT about functions, but I really want to discuss architecture. I've got a personal project that I'm working on mostly by myself. It's written in a language (Rust) and paradigm (ECS) which are both new to me. I frequently lament that I have nobody to discuss pros/cons of various architectural design choices with, but this isn't an issue at a function level. I want to be able to point my codebase (GitHub, IDE, or even concat'ing files) at a tool and talk to it about pros/cons of architecting in different ways. I want it to be able to judge the architecture decisions I've made and identify dumb mistakes where I'm going against the grain. I don't need it to design the architecture from scratch. I just want to know when I'm doing something obviously dumb at an architecture level.

I've tried a little with ChatGPT, but context length is an issue and it's a huge PITA trying to give it awareness of tons of small modules. I have copilot, but, to date, I've only really found myself using it as a glorified comment writer.
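For what it's worth, the concat'ing-files approach can be made less painful with a small script that bundles a source tree into one paste-ready blob, with each file prefixed by its path so the model can refer to modules by name. This is just a minimal sketch, not a real tool: the `src` root, the `.rs` extension filter, and the character budget are all assumptions you'd tune for your own project and model context window.

```python
import os

def bundle_sources(root: str, exts=(".rs",), max_chars=60_000) -> str:
    """Concatenate source files under `root` into one prompt-ready string.

    Each file is prefixed with a `// ===== path =====` header so the model
    can reference modules by name. Files that would blow past `max_chars`
    (a rough stand-in for the model's context budget) are listed but omitted.
    """
    chunks = []
    total = 0
    # Sort walk results so the bundle is deterministic across runs.
    for dirpath, _dirnames, filenames in sorted(os.walk(root)):
        for name in sorted(f for f in filenames if f.endswith(tuple(exts))):
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as f:
                text = f.read()
            header = f"\n// ===== {path} =====\n"
            if total + len(header) + len(text) > max_chars:
                # Still mention the file, so the model knows it exists.
                chunks.append(f"\n// ===== {path} (omitted: over budget) =====\n")
                continue
            chunks.append(header + text)
            total += len(header) + len(text)
    return "".join(chunks)
```

Characters are only a crude proxy for tokens, but for "here is my codebase, critique the architecture" prompts the path headers matter more than exact budgeting: they let you ask follow-ups like "why does module X depend on Y?" without re-pasting anything.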

Does this exist yet? If so, what's a good approach? If not, is the limitation due to context length or is it fundamentally an issue that LLMs will struggle to intelligently discuss?


  👤 tj800x Accepted Answer ✓
I've been using LLMs to discuss fairly challenging topics and have had some success. Here are a few hints:

1) Assume that LLMs work like improvisational actors in that they are in the mode of saying, "Yes, and...". By that I mean that they will mostly agree with what you write without providing a real critique of your ideas unless so prompted. If you don't ask critical questions the AI will happily entertain crazy ideas. I bet I could get it to happily act as if it believed the moon is made of cheese with sufficient prompting effort.

2) Use multiple sessions to explore and discuss separate topics. Use your "main context" with purpose.

3) Ask a series of "warm-up" questions. I've found that the quality of reasoning differs immensely depending on whether the AI has already been discussing a topic. Asking a question, then asking the AI to continue (several times), then asking it to reconsider its responses seems to build up the knowledge network required for a pretty good response.

4) Some LLMs will eventually either stop working or start to slow down as their context fills up. I've found the best way around this is to ask one AI to generate concise prompts with minimal formatting to help another AI get up to speed. Don't let it get too meta, though. Two AIs will happily "fluff" each other's egos and get nothing done over dozens of prompts.

5) Use "Less Meta." as a command when an AI talks about how to do something instead of actually doing it.

6) Pay attention when an AI says it cannot do something. It might start lying to you if you continue to push it.