HACKER Q&A
📣 samstave

Discernment Lattice [Prompting]


There have been questions about GPT response quality, and this comes from my experience wrangling GPTs to function to spec over the past while.

I am posting this as its own Ask HN because the topic has spanned multiple comment threads, and I'd like to ask the HN community about their understanding and use of the Discernment Lattice | Domain concept. I only just discovered it, so I am interested to know whether this is already what others are doing. What are your thoughts on this?

The following is more or less my discovery of the concept over the last bit - so forgive me if this is uninteresting to you, but it feels sound... Is this pedestrian? Or interesting? Is everyone doing this already and I am just new to the playground?

####

Discernment Lattice:

https://i.imgur.com/WHoAXUD.png

A discernment lattice is a conceptual framework for analyzing and comparing complex ideas, systems, or concepts. It's a multidimensional structure that helps identify similarities, differences, and relationships between entities.

---

@Bluestein https://i.imgur.com/lAULQys.png

Questioned whether the Discernment Lattice had any effect on the quality of my prompt's output, so I thought about something I had asked the AI regarding yesterday's HN thread from ScreenShotBot and their "no db" architecture... and had it compare some things, and:

https://i.imgur.com/2okdT6K.png

---

I had used this method, in a more organic way, when I asked for an evaluation of Sam Altman from the perspective of an NSA cyber-security profiler, and it was effective even that first time I used it.

https://i.imgur.com/Ij8qgsQ.png

..

https://i.imgur.com/Vp5dHaw.png

Cite its influences.

https://i.imgur.com/GGxqkEq.png

---

With that said, I then thought about how to better use the Discernment Lattice as a premise to craft a prompt from:

>"provide a structured way to effectively frame a domain for a discernment lattice that can be used to better structure a prompt for an AI to effectively grok and perceive from all dimensions. Include key terms/direction that provide esoteric direction that an AI can benefit from knowing - effectively using, defining, querying AI Discernment Lattice Prompting"

https://i.imgur.com/VcPxKAx.png

---

So now I have a good little structure for framing a prompt concept to a domain:

https://i.imgur.com/UkmWKGV.png

So, as an example, I checked its logic by evaluating a stock, NVIDIA, in a structured way.

https://i.imgur.com/pOdc83j.png

But really what I am after is how to structure things into a Discernment Domain - what I want to do is CREATE a Discernment Domain as a JSON profile and then feed that to the Crawlee library to use as a structure to crawl...
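To make that concrete, here is a minimal sketch of what one of those JSON profiles might look like for the stock example above - every field name here (domain, dimensions, facets, seed_urls, crawl_instructions) is my own guess at a shape, not an established schema:

    {
      "domain": "equity_analysis",
      "entity": "NVIDIA",
      "dimensions": [
        {"name": "financials", "facets": ["revenue growth", "margins", "guidance"]},
        {"name": "competitive_position", "facets": ["datacenter GPUs", "CUDA moat", "AMD vs Intel"]},
        {"name": "risk", "facets": ["export controls", "customer concentration", "valuation"]}
      ],
      "seed_urls": ["https://investor.nvidia.com/"],
      "crawl_instructions": "prefer primary sources; tag each finding with its dimension"
    }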

But to do that, I want to serve it as a workflow to a txtai library function that checks my Discernment Lattice Directory for instructions on what to crawl for:

https://i.imgur.com/kNiVT5J.png
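For what it's worth, here is a rough sketch of that workflow as I understand it, assuming the hypothetical profile shape above, with requests/BeautifulSoup standing in for the Crawlee crawl step (I haven't pinned the Crawlee API down here) and txtai doing the indexing:

    # Sketch: walk a "Discernment Lattice Directory" of domain JSON profiles,
    # fetch each domain's seed URLs, and index the page text into txtai.
    # The directory name, profile fields, and the query are illustrative guesses.
    import json
    import pathlib
    import requests
    from bs4 import BeautifulSoup
    from txtai.embeddings import Embeddings

    lattice_dir = pathlib.Path("lattices")
    embeddings = Embeddings({"path": "sentence-transformers/all-MiniLM-L6-v2", "content": True})

    docs, uid = [], 0
    for profile_path in lattice_dir.glob("*.json"):
        profile = json.loads(profile_path.read_text())
        for url in profile.get("seed_urls", []):
            html = requests.get(url, timeout=30).text
            text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
            # store the domain alongside the text so results can be sliced later
            docs.append((uid, {"text": text[:4000], "domain": profile["domain"]}, None))
            uid += 1

    embeddings.index(docs)
    print(embeddings.search("datacenter revenue growth", 3))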

This looks promising; let's take it to the next step:

https://i.imgur.com/Lh4luiL.png

--

https://i.imgur.com/BiWZM86.png

---

In closing: context window sizes are way smaller than you expect, the project context directories are not well respected by the bots thus far, and hallucinations, memory dumps, and other nefarious actions are rampant.

So - Discernment Domain Scaffolding and Lattice files to keep a bot in check.

So I thought out loud through the above, and I am going to attempt to use a library of Discernment Domain Lattice JSONs to try to keep a bot on topic. AI ADHD vs. human ADHD is frustrating as F... The plan: iteratively update the lattice templates for a given domain, then point the Crawlee researcher at them to fold findings into a structure based on them (a rough sketch follows below)... So for the stock example, then slice across things in interesting ways.
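Here is the rough shape of that "fold findings into a thing" step, building on the txtai index from the earlier sketch; fold_findings, the findings field, and the file name are hypothetical, and the SQL-style slice assumes txtai's content storage is enabled as above:

    # Sketch: for each dimension in a lattice profile, pull the top matches out
    # of the index and write them back into the profile, so the template can be
    # iteratively updated after each crawl pass.
    def fold_findings(embeddings, profile, limit=3):
        for dim in profile["dimensions"]:
            query = " ".join(dim["facets"])
            # txtai SQL lets us slice results by the stored "domain" field
            hits = embeddings.search(
                f"select text, score from txtai where similar('{query}') "
                f"and domain = '{profile['domain']}'", limit)
            dim["findings"] = [h["text"][:200] for h in hits]
        return profile

    # e.g. update one profile in place and save it back to the lattice directory
    profile = json.loads((lattice_dir / "equity_analysis.json").read_text())
    (lattice_dir / "equity_analysis.json").write_text(
        json.dumps(fold_findings(embeddings, profile), indent=2))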


  👤 spacebacon Accepted Answer ✓
Are we operating on a similar question with two different approaches?

https://github.com/space-bacon/Semiotic-Analysis-Tool

I’m going to put some time into your question today and hopefully return with a more useful response.

I wanted to shamelessly get my early-stage work on semiotic analysis in front of your eyeballs in the meantime, as I see this as one of the more valuable pieces of content I've consumed today, and it could help improve my script's direction as well.


👤 kingkongjaffa
When I see people evaluating LLMs in this way I can't help but think they are letting emotion get the better of them:

> I am using the incorrect phrasing, but I've been heavily using Claude's "Project Folders" (paid account) - and when I put "context" files into the project folders, it will "forget" that the files are there - and I'll call it out when it switches back to boilerplate response language -- and it apologizes and says "You are correct, I SHOULD HAVE BEEN using the project files for context."

You should probably try to implement your own RAG system with free models locally (ollama, langchain, chromadb can do it, it's very straightforward) so you can understand the process a bit more under the hood.
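Something like the sketch below is roughly all it takes - this assumes the chromadb and ollama Python packages plus a locally pulled model, skips langchain for brevity, and the chunk sizes, file name, and model name are arbitrary choices:

    # Minimal local RAG sketch: chunk a document, store the chunks in chromadb
    # (which embeds them with its default model), retrieve the closest chunks
    # for a question, and stuff them into a prompt for a local ollama model.
    import chromadb
    import ollama

    text = open("notes.txt").read()  # any local document you want to ask about
    # naive fixed-size chunking: 500-char chunks with ~100 chars of overlap
    chunks = [text[i:i + 500] for i in range(0, len(text), 400)]

    collection = chromadb.Client().create_collection("docs")
    collection.add(documents=chunks, ids=[str(i) for i in range(len(chunks))])

    question = "What does the author mean by a discernment lattice?"
    hits = collection.query(query_texts=[question], n_results=3)
    context = "\n".join(hits["documents"][0])

    reply = ollama.chat(model="llama3", messages=[
        {"role": "user",
         "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}"}])
    print(reply["message"]["content"])

The chunking line is the part worth experimenting with; it is the same "targeted and concise" point below.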

> How/why is it occurring mid conversation?

I dunno, usually when something is retrieved it is added to context. But a key part of RAG is determining how to chunk up your content so that the prompt embedding matches up to information that is actually targeted and concise.

So if RAG is behaving suboptimally, the first thing to check is: are the input documents targeted and concise? If too much context is being stuffed into the prompt, the results will be poor.

You can see this even with just very large prompts: the larger the context, the worse the quality, despite model developers claiming ever-larger context windows.

I don't think inventing your own terms (which therefore have extremely weak embeddings to match against the embeddings of the model's training content) is the right way to go.

If I chuck "what is a discernment lattice" into gpt4o I get:

> A "discernment lattice" isn't a widely recognized term in most common fields of study, but it can be interpreted in a few ways depending on context. Here's a breakdown of potential meanings and applications:....

So it's not really giving the model valuable tokens to work with.


👤 Bluestein
> context windows in all the GPTs are a lie

The above is a very very bold claim, to say the least.-


👤 Jabrov
What is a discernment lattice and what are you smoking?