HACKER Q&A
📣 duringmath

Are we sure LLMs are that useful in a web search application?


Idly chatting with the search box doesn't strike me as the most productive use of my time.

Instant answers, or whatever they're called, already produce direct answers, plus they cite sources and provide links, which is what everyone seems to think is the solution to the "LLMs make stuff up" problem.

Not to mention they're faster and cheaper to run.

The only truly practical use case I can think of is summarizing articles or writing them, which makes more sense as a word processor or browser add-on.


  👤 agentultra Accepted Answer ✓
I think there are plenty of people that remain skeptical of their utility for this application.

People who want to get rich will tell you it's the next greatest thing that will revolutionize the industry.

Personally, I've been annoyed at how confidently wrong ChatGPT can be. Even when you point out the error and ask it to correct the mistake it comes back with an even-more-wrong answer. And it frames it like the answer is completely, 100% correct and accurate. Because it's essentially really deep auto-complete, it's designed to generate text that sounds plausible. This isn't useful in a search context when you want to find sources and truth.

I think there are useful applications for this technology, but I think we should leave that to the people who understand LLMs best and keep the charlatans out of it. LLMs are really interesting and have come a long way by leaps and bounds... but I don't see how replacing entire institutions and processes with something that is only well understood by a handful of people is a great idea. It's like watering plants with Gatorade.


👤 visarga
On Google, when I search for "X that does not Y", it often returns "X that does Y" instead. It's hard to force Google to respect the query intent. LLMs, on the other hand, process intent very well, but they hallucinate. So the obvious solution is a combination of the two.

Ideally Google search would have a flag to "follow my intent to the letter" and return empty if nothing is found. When you are searching for a specific thing, a response with other things feels like bullshit: Google trying to milk more clicks, wasting your time. I don't mean exact phrase search, I mean exact semantic search.

This causes issues when searching for bug fixes (it ignores the exact version), when shopping (it ignores some of your filters and responds with bad product suggestions), and when searching for something specific that looks like some other popular keyword: it gives you the popular one, as if it has an exclusion zone and you cannot search for anything else around it.

"Minimum weight of a scooter with back suspension" -> matches information about carrying capacity. Of course more people discuss about max passenger weight than minimum scooter weight, but I really don't care about the other one.


👤 syats
Yes, but not in the form of chatbots.

Among other things, an LLM can be seen as a store which you query and get results from. A chatbot is cute because it formats output text to look like conversation, and the recent applications are nice because the query (now known as a prompt) can be complicated and long, and can influence the format and length of the results.

But the cool stuff is being able to link the relatively small amount of text you input as a query to many other chunks of text that are semantically similar (waves hands around like helicopter blades). So an LLM is a sort of "knowledge" store that can be used for expanding queries and search results, to make it more likely that a good result looks similar to the input query.

What do I mean by similar? Well, the first iteration of this idea is vector similarity (e.g. https://github.com/facebookresearch/DPR). The second iteration is to store the results in the model itself, so that the search operation is performed by the model directly.
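To make that first iteration concrete, here's a minimal toy sketch of vector-similarity retrieval. The hashing-trick `embed` is a crude stand-in for DPR's learned BERT query/passage encoders, and the passages are made up; a real system would use trained dense encoders and an approximate-nearest-neighbor index.

```python
import hashlib
import math

DIM = 4096  # hashing-trick dimensionality, kept large so collisions stay rare


def embed(text):
    """Crude bag-of-words hashing embedding, normalized to unit length.
    A stand-in for a learned dense encoder, which this toy obviously is not."""
    v = [0.0] * DIM
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM
        v[h] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]


def search(query, passages, k=2):
    """Rank passages by cosine similarity to the query embedding."""
    q = embed(query)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return sorted(passages, key=lambda p: dot(q, embed(p)), reverse=True)[:k]


passages = [
    "DPR encodes questions and passages with two BERT towers",
    "scooters with rear suspension and their weight",
    "recipes for sourdough bread",
]
print(search("questions and passages", passages, k=1))
```

The point of the two-function split is the same as in DPR: passages can be embedded once offline, and only the query is embedded at search time.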

This second iteration will lead, IMHO, to a different sort of search engine. Not one over "all the pages" as, in theory at least, google and the like currently work. Instead, it will be restricted to the "well learnt pages", those which, because of volume of repetition, structure of text, or just availability to the training algorithm, get picked up and encoded into the weights.

To make an analogy, it's like asking a human who the Knights of the Round Table are and getting back the usual "Percival, Lancelot and Galahad", just because the other thousand knights mentioned in some works are not popular enough for that given human to know them.

This is a different sort of search engine than we are used to, one which might be more useful for many (most?) applications. The biases and dangers of it are things we are only starting to imagine.


👤 softwaredoug
As someone that works in search - TBH - we don't really know. And the point isn't that people "know"; it's more about hedging that there likely is some kind of use case where it becomes important. Nobody knows until they try something. You're right to be skeptical.

However, it's been observed that people are using chatbots for informational searches - the kinds of searches where you want to learn about a specific fact. This isn't all searches, but it's an important subset of web search. For better or worse, and probably with a high degree of inaccuracy, people (probably rightly) perceive this as how people will seek information.

There are also the generative use cases - "write me a program that does X". Is this something people would use a search bar for? We don't know, and won't know, until it's out there for a while.

For the longest time the one natural language interface was the search bar. So search vendors surmise it's important both to defend their turf and to use a natural way of getting regular users familiar with this kind of informational interaction...


👤 ergonaught
It's all going to go horribly wrong because people, but, yes:

1) Integration with voice assistants. Links/sources are irrelevant.

2) Models tuned against a particular body of work don't care if links go stale, or websites get SlashdottedHackerNewshuggedDDOSed, or etc. Links/sources are irrelevant.

3) Inbound "service requests" processed by something that can better understand the question and the available answers/solutions. Links don't matter much.

4) When "Okay what are some good websites to read more about this?" can be answered, too, bang.

5) Ever asked somebody a question and just rolled with their answer instead of demanding citations? I mean, you're doing it here. So, again, yes.


👤 anonyfox
Sometimes I don't really know what exactly to type into Google because it's only something rough in my head. It then takes multiple try-and-research iterations on the topic I have in mind until I'm able to formulate the actual question, at least if I wasn't derailed in the process (monkey brain). A chatbot is a godsend for me here and I will happily pay money for that alone.

Another point is that I am either creative or productive at any given time, but never both... at least I'm aware of which state I am in. ChatGPT has proven surprisingly good at taking over the other part, i.e.:

- when I am in a productive mood and stumble upon a thinking problem, generative AI is like on-the-spot creativity for "good enough" solutions, like naming a programming thingy or writing some filler text around a few keywords instead of me looking for words.

- when I am in a creative mindset, I increasingly feed some code snippets into the bot and ask some questions to "fill in the gaps", like writing a specific function using library X, then to write a documentation explaining how it works, then to also write some unittests, and sometimes I even derail a bit or let the bot explain parts that stand out in some way so I can maybe learn a trick.

... And I've used ChatGPT in kinda emergency situations already, like when I knew 5 minutes in advance that I had to speak in front of a crowd/in a meeting, and it gave me extremely useful outlines to quickly adapt, even in a panicked mind state - calming me down through a given structure that sounded okay-ish. And it doesn't matter if the response is right or wrong.


👤 PaulHoule
Depends how you use them.

Generating bullshit text off the cuff is not the only use of LLMs. LLMs can perform very well at classification, regression, ranking, coloring proper names red, and other tasks. You could, for instance, use LLMs to encode a query and documents and rank them with a siamese network, something not too different from how a conventional search engine works.

If there is one thing wrong with the current crop of LLMs, it is that they can only attend over a limited number of tokens. BERT can attend over 512 tokens, ChatGPT over 4096, where a token is shorter than a word on average. It's easy to process the headline of an HN submission with BERT, but I classify a few hundred abstracts of scientific papers a day. A long abstract is about 1800 words, which is too much for Longformer but would fit in ChatGPT if there aren't too many $10 words.

Unless you can recast a problem as "does this document have a short window in it with this attribute?" (maybe "did the patient die?" in a clinical case report or "who won the game?" in a sports article) there is no way to cut a document up into pieces and feed it into an LLM, then combine the output vectors in a way that doesn't break the "magic" behavior the LLM was trained to do.
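The "short window" recasting above can be sketched roughly. This toy uses whitespace-split words as a stand-in for real subword tokens, and a keyword predicate standing in for an actual per-chunk LLM call; the point is just the shape of the pattern, not the tokenizer.

```python
def windows(text, size=512, overlap=64):
    """Split a document into overlapping word windows sized to fit a
    limited attention span (words as a rough stand-in for subword
    tokens, which are shorter than words on average)."""
    toks = text.split()
    step = size - overlap
    return [" ".join(toks[i:i + size])
            for i in range(0, max(len(toks) - overlap, 1), step)]


def any_window_has(text, predicate):
    """The one chunking pattern that survives a limited window:
    'does this document have a short span with this attribute?'"""
    return any(predicate(w) for w in windows(text))


# A 1200-word report whose answer lives inside one small window.
doc = " ".join(f"word{i}" for i in range(1200)) + " the patient died"
print(any_window_has(doc, lambda w: "patient died" in w))  # -> True
```

The overlap exists so a fact straddling a chunk boundary still lands whole inside some window; anything requiring attention across distant windows is exactly what this pattern cannot do.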

You'd imagine ChatGPT would produce accurate results if you could tell it "Write a review of topic T with citations", but if you try that you'll find it will write citations that look real but don't actually exist if you look them up. You'd imagine at minimum that such a system would have to read the papers that it cites, maybe being able to attend over all of them at the same time, which would take an attention window 100-1000x larger.

That's by no means cheap and it might be Kryptonite for Google in that Google's model involves indexing a huge amount of low quality content and financing it by ads that are a penny a click. A business or individual might get a larger amount of value per query out of a much smaller task-oriented document set.


👤 h2odragon
When you need an introduction to the subject area as much as you needed specific information from it; then an LLM explaining terms and offering options for further exploration can be a nice thing. (I assume they'll use it that way...)

When you're hunting for a particular fact, like "that bit of code I half remember seeing on a page 15 years ago", then I don't see anything for an LLM to add. Google had a pretty good index for that purpose about 15 years ago, but they've chosen to prioritize other goals since then. I dunno if anyone works the "find things you're searching for" market now.

Which is an answer to your question: Does it matter if an LLM helps search the web? That's not what people are doing, that's not what these companies are selling.


👤 fnordpiglet
They don’t solve the web search problem better than web search engines do. But most people aren’t interested in a document retrieval exercise; they have a question and want an answer to it. The interface of posing a cryptic phrase to find a collection of “relevant” text blobs that you then have to sift and read to try to piece together an answer isn’t ideal for answering a question. When we ask a professor a question, they sometimes answer with a list of papers, sometimes with a long-form answer, and sometimes both. I think all three are useful, depending on the context. But the last one is clearly the most useful. It provides an intuition up front and an opportunity to ask clarifying questions, as well as a way to find more detailed information and understanding from a known credible list.

LLMs have a chance to offer an oracle that doesn’t answer in cryptic or evasive ways, but attempts to just give an answer. The hallucinations are a huge flaw, but one that I’m confident will be addressed with other, non-LLM AI approaches. But it’s the right user interface for answering questions - it answers questions with answers, not a pile of potentially relevant documents to sort. The augmentation with citations, especially if they’re semantically relevant rather than symbolically matched, is a huge plus.


👤 remarkEon
I just used it this morning to create meal plans, including grocery lists and detailed cooking instructions, for the next 2 weeks. What would've taken probably 30+ minutes - googling around, scrolling past the blogspam nonsense, writing everything down - took about 10. This morning was probably the first time I started to understand the utility of this thing. In a way, it's like I'm finally interacting with the computer on the USS Enterprise.

I don't know if that strictly complies with your definition of "web search application". It's definitely going to save time for me, and not seeing a bunch of ads during the process is wonderful - to the point that I really could see myself paying for it if they decide to go that route and take away the "free" version.


👤 thesuperbigfrog
Years ago there was an idea called "the semantic web": https://en.wikipedia.org/wiki/Semantic_Web

The basic idea was to have enough metadata about web sites so that you could get programs to do something approaching Prolog-style reasoning about the content and meaning of the web pages.

More advanced LLMs look like a slightly different approach to achieving something like the semantic web idea.

I think the idea is to constantly feed the model with updates from crawling the web and have the LLM "digest" the content, apply some filters to remove bad stuff, and then provide a meaningful result to whatever queries it might be asked.


👤 louthy
I think the summarisers are going to be valuable. As a Kagi subscriber I'm looking forward to them integrating their AI labs demo that was showcased recently. There's certainly potential for search to become an order of magnitude better over the next few years.

The issue I see with the chat approach is trust. I've seen so many examples of these models just making shit up now that I reckon regular use of them will eventually lead to mistrust between the human and the chat-bot. If you can't trust the answers and have to go and check yourself, it's dead as an idea IMHO.


👤 stevenbedrick
There was a great paper at CHIIR looking at this question from an information science perspective that I would definitely recommend anybody working or interested in this space read; here's the abstract:

> Search systems, like many other applications of machine learning, have become increasingly complex and opaque. The notions of relevance, usefulness, and trustworthiness with respect to information were already overloaded and often difficult to articulate, study, or implement. Newly surfaced proposals that aim to use large language models to generate relevant information for a user’s needs pose even greater threat to transparency, provenance, and user interactions in a search system. In this perspective paper we revisit the problem of search in the larger context of information seeking and argue that removing or reducing interactions in an effort to retrieve presumably more relevant information can be detrimental to many fundamental aspects of search, including information verification, information literacy, and serendipity. In addition to providing suggestions for counteracting some of the potential problems posed by such models, we present a vision for search systems that are intelligent and effective, while also providing greater transparency and accountability.

Shah & Bender (2022) "Situating Search". In Proc. CHIIR '22 https://dl.acm.org/doi/abs/10.1145/3498366.3505816


👤 karaterobot
I'm fairly sure it can be a useful part of a web search application. There are things it could aid in: summary, evaluation of results, and probably many other things I can't imagine. It's got some use. But I'm not at all sure an LLM could wholly replace search engines, as some of the headlines are proclaiming. I think if you just took the excitement in the median news article and dialed it back about half a turn, that's probably where we'll end up.

👤 foobarbecue
A major thing I find lacking in search engines is a way to disambiguate between homographs. "AI" search engines are one way to implement that capability. However, I want a search engine that gives me websites as a result, and is not capable of lying to me. So far every single "AI" search I have tried has given me incorrect, invented answers. This is a big step in the wrong direction.

👤 ankit219
I think LLMs are potentially a good addition. My searches are based on different use cases: one, when I am looking for a quick answer; two, when I am looking for some suggestions; three, when I am looking to do research. (Of course, ignoring the ones where I google for meanings or quick calculations or to go to a specific site. That's just because it's fast, not really a feature.)

For a quick overview answer, LLMs are great. They're not 100% correct, but mostly they are, and that is good enough for a quick answer. Currently Google tries to show that and people object, as it is stealing traffic from websites. I just need an answer, a coherent usable one. E.g.: "What were the movies Scorsese got an Oscar nomination for?"

For suggestions, LLMs are just one more of those blogs and listicles that are already showing up in search - if the LLM is updated, that is. The difference is that an LLM would customize the answer according to the query, unlike already pre-written content. So, yes, useful. Same goes for stuff like "how to build an email list?" or "What is an effective sales strategy?"

For research, Google is more useful. I think we have all done that.

Another application, not realized yet because we never did it before, is the ability to ask follow-up questions (which a chat format enables well). Suppose you get an overview of how a quantum computer works; it would take a lot of effort to ask a follow-up question and get a direct answer via a search engine. E.g.: "Why is there no point in going beyond a thousand qubits?"

There could be modifications like voice-to-text (a Jarvis-like interface), or a personal assistant thingy. But those are far-fetched.

It will help immensely, and for places it does not, we will still google like we have done before.


👤 graypegg
It does make me wonder if google is maybe going in the wrong direction with its knee-jerk reaction to Bing’s changes recently.

While what they’re doing currently isn’t perfect, it does provide results that are at least traceable. I could imagine an alternate universe where they doubled down on marketing themselves as “the search engine that doesn’t lie to you” or “where answers are found, not stories”.


👤 mtlmtlmtlmtl
On average I think I would be far better served if search providers just fixed their operators.

👤 danso
Completely agree with this skepticism. It reminds me of when people claim that TikTok is now a "search engine", as if entertaining/slick videos (even if they're reliably surfaced via search query) are going to be more useful on the whole than information I can skim and read [0].

On the other hand, I do agree with people speculating that LLM-AI interfaces will seriously hurt Google's bottom line, e.g. reducing the space for search ads, which represent the majority of its revenue.

[0] https://www.nytimes.com/2022/09/16/technology/gen-z-tiktok-s...


👤 busyant
I'd like to see these things (eventually) used for automated phone systems like CVS and my bank.

Mostly this is just to calm me down because ChatGPT gives me the illusion that I'm interacting with a human. The current voice systems are infuriatingly bad.

It would be nice if CVS's phone system would actually listen to me and modify its output accordingly. "I already gave you my birth date. And NO, I don't need a COVID booster."

edit: I'd like to meet the person who sold CVS its prescription web-site and its voice system. Simply to marvel at them and the swindle they pulled off, delivering absolute trash and probably walking away with a king's ransom.


👤 efunneko
I see this technology becoming more of a 'content provider' rather than strictly an internet search engine. In the last few days, I have gone to Google to get answers to programming questions, do an image search for a quick pic for a presentation in addition to the usual type of internet queries. The results mostly led me to go on to another site to get the actual data, but more and more of it will be pulled into the initial page. Google has done this for years and this tech will just allow for more of it. Why visit a joke site for some quick one-liners, if the 'search engine' can just generate you 20 good-enough jokes specific to your needs?

I could imagine the interface being similar to what we have today, but with it being much better at taking in full descriptions of what you want. If you want pictures of teddy bears, it could provide search results and AI generated ones. If you want programming answers, it could link to StackOverflow or just give you an AI generated answer with an explanation. Perhaps I am looking for a lively bit of free music to add to an indie game - it could generate that too.

I feel that this will eventually end search as we know it, but it will hurt the sites that are behind the search results far more than it will hurt Google. Google (or Bing, ...) can become the one-stop-shop for so much more than it is today


👤 spamizbad
Yes, but not quite how some people are expecting.

Imagine asking someone with cursory knowledge of a subject matter to perform a Google search for you. This person would dig through thousands of results and weed out the junk/SEO/content farm sites, so you'd get information that's more relevant. LLMs could potentially do this quickly, separating the wheat from the chaff. Would it be perfect? No, but it would be a significant improvement over what you see on Google today.


👤 Swizec
Sometimes I want a search engine, sometimes I want a good enough answer. Two different use-cases.

When doing research I need a good search engine. Find me the official docs, not the SEO’d blogs. Find me that podcast episode. Find me the exact article I remember reading 10 years ago. Don’t try to guess and half-arse a result just because it has more ads or better SEO. I’ll do the hard work of synthesis because I’m looking to understand something deeply.

Current search engines have gotten meh at this use-case. Or at least Google has.

When looking for a quick answer, I need a smart-enough agent. How old is this celebrity? What’s the air speed of an unladen swallow? Give me a deity that starts with G. What the hell is a “nepo baby”? What does this random emoji mean when sent by a 20 year old and what is it by a 40 year old? Who’s that actor in that show with the thing?

I don’t care about the source and I’m not looking to do research. Just tell me a good enough answer so I can get back to my conversation or whatever. Current search engines are pretty okay with this, but GPT is better.

The two use-cases are fundamentally different and trying to merge them is where things went wrong.


👤 wruza
I think that it could shine in integrations. Maybe.

(Voice chat)

- LLM, find three articles from HN frontpage which I would find insightful based on my recent evaluations, summarize them in under half a minute each and then I’ll choose the order, while I commute.

- (…)

- Okay, read me a second one first.

- (…)

- That was a good one because well-written and compared alternatives. Now find funniest article of all time based on my long-time preferences.

- (…)

- - -

While it’s dumb enough to forget what and how you’ve evaluated recently, a hidden prompt could(?) fetch that out, e.g.:

- (system) please convert my previous article ratings into json objects consisting of article url, article id (…how to get it…), 1..10 rating, your summary and a string of tags.

Then these ratings may be saved for later and fed into a chat secretly as:

- (system) If I gave you this prompt: “” and had data looking like this “”, please list which tables and filters would you likely use to form an answer.

- (…)

- My recent ratings were (…).

- - -

I have no clue if this could work, but if it does, well, that would be useful.

Edit: it may be wrong, but we have enough mundane tasks which are better done wrong rather than not done at all. It has a great potential as an “occasionally bright secretary” archetype.
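A rough sketch of how that hidden ratings-to-JSON step might look. Everything here is hypothetical (the record fields, the `(system)` framing, the cutoff), and a real integration would send the composed prompt to the model rather than print it.

```python
import json

# Hypothetical rating records the assistant extracted from earlier chats
# via the secret "convert my previous ratings into JSON" prompt.
ratings = [
    {"url": "https://example.com/a", "rating": 9,
     "summary": "well-written, compared alternatives", "tags": ["deep-dive"]},
    {"url": "https://example.com/b", "rating": 3,
     "summary": "shallow listicle", "tags": ["fluff"]},
]


def hidden_prompt(user_request, min_rating=7):
    """Compose the hidden preamble: past high ratings are serialized to
    JSON and prepended so the stateless model can 'remember' preferences."""
    liked = [r for r in ratings if r["rating"] >= min_rating]
    return ("(system) My recent ratings were: "
            + json.dumps(liked)
            + "\n(user) " + user_request)


print(hidden_prompt("Find three HN frontpage articles I'd find insightful."))
```

Filtering to high ratings before serializing keeps the preamble small, which matters when the whole thing has to fit inside the model's limited context window.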


👤 nerdix
There are probably categories of searches where ChatGPT will be either on par with or better than Google (and if the results are on par, then ChatGPT is superior because it's one less click, one less ad-spammy website to visit, no hunting for what you were looking for once you get to the site, etc.)

As an example, someone on HN posted a tweet of a guy who asked ChatGPT to draft a letter announcing a layoff while also announcing several executive promotions and quoting MLK Jr. Obviously, the example is facetious but the results were actually pretty good. Certainly good enough for a starting point or template for a real layoff announcement.

I'm sure this is a minuscule amount of total search volume, but there is a category of searches for letter templates (think cover letters, resignation letters, etc.) that ChatGPT could seriously replace today. And ChatGPT is actually better because of how specific you can get (e.g. "with an MLK quote").

I don't think LLMs are a threat to traditional search today or even in the short term but what will ChatGPT 50 (or equivalent) look like in 20 years...


👤 sxvtemp
I agree that the Google business model doesn't fully accommodate a conversational search UI, but LLMs can still play a valuable role in this area. As a parallel, you can think of the "I'm Feeling Lucky" button as a way to bypass the standard search results and take a chance on a single top result. Similarly, using LLMs in web search applications can provide a more conversational and personalized experience for users, offering a different way to interact with and find information on the web.

That being said, I don't think ChatGPT or any single LLM can replace mainstream Internet search use cases in the immediate future. They might enhance the search experience for users, though.


👤 mathieuh
Anytime anything starts getting touted as a panacea I start getting suspicious. Maybe LLMs are going to have a large role in search in future, but at the moment there's so much propaganda flying around I just assume it's mostly lies.

👤 mikewarot
For any given word, there are multiple meanings that vary by context, along with possible misspellings, etc. So a search on a specific phrase can yield a fractal tree of possible interpretations/matches.

It would be useful if a search engine could find the top 10 different interpretations of the phrase in latent space, and offer the top results (with a means to pursue more) in each of those separate meanings.

For example: "hypertext markup" matches way too many things about HTML, and not enough things about marking up (annotating) hypertext

LLMs could make search much more powerful in this manner.

ChatGPT, on the other hand... is not a search engine, even before you consider its tendency towards BS.


👤 sberens
I think yes, at least for coding. Pre-ChatGPT, when I was working with an unfamiliar language or framework, my workflow would be: google something -> skim top links for what looks helpful -> click on one of them -> parse through mountains of SEO'd text -> get the answer.

The workflow I have now with ChatGPT, and what I imagine it will look like in the new era of search, is: query -> read a result written for a human, not a search engine -> (10% of the time) check if the result is hallucinated.

Especially for basic questions where I know what the answer should look like, I'm really enjoying the new workflow.


👤 rvz
> Only truly practical use case I can think of is summarizing articles or writing them which makes more sense as a word processor or browser add-ons

That use case is the one that makes the most sense. ChatGPT frequently hallucinates the wrong answer and confidently tells you it is correct; its inability to cite sources and transparently explain how it got to that answer tells you that its results are untrustworthy, and that this AI bubble is, again, pure hype created by VCs.

The only worthy AI hype that will change everything is open-source LLMs that are smaller models and are more transparent.


👤 lumost
Google Search results are often wrong or misleading. I already need to think critically, review sources, and corroborate when I'm reading Google search results. ChatGPT saves me having to read thousands of words of unrelated/crappy content. If you need confirmation that it's not making something up, just ask:

"Can you provide a reference for that? or What should I google to confirm that this is accurate?"

It'll give you something pretty close to a final citation. This has saved me literal days of work traversing documentation in the last 2 months.


👤 patapong
As I said in another comment, there have been occasions where I was looking for the solution to a programming problem, was unable to find it on Google, but got it via ChatGPT on the first try.

I think it works amazingly well at least for instances when you can immediately verify whether the answer is correct (e.g. coding, drafting letters) and instances where it is a starting point for further research. These use cases are a significant portion of my searches, so I think it will be very useful.


👤 KrugerDunnings
Maybe not interfacing directly with an LLM à la ChatGPT, but as an extension of vector search using word embeddings. It offers users a much more flexible interface to explore different combinations of dimensions vs. a rigid facet UI. You can argue that the engineering has shifted to making the model, and that is true, but by keeping it fuzzy there is more room for creative prompt engineering. One challenge will be teaching the user what the model doesn't know.

👤 seydor
For many things yes, most landing pages are dishonest SEO trying to get sales.

But I think that "suggest me something" is going to be a big selling point too. Decision-making is tiring, and people are willing to give that power to a machine. Look at how TikTok, YouTube, and even Facebook are working now; it's slowly becoming like a TV stream that you passively watch. "Tell me what to do tomorrow" is going to be a common question in a year or so.


👤 aww_dang
Perhaps tech hype cycles are more about valuations and less about utility. Everyone jumps on the bandwagon with the latest buzzwords, companies buy in. Nobody wants to be left behind. New tooling is demanded. Employees pad their resumes.

There's more to generative models than just the above, but how much of this hype cycle is substantive for end users or developers? You always had a query. Now you'll get answers in a different way.

Skeptical overall.


👤 throwaway1851
One of the most critical parts of nontrivial search tasks is evaluating the quality and credibility (and motives) of the sites asserting various facts or opinions. I wouldn’t outsource this part to a statistical model. Even if it cites its sources, you’re limited to the pool of sources it chose. Unless you decide to go Google it yourself - in which case, what was the point of querying the language model?

👤 whywhywhywhy
Because the information it gives back has to be so sanitized, I'm starting to feel that if search boxes start returning LLM results, it will end up with some safety team at Google/MS having to almost write web queries by hand.

It doesn't feel as simple as the search world, where you can just up-rank some approved sources - unless there is somehow a way to generate the language only from approved sources and ideas.


👤 AussieWog93
I asked ChatGPT the other day "Write some code to plot the contents of a circular buffer using ImGui".

It responded immediately with some working code.

With Google, I could have found the result but it would have taken a dozen clicks and probably 15 minutes of my time. (To be fair, I might have learned more in the process).

Siri or Google Assistant wouldn't have given an answer at all.
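(For context: the answer ChatGPT returns for that prompt would be C++ using Dear ImGui's PlotLines, which takes a values offset so it can read the ring storage directly. The Python sketch below is a hypothetical illustration of the same core trick, unrolling the buffer from the write index into chronological order, not the actual generated code.)

```python
class RingBuffer:
    """Fixed-capacity circular buffer of float samples."""

    def __init__(self, capacity):
        self.values = [0.0] * capacity
        self.offset = 0   # next write position; oldest sample once full
        self.count = 0    # number of samples written so far, capped at capacity

    def push(self, v):
        self.values[self.offset] = v
        self.offset = (self.offset + 1) % len(self.values)
        self.count = min(self.count + 1, len(self.values))

    def ordered(self):
        """Samples in chronological order, ready to hand to a plot call."""
        if self.count < len(self.values):
            return self.values[:self.count]
        return self.values[self.offset:] + self.values[:self.offset]

buf = RingBuffer(4)
for x in [1.0, 2.0, 3.0, 4.0, 5.0]:
    buf.push(x)
print(buf.ordered())  # [2.0, 3.0, 4.0, 5.0] -- oldest sample (1.0) overwritten
```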


👤 jccalhoun
I think it could be useful for follow-ups to search queries, or for being smarter about synonyms and other ways of phrasing things to find better results. But it reminds me of when Facebook announced bots in their Messenger: lots of people tried them and showed off all these features, and then they didn't really amount to much.

👤 solarmist
This is classic conflation. It's a new UI method that works seamlessly with how humans interact, so everyone wants to believe it'll work as well as interacting with a human.

People want to believe this is as amazing as it appears, but it's window dressing; we still can't reliably separate relevant information from the request itself.


👤 Gwiz462
I use the ChatGPT in Search extension and it absolutely helps to have the context to the side of my search results; I hope it never changes! I also use some SEO optimization extensions as well which is interesting for trends and categorical associations on some things in tandem with search results.

👤 dontreact
I think it's useful when you don't know what to search yet but are looking to solve a problem. For example, I am planning a wedding and I asked it to put together a rough budget for me and this helped me to prioritize where to spend my energies on the vendor search.

👤 flappyeagle
Another way to think about it: "the semantic web" is a dream that never came to be. Now we have a computer program that breaks all information down to semantics. So anything you thought was possible with the semantic web is now within reach.

👤 mola
I can see tremendous utility in ChatGPT for end-to-end NLU tasks, even for end-to-end artificial agents. But for end-to-end search? Not much better than what we have today, and worse in some ways.

👤 jstx1
Google search has been powered by a large language model (BERT) for years so we've all been successfully using LLMs for search for a while now, just not in the way you're thinking.

👤 mysterydip
I think you're going to end up there either way eventually, as instant answers will start to pick up and cite more of the made-up content that LLMs generate.

👤 smrtinsert
Find me all GPUs sorted by price desc, where the max depth is 12 inches that get great reviews. They should all be NVidia.

Good luck wasting hours of your time googling for that by hand.
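The point of a query like that is that an LLM can translate it into a structured filter over product data. A hypothetical sketch of what the request above would compile down to, with an invented product list and an assumed rating cutoff for "great reviews":

```python
# Made-up catalog entries; depth_in is card depth in inches.
gpus = [
    {"name": "RTX 4090",   "brand": "NVIDIA", "price": 1599, "depth_in": 13.4, "rating": 4.8},
    {"name": "RTX 4070",   "brand": "NVIDIA", "price": 599,  "depth_in": 11.9, "rating": 4.7},
    {"name": "RX 7900 XT", "brand": "AMD",    "price": 899,  "depth_in": 11.0, "rating": 4.6},
    {"name": "RTX 4060",   "brand": "NVIDIA", "price": 299,  "depth_in": 9.8,  "rating": 4.2},
]

results = sorted(
    (g for g in gpus
     if g["brand"] == "NVIDIA"        # "They should all be NVidia"
     and g["depth_in"] <= 12          # "max depth is 12 inches"
     and g["rating"] >= 4.5),         # "get great reviews" (assumed cutoff)
    key=lambda g: g["price"],
    reverse=True,                     # "sorted by price desc"
)
print([g["name"] for g in results])   # ['RTX 4070']
```

The hard part, of course, is getting the model to emit that filter faithfully rather than hallucinating products.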


👤 The_Colonel
I'm already using Google to ask human-like questions like "What's the second most populous country in Africa?" A ChatGPT-like LLM can only improve on that.

👤 sharemywin
Most answers are "it depends," so you might need to offer more information to get a better answer. For that, a conversation would be better.

👤 braingenious
> Idly chatting with the search box doesn't strike me as the most productive use of my time.

That’s not using the search service for search.


👤 JohnFen
I'm pretty sure that it's not. It's probably useful as a knowledge engine, but not a search engine.

👤 klyrs
It's wild to me that "Ask Jeeves on Steroids" is being heralded as the potential google-killer...

👤 syntaxfree
My answer to all these LLM threads is — play a little with the Python package “sentence-transformers”.
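To make the suggestion concrete: sentence-transformers turns sentences into vectors, and semantic search is just ranking documents by cosine similarity to the query vector. The toy sketch below uses invented 3-d vectors to stay self-contained; real embeddings would come from something like `SentenceTransformer("all-MiniLM-L6-v2").encode(texts)`.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend embeddings: in practice these come from model.encode(...).
docs = {
    "how to water houseplants": [0.9, 0.1, 0.0],
    "gpu price comparison":     [0.0, 0.8, 0.3],
    "caring for indoor plants": [0.8, 0.2, 0.1],
}
query_vec = [0.85, 0.15, 0.05]  # pretend embedding of "plant care tips"

ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
print(ranked)  # both plant documents rank above the GPU document
```

No keyword from the query needs to appear in the matched documents, which is exactly the "exact semantic search" people keep asking Google for.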

👤 bearmode
I think they will be, eventually, but have a fair way to go on the accuracy front first.

👤 meltyness
No. It's obviously better for templating, word-processing, organizing ideas.

👤 EGreg
Is this the Blockchain moment for HN and LLMs? Or will the love affair continue?

👤 sfx77
Yes

👤 s3p
yes

👤 the_third_wave
No. I was not convinced to start with, and I'm getting less convinced all the time. To explain, I'll go back to the Middle Ages. This was a time when the average person did not have access to books, and the books that did exist were mostly religious. That in itself was not much of a problem, since religion was the guiding line in most people's lives. What was a problem was that those who wanted to know what those books contained had to go through a middleman, an ecclesiastic, to get an interpretation. This gave those middlemen enormous power, which many of them wielded to their advantage, and it eventually led to a schism inside the ecclesiastical world when one of them nailed a pamphlet to a church door deriding the excesses of his world. When books became more generally available and literacy became widespread, the world changed, seemingly for good...

...until the rise of a new ecclesiastical class? These models are biased in several ways: the training set defines their worldview, and those who train them intervene in specific areas to bend the output to their will. They can turn the old dictum of garbage in, garbage out into garbage in, gospel out, where the gospel follows whatever the (small-c) creator thinks the populace should know and, by extension, think.

In short I don't trust these models any further than I can throw them.