HACKER Q&A
📣 overclock351

Is it right to see "AI" just as a tool, or is there more to it?


This is a bit of a provocative question in a way... I feel kind of disillusioned about a lot of stuff in the computer science field, in particular how the business world / the more "mainstream", non-academic side of CompSci seems to latch onto the next trendy thing without a care in the world.

This question was born after a meeting I had at my company, where some higher-ups (people above my managers, the untouchables/unreachables of sorts :D) were discussing how the company was doing (you know... the usual Q4 graph thingies that I am too ignorant to understand).

One of the topics that popped up was... generative AI of course!

The talk was interesting in a way (after all, I study this kind of thing at uni, even if I suck at it, being an eternal newbie :P). They discussed some internal prototypes: generic models that did some random stuff (chatbots, risk analysis, etc.), none of it deployed, just there for show.

After a while the discussion degenerated and shifted into the more "utopian/insane" sphere, with sentences like "AI will be the last invention of humanity", or "GenAI will help build a workplace where we will just supervise stuff", or even worse, "let's propose GenAI tools to our customers" (a sentence that shakes me to the core, but I'll get to that).

I was sitting there stunned, and the only thing that could go through my mind was "what primo crack are you smoking, my dear friends?!".

My reaction (which stayed an inner monologue) stemmed from three factors:

1) I still think we are steps away from deploying those systems fully (some of the prototypes shown scare the hell out of me from both an ethical and a legal standpoint). For instance, one of my uni professors, who works on AI in medical imaging, always told us that in medicine AI is generally used as an auxiliary tool of sorts; I imagine that everything dealing with either the physical or the legal well-being of someone follows the same principle.

2) I don't feel like GenAI on its own has enough proactivity. Now my ignorance shines through, so I might ask you guys: since GenAI tends to "imitate" and reproduce stuff (my guess is that it uses a supervised learning approach of sorts, but I have no idea how it would work otherwise), how could it proactively evolve to make "new" things? How could it proactively create stuff like detailed analysis heavily dependent on unobserved context, or even context-dependent code?

3) How come every time something new pops up (be it blockchains, the godforsaken metaverse, GenAI, or whatever buzzword comes next) it becomes this huge thing under a thousand limelights and gets squeezed like a lemon in a lemonade factory until only pulp and flesh remain? Seriously, I feel jaded most of the time because it's all people talk about: AI this, AI that, let's put AI in this thing that doesn't need it, let's shove it down the barrel of the musket until it explodes!

Jokes aside, my point of view is the following: AI/GenAI/machine learning (or whatever, it all seems interchangeable in buzzword heaven) is nothing but a tool, like your web application framework, like the spanner in your toolbox. It is an extremely powerful tool, especially once we dive into machine learning (since 9 times out of 10 that's what we are actually talking about when we speak in buzzword jargon): a tool that lets a machine learn from examples and gain "experience", so we can build programs that are both difficult to formalize and that would informally require experience.
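To make the "tool for tasks that are hard to formalize" point concrete, here is a toy sketch (pure standard library, made-up training data): instead of hand-writing rules for something fuzzy like "is this message spam?", the program learns word statistics from labeled examples. A crude naive-Bayes-style classifier, purely for illustration:

```python
# ML as "just a tool": learn a fuzzy rule from labeled examples
# instead of trying to formalize it by hand. Toy data, toy model.
from collections import Counter
import math

def train(examples):
    """examples: list of (text, label). Returns per-label word counts."""
    counts = {}
    for text, label in examples:
        bag = counts.setdefault(label, Counter())
        bag.update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose word distribution best explains the text."""
    total_tokens = sum(sum(bag.values()) for bag in counts.values())
    best_label, best_score = None, float("-inf")
    for label, bag in counts.items():
        vocab = sum(bag.values())
        score = math.log(vocab / total_tokens)  # rough prior from token mass
        for word in text.lower().split():
            # add-one smoothing so unseen words don't zero everything out
            score += math.log((bag[word] + 1) / (vocab + len(bag) + 1))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

examples = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting notes for tomorrow", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
model = train(examples)
print(classify(model, "claim your free money"))        # -> spam
print(classify(model, "notes from the team meeting"))  # -> ham
```

Nobody wrote a "spam rule"; the rule fell out of the data. That's the sense in which it's a tool like any other, just one that trades explicit logic for examples.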

I would really love to hear your thoughts, guys. Thanks for reading.

P.S. Sorry if this looks like a rant from an insane person; it's literally 4am here and I am contemplating abandoning everything and herding sheep :D


  👤 Terr_ Accepted Answer ✓
My current employer is in the hiring / initial candidate-contact space. The main uses of (pre-trained) LLMs are things like categorizing incoming emails, or providing friendlier access to a company's FAQ information.

One of the "tracer bullets" I like to bring up involves legal compliance with data-removal requests, and correcting flawed or malicious data in general. Both are issues that tend to push you towards a limited use of LLMs for small, well-defined tasks, instead of any "you are a software suite which obeys the following requirements" nonsense.
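The "small, well-defined task" pattern can be sketched as: the model is only ever asked to map an input to one of a fixed set of answers, and the caller validates the result rather than trusting free-form output. The `llm_complete` function below is a keyword-lookup stub standing in for a real model call (in practice you'd call whatever LLM API you use); the categories and trigger words are made up for illustration:

```python
# Constrain the model to a closed task: classify an email into a
# fixed label set, and validate the answer on the way out.

ALLOWED = {"sales", "support", "recruiting", "other"}

def llm_complete(prompt: str) -> str:
    # Stub: a keyword lookup standing in for a real LLM API call.
    text = prompt.lower()
    if "invoice" in text or "pricing" in text:
        return "sales"
    if "bug" in text or "broken" in text:
        return "support"
    if "resume" in text or "hiring" in text:
        return "recruiting"
    return "other"

def categorize_email(body: str) -> str:
    prompt = (
        "Classify this email as exactly one of: "
        + ", ".join(sorted(ALLOWED)) + ".\n\n" + body
    )
    answer = llm_complete(prompt).strip().lower()
    # Never let a misbehaving model answer escape the defined task.
    return answer if answer in ALLOWED else "other"

print(categorize_email("Hi, the export button is broken again"))  # -> support
```

The point of the final check is that even if the model rambles or hallucinates a new category, the rest of the system only ever sees one of four known values, which keeps the failure modes (and the compliance story) tractable.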


👤 GianFabien
The situation you describe is NOT a technology issue. It is a people problem: specifically, a problem of people who are under-informed, non-technical, and lacking adequate knowledge of the workings of AI/ML coming under the influence of salespeople's claims and promises.

👤 rabatooi
I think it'd be a fool's gambit to look at AI as just a tool, given what OpenAI just showed with o3 on the ARC challenge. I'm open to critique, but hell, a lot of critics are laying low post the ARC scores.