I know we often over-estimate the value of our contributions. I know we often find that our functions can ultimately be automated in some respect. But I find in aggregate that the leading comments reflect a very arid conception of being a human connected to other humans.
For example, in the discussion about AI Lawyers there was very little sense of the moral aspect of one human acting on behalf of another as a client. In the discussions about programming jobs being replaced by this kind of technology, there was not a great deal of confidence in the importance of human judgement in building human-focused systems.
Is this just reflective of our context as people that streamline and automate, or do HN readers just think a human isn't such a complex entity?
For me this is somewhat like the T-Shirt that says "I went outside once, but the graphics were crap"...except nobody's joking.
I completely agree with you that hackerland is depressingly myopic. And the new power elite of Silicon Valley are dangerously contemptuous of human institutions.
But aside from that, I think it's just people who get used to one paradigm getting confused by another.
To the automation-centric thinker, human institutions seem to be ill-specified and allow for many absurdities. What they're not getting is that human institutions are simple frameworks to enable agents with judgment. Automation is about complicated frameworks to constrain agents that have no judgment.
People who know human systems (the vast majority of the world) are similarly confused by automation, because their assumptions are flipped.
It’s like confusing a photo of a person with a person, and a photo generator with a human cloning machine, and saying, “But when I look in the mirror, that’s me! So I am equal to my visual reflection.” Language is the I/O of a cognition process that is itself only a small part of being human and alive.
A lot of activities and jobs do use humans as “cogs” in a machine, and in some cases an AI might make a better cog, but I think some commenters do underestimate the amount of context humans bring to various tasks. You don’t have to analyze the humans to see how complex the jobs are, just try to do the jobs by AI, or watch as others try. Like try to make a self-driving car; it’s hard. Try to replace a cook at Denny’s with a robot. Or a lawyer. See how long the tail of edge cases is.
I know within myself I don't have a completely coherent world view. And I don't feel a need to correct that. When I comment on things it's not always my view, not always thought out, reactive, perhaps insightful in a moment but not long term, or maybe a burst of clarity to my otherwise unclear mind.
Plus you get slices of people commenting on different articles. I didn't comment on that one, but I am on this one. So you're grouping everyone commenting together.
I'm someone who has made these kinds of comments before. It may help you to place such comments of mine in the context that I am not someone who works in AI, but I am someone who studied philosophy and has both studied the scientific literature on and thought deeply about the nature of the mind.
While we're not yet close to understanding the mind in its entirety, something I was struck by as I read about the parts of the mind we do understand is just how many human capabilities do seem to be explainable on a physical neural network basis (as in an actual network of physical neurons, not the AI thing) without requiring any notion of consciousness or uniquely human (or even animal) capability.
My view is not that AIs are anywhere close to the capabilities of humans at the moment. But:
- I am somewhat agnostic on the question of whether they could match them in future. And I think other people should be too. We're not really in a position to know this yet.
- I think a lot of the limitations of AIs are limitations in I/O capabilities: AIs can typically only consume text or images, and they typically can't influence the world themselves at all (one of the things that has come out of research into (human) perception is that it's generally very much an active process: activities that might naively seem passive, like vision, actually involve tight feedback loops and actively interacting with the world).
- To me the way modern "deep learning" models work does seem like computers genuinely learning from experience, and it's possible that it differs from human learning largely in scale and complexity rather than being fundamentally different (it is of course possible that that's not the case, but I don't think it's obviously so).
I would also agree with another commenter that part of the purpose of such comments is to provoke thought and break people out of their assumptions. Many people take the idea that human cognition is fundamentally different from machine cognition (or even animal cognition!) for granted. And while that may ultimately end up being the case, I think it's valuable to question that belief.
I'm not sure where it comes from; I suspect it's just immaturity. I've seen it here but also in the real world. I'm not sure HN over-indexes on it; maybe even the opposite.
The other possibility is that some HNers can tend to look down on people in non-tech fields and so, maybe, HNers have a reductive view of some human beings. I don't see nearly as much excited discussion of programmers being replaced by AI as lawyers, for example.
One could call LLMs "nothing more than statistical plausibility generators", and then have a hell of a time distinguishing that description from the vast majority of a human's subjective conscious experience. ie: https://en.wikipedia.org/wiki/Left-brain_interpreter
I like to think that some of us are capable of more than that, sometimes, but of course that's just what my localized bundle of physics makes me think.
Dealing with those kinds of professionals as a client is often a dehumanizing and unpleasant experience for many people.
Much of the time, such dealings aren't particularly wanted to begin with. Those seeking the services of such professionals have often been forced into it in some way, many times by government or by government-imposed systems.
Not only are such dealings an unwanted burden, but they're often extremely costly (financially, and in terms of time and effort), with the clients sometimes receiving poor service, as well as little, if any, real benefit in the end.
It doesn't surprise me at all that people would be eager to see technologies that may help them avoid, or potentially reduce the cost of, having to deal with those kinds of professionals.
I feel like the continual TikTok reduction of attention spans and the high-speed memetics of it all is massively reducing our "rich contextual knowledge", and we're becoming a bunch of flippant oafs.
Most HN readers will be receptive and maybe even in agreement about statements concerning the hardness of these problems, but not the magicalness of these problems. In your post, you used a lot of magical words, which the commentariat is correct to identify as non-constructive. Phrases like "human connected to other humans", "human judgement", "moral aspect".
There is nothing about humanness that makes these problems any less tractable. If they are hard and we don't know how to build machines that solve them as well as humans do, so be it. But they aren't hard for magical reasons relating to poorly defined terms like "morality" or "connectedness". At least that is the opinion of most scientifically minded people, and probably the commentariat.
Go into every thread with that understanding.
Machine learning tools like ChatGPT do not show their work. The HN crowd will especially recognize that this is a tool people will get answers from that provides no sources, references, or links: just an opaque box of algorithms, essentially carrying on from the lessons social media platforms learned about tweaking society. These tools will start off in benevolent mode and gradually devolve into malevolent mode, and by the time people realize it they will have bought into them and built financial dependencies and business models around them.
Social media drives out nuance and subtlety by design.
Blogs forced long-form replies and consideration, which allowed for a much richer channel for discussion, though one that was much slower and less likely to draw comments.
I think most of us here have a rich set of opinions, and quite a bit to offer to a discussion, but when you're possibly getting downvoted for just uttering the wrong opinion, it causes a lot of self-censorship.
That said, life's fast, especially on news-keyed discussion forums, and thoughtful, balanced comments on complex issues can take a really long time to compose and can become very long, so I think most do not bother (including myself; I view it as an unfortunate pathology of this site's general set-up as well as of modern online life).
It can be tough at times, but it helps to remember that these voices are far from everyone's. In certain threads they suspiciously congregate though.
Other than that, the product might be getting a lot of hype when we know it's probably vaporware, or half-baked. The self-driving AI topic has gone this direction: I think most of us find it awesome and full of potential, but when a company like Tesla starts selling it before it is done, we call BS. We know they haven't scratched the surface of the technical challenges that problem presents. It's near fraud to sell it for thousands of dollars.
An impoverished view of humanity, whether it's true or not, is the basis for the business models underpinning almost all activity in the industry, so when those people turn their attention towards AI, that is of course also what they see. If people really were to acknowledge that human beings are at the centre of technology, then probably 90% of what's being built is unethical and anti-social in its very design.
It reminds me of a great article by Ted Chiang where he discussed this in the context of common fears of AI. https://www.buzzfeednews.com/article/tedchiang/the-real-dang...
However, Reductionism and Emergence are actually supposed to be complementary. A bit like the philosophical version of differentiation and integration if you will.
Reduction breaks complex things down into simpler parts which are easier to understand. Emergence takes simple parts and shows how they can form a complex system when organized.
If you do break a complex system down, you have to remember that you're only looking at some part of it. If you do have a bunch of parts, remember that they don't magically just form a complex system, you need to study the organization too.
If you want to fully understand a complex system (like eg. a single celled organism, a jellyfish, a human, or something even more complex like an entire ecosystem all at once), you're going to need both.
Some other arguments paraphrased:
Car on autopilot crashes itself in completely clear conditions - "many humans are bad drivers too"
Stack Overflow buried in confident confabulations - "many human answers are incorrect too"
prospect of next-gen GPT bots posting on news sites and forums - "if you can't tell the difference, why would you care if you're talking to a human?"
prospect of image generation models razing the entry-level art job market - "pictures made by clicking a button have just as much artistic value if they look good to me"
The only problem is that it might be banned for spewing too many falsehoods.
In other words, maybe those acquainted with software and AI see the things you mentioned - AI Lawyers and AI developers - as inevitabilities that we will simply have to face. This in turn leads HN'ers to think in terms of entrepreneurship or "how can this make me money in the future?", which means adopting those trends rather than rejecting them, because if you do, someone else will adopt them. Thus, the whole techno-entrepreneurial spirit of this forum leaves little space for viewpoints that offer no technological or entrepreneurial benefit or advancements such as rejecting AI.
Most definitely the former. Humans are most certainly complex. But some of the tasks we perform aren't.
Maybe I'm wrong, but it seems like the majority of the comments of this type I see are written by new accounts created minutes earlier, either meant as throwaways or otherwise.
Or said differently: when you have a hammer, everything is a nail.
And our belovèd HN, as amazing and as addictive as it is, is a community by, for, and of “the software developer-entrepreneur”. By definition, with the hammer of “your mind tries to reduce everything to algorithms” (the personality type that is attracted to writing software for that very reason!), of course they will do that to humans as well.
Of course, I’d love an HN of poets, but that would have the problem of the other extreme: empathy and emoting, which would be hard to turn into clear, concise, cutting, and actionable insights…
What makes consciousness? The Star Trek episode asked whether Data is sentient, but I think the mirror question is just as interesting: how are we anything more than a machine?
We need fuel like a car; there are byproducts of the expenditure of that energy and evidence of chemical reactions; there are all manner of chemical systems that regulate our subjective experiences, measurable and deterministic. We can inject drugs that will turn off the brain's ability to form memories...
If there are physical/chemical rules that we are subject to, then it seems like discovering those rules that govern a machine and the rules that govern ourselves are a matter of degree of complexity and not intractability.
If you believe reality is deterministic, then I don't think this "reductive" view of humans is that far-fetched.
In Star Trek it was asked if Data has a soul, but I think it's just as reasonable to ask do we have souls. Do we have something beyond that which can be measured by physics?
Does Human = Machine or does Human = Machine + Soul?
I personally think Human = Machine.
Lunch has an effect on interview results. How "in control" do you think you are?
I feel like many of these reductive views are expressed in order to provoke unusual thoughts. This is useful.
Reductionist views of humans are still incorrect, but what has changed is that AIs are now viewed non-reductively.
Nobody takes Chomskian views of 'AIs will never be intelligent as they are just statistical parrots' seriously anymore.
I've never gotten a positive comment score for arguing that women aren't just optimizing for wealth, height, and appearance in a partner.
The current systems, the legal one, copyright everywhere, tons of human systems and processes that rely on human-generated text and images (soon music, video, probably voices) as some kind of guarantee, just got deprecated simultaneously.
Soon most of them will be targeted by scammers of some kind, exploiting the soft spot of someone or some process requiring or presuming that output in text or image form must have been created by a human being.
Kids faking their homework assignments are just the beginning. In a year or two, IF LLMs similar to ChatGPT somehow remain permanently open to the public, à la Google Search, you'll find every process that presumes text or image output is a task only a human is capable of, owned.
Owned as in hacked systems: society will hack itself, subsuming all the legacy societal systems and processes into newer, LLM-driven ways of doing things.
Of course, things like copyright are the canary here, and the status quo has been successful until now in containing the LLM tsunami of societal change (there you have Google, Facebook, and the giant Chinese LLMs doing nothing publicly with their even more advanced models), but as OpenAI has demonstrated, the ability to translate LLM outputs into money is incredible.
Maybe the FAANGs don't need 10 billion bucks, but it's a big planet, and many players are watching how OpenAI, using fairly dated AI tech, has gotten itself 10 billion dollars of investment just by publishing a chatbot with autoscaling infrastructure behind it.
So it is easy to predict that money will lead the adoption of future LLMs, no matter what the FAANGs or even nation states can regulate. Whoever ends up with the upper hand by adopting LLMs or whatever newer AI tech is available in the near future will most probably end up changing their own societies faster, giving themselves an edge over the rest of the planet that is too good to discard.
So things are changing. These LLMs, as simple as they are, could be just like the first submarine cables sending telegraph messages under the Atlantic: 30 years later, you couldn't recognize most of the societal processes running in the total system of the world.
Literal human beings are being subject to the worst filth imaginable and are not compensated fairly nor reasonably protected from the harm. All so that people in rich countries can replace customer service reps and mess around with a chat bot.
It’s like sticking coal miners in a shaft with no safety gear, paying them as little as possible, and getting away with it.
a human being is a vessel for "consciousness" .. whatever that is.
it's not clear anything else in the universe possesses this quality, except perhaps some other advanced animals.
AI so far has been an interesting statistical optimizer, but clearly lacks this purely human feature.
The search for AGI will die quickly because of this.
I disagree with a lot of the pro-AI takes here while being a huge fan of AI, but I have never seen anything malicious or reductionist (other than the separation, required in tech/IT, from the impact of people losing their jobs because of the tools we create). My development and IT teams displaced many people while impacting exponentially more positively. Tech/IT people have to be dispassionate about that or guilt would prevent us from being effective.
Most of the pro AI people I have disagreed with have strongly humanist reasons for their position and they feel that they are promoting a boon for humanity while I am promoting leeching corporatist IP laws. I feel if we don't have a framework for rewarding creatives we will miss out on a ton of individual contributions that greatly benefit mankind.
As to lawyers, I went through the process. Lawyers are not acting on behalf of a human client. They are acting on behalf of least friction/average best outcomes. They care more about their relationship with a sitting judge and the prosecutors than with their client because those relationships can result in the most good, saving thousands of human years lost to prison, even if it might not result optimally for each individual client. I don't blame my lawyer for optimizing his resources for maximum outcome but I don't pretend that isn't what he is doing. American justice is a meat processing plant not a 'let 100 criminals go free to ensure no innocent man goes to prison' do anything it takes situation.
> [Confidence] indicate[s] how confident the model is of the result, not how likely the prediction is to be accurate.
The problem here is likely how "confidence" and "likelihood" are used. The words are overloaded. Maybe I should have said "not how probable the prediction is" but this could even be less clear. Most people think likelihood and probability are the same thing.
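To make the distinction concrete, here is a minimal sketch (Python, with made-up numbers, not taken from any particular model): a classifier's softmax "confidence" is just the height of its output distribution, while whether that number tracks accuracy is a separate, empirical question (calibration).

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # The reported "confidence" is just the largest softmax output.
    logits = np.array([2.0, 0.1, -1.3])      # hypothetical model outputs
    confidence = softmax(logits).max()
    print(f"reported confidence: {confidence:.2f}")   # ~0.84

    # Whether 0.84 means "right ~84% of the time" is an empirical question:
    # among predictions made at that confidence, how many were actually correct?
    correct_at_084 = np.array([1, 0, 1, 0, 1])        # hypothetical outcomes
    print(f"empirical accuracy at that confidence: {correct_at_084.mean():.2f}")   # 0.60

Calibration techniques (temperature scaling and the like) exist precisely because those two numbers routinely diverge.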
So there's a lot to why this is happening: misreadings, ego, fooling ourselves, and more. I think there are only a few solutions though.

First, we need to recognize that there's nothing wrong with being wrong. After all, we are all wrong. There is no absolute truth. Our perceptions are just a model of the world, not the world[1].

Second, we have to foster a culture that encourages updating our opinions as we learn more.

Third, maybe we don't need to comment on everything? We need to be careful because we might think we know more than we do, especially since we might know more than the average person and want to help this other person understand (but this doesn't mean we're an expert or even right!).

Fourth, we need to recognize that language is actually really complicated and that miscommunication is quite frequent. The purpose of language is to take an idea in one brain and pass it to another brain, but this is done through lossy compression. Good faith speaking is doing our best to encode in a fashion that is most likely to be well interpreted by our listener's decoder ("speak to your audience" is hard on the internet; large audiences have a large variance in priors!). Good faith listening is doing our best to align our decoder with the _intent_ of the speaker's message. Good faith means we need to recognize the stochastic nature of language and that this gets more difficult as the diversity of our audience increases (communication is easy with friends but harder with strangers).
I'm sure others have more points to make and I'd love to hear other possible solutions or disagreements with what I've claimed. Let's communicate to update all our priors.
(I know this was originally about ML, which I research, but I think the question was keyed on a broader concept. If we want to discuss stochastic parrots or other ML stuff we can definitely do so. Sorry if this was in fact a non sequitur.)
Edit: I believe we're seeing this in real time in this thread[2]
[0] https://news.ycombinator.com/item?id=34608009
[1] https://hermiene.net/essays-trans/relativity_of_wrong.html