The philosophy comes from common questions about whether these AIs, with their human-like behavior, are actually capable of thought, understanding, or consciousness. And there are no definite answers there, because philosophy has long recognized that those words are ill-defined. The age-old "P-Zombie" thought experiment suggests that behavior is the only practical evidence we have for deciding whether something is conscious, so if an AI behaves like a conscious being, we have little choice but to call it one. The question of "what does a conscious being behave like?" is more complicated, in part because we don't really have a clear understanding of which beings are conscious in the first place.
The only entities we're pretty sure are conscious are humans, so we often use ourselves as the measuring stick. These AIs don't quite behave like humans in many cases, but they do behave quite like humans in many others. And even in their failure cases, they often fail like humans, or like facsimiles of humans. It could be argued that consciousness and understanding are not binary, but actually continuous spectra -- a dog, for instance, may be conscious, but less conscious than a human, and that's certainly true of its understanding. In such a framework, and taking the P-Zombie point into account, it may be most honest to say that current AIs are, in fact, conscious and understanding entities, just less so than humans. With that in mind, research on the latest AIs seems to suggest that scaling them up (in terms of number of parameters/weights) keeps increasing performance without any foreseeable plateau, so there's really not much reason to think AIs won't soon reach the level of the human mind.
In terms of the economic consequences of AI... well, I'm no economist, so take anything I say on the topic with a pile of salt. But I do think the major advances have happened in the last 5 or so years (since it seems almost every bleeding-edge AI these days incorporates a transformer somewhere in its architecture, if it isn't entirely based on one), so it may be too soon to know for sure what the impact will be. That said, I tend to dismiss people who yell that "AIs are takin' ar' jerbs!" on historical grounds alone. Every time a new technology arrives, industries shift. People yell that they're losing their jobs, and then the tech usually ends up complementing their jobs instead. In the cases where it truly does eliminate jobs, it creates new jobs in a different industry. That's been true at least since the invention of the printing press, so I see no reason to think AI is any different from any other invented tool. DALL-E 2, for instance, probably won't replace artists, but it will speed up and enhance artists' ability to quickly prototype ideas and find stylistic inspiration. And more importantly, art can't die just because tools can make art; if art dies when it's no longer profitable, then I wouldn't consider it art in the first place, since I think art is meant to evoke pleasure or emotion, not to make a buck.
But honestly, in the end, I wonder whether "it'll take our jobs!" is even a valid complaint. Obviously the world is far from ideal, but shouldn't the ideal be a world in which "jobs" are taken for pleasure rather than required for survival? If we have the tools to do the work, shouldn't that free us up to enjoy our lives instead of having our every decision driven by "how many bits of data will your bank send to mine?" Again, I'm fully aware this is a lofty ideal that's unlikely to occur any time soon, but perhaps instead of being afraid that tools will take jobs away, we should treat those moments as a wake-up call that maybe society shouldn't require jobs for survival when we have the tech to do the work for us. At what point does "do something for me that our tools can't, or else you starve on the street" stop sounding reasonable to us as a culture?
Anyway, idealist rant over.