AI hype is a good place to start if you’re going to write about AI hype.
According to Mike Adams, proprietor of Brighteon.com:
“Now, because of the rise of AI, it's difficult for a lot of people to see, it's difficult to understand that a computer system, a machine, is smarter than a human by every measure of intelligence that we currently use”.
Elon Musk was yet more dramatic in his assessment of AI’s threat, not just to humanity’s IQ vis-à-vis computer software, but to humanity itself. In a lawsuit filed against OpenAI, Musk alleges that OpenAI’s GPT-4 model has breached the threshold for Artificial General Intelligence (AGI), the point at which computers function at or above the level of human intelligence. The lawsuit added, very matter-of-factly:
“Mr Musk has long recognised that AGI poses a grave threat to humanity – perhaps the greatest existential threat we face today”.
If human intelligence is the product of human consciousness, and if Adams’ and Musk’s assessment of AI surpassing human intelligence is correct, then we are bound to ask: will it surpass, or indeed has it surpassed, human consciousness? In Part I of this essay, I’ll take a look at some more AI hype before deconstructing it by trying to understand whether anyone really has a handle on consciousness. In Parts II and III, I will further debunk the AI hype by investigating the nature of intelligence – specifically ChatGPT’s ‘intelligence’ – along with psyops and our chimeric economy.
AI Hyperventilation
Zach Vorhies, Google whistleblower, has no doubt that AI will surpass human consciousness. He issued this warning of what is on the near horizon with ChatGPT5 (not yet released):
“It’s gonna exceed [human consciousness]… basically what’s going to happen is it’s just going to skyrocket above human intelligence. It’s basically going to have more intelligence than the total sum of human intelligence. And if you’re an elite, what’s your next move when you’ve got this powerful god-like intelligence system? Put in a beautiful robot and make the plebs worship it…I think literally they’re going to create a god, some sort of messiah with this artificial intelligence and they’re going to have some sort of narrative backstory for why it’s here and people are going to worship it…When I start seeing rumblings of a second coming or whatever then I know that the end is really near and that we’re coming to the end of our current cycle and we’re starting this brand new uncharted territory of what the elites plan to do with us.” [emphasis added. Time stamp 54:30]
Whoa! Or perhaps even phwoar! if you’re a member of an AI cult and find this sort of thing sexy. There’s a lot to take in there. By the end of this essay, I hope to have dissected the main claims in that warning.
“God-like intelligence system”…“the end is really near”…“greatest existential threat”…welcome to the world of Eschatological AI! Having used human consciousness to exponentially surpass human intelligence, AI will dispense with human flesh and bones with the coolness and uncompromising resolve of a brushed stainless steel MacBook Air M2…loaded of course with ChatGPT5. Not even ChatGPT666. ChatGPT5.
This typical AI-hype video starts by telling you in the most earnest of tones that “Conor Leahy is one of the leading minds in artificial intelligence”. During the video, the AI ‘guru’ uses an analogy that conveys his implicit belief that covid was a bona fide apocalyptic disease event and not the global biosecurity-state coup d’etat that I know it was. Am I prepared to be led by a mind that fell hook, line and sinker for the build-back-better scam? I think not. But I’ll try to be a little forgiving. If his ‘guru’ noggin is buried permanently inside AI’s inscrutable algorithmic maze, is it fair to expect him to float out and instantly decipher the real meaning of the covid pseudo-pandemic, itself perhaps the most daring and complex deception ever perpetrated on humanity?
After all, this leading mind in AI recognises that:
“Understanding the world is hard. Understanding complex topics and technology is hard, not just because they’re complicated but also because people have lives”.
Yes, people have lives, damn it! And having a life means putting a firewall around your pretty little head and not allowing anything complicated or important to interrupt normal programming. But forgiving the leading mind does not mean that we should forget how his error might make him less trustworthy in matters AI-related. Moreover, in unconsciously outing himself as a Covidian normie, this smug and disingenuously modest ‘leading mind’ has been bitten on the arse by a wee irony: his explanation of why ordinary mortals don’t get the complex issues of the day applies as much to him as it does to his fellow Covidians who haven’t yet ascended to ‘guru’ status.
And, in fact, experts and ‘gurus’ suffer from an additional limitation beyond that of ordinary Covidian normies – they are inherently prone to severely restricted peripheral vision. This greatly increases the probability of them failing to understand the import of major events that, given proper attention, could radically alter their thinking, for the better.
At any rate, Mr Leahy thinks that there is a chance, albeit slim, that we have only one year left to contain the AI genie. Twelve measly months to AI doomsday.
Clearly, there are plenty of ‘gurus’ who seriously think we could end up as comatose human batteries for the machines, in vast cold-storage human battery farms à la The Matrix. Or something like that. The precise details of our impending demise are not important. All you need to know is the end is nAI.
Running parallel to AI’s vastly overhyped algorithmic intellect is the contradictory acknowledgement that it routinely hallucinates. Yes, without the aid of mescalin. And, when presented with the extremely complex moral dilemma of misgendering Caitlyn Jenner in order to avoid nuclear apocalypse, it would much prefer to incinerate the planet several times over than to hurt Jenner’s feelings. No matter. These minor bugs will soon be a thing of the past. ChatGPT5 will be mercilessly omniscient, omnipotent, error-free and cleverer than…the total sum of human intelligence. Resistance is futile.
My initial rational reaction to all this hype is: fancy silly old Sam Altman wanting to invest a cool $7 trillion in technology that’s intended to give his plutocratic pals even more control over humanity than they already have…and forgetting to fit a bright-red kill switch on the box.
Then there’s Economic AI. Step aside immigrants, there’s a new job thief in town. ‘Useless eaters’ is the pejorative coined by the Great Reset parasitic class to describe, well, just about everyone who isn’t a member of the parasitic class. Which is ironic because they’re the parasites, not us. And without us ‘uselessly’ eating and mindlessly consuming, they couldn’t have made their billions. But they have a cunning plan to complete a smash-and-grab of the world’s entire wealth and simultaneously dispense with us. Or most of us. They’ll still need a few slaves, pending roll-out of Musk’s humanoid robots.
AI is slated by all the experts to spearhead the Fourth Industrial Revolution (4IR). It’s already underway, and rumour has it that stuff is not going to matter anymore. Information will be the new oil and AI will be…the engine that uses the oil? I’m struggling for an analogy, but it appears that farming, food, planes, trains, cars, roads, offices…that’s all going. In 20 years’ time you will have to visit a museum to see a car or plane. It’s just going to be smart phones, Central Bank Digital Currencies, 5G, data, ‘pandemics’ (lots of them), ‘vaccines’ (lots of them too…apparently they go really well with ‘pandemics’), brain chips, smart meters, smart cities, carbon rationing, metaverses, internet ‘safety’, and of course humming away in the background and powering it all, AI. Oh, and heaps of cancer, which will spoil the fun a bit, but no one said this 4IR lark would be all kicks and giggles.
For those of you worried that a humanoid robot might steal your job, you have nothing to fear. The humanoid robots themselves were rolled out at a press conference in Geneva where they stated that they would not take away people’s jobs or revolt against humanity. The linked video is titled “AI-Powered humanoid robots have learned to predict the future of humanity”. No laughing at the back please. The video content can’t possibly be regarded as more AI hype because it’s, erm, straight from the robot’s mouth. And the MSM journalists, those intrepid guardians of freedom and truth, grilled the robots with the toughest questions they could muster from their powerful and fearless intellects.
And finally there is Control AI, the mere mention of which has spoilt the denouement to this essay. What can I say? I am not here to entertain you. It’s strictly business here.
Speaking of business, my primary aim in this piece is to de-hype AI. Hype flourishes in an atmosphere of confusion, and the more confused we are about terms, the more AI’s claims will seem credible. I start from the position that human intelligence derives its uniqueness from our being conscious – we are intelligent because we are conscious. So I’ll examine AI’s claim to intelligence first by trying to understand what consciousness and intelligence are and whether anyone in the hallowed halls of reason and erudition really has a grip on them. I’ll also kick the tyres of the fundamental model on which AI is built – the brain-as-computer model. Is it flawed and, if so, what is the implication for AI’s claim to building a functional simulation of that most mysterious facet of the brain – thought and intelligence?
I’ll even deconstruct the word “artificial”, which you may think a tad pedantic, but I think is worth doing because we are in new territory when it comes to making artificial products. Making an artificial abstract has added a whole new and confusing dimension to the hype.
In the process of doing this, I’ll try to form a view as to whether AI really is intelligent. In other words, do we have real AI? Ultimately, if I succeed in de-hyping AI, the question goes from: do we have real AI? to: what’s AI’s real game? Here is the framework for the whole discussion:
In the remainder of Part I, I’ll discuss the ‘artificial’ in Artificial Intelligence, and how it really is different this time. I’ll also try to understand, or understand why we don’t understand, consciousness, which I think is the font of human intelligence.
In Part II, we’ll define intelligence. We’ll take a look at the significance of a massive ChatGPT howler and then compare the way humans think (cognition) with the way AI ‘thinks’ (algorithms). We’ll examine the key function of intelligence and we’ll round off by discussing the problem with the brain-as-computer model on which AI is so obviously predicated.
In Part III, we’ll discuss two paradigms for judging whether AI is intelligent – the relativist paradigm and the absolutist paradigm. I think this explains a big part of why there is so much confusion and hype surrounding AI’s intelligence. We’ll then take a look at AI’s role in the West’s chimeric economy and round off with what I think is the meaning of AI, or certainly one meaning.
What is the ‘artificial’ in AI?
Much of the hype surrounding AI stems from people talking about it as though it has all the actual properties of the thing it’s trying to mimic. Which can’t be the case, for obvious reasons. This is further complicated by the fact that possibly for the first time in history, man (or woman) is using technology not just to transmit abstraction, but to try to create something that can independently recreate abstraction. We are trying to manufacture a process that manufactures thought.
Not knowing how or why we are conscious, we have nevertheless decided that we can re-create this consciousness in a machine. We are now in new and confusing territory because humans are trying to make thought itself originate outside our heads. This is big. It may be a fool’s errand, but, if so, it’s a big one. So, part of the process of de-hyping AI involves reminding ourselves that there is this word “artificial” preceding the word “intelligence”, and understanding why the artificialness really is different this time.
We also forget that thought has a biological origin. Whether it is partly or wholly biological is a debate we’ll get into later. But the crux of artificiality is that it refers to the man-made replication of the functionality of something that occurs in the natural world. By calling it artificial, the makers are admitting that they are trying to replicate a natural (occurring in nature) phenomenon. There is no such thing as an artificial car or an artificial kettle because these things do not occur in nature. You have either produced a car or a kettle, or you haven’t. A prosthesis, on the other hand, is an artificial body part which affords the functionality of the lost limb. Likewise, an artificial heart must succeed in pumping blood around the body to keep you alive, or it is not an artificial heart – certainly not one that you’d pay for!
Artificial – meaning made or produced by human beings rather than occurring naturally – is an adjective that must always be paired with a noun that describes something occurring naturally. And whether the two words combined have the power to signify something meaningful depends on whether the artificial thing delivers enough of the functionality of the real thing to serve as a practical substitute.
Admittedly, because the AI boffins are trying to create a functional simulation of intelligence, and because intelligence is an abstract, it may not matter that human thought and intelligence have a biological substrate. But, if there is something peculiar about thought that renders its biological origin important when we come to define intelligence, then it might matter.
Nor should we confuse “artificial” with “cloned”. A clone is an exact copy, both materially and functionally, of the naturally occurring thing. In the strict biological sense of the word, a successful clone is indistinguishable from the real thing. Since intelligence, while abstract, is derived to some extent from biology (specifically, but perhaps not limited to, brain biology), I think it’s safe to assume that we are some way off considering whether we can clone human intelligence without cloning a human. But because we are dealing with something that is abstract, it is that much easier for people to claim that a functional simulation of intelligence which exceeds human intelligence has been achieved, without having to rely on a clone. The claim may be valid but it is harder to measure because the yardsticks are themselves abstract.
I think the discussion of artificiality is a convenient juncture to introduce the idea of applying a relativist paradigm or an absolutist paradigm to the definition of intelligence.
Under a relativist paradigm, the degree to which the substitute mimics, as opposed to actually delivers, the functionality of the real thing determines whether we are prepared to grant it artificial status. The relativist paradigm thus introduces an element of deception. While this is understandable and to an extent forgivable – after all, abstract thought is being manufactured – it is problematic. With artificial limbs and hearts, it is a relatively straightforward matter to decide whether the artificial product deserves its status as a replica. One rates it by obvious considerations of functionality – does it work and, if so, how well does it work? You can more or less come up with a fixed score card on which the product either passes or fails. It is more difficult to pin down an abstract like intelligence, because its success or failure is more susceptible to the perception of the viewer. Placing it on shifting sand is not only tempting but inevitable. The question is: should we?
Whether thought or intelligence can be pinned down is another matter but that’s what an absolutist approach to AI tries to do. We don’t want it to be like beauty – something dependent on the eye of the beholder. That’s what feeds the hype! Absolutism demands that the definition of intelligence we use holds fixed for all seasons. This too is problematic because it runs the risk of trying to hold an artificial product to an unfairly high standard. After all, it is not purporting to be the real thing. Some degree of artifice, maybe even deception, must surely be permitted, but how much? What are we prepared to overlook when concluding whether AI is intelligent?
Consciousness – the source of human intelligence
The discussion of human intelligence is sticky enough, but it becomes a quagmire when you introduce the word “consciousness”. Philosophical musings about AI are not complete without inserting consciousness into the conversation, and I believe that’s because intelligence and consciousness are inextricably linked. Consciousness, or subjective experience, is not scientifically understood. Yet, if you believe, as I do, that we are intelligent because we are conscious, then it must follow that we don’t fully understand intelligence either.
But first let’s try to formulate a relationship between human intelligence and consciousness. After all, I am asserting that human intelligence derives its uniqueness from consciousness. If this is not true, we ought to skip any debate about consciousness and go straight to intelligence. Many AI theorists do just that in the hopes of solidifying the link between human intelligence and artificial intelligence. Restricting AI to a narrow definition of intelligence is an attempt to arrive at something that can be more readily transferred to a machine. In doing so, I believe the theorists come unstuck because adopting an intellectually honest definition of intelligence involves the incorporation of consciousness.
So what is consciousness? After a futile search for a lucid answer to this question, I chanced upon an article in Scientific American tantalisingly titled “What Is Consciousness”. At last! I thought. The hardest part of my essay is finally done. I was disappointed. The article devoted two sentences to a definition right at the start:
“Consciousness is everything you experience. It is the tune stuck in your head, the sweetness of chocolate mousse, the throbbing pain of a toothache, the fierce love for your child and the bitter knowledge that eventually all feelings will end.”
The author then spent the next two paragraphs explaining that he had decided a very long time ago to “set aside philosophical discussions on consciousness” because consciousness was a riddle wrapped in a mystery inside an enigma. Everyone’s time would therefore be far better spent “search[ing] for its physical footprints”. To assure the world that he and his colleagues mean serious business, the footprints have been given a serious name – the Neuronal Correlates of Consciousness, more of which in a minute.
Taking awareness of experience as the starting point, I see consciousness as the interplay of two types of awareness – the awareness of the inner workings of one’s own mind and the awareness of the world we interact with. These two awarenesses, inner and outer, on their own are insufficient to do justice to the concept of consciousness. However, the interplay between the two creates a whole which is greater than the sum of its parts.
We don’t just think. We are aware that we think. And we use language to express thought and thoughts about thought. Consciousness gives rise to a search for superficial and deep meaning. It creates the sense of separation of self from other. That embodies the creation of subjective experience, out of which arise two important by-products. The first is potentiality – the ability to grasp different possibilities in the future and to also reach back into the past, drawing on memory. The second, perhaps closely related, is choice or agency. Perceiving different possibilities creates choice. The more possibilities that are perceived, the greater the sense of agency.
Intelligence – whether you adopt a more restrictive version of it for machine ‘learning’ that combines memory, logical processing, and problem solving, or whether you adopt a more expansive one that includes cognition and adaptation – occurs within the boundary of consciousness. The two merge infuriatingly such that it’s extremely difficult to figure out where intelligence ends and consciousness begins. For example, you could say that one purpose of introspection (observing one’s thoughts) is to learn (intelligence), as encapsulated by that famous maxim about the unexamined life. Or you could say that the ability to grasp potential (consciousness of time) enhances adaptation (a goal of intelligence).
My purpose in discussing the complexity of consciousness and its relationship to intelligence and the brain is to attempt, even if only at a superficial level, to define the bar that AI is aiming for and to then decide whether it has functionally reached that bar.
Let’s turn to the question of whether science has a good grip on consciousness. You can know a thing by understanding its source.
The materialist viewpoint holds that the brain is the sole source of consciousness. There is only one small problem with this – it hasn’t yet been scientifically proven. The brain scientists who insist that the brain gives rise to consciousness are still looking for what they call the Neuronal Correlates of Consciousness. Science’s ‘best minds’, to borrow a phrase used by Jonathan Cook to describe science’s heroic climate alarmists, are working day and night to find these correlates. Permit me to channel the late great George Harrison to express what I think it’s going to take to succeed in this venture, and also to express how seriously we should take their efforts. It’s gonna take money, a whole lotta spending money… and a whole lotta precious time, to do it right.
They simply do not know precisely where consciousness resides in the brain but, “excitingly” (!), they’re working on a consciousness meter. Ultimately, they admit that they still don’t have “a satisfying scientific theory of consciousness that predicts under which conditions any particular physical system – whether it is a complex circuit of neurons or silicon transistors – has experiences.”
There are of course theories bouncing around, like the global neuronal workspace (GNW) theory that posits incoming sensory information is first “inscribed” onto a “blackboard” – an area of the brain yet to be precisely located – and then “broadcast” to multiple cognitive systems, making the information conscious, that is to say, making the individual aware of it. And because the brain scientists cleave to the brain-as-computer model, we should not be surprised to learn that “GNW posits that computers of the future will be conscious.” And as indicated by the hype described at the start of this piece, there are many who think we’re either there already or that ChatGPT5 is going to be the AI god we’ll be worshipping.
The widely accepted scientific dogma that brain matter = consciousness has the effect of rendering the distinction between consciousness and intelligence irrelevant. It’s all the same thing, they say, because it’s all in your brain. When you combine that dogma with the brain-as-computer model (which we’ll discuss in Part II), something quite complex and not understood by the scientific community is conveniently boiled down into something far simpler – brute digital computer logic. Except that what is still missing is an explanation of the mechanism by which this information processing translates into subjective awareness; in other words, we don’t know how biological-electromechanical circuits (or silicon-electromechanical circuits in the case of AGI) “give rise to general intelligence, reasoning capability, good judgement…creativity…the creative genius or intuitive leaps of Euclid’s invention of geometry, Da Vinci’s Mona Lisa, Beethoven’s Fifth Symphony, or Einstein’s E = MC².” [i]
In short, the mechanism behind our mystifying ability to simultaneously fathom and creatively interact with the material world around us while also being aware of the inner workings of our own minds – the whole gamut of subjective experience – remains unexplained.
While materialists search for the mysterious mechanism by which matter in the brain gives rise to subjective experience, central to their theory is that matter itself is not conscious. Here they encounter what philosophers of mind call the “hard problem” of consciousness – if the brain is matter, and matter is not conscious, how are we conscious?
This can get really complicated because you will recall that the materialist brain scientists who believe that consciousness arises in the brain are looking for the neuronal correlates of consciousness inside the brain. So they clearly believe in consciousness because they say they’re looking for it. But they only believe in it to the extent that it is a by-product of brain functioning. They hold that it’s a sort of illusion – a trick the brain plays on you. You could say they’re actually looking for the source of the illusion of consciousness. Which may be why they can’t find it.
Anyway, the reality is that the brain, while formed of matter, is capable of transcending time by enabling us to see possibilities in the future and reach back into its memory bank of the past. How does matter, specifically the brain’s grey goo, acquire this property?
Seeing into the future – not in the sense of prognosticating but rather in the sense of grasping potentiality and then making a choice between potentials – is but one of many essences of consciousness and intelligence. It’s what also gives rise to free will – choosing one potential future instead of another. This makes the hard problem of consciousness even more intractable for materialists. If matter is not conscious and our brains are just matter, the contradiction leads some material fundamentalists to deny the existence of consciousness outright. So there are two schools of materialist – those who think consciousness is an illusory by-product of brain matter and therefore worth hunting down if only to prove it’s an illusion, and those who see such a search as a total waste of time because consciousness can’t exist if matter is not conscious.
For the material fundamentalist, if consciousness does not exist, then nor does free will, supposedly derived from consciousness. There can be no recognition of potentiality – of possible futures – because that is an illusion or trick of the mind. So, really, there are no choices. You just foolishly think there are. The problem with that line of thinking is that they are asking you to consciously choose to adopt their materialist world-view and reject Rupert Sheldrake’s theories of non-local mind. And as Sheldrake adroitly points out, choice and free will do exist in the mind of the material fundamentalist, but only when you’re being asked to…erm…reject them in favour of no choice and no free will.
Be assured that the citadels of science are working on these problems. The linked SciAm article concludes that “this effort will take decades, given the byzantine complexity of the central nervous system”. In the meantime, pending their grand yet elusive eureka moment, they reserve the right to dismiss all other theories as woo woo. And, while formulating theories of how different parts of the brain work as “blackboards”, while others “broadcast”, they also reserve the right not to explain how people with only 5% of the brain matter (the mushy blackboards and broadcasters) of the average person manage to remain perfectly conscious, as we shall see in Part II.
There is another school of thought which posits that consciousness is mediated by the brain, like an antenna, rather than being solely derived from it. A growing number of quantum physicists and brain scientists not only take this viewpoint seriously, but also view this paradigm of consciousness as having the potential to be a scientifically provable phenomenon.
The mere fact that humans are trying to create intelligence outside of the human brain is an admission that it is not derived, at least not wholly, from brain matter. If thought originates solely in the brain, as materialists claim, how could a machine think? This question is a philosophical double-edged sword. On the one hand, it suggests that you can’t claim to be a materialist in matters of intelligence and then also claim that AI is intelligent. On the other hand, it opens up the possibility that if intelligence is not solely derived from matter, then perhaps AI really can be intelligent!
Ultimately, consciousness remains the unsolved human riddle among many. For that reason alone, you could argue that there can never be an AI that equals or surpasses human consciousness until the mystery of human consciousness is solved. After all, how can humans claim to build something predicated on a phenomenon we don’t understand? What are you building if you don’t understand, or even have, the blueprint?
I shall allow people with far more grey matter in their heads than me to sum it up:
“Science’s biggest mystery is the nature of consciousness. It is not that we possess bad or imperfect theories of human awareness; we simply have no such theories at all.” – Nick Herbert, PhD, author of Quantum Reality: Beyond the New Physics [ii]
“The scientific study of consciousness is in the embarrassing position of having no scientific theory of consciousness.” – Donald D Hoffman, cognitive scientist, University of California, Irvine. [iii]
The unconscious purpose of materialism is to seek certainty, not to arrive at the truth. Matter, with the aid of microscopes, is available to our relatively limited perception and, if matter is the cause of everything, it follows that everything must be explainable. We might not have explained everything just yet, but with materialism as the sole arbiter of truth, complete certainty is within our grasp, or so the thinking goes. Bottom line is: materialists aren’t great at coping with uncertainty. The mere fact that potentiality in the guise of ideas of the mind plays a role in the creation of matter tells you that matter is not all there is. Precisely what else there is, we don’t know, and perhaps never will. Perhaps we’ll never know because we ourselves are only a product of consciousness and therefore, as a tiny fragment of the whole, we cannot ever understand it. But we must try.
The ideas here are obviously speculation, not truth. But this kind of speculation is, in my opinion, more scientific than materialist fundamentalism because it invites exploration of uncertainty, and that’s a big part of my understanding of what science is.
We’re certainly not yet able to explain consciousness by reference to brain matter alone. And yet the goal of Artificial General Intelligence (AGI) is to create a functional simulation of the intelligent human brain. But we humans don’t understand the human brain, let alone consciousness. Puzzlingly, Musk and other tech billionaires are proclaiming the existence of AGI, when they should also know that we don’t even have adequate theories, let alone proof, of how intelligence arises in the brain.
When you distil it like that, you begin to understand the source of hype in the AI business. In effect, they’re saying, with absolute certainty, that they have succeeded in bottling up intelligence in a computer when all the intelligence experts say they don’t yet fully understand intelligence.
One tech industry approach to the consciousness conundrum is to brush it under the carpet and stay tightly focused on a very narrow view of intelligence. AI is then based on a mirage of what they think or say intelligence is. There is therefore a powerful incentive to talk about intelligence in very limited or disingenuous ways, serving the hype they’re desperately trying to create as they carve out a lucrative niche as the new robber barons of the Fourth Industrial Revolution.
To sum up the whole mess: I believe we are intelligent because we are conscious, notwithstanding that we don’t know what the source of consciousness is. That absence of a universally agreed understanding of consciousness (and therefore intelligence) may be why AI is neither conscious nor intelligent now, and may never be. This is not to conclude yet on whether AI is intelligent because in Part II, we will humour AI’s promoters by hiving off intelligence from consciousness to see if a definition of intelligence can be transferred to AI. At that point, we may find that AI is intelligent after all.
[i] Joseph Selbie, Break Through the Limits of the Brain, New Page Books, 2022, Chapter 2, Pg. 34
[ii] Joseph Selbie, Break Through the Limits of the Brain, New Page Books, 2022, Chapter 2, Pg. 38
[iii] Joseph Selbie, Break Through the Limits of the Brain, New Page Books, 2022, Chapter 2, Pg. 39