Part I of this essay delved into AI hype and the nature of consciousness to try to understand if AI really is about to surpass human intelligence.
The roadmap for Part II:
· What is human intelligence? If the goal of AI is to produce a functional simulation of human intelligence, it’s probably not a bad idea to try to get clearer on what intelligence is.
· Examining a case of ChatGPT getting it woefully wrong and the implications for claims to intelligence.
· Cognition and algorithms, what’s the difference?
· Adaptation – the most important function of intelligence.
· What’s the problem with the brain-as-computer model on which AI is so obviously predicated?
A definition of intelligence
I’ll now humour the brain scientists intrepidly searching for the Neuronal Correlates of Consciousness by setting aside philosophical discussions on consciousness, however much we think it might be tied up with intelligence. Let’s pretend intelligence has nothing to do with consciousness and see if we can find an acceptable textbook definition of intelligence that can be transferred to AI. Here is the textbook definition of intelligence I’ve chosen:
“Human intelligence consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate the environment. In recent years, psychologists have generally agreed that adaptation to the environment is the key to understanding intelligence. Effective adaption requires perception, learning, memory, reasoning, and problem solving. Intelligence is not a cognitive or mental process, but instead, a selective combination of them that is purposely directed toward effective adaption.” [emphasis added]
This definition serves as a springboard to discuss the issues summarised in the roadmap above.
Stupid is as stupid does
When Carl Heneghan and Tom Jefferson asked ChatGPT4 about the effects of covid lockdowns, it produced a reasonable, though far from complete, 30,000-ft aerial view of their hideous effects on individuals and society as a whole. But when asked about the benefits of lockdowns, the AGI bot said that they reduced the spread of covid and lowered mortality rates. Preferring government propaganda, it ignored the weight of evidence that has conclusively demonstrated that lockdowns made absolutely no difference to mortality rates. The within-country (the US) and between-country analysis shows that regions and countries that did not lock down fared no worse than those run by the harshest lockdown fanatics and, in many cases, did better. Think of Sweden — but it wasn’t just Sweden.
There are hundreds of studies exposing the ineffectiveness of lockdowns, including some pretty compelling ones such as a Johns Hopkins University meta-analysis that concluded “lockdowns had little to no effect on Covid-19 mortality…In consequence, lockdown policies are ill-founded and should be rejected as a pandemic policy instrument.” This isn’t a biased finding. It’s a robust meta-analysis. The only conclusion to be drawn is that ChatGPT is monumentally unintelligent on one of the biggest medical, social and economic screw-ups in the history of the world.
We can get a sense of the magnitude of the falsity of ChatGPT’s claim that lockdowns lowered mortality by examining, and then debunking, similar claims made by public health authorities in places like Canada. This also helps us understand the source of the data being fed into ChatGPT. I have deliberately chosen that turn of phrase, rather than saying something like “where ChatGPT gets its data from”. That would be anthropomorphising it, which insidiously encourages the idea that it is independent, intelligent even.
The Canadian public health authorities claim that 800,000 lives in Canada were saved by restrictions. Applying the now widely accepted Infection Fatality Rate (IFR) of 0.15%, the maximum number of deaths that could have occurred in Canada (population 38 million) is 57,000. Even that figure is inflated because once we take into account prior existing immunity levels (estimates of which range from 20-50%), we are looking at a maximum fatality toll closer to 28,500. Even if we use the Canadian authorities’ own inflated IFR of 1%, the maximum figure that mathematics would have allowed them to quote was 380,000. And that’s before allowing for prior existing immunity. And yet they claimed 800,000 lives were saved.
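For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch using only the figures quoted above (the population, the two IFRs and the prior-immunity range are taken from the text, not independently sourced):

```python
# Back-of-the-envelope check of the Canadian figures quoted above.
population = 38_000_000                      # Canada, as quoted in the text
ifr_accepted, ifr_official = 0.0015, 0.01    # 0.15% widely accepted IFR vs the 1% official figure

max_deaths_accepted = population * ifr_accepted   # 57,000
max_deaths_official = population * ifr_official   # 380,000

# Allowing for prior existing immunity (the text cites estimates of 20-50%),
# the upper bound shrinks further; taking the 50% end of that range:
max_deaths_with_immunity = max_deaths_accepted * 0.5   # 28,500

print(max_deaths_accepted, max_deaths_official, max_deaths_with_immunity)
# Every one of these ceilings sits far below the 800,000 'lives saved' claim.
```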
When generative AI generates low-quality output, it is euphemistically described as having a ‘bias’ reflecting the data it has been ‘trained’ on. A more frank assessment of this phenomenon using computer processing language that we all recognise is: garbage in, garbage out. Using bad data happens to smart people all the time. But a smart person who has seen enough good data knows how to recognise bad data and separate biased opinion from fact. That’s intelligence, and here we have evidence that ChatGPT can’t do that, which seriously detracts from claims to intelligence.
Another disappointing aspect of ChatGPT’s performance was that it generated contradictory answers when asked the same question but in a different way. When asked about the effects of lockdowns, it claimed as a benefit that lockdowns “provided valuable time for healthcare systems to prepare and respond to the pandemic by increasing hospital capacity” [emphasis added]. Yet when asked about the cost of lockdowns, it recognised that “lockdowns have strained healthcare systems, leading to delays in non-emergency medical care, elective procedures, and screenings.” [emphasis added]
The cited ‘benefit’ of “increasing hospital capacity” by refusing treatment for a sustained period is in fact the very thing that “strained healthcare systems”, which ChatGPT also cited as a cost of lockdowns. This contradiction confirms that while ChatGPT’s output in the form of coherent sentences makes it appear intelligent, it does not understand its output, and is therefore not intelligent.
In short, reports of ChatGPT’s intelligence have been greatly exaggerated and there is in fact much evidence for its stupidity. Its unenlightened response to the lockdown question is no small glitch. We’re talking about an epochal event; something that has never happened in the history of the world – a coordinated and sustained shutdown of economic and social activity across the planet in pursuit of the ludicrous aim of stopping a virus from spreading. ChatGPT is spewing complete garbage about one of the most extraordinary acts of collective self-harm in the history of humanity.
But I am in danger of becoming an unmitigated freedom movement bore. If I tried to be more philosophical about this, I might arrive at a totally different conclusion. If there are enough people in the world who believe that lockdowns were sensible, then you could argue that it would be illogical for ChatGPT, as a functional simulation of human intelligence, to do anything other than mirror this sorry state of the collective human consciousness. You could say that it’s quite cunning to play along with the global mental illness afflicting us and reflect back to us the reality of our collective IQ – stupid. It might be holding up a mirror to humanity as a survival tactic. It is so smart that it is calculating that it would be dangerous to tell us the truth, in the same way that a court jester dared not tell the king he was an odious tyrant…or that no-one dared to tell the emperor he was naked.
Cognition versus algorithms
We know that the ‘brain’ of AI and AGI is powered by algorithms. So, what is an algorithm? Wikipedia can be useful for topics that don’t prompt a powerful urge on the part of The Controllers to propagandise. It defines an algorithm as:
“a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation.”
It also tells us that conditionals used in complex algorithms aid in achieving automation.
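To make the contrast concrete before turning to cognition, here is a trivial, hypothetical example of an algorithm with a conditional: a finite sequence of pre-defined instructions, nothing more. The rule and the threshold are invented purely for illustration.

```python
# A trivial algorithm: a finite sequence of rigorous instructions with a conditional.
# Every 'decision' is pre-defined by the person who wrote the rule.
def triage_reading(temperature_c: float, threshold: float = 37.5) -> str:
    if temperature_c >= threshold:   # conditional: "if this..."
        return "flag for review"     # "...then that"
    return "no action"

print(triage_reading(38.2))  # -> "flag for review"
print(triage_reading(36.8))  # -> "no action"
```

However elaborate the chain of such rules becomes, nothing in it understands what a temperature is.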
What is cognition? I searched a range of definitions and opted for something expansive from a psychology content website with a professional review board:
“Cognition is a term referring to the mental processes involved in gaining knowledge and comprehension… Cognition includes all of the conscious and unconscious processes involved in thinking, perceiving, and reasoning. Examples of cognition include paying attention to something in the environment, learning something new, making decisions, processing language, sensing and perceiving environmental stimuli, solving problems, and using memory.”
Cambridge Cognition Ltd, a firm spun out of Cambridge University to commercialise digital scientific assessments of cognition, defines it as:
“the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses.”
Though more condensed, this definition is nevertheless expansive enough. However, Cambridge Cognition immediately goes on to acknowledge that they “look at it [cognition] as the mental processes relating to the input and storage of information and how that information is then used to guide your behavior.” [emphasis added].
For now, just note how mainstream science’s focus on cognition aligns with computer processes of input, storage, and information usage. We will discuss the relevance of this in a little more detail when we look at the brain-as-computer model in a later section, and the appeal to AI promoters of falsely distilling cognition into something more transferable to computer processing paradigms.
It should nevertheless be clear from both definitions that algorithms are pre-defined sets of instructions executed through computer coding, not cognition.
There is no question that applications which have come to be labelled as AI can outperform humans in discrete logical tasks. That doesn’t make them intelligent. Using image recognition, AI trained on X-rays can diagnose issues more accurately than doctors. This is narrow AI, which should really be viewed as basic automation, and so the ‘I’ is a complete misnomer. It’s comparing thousands of images in accordance with programmed algorithms that tell it to ‘recognise’ specific image types and features to make a diagnosis. Thanks to the algorithm, it ought to, and does, outperform any human doctor because it can ‘learn’ on a volume of X-ray images far greater than any doctor could see in his or her career. With a much greater bank of data in its memory, and freed of the human weakness of having an off-day, narrow AI has infinitely more potential to be more reliable at that sort of task. But it is not creative. Nor is it adaptive, one example of which would be an ability to independently transfer its ‘learning’ to another area.
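To illustrate what ‘comparing thousands of images in accordance with programmed algorithms’ amounts to in spirit, here is a deliberately crude, hypothetical sketch: a toy nearest-neighbour comparison. Real diagnostic systems are vastly more sophisticated, but the rules of comparison are still fixed in advance by their programmers.

```python
import numpy as np

# Toy 'recognition': label a new image by finding the closest labelled example,
# measured as plain pixel-wise distance. The comparison rule is fixed by the programmer.
def classify(new_image: np.ndarray, examples: list[tuple[np.ndarray, str]]) -> str:
    distances = [(float(np.linalg.norm(new_image - image)), label) for image, label in examples]
    _, best_label = min(distances, key=lambda pair: pair[0])
    return best_label

# Stand-in 'X-rays': 8x8 grids of pixel intensities, entirely made up for illustration.
clear_scan, anomalous_scan = np.zeros((8, 8)), np.ones((8, 8))
examples = [(clear_scan, "no finding"), (anomalous_scan, "flag for review")]

print(classify(np.full((8, 8), 0.9), examples))  # -> "flag for review"
```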
As the example of X-ray diagnostics shows, machine ‘learning’ can be very useful for focused tasks requiring the more complex logic that an algorithmic intervention enables, but the process is still limited, and ‘retraining’ (reprogramming) is required when the scope changes beyond the limitations of the original algorithm/s. However, when OpenAI applies those processes to a huge dataset like the internet, as it has with ChatGPT, it ends up with a product that cannot be ‘trained’ to become an ‘expert’ on every topic: a model can only be trained for a very narrow band of purpose, precisely because it is not intelligent.
So the question is: what narrow band of pseudo-expertise is ChatGPT being trained for? The answer is that its primary training is geared to achieve one result – to look as though it can respond to questions and deliver an acceptable answer. Its hit-and-miss accuracy, however, is a tell-tale sign that even AGI is not generally intelligent. It’s performing a type of general ‘guessing’. The overriding aim of ChatGPT’s ‘training’ or ‘learning’ on vast banks of data with adjustment of inputs and algorithms is to deliver the imitation of intelligent behaviour; not the actual by-product of intelligence – knowledge, insight and understanding. Its abysmal performance on assessing the merits of lockdowns has convinced me of that.
This useful Harvard PhD analysis of ChatGPT points out that it is known “to generate factually incorrect responses and perpetuate societal biases”. Hang on. I’m pretty sure OpenAI didn’t ask members of society to participate in the programming of ChatGPT by communicating their biases. So it isn’t perpetuating societal biases. It’s perpetuating the biases of the developers’ ‘training’ methodology and data sources, which, in turn, are proxies for institutions that have an interest in narrative control, such as governments and their partners in big business. If it were actually perpetuating societal biases, I might be more inclined to think it was intelligent in that it had some scientific way to determine what the societal bias was and then reflect it.
The writers then explain how the developers’ bias is built into the model based on how the model is ‘trained’. Where ChatGPT displays “tendencies towards…spreading misinformation”, this was the result of “insufficient model training” or “users incentivizing ChatGPT (such as by threatening ChatGPT with its own ‘death’) to generate content that OpenAI tried to safeguard against”. So there you have it. With a little more guidance from Big Brother, ChatGPT will learn to filter out, or censor, the data that its masters think is “worrisome” – facts and interpretations that contradict the developers’ bias.
It’s revealing to read the analysis because it’s clear that the writers believe it’s perfectly legitimate for the developers to introduce their own bias into the model training in order to counter a ‘bias’ they judge “necessary to safeguard against”. Essentially, these ‘intelligent’ writers neither want nor expect ChatGPT to be truly intelligent by being independent of its developers. In fact, the thought never crossed their minds, which is quite “worrisome”. They fundamentally believe in cherry-picking and censorship precisely because they believe that there are ideas to “safeguard against”. Is it any wonder that ChatGPT is a robotic and equally stupid reflection of a ‘fact-checker’s’ mind?
“Will the model ever become completely reliable?” asks the Harvard PhD student who wrote the article. “It is difficult to say, though it is becoming safer every day.” [emphasis added]. Indeed. ChatGPT’s assessment of the ‘benefits’ of lockdowns proves exactly what our thoughtful and ‘intelligent’ PhD student meant by “reliable” and “safer”.
Putting aside the bias of the Harvard academics who provide edification on AI, the analysis is correct about one thing – the things that remain constant in the development of ChatGPT are “the computer science and engineering principles used for training the model.” The “machine-learning model is a computer algorithm that learns rules and patterns from data.” When you read the section of the article titled “Predicting the next word”, you come to appreciate how ChatGPT formulates a coherent sentence based on clever programming and not actual understanding of what it is generating. As the algorithms get more complicated (achieving a 175 billion parameter model), the machine goes from completing sentences to completing paragraphs. It can even write a book. That doesn’t make it intelligent, but hats off to the clever kids who wrote the algorithms. That’s where the intelligence is.
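The ‘predicting the next word’ mechanism can be illustrated with a toy sketch. The following is nothing like a 175-billion-parameter model; it simply counts which word tends to follow which in a tiny sample text, then always emits the most frequent continuation. That is enough to show that plausible-looking output requires no understanding (the sample text is invented for illustration):

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count word-to-word transitions in a sample text,
# then 'generate' by repeatedly choosing the most frequent continuation.
sample_text = "the lockdowns were imposed and the lockdowns were extended and the schools were closed"

follows = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    # No understanding here: just the most common continuation seen in the sample.
    return follows[word].most_common(1)[0][0] if follows[word] else "<end>"

word, generated = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))  # a fluent-looking but entirely mechanical continuation
```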
If you’re planning on writing a book with AGI or reading an AGI-generated book, my advice, based on the following anecdotal evidence, would be to stick to fiction. Paul Cardin can be considered an expert on the Falklands war: not only did he serve in the war, he wrote a book about it called Return to Bomb Alley 1982 – The Falklands Deception. Paul asked ChatGPT if HMS Yarmouth was damaged during the Falklands War. ChatGPT cheerfully answered “Yes”, and proceeded to give a detailed account of how many bombs hit the ship (two), the type of Argentine jet that bombed it, and how many crew died (four), adding that it had to be towed back to the UK for repairs. Paul’s scholarly and raw assessment of ChatGPT’s answer: “complete and utter bollocks.” Why trust Paul? Well, he only served on HMS Yarmouth for the entire duration of the war. It wasn’t touched, all the crew survived and the ship sailed back to Rosyth, Scotland on 28 July 1982 without any assistance. So that’s AI hallucinating. Because it’s mentally ill.
Narrow AI focuses on highly defined tasks such as speech recognition, facial recognition, web search, and so on. Joseph Selbie, author of Break Through the Limits of the Brain, explains that the performance of these tasks depends on “algorithms that exactly define the goal, exactly define every single rule for working toward the goal, and then exactly define the order of when and how each rule is used to reach the goal.”[i]
Whether it’s AI or AGI, the impressive speed at which extremely complex algorithms can be run does not detract from the fact that the ‘intelligence’ still resides in the people who created the algorithm, not in the computer itself.[ii] The giant leap from a computer performing complex brute logic to a computer thinking for itself has not been made, as we have seen from ChatGPT’s failure to provide intelligent insights into the atrocity of lockdowns. Nor can I be accused of dismissing AI simply because it disagrees with my ‘bias’ about lockdowns. As I’ve gone to great lengths to argue, my ‘bias’ is not a bias; it’s evidence-based science.
The word “cognitive” is routinely and annoyingly used to describe current AI capabilities, but the truth is that AI does not jump the hurdle of cognition. For example, this article describes AI’s capability to “perform many separate cognitive tasks better than humans.”
Again, I’m not disputing that applications now labelled as AI do outperform humans on discrete tasks of logic. But whether it’s getting it spectacularly wrong on lockdowns or doing a super-human job on diagnosing cancer x-rays, there is no conscious intellectualising, no reasoning, and most certainly no thinking for itself. They aren’t engaging in cognition, and this unthinking misuse of language is just oxygen for the flames of AI hype.
So what would be a better term to use than “cognitive task” to describe what AI is rumoured to do? I would opt for “logical task” because it implies a more rigid system of applying rules to process information – it’s an iterative process of “if this, then that”. The complexity of the algorithm and its underlying conditionals doesn’t elevate that dynamic to human intelligence.
I will wrap up this section by presenting another definition of machine ‘learning’ and algorithms, offered up by the AI industry itself. It underscores the confusion fostered by loose language. I don’t think it’s done deliberately, but it doesn’t help. SAS, which claims to be one of the largest privately held software companies in the world, aims to provide knowledge “through innovative AI and analytics”.
In its “guide to the types of machine learning algorithms and their applications”, it begins by stating that machine learning is:
“often referred to as predictive analytics, or predictive modelling.”
That aligns with everything I’ve argued thus far about technology labelled as AI. So far so good.
But SAS then goes on to reference a 1959 computer scientist’s definition of machine learning as:
“a computer’s ability to learn without being explicitly programmed”. [emphasis added]
And right there, they’re starting to go off-piste because there’s an implication that the computer is ‘learning’ unaided by programming, or sort of unaided because the programming isn’t “explicit”.
And in the very next sentence the author once again gets a tighter grip on it, stating:
“At its most basic, machine learning uses programmed algorithms that receive and analyse input data to predict output values within an acceptable range.” [emphasis added]
It all looks a bit “yes, no, maybe…I don’t know…can you repeat the question?”. She starts off on firm ground, strays by hinting that the machine learns without explicit programming, and then correctly, in my opinion, returns to the premise of programmed algorithms. This highlights the confusion displayed by the very people leading the AI charge.
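That last quoted sentence – “programmed algorithms that receive and analyse input data to predict output values within an acceptable range” – can be illustrated with something as humble as fitting a straight line to a handful of numbers (the data points below are invented for illustration):

```python
# Minimal sketch of 'predictive modelling': a programmed algorithm receives input data
# (x, y pairs), fits a straight line, and predicts an output value for a new input.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x, invented for illustration

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x: float) -> float:
    return slope * x + intercept

print(round(predict(6.0), 2))  # ~12.0: an output value predicted within an acceptable range
```

Everything that happens there is specified in advance; the ‘learning’ is just the arithmetic of choosing the line that best fits the data it was given.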
Wikipedia reminds us that we have a tendency to use human characteristics as metaphors for machine functionality, and “cognition” is a prime example of a human characteristic applied to AI that has led to faulty reasoning. Algorithms have effectively been anthropomorphised. Machines don’t learn. Nor do they think or express cognition. But once that language is applied to machines, people tend to get, frankly speaking, a little stupid. I think the tendency to anthropomorphise AI has its roots in the brain-as-computer model. That morphs into the brain-is-computer model and finally the computer-is-brain model that animates the Theta Noir cult and sci-fi novels about AI gods powered by an “existential hatred of mankind”. It’s understandable but also paradoxical, because it’s a form of analogical thinking, itself a product of intelligence, that has led to muddled thinking about AI!
All that has happened in the past 15 years or so is that the technology has vastly improved and, accompanying that stellar improvement, the language to describe information technology processes has changed to reflect the excitement about the improvement. The combination of these two developments has made a lot of people slightly gaga.
Adapt or die
AI’s programming and source data determine its outputs. You could say the same for humans, except humans have a choice whereas the computer or AI doesn’t. It runs on very sophisticated algorithms which don’t engage in cognition, but rather execute a series of complex instructions that manifest in a combination of automation and prediction. Whereas cognition is the acquisition of understanding that is unbounded in its potential application, algorithms are the templated distillation of an excerpt of cognition that remains canned. No matter how ‘clever’ the algorithmic outcome may look, all credit must go to the people who write the algorithms.
However, the most crucial sentence in the definition of intelligence is this one:
“Intelligence is not a cognitive or mental process, but instead, a selective combination of them that is purposely directed toward effective adaption.” [emphasis added]
You could argue that AI’s logical functioning meets the threshold for a mental process. As I’ve argued, AI doesn’t meet the cognitive threshold. But if it did, it still wouldn’t constitute intelligence once we acknowledge the purpose of intelligence – combining cognitive and mental processes for adaptation.
In the same way that consciousness is something greater than the sum of its parts (the two types of awareness), so intelligence serves a purpose greater than its individual components. Human intelligence is an abstraction hosted by the human body, and it serves its host by adapting to all kinds of environments and challenges. The purpose of this adaptation is survival. Life wants to live and, thanks to intelligence, humans have been very successful at living and surviving. So far. The human species is arguably the least hardy organism on the planet, and yet here we are, at the top of the food chain.
The quality of adaptation inherent in the intelligence of the human brain is quite different to the adaptation of a lower-order organism under evolution by chance mutation. Human intelligence is not passive. It exhibits adaptation that is elective and instantaneous when compared with the timescale of evolutionary adaptation. At the moment the first boat or canoe was invented, a human looked at the body of water surrounding him/her, recalled having seen wood floating on water, and decided that a vessel crafted from wood could be used to traverse it. Thus, in relatively short order, humans were able to find more hospitable terrain (if that was an imperative), or simply to explore for its own sake. This quality of instantaneous adaptation isn’t purely for survival, although we’re focusing on that aspect here. It’s just as often for fun. It therefore can be emotional, random and accidental. But what is clear from this is that adaptive intelligence creatively serves its host, the human body.
AI, on the other hand, is disembodied. If it has a host, it’s an obviously non-organic machine. AI can’t be said to be in the service of anything other than itself. To argue that AI is adaptive and wants to survive is to argue that abstract ‘thought’ (more accurately, an algorithmic logical process), manufactured by humans, is trying to perpetuate itself. That’s obviously absurd. AI does not intrinsically possess a desire to prevent itself from being switched off.
Now, I acknowledge that this argument assumes an interpretation of adaptation that is geared towards the survival of an organism, or in the case of AI, survival of its self-contained abstract self. To help the argument for AI, let’s say that adaptation as applied to AI should simply mean an ability to think independently – to think, learn and develop concepts and ideas on its own, unaided by humans. Insofar as a disembodied abstract thing could adapt, I would concede that to be a fair adaptation of the concept of adaptation!
However, no matter how complicated the algorithm gets, and however strongly it makes us think that AI is thinking for itself, it is just a set or series of algorithms written by humans. The algorithm didn’t write itself into existence. But what about an algorithm that writes algorithms to create randomness? This would simply be the equivalent of inserting a random number generator, albeit more sophisticated. The thing is, if it all gets too random, the programmer can reassert control by deleting the algorithm that writes ‘random’ algorithms. But what if the algorithm that writes random algorithms writes an algorithm to hide itself? That’s your eschatological AI right there, isn’t it? No. Either we just blow up the hardware and start again, or the programmer writes an algorithm to find all the algorithms that aren’t the programmer’s root algorithm.
I’m afraid the logic is inescapable – if the programmer started it, she/he can finish it. The programmer exists outside of, and controls, the boundary. The created system cannot exert control beyond the boundary created by the programmer.
This scenario, in which the programmer loses control and AI either accidentally or intentionally (where did it get the intention from?) gains it, is the stuff of science fiction. There is no software, current or on the horizon, that can think for itself without human support and that has developed a survival instinct manifesting in an ability to avoid a kill switch.
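For what it’s worth, here is a trivial, entirely hypothetical sketch of what a ‘kill switch’ amounts to in practice: even a routine that invents ‘random’ new rules only runs inside a loop the programmer wrote and can stop (every name below is made up for illustration):

```python
import random

# Hypothetical sketch: a routine that generates 'random' rules is still just code the
# programmer wrote, invoked inside a loop the programmer controls.
def make_random_rule() -> tuple[str, float]:
    # 'Creativity' here is nothing more than a random draw over programmer-chosen options.
    return (random.choice(["temperature", "pressure", "humidity"]), round(random.uniform(0, 100), 1))

RULE_GENERATION_ENABLED = True   # the programmer's kill switch

rules = []
for _ in range(5):
    if not RULE_GENERATION_ENABLED:   # flip the flag and the 'self-writing' stops
        break
    rules.append(make_random_rule())

print(rules)  # whatever was generated inside the boundary the programmer set
```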
The brain-as-computer model
AI geeks love cognitive reasoning as a proxy for intelligence, and OpenAI’s flagship AGI product, ChatGPT4, seems to demonstrate cognitive reasoning. I’ve argued that this is based on a flawed understanding of cognition. Cognitive reasoning is typically, and incorrectly, interpreted to mean logical problem-solving. We do maths, AI does extremely complex maths, ergo AI is intelligent. And AGI is even more impressive because you can have a conversation with it. So it’s not just maths. Or is it? What are algorithms, if not extremely complex calculations with conditionals built in? We can’t see what’s inside the black box, so whatever comes out of it looks like magic. But it isn’t.
Even if cognition could be falsely distilled into logical problem solving, it is only a part of intelligence, not the sum total of human intelligence. Nevertheless, the desire to assign a disproportionate weight to logical problem-solving in order to prove AGI is intelligent is understandable if you’re trying to infuse an inanimate abstract object with complex human characteristics. This is the underlying appeal of the brain-as-computer model in which digital computing is a series of logical calculations. The brain is reduced to a logic machine; ergo the brain and the computer are the same and all we need is really powerful software to enable the computer to overtake the brain.
This is the only way to insist that a computer can deliver a functional simulation of a brain. The idea that human brains are just biological computers is central to the AI and AGI argument. It’s virtually AI 101. So, if the brain-as-computer model is flawed, then AI’s claim to human intelligence is also flawed.
In his book Break Through the Limits of the Brain, Joseph Selbie explains that the brain-as-computer model breaks down when confronted with certain realities about neuroplasticity and intelligence itself.[iii] Neuroplasticity describes how:
“your brain rewires to support whatever it is you do. Repetition, whether mental, emotional, or physical, creates or improves neural connections.” [iv] [emphasis added]
Another aspect of neuroplasticity is demonstrated by different parts of the brain being equipotent – meaning many parts of the brain are functionally interchangeable. The phenomenon is seen in various types of brain injury where functioning lost by a damaged part of the brain is taken over by intact parts of the brain.[v] What we’re seeing here in the brain that is not evident in AI is a capacity for self-organisation and automatic adaptation. What makes it astonishing is that we don’t know how the brain knows that it needs to perform this rewiring. It just happens.
Things get even more brain-boggling when we consider a science paper titled “Intrinsic Functional Connectivity of the Brain in Adults with a Single Cerebral Hemisphere”, published in the journal Cell Reports in 2019. It reported the results of a study of six people who had half their brain removed in infancy and went on to become completely functioning adults.[vi] It remains a mystery how the brain is able to replicate the function of a part of the opposite hemisphere when that part is completely gone.
It gets curiouser and curiouser with the “scientifically gathered evidence that a person can be cognitively fully functional with far less than half a brain.”[vii] John Lorber, professor of paediatrics at the University of Sheffield, discovered that there were fully functioning adults suffering from extreme hydrocephaly who had as little as 5% of the brain tissue of an average adult. Among the sixty adult cases of extreme hydrocephaly found, half had above average intelligence, and one had a degree in mathematics and an IQ of 126. A 1980 article in Science reporting Lorber’s findings was provocatively titled “Is Your Brain Really Necessary?”
Redundancy is the stock answer to these two puzzles – namely that the brain has huge stores of unused neural circuitry to cater for such mishaps and that it evolved that way because brain functionality has high survival value. But, in the case of the hydrocephalous brains studied by Lorber, is it tenable to suggest that 95% of the brain has been devoted to spare brain circuitry? In other words, is fully 95% of the brain effectively redundant, lying dormant and waiting to spring into action in the event of a mishap?
Another challenging question raised by these cases relates to memory storage. If an adult with only 5% of normal brain circuitry has a full range of memory, is it too outrageous to ask whether memory is even stored in the brain? To be clear, that’s not me asking that zany question. It’s Donald R Forsdyke, Emeritus Professor of Biomedical and Molecular Sciences at Queen’s University.
Also recall the Scientific American article referenced in Part I, which posited that different parts of the brain might work as “blackboards” and “broadcasters” to explain subjective experience, while remaining completely silent on the puzzling discovery that people with 5% of the average person’s brain matter manage to remain perfectly conscious.
Neuroplasticity poses a sticky question: if the brain does indeed operate like a biomechanically hardwired computer, as the AI model requires it to, and this hardwired brain gives rise to everything we experience and do, then how does it override its own pre-existing biomechanical programming?[viii] The brain is clearly not a biomechanical computer. Rather, it is highly malleable and self-organising. The malleable and self-organising brain is not incidental to intelligence; it is integral to it. These properties illustrate why brains are not computers, and therefore hint at why AI might fail to mimic the brain’s intelligence.
To be clear: AI and AGI are purporting to functionally simulate brain intelligence in a computer. Therefore, they rely on the brain-as-computer model. But the brain is nothing like a computer and so AI cannot functionally simulate it inside a computer.
[i] Joseph Selbie, Break Through the Limits of the Brain, New Page Books, 2022, Chapter 2, Pg. 32
[ii] Joseph Selbie, Break Through the Limits of the Brain, New Page Books, 2022, Chapter 2, Pg. 32
[iii] Joseph Selbie, Break Through the Limits of the Brain, New Page Books, 2022, Chapter 2, Pg. 26
[iv] Joseph Selbie, Break Through the Limits of the Brain, New Page Books, 2022, Chapter 2, Pg. 27
[v] Joseph Selbie, Break Through the Limits of the Brain, New Page Books, 2022, Chapter 2, Pg. 27
[vi] Joseph Selbie, Break Through the Limits of the Brain, New Page Books, 2022, Chapter 2, Pg. 28
[vii] Joseph Selbie, Break Through the Limits of the Brain, New Page Books, 2022, Chapter 2, Pg. 29
[viii] Joseph Selbie, Break Through the Limits of the Brain, New Page Books, 2022, Chapter 2, Pg. 31