The WatchTower: 23rd Edition
Welcome to the 23rd edition of the WatchTower! In this edition, we discuss the latest version of the real-time object detection model YOLO and explore philosopher and computer scientist Bernardo Kastrup’s views on why AI systems will never be conscious.
📰 Featured in This Edition:
YOLOv10
AI Will Not Be Conscious
YOLOv10
A new version of YOLO (You Only Look Once), version 10, has recently been rolled out. Never heard of it? YOLOv10 is a real-time object detection model capable of detecting objects and drawing bounding boxes around them in images or videos. Its ability to run predictions with low latency makes it powerful and well suited to applications like autonomous vehicles.
How YOLO works under the hood
For simplicity, we will introduce the process used by earlier versions of YOLO.
1. The image is resized to fit the YOLO model architecture and then divided into an N×N grid of equal-sized cells.
2. For each grid cell, the model predicts bounding box coordinates together with a confidence score and a probability for each object class.
3. The predictions from step 2 contain many overlapping bounding boxes, because an object usually spans multiple grid cells. Intersection over Union (IoU) is then calculated to keep only the relevant predictions, and Non-Max Suppression (NMS) is applied to discard overlapping boxes with lower confidence scores.
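The IoU and NMS steps above can be sketched in a few lines of plain Python. This is a simplified illustration, not YOLO’s actual implementation; boxes are assumed to be (x1, y1, x2, y2) corner coordinates:

```python
def iou(a, b):
    # Intersection over Union of two boxes given as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    # Visit boxes from highest to lowest confidence; keep a box only if
    # it does not overlap too much with any box already kept.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep
```

With two heavily overlapping boxes and one far away, `nms` keeps the highest-scoring box of the overlapping pair and the isolated box.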
What’s new about YOLOv10
NMS-Free Training Strategy with Dual Label Assignment
One key feature of YOLOv10 is dual label assignment for NMS-free training. Instead of the traditional approach of filtering predictions with non-max suppression, YOLOv10 combines one-to-many and one-to-one matching during training to achieve more efficient and accurate labeling. This allows the model to maintain competitive performance at low latency, making it suitable for real-time applications.
Lightweight Classification Head
YOLOv10 uses a lightweight classification head with depthwise separable convolutions to reduce computational load without sacrificing performance. Additionally, the model incorporates spatial-channel decoupled downsampling, which splits spatial reduction and channel transformation into separate operations. This technique minimizes information loss and further reduces the computational burden.
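A quick back-of-the-envelope calculation shows why depthwise separable convolutions are lighter: a standard convolution learns one k×k kernel per input/output channel pair, while the depthwise separable version learns one k×k kernel per input channel plus a 1×1 pointwise mix. The layer sizes below are illustrative, not taken from YOLOv10:

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k x k kernel per (input, output) channel pair
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise: one k x k kernel per input channel
    # Pointwise: a 1 x 1 convolution that mixes channels
    return c_in * k * k + c_in * c_out

# For a 3x3 layer with 256 input and 256 output channels:
# standard:            256 * 256 * 9           = 589,824 parameters
# depthwise separable: 256 * 9 + 256 * 256     =  67,840 parameters
```

For this layer the depthwise separable variant needs roughly 8.7× fewer parameters, and the multiply-accumulate count shrinks by a similar factor.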
Sounds cool? Wanna build a project using YOLOv10 but don’t know how? Let us know and we can organize workshops on YOLOv10 or any other AI tools!
Published by David Hung, July 29 2024.
AI Will Not Be Conscious
Given the current capabilities of large language models and the explicit goals of many top AI companies to create Artificial General Intelligence (a loosely defined term describing a system that can complete any cognitive task a human could), debates surrounding the potential and future of AI have intensified over the last couple of years. One question that often arises, invariably sparking a wide range of opinions and often steering discussions into the realm of pure superstition, is:
“Can AI become conscious, and if so, what are the ethical implications of this?”
In a recent presentation at the G10 festival of philosophy, art, and economics, Bernardo Kastrup, holding a PhD in both philosophy and computer engineering, addressed this very question, arguing that the idea of AI achieving consciousness is not only unlikely but fundamentally flawed. Before we dive into the details of Kastrup’s argument, I should note that as this topic is deeply philosophical in nature, the conclusions drawn cannot be proven categorically, so I encourage you to think about this yourself and let us know where you disagree.
Consciousness ≠ Intelligence
To begin, we must draw a crucial distinction between consciousness and intelligence. Both words are notoriously hard to define, but we will proceed using the following (imperfect) definitions:
Intelligence: The degree to which a system can process information.
Consciousness: The subjective experience associated with existence – what philosopher Thomas Nagel described as “what it is like” to be an organism. This refers to the qualitative experiences we all have, such as seeing colours, hearing music, or feeling sensations in the body. It does not refer to any kind of cognitive ability such as metacognition (thinking about thinking), self-awareness, or introspection.
This problem of subjective experience, also referred to as qualia or the “Hard Problem of Consciousness”, has puzzled humanity throughout history and remains a mystery. Unfortunately, as we will see, pieces of software running on powerful GPUs add nothing useful to the discussion.
Water, Pipes, and Valves
The belief that AI is or may become conscious generally stems from a lack of understanding of how computers work, leading to mystical claims and ambitious analogies drawn at high levels of abstraction.
At the most fundamental level, a computer operates on binary states – 0s and 1s – which represent the presence or absence of a particular condition in the physical world. Modern computers use billions of transistors (tiny electronic switches that turn on (1) or off (0) depending on whether they allow an electrical current to pass) to process information and represent states. This is done purely for convenience – electrons are small, and exploiting them lets us fit devices of high computational power in our pockets. In principle, replacing electrical signals with water flow could achieve exactly what computers do. That is, water flowing through a pipe represents a binary 1, and no flow represents a 0. A valve could perform the same function as a transistor, either allowing water to flow or not, and complex arrangements of pipes could perform the same function as logic gates, implementing more complex rules governing the flow of water through certain pipes.
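The substrate-independence point can be made concrete with a toy sketch: the logic below computes the same result whether a 1 means “current flows through a transistor” or “water flows through a valve.” This is an illustration of the argument, not a claim about any real system:

```python
# "Valves" implementing the basic logic gates; the physical carrier
# (electrons, water, anything else) is irrelevant to the computation.
def valve_and(a, b):
    # output flows only if both input pipes are flowing
    return a and b

def valve_or(a, b):
    return a or b

def valve_not(a):
    return not a

def half_adder(a, b):
    # A half adder built purely from valves: sum is XOR, carry is AND
    s = valve_or(valve_and(a, valve_not(b)), valve_and(valve_not(a), b))
    carry = valve_and(a, b)
    return s, carry
```

Chain enough of these together and you get arithmetic, memory, and eventually any program a modern computer runs – all out of pipes and valves.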
So, in principle, the most advanced and capable AI systems we see today (and indeed the most capable we ever could and will produce) could be built using water, valves, and pipes, rather than electrons, transistors, and logic gates. We can see now that to ask whether AI is conscious is the same as to ask if any kind of complex flow of information in physical systems is conscious. Do you think that your home’s sewage system is conscious?
Complex network of information flow (Image Credit: OpenAI)
A Computer is Not a Brain
There is no denying that the capability of LLMs to mimic human intelligence is impressive. ChatGPT may already meet some people’s criteria for AGI and would likely pass any Turing Test (a conceptual test proposed by Alan Turing whereby a machine “passes” if it exhibits intelligent behaviour that is indistinguishable from that of a human) we could construct. However, the key error made is attributing this apparent display of similar behaviour to a conscious experience - a complete non sequitur. The computer and brain are completely dissimilar in terms of both structure and function:
A brain is based mostly on carbon and hydrogen, while a computer is based mainly on silicon.
A brain is powered by ATP burning through the metabolic cycle, while a computer is powered by electric potential.
A brain operates by release of molecules in synaptic clefts, while a computer operates by accumulation of charge in a silicon gate.
To find a similarity between the two for use in a pro-AI-consciousness argument, you need to apply many layers of abstraction, each of which takes you further from reality. A lightning strike involves very complex flows of electric charge: is it conscious? Our kettles and toasters use flows of electric current through predefined circuits and instructions: are they conscious?
Panpsychism Doesn’t Help the Case
Panpsychism (as I understand it) is the notion that subatomic particles (electrons, quarks, etc.) have a fundamental (and simpler) consciousness of their own, and that from the combination of these smaller conscious experiences emerges a deeper unified consciousness such as the kind we experience. A panpsychist may argue that by engineering increasingly complex computers and AI systems, we are mixing and matching the dynamics of these conscious states and therefore creating increasing degrees of consciousness in AI. This argument can be refuted with arguments such as the following:
Quantum Field Theory tells us that there is actually no such thing as discrete particles. What we call particles are just excitations of an underlying field, just as a ripple is a disturbance of water rather than a thing in itself. So, clinging to an image of a particle sets panpsychists off on the wrong path from the beginning.
Even if discrete subatomic particles existed, there is no coherent way to explain how subjective experiences of particles at a micro level could combine to create a single unified consciousness.
Objects such as those that make up the structure of a computer are carved out from nature nominally, by humans using language, and are not unique objects in a fundamental sense. The quarks in a computer chip are not fundamentally different from those in the air directly next to it. It is convenient to refer to things as separate in terms of their function, but we have no metaphysical or ontological grounds to say that a thing is somehow separate from the rest of the physical world around it. So, we cannot carve it out from the rest of reality, declare it a unique entity, and then ask whether it is conscious.
Proceeding with Clarity
To summarise, the discussions giving credence to the possibility of conscious AI stem from a failure to distinguish between intelligence and consciousness, as well as a lack of understanding of the fundamental structure and function of a computer, leading to confusion, abstract analogies, and sensationalism. There is no doubt that the intelligence of AI systems is rapidly improving and that this will create many urgent challenges for society to address. Demystifying and putting to bed the notion that AI systems are somehow having a subjective inner experience allows us to stay focused on these real issues and navigate the future of AI with clarity.
Published by Jonas Macken, July 29 2024.
Sponsors
Our ambitious projects would not be possible without the support of our GOLD sponsor, UNOVA.
Closing Notes
We welcome any feedback / suggestions for future editions here or email us at [email protected].
Stay curious,