
Why consciousness can’t be reduced to code


Today’s arguments about consciousness often get stuck between two entrenched camps. One is computational functionalism, which says thinking can be fully described as abstract information processing. If a system has the right functional organization (regardless of the material it runs on), it should produce consciousness. The other is biological naturalism, which argues the opposite. It says consciousness cannot be separated from the special features of living brains and bodies; biology is not just a container for cognition, it is part of cognition itself. Both views capture real insights, but the deadlock suggests an important piece is still missing.

In our new paper, we propose a different approach: biological computationalism. The label is meant to be provocative, but also to sharpen the conversation. Our main argument is that the standard computational framework is broken, or at least poorly suited to how brains actually work. For a long time, it has been tempting to picture the mind as software running on neural hardware, with the brain “computing” in roughly the way a conventional computer does. But real brains are not von Neumann machines, and forcing that comparison leads to shaky metaphors and fragile explanations. If we want a serious account of how brains compute, and what it would take to build minds in other substrates, we first need a broader definition of what “computation” can be.

Biological computation, as we describe it, has three core features.

Hybrid Brain Computation in Real Time

First, biological computation is hybrid. It mixes discrete events with continuous dynamics. Neurons fire spikes, synapses release neurotransmitters, and networks shift through event-like states. At the same time, these events unfold within constantly changing physical conditions such as voltage fields, chemical gradients, ionic diffusion, and time-varying conductances. The brain is not purely digital, and it is not simply an analog machine either. Instead, it works as a multi-layered system where continuous processes influence discrete events, and discrete events reshape the continuous background, over and over, in an ongoing feedback loop.
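
To make the hybrid picture concrete, here is a minimal sketch in Python of a leaky integrate-and-fire neuron. This is a textbook toy model with illustrative parameters, not the paper’s own formalism: the membrane voltage evolves continuously, a threshold crossing is a discrete event, and that event resets the continuous state, closing the feedback loop described above.

```python
# Toy leaky integrate-and-fire neuron (illustrative parameters, not from the paper).
# The membrane voltage v evolves continuously; crossing the threshold is a
# discrete event, and the event resets the continuous state.
dt, tau = 0.1, 10.0                                 # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -70.0     # resting, threshold, reset potentials (mV)
v, spike_times = v_rest, []

for step in range(3000):
    t = step * dt
    i_drive = 20.0 if 50.0 <= t <= 250.0 else 0.0   # continuous input current
    v += ((-(v - v_rest) + i_drive) / tau) * dt     # continuous dynamics
    if v >= v_thresh:                               # discrete event...
        spike_times.append(round(t, 1))
        v = v_reset                                 # ...reshapes the continuous background

print(f"{len(spike_times)} spikes, first few at t (ms) = {spike_times[:5]}")
```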

Why Brain Computation Cannot Be Separated by Scale

Second, biological computation is scale-inseparable. In conventional computing, it is often possible to cleanly separate software from hardware, or a “functional level” from an “implementation level.” In the brain, that kind of separation breaks down. There is no neat dividing line where you can point to the algorithm on one side and the physical mechanism on the other. Cause and effect run across many scales at once, from ion channels to dendrites to circuits to whole-brain dynamics, and these levels do not behave like independent modules stacked in layers. In biological systems, changing the “implementation” changes the “computation,” because the two are tightly intertwined.
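
As a loose illustration of this point (our toy example, not an argument from the paper), take the same integrate-and-fire neuron with the same equations, wiring, and input, but vary its membrane time constant. In a classical software-on-hardware stack, the time constant would count as an implementation detail; here it directly changes the output of the computation.

```python
def spike_count(tau, i_drive=20.0, dt=0.1, t_max=200.0):
    """Spikes emitted by a leaky integrate-and-fire neuron with time constant tau (ms)."""
    v, v_rest, v_thresh, v_reset = -65.0, -65.0, -50.0, -70.0
    n = 0
    for _ in range(int(t_max / dt)):
        v += ((-(v - v_rest) + i_drive) / tau) * dt  # continuous dynamics
        if v >= v_thresh:                            # discrete spike event
            n += 1
            v = v_reset
    return n

# Same "algorithm" (same equations, wiring, and input); different biophysical
# "implementation" parameter. The computed output changes with it.
for tau in (5.0, 10.0, 20.0):
    print(f"tau = {tau:4.1f} ms -> {spike_count(tau)} spikes in 200 ms")
```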

Metabolism and Energy Constraints Shape Intelligence

Third, biological computation is metabolically grounded. The brain operates under strict energy limits, and those limits shape its structure and function everywhere. This is not just an engineering detail. Energy constraints influence what the brain can represent, how it learns, which patterns remain stable, and how information is coordinated and routed. From this perspective, the tight coupling across levels is not accidental complexity. It is an energy optimization strategy that supports robust, flexible intelligence under severe metabolic limits.
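
One way to see how an energy budget can shape representation and learning is a toy objective that charges an explicit cost per unit of activity. The model below and the penalty weight lambda_e are our illustrative assumptions, not anything from the paper: raising the “metabolic” cost drives the learned code toward sparser activity.

```python
import numpy as np

# Toy "metabolic budget" in learning (our illustration, not the paper's model):
# a small network learns to reconstruct its inputs while paying an energy cost,
# lambda_e, per unit of activity. Raising the cost yields sparser codes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))             # inputs
W0 = rng.normal(scale=0.1, size=(16, 8))   # initial encoder weights
D0 = rng.normal(scale=0.1, size=(8, 16))   # initial decoder weights

def train(lambda_e, steps=500, lr=0.01):
    W, D = W0.copy(), D0.copy()
    for _ in range(steps):
        A = np.maximum(X @ W, 0.0)                   # activity ("firing")
        err = A @ D - X                              # reconstruction error
        grad_A = err @ D.T + lambda_e * np.sign(A)   # energy term: L1 cost on activity
        grad_A[A <= 0.0] = 0.0                       # gate gradients through the ReLU
        W -= lr * (X.T @ grad_A) / len(X)
        D -= lr * (A.T @ err) / len(X)
    return np.maximum(X @ W, 0.0)

for lambda_e in (0.0, 0.3):
    A = train(lambda_e)
    print(f"energy cost {lambda_e}: mean activity {A.mean():.4f}, "
          f"fraction active {np.mean(A > 0):.2f}")
```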

The Algorithm Is the Substrate

Taken together, these three features point to a conclusion that can feel strange if you are used to classical computing ideas. Computation in the brain is not abstract symbol manipulation. It is not simply about moving representations around according to formal rules while the physical medium is treated as “mere implementation.” In biological computation, the algorithm is the substrate. The physical organization does not just enable the computation, it is what the computation consists of. Brains do not merely run a program. They are a specific kind of physical process that computes by unfolding through time.

What This Means for AI and Synthetic Minds

This view also exposes a limitation in how people often describe modern AI. Even powerful systems mostly simulate functions. They learn mappings from inputs to outputs, sometimes with impressive generalization, but the computation is still a digital procedure running on hardware built for a very different style of computing. Brains, by contrast, carry out computation in physical time. Continuous fields, ion flows, dendritic integration, local oscillatory coupling, and emergent electromagnetic interactions are not just biological “details” that can be ignored while extracting an abstract algorithm. In our view, these are the computational primitives of the system. They are the mechanisms that enable real-time integration, resilience, and adaptive control.

Not Biology Only, but Biology-Like Computation

This does not mean we think consciousness is somehow restricted to carbon-based life. We are not arguing “biology or nothing.” Our claim is narrower and more practical. If consciousness (or mind-like cognition) depends on this kind of computation, then it may require biological-style computational organization, even if it is built in new substrates. The key issue is not whether the substrate is literally biological, but whether the system instantiates the right kind of hybrid, scale-inseparable, metabolically (or more generally energetically) grounded computation.

A Different Target for Building Conscious Machines

That reframes the goal for anyone trying to build synthetic minds. If brain computation cannot be separated from how it is physically realized, then scaling digital AI alone may not be enough. This is not because digital systems cannot become more capable, but because capability is only part of the puzzle. The deeper risk is that we may be optimizing the wrong thing by improving algorithms while leaving the underlying computational ontology unchanged. Biological computationalism suggests that building truly mind-like systems may require new kinds of physical machines whose computation is not organized as software on hardware, but spread across levels, dynamically linked, and shaped by the constraints of real-time physics and energy.

So if we want something like synthetic consciousness, the central question may not be, “What algorithm should we run?” It may be, “What kind of physical system must exist for that algorithm to be inseparable from its own dynamics?” Which features, from hybrid event-field interactions to multi-scale coupling without clean interfaces to energetic constraints that shape inference and learning, are required so that computation is not an abstract description layered on top of the system but an intrinsic property of it?

That is the shift biological computationalism calls for. It moves the challenge from finding the right program to finding the right kind of computing matter.


