My Theory on Consciousness

A TED Talk I watched today (David Chalmers: How do you explain consciousness?) attempted to inspire us all to think of our crazy ideas for a theory of consciousness. So, as an aside from all my home-teching, I’ll take a first stab at describing my theory of consciousness, offer some ideas on how it could be simulated, and discuss the areas I’m still uncertain about.

I think that, in its simplest, most fundamental form, consciousness is made up of just three components:

  1. An awareness of the stream of thought and state of mind
  2. The ability to make decisions and form new unlearnt thoughts or actions
  3. Memory

[Figure: Components of Consciousness]

Awareness

Awareness of self is important to humans, and something we are well aware of (please forgive the pun). We are aware of our limbs, and our body. We are also aware of our mind. We are aware of our thoughts as we have them. We are aware that we are making decisions as we make them. And we are aware of our state of mind: our current emotional state, our current activity, and the memory of all of those in the past.

From a simulation point of view, using neural nets or some other means, this awareness is surprisingly simple to create. Each mental action taken by the consciousness is fed back into that consciousness as an input. A continual running log of all thoughts and decisions is recorded and played back live, available as yet another source of information for informing decisions. All sources of input are uniquely tagged so that the consciousness knows which source an input is received from. Thus, when I think about running, I am aware that I am thinking about running, I can remember my previous thoughts on running, and furthermore I know that this stream of thoughts is sourced from my consciousness, not from my feet.
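
To make that feedback loop concrete, here is a minimal sketch in Python (the class name, source tags, and log size are my own illustrative choices, not anything from the talk or a real cognitive model): every mental event is tagged with its source, appended to a running log, and replayed as just another input on the next step.

```python
from collections import deque

class SimpleAwareness:
    """Toy sketch: every mental event is tagged with its source and fed
    back in as an input, so the system 'knows' which thoughts are its own."""

    def __init__(self, log_size=100):
        # Running log of recent thoughts/decisions; oldest entries drop off.
        self.log = deque(maxlen=log_size)

    def step(self, external_inputs):
        # External inputs arrive already tagged with their source,
        # e.g. ("feet", "running sensation").
        tagged = list(external_inputs)

        # The log of previous thoughts is replayed as just another
        # input stream, tagged as coming from 'self'.
        tagged += [("self", thought) for thought in self.log]

        # Stand-in for the actual decision process; a real simulation
        # would put a neural net or other reasoning component here.
        if tagged:
            thought = f"considered {len(tagged)} inputs, focused on {tagged[0][1]!r}"
        else:
            thought = "idle"

        # Feed the new thought back into the log so the next step is
        # 'aware' that it happened.
        self.log.append(thought)
        return thought

# Usage: by the second step the system is aware of its own earlier thought.
mind = SimpleAwareness()
print(mind.step([("feet", "running sensation")]))
print(mind.step([("eyes", "a tree")]))
```

The key design point is that the system’s own output is routed back in through the same tagged-input mechanism as everything else, which is all the “awareness” this sketch claims to capture.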

Thinking Center

The second component of consciousness, as listed above, is the ability to make decisions, form conclusions, and experience thoughts that have not previously been experienced or learned.

For example, upon watching the aforementioned TED Talk, I sat and thought about how I perceive and handle my experience, in an attempt to form my own theory on consciousness. I have never previously carried out or learned that sequence of events (watch TED Talk on consciousness, feel inspired to think of own crazy idea, attempt to form own theory on consciousness).

A common way to simulate complex behavior is through neural networks. A neural network can be thought of as a pattern-recognition system that recognizes familiar sequences of events or collections of inputs and produces a learned output. In order to produce a sensible output for a particular pattern, that pattern and output, or something similar, must have been learned beforehand. This does not seem a good fit for the behavior we see in our own consciousnesses.
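
To illustrate what I mean by “learned beforehand”, here is a deliberately tiny single-layer network trained by hand (the patterns, labels, and hyper-parameters are invented purely for illustration): it produces sensible outputs for the two patterns it was trained on, but its response to a genuinely novel pattern is essentially arbitrary.

```python
import numpy as np

# A tiny one-layer "neural net": it learns to map familiar input patterns
# to outputs, but has nothing meaningful to say about unseen patterns.
rng = np.random.default_rng(0)

# Training data: two familiar situations and their learned responses.
patterns = np.array([[1.0, 0.0, 0.0],   # e.g. "hear alarm"
                     [0.0, 1.0, 0.0]])  # e.g. "smell food"
targets  = np.array([[1.0, 0.0],        # -> "wake up"
                     [0.0, 1.0]])       # -> "go to kitchen"

weights = rng.normal(scale=0.1, size=(3, 2))

def forward(x):
    # Sigmoid output layer.
    return 1.0 / (1.0 + np.exp(-x @ weights))

# Simple gradient descent on the two known patterns only.
for _ in range(2000):
    out = forward(patterns)
    grad = patterns.T @ ((out - targets) * out * (1 - out))
    weights -= 0.5 * grad

print(forward(patterns).round(2))                    # sensible learned responses
print(forward(np.array([0.0, 0.0, 1.0])).round(2))   # novel input: output is arbitrary (~0.5)
```

Because the third input feature never appears in training, the weights attached to it are never adjusted, so the “decision” for the novel pattern is just leftover random initialisation rather than anything resembling a new thought.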

Our mind’s ability to experience new things, and then produce novel outcomes, could just be an emergent behavior from simple processes we can already simulate. However, I think it is something quite different to the kinds of methods we currently have for simulating decision making and pattern recognition. I suspect we need a new kind of reasoning algorithm in order to simulate this, and I’m not sure what it would look like.

Memory

I don’t have a lot to say about this other than that it’s definitely required for complex consciousness. I do wonder if it’s possible to create a simple consciousness without any memory.

Putting it all Together

I haven’t mentioned inputs and outputs because I’ve basically taken them for granted. For the sake of keeping it simple, I think all our subconscious thinking and all the low level signal processing of sensory input are just data sources that are provided by non-conscious modules in the brain. In a simplistic simulation they could be implemented by pre-trained neural nets, or even just code written by a developer. From a point of view of understanding and simulating consciousness, I don’t think we need these things.
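
As a sketch of what I mean by non-conscious modules acting as plain data sources (all the module names here are hypothetical, and pre-trained neural nets could stand in for the simple functions), each module exposes only its conclusion, tagged with its source, to a conscious core such as the SimpleAwareness sketch above.

```python
# Non-conscious modules: each exposes only its conclusion, not its reasoning.
# (These could equally be pre-trained neural nets; plain functions keep the
# sketch simple. All names here are illustrative, not from the post.)

def vision_module(raw_pixels):
    # Stand-in for low-level signal processing of sensory input.
    return "a tree" if sum(raw_pixels) > 10 else "nothing interesting"

def intuition_module(situation):
    # Supplies only its conclusion to the conscious mind.
    return "feels risky" if "dark" in situation else "feels fine"

# The conscious core just sees tagged conclusions from these modules,
# alongside its own thought log.
inputs = [("vision", vision_module([3, 4, 5])),
          ("intuition", intuition_module("a dark alley"))]
print(inputs)
```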

Simulating a human consciousness, of course, would be a completely different story and would require a lot of complex inputs.

Testing for Consciousness

I think it’s possible to simulate a fully functioning consciousness with only a small number of very simple inputs and outputs, plus the first two components I describe above (memory, as I noted, may not be strictly necessary for a very simple consciousness). Simulating a human consciousness should just be a scaling-up in complexity, although a very large one. One of the biggest problems I think I now face is deciding how to test for a successful simulation of consciousness. Many of the current methods are either too human-centric or too implementation-centric. For example, one of the approaches described in the aforementioned TED Talk is to measure the amount of integrated information in the system (a quantity called ‘phi’), but that still doesn’t provide a way of determining whether an extremely simple system can be said to be conscious or not.

Next in this series: Aware and Unaware Thinking

5 comments

  1. Good stuff, sounds practical and testable 🙂

    Of course, the philosophers will still talk for hours about qualia :-/

  2. I think my model of how we become “aware” of our thoughts explains why we are only aware of our conscious thoughts, and not our subconscious thoughts. Tracking thoughts and feeding them back in as input requires huge additional complexity in the neuronal structure, so for the sake of efficiency this would only be used in the conscious part of the brain. Our intuition, biologically driven desires, and fight-or-flight mechanisms are all modules in the brain that supply just their conclusions to the conscious mind; their reasoning processes are not exposed to it. This is why we have the experience of a controllable conscious mind, and an automated, uncontrollable, unknowable subconscious.

  3. On Chalmers’ talk:
    1. A good demonstration of someone calling philosophy “science”. Cosmology has the same issue – it runs up against the limits of science, but scientists need to wander past that limit to answer so many basic questions of existence.
    2. If Chalmers is right about the “hard problem of consciousness”, I don’t see how there could possibly be any material evidence of consciousness.
    3. Panpsychism doesn’t appear to offer anything useful if light, and even more absurdly, collections of things, have consciousness. In combination with having no evidence, this makes it nonsense so far as I can see.
    “I don’t have a lot to say about this other than that it’s definitely required for complex consciousness. I do wonder if it’s possible to create a simple consciousness without any memory.”
    Clearly the main thing here is to define what you mean by consciousness. Some philosophers distinguish between introspection – abstract thinking etc, and qualia – phenomenal experience. It seems to me that memory is required for the first kind of consciousness, but not the second.
    “Our mind’s ability to experience new things, and then produce novel outcomes, could just be an emergent behavior from simple processes we can already simulate.” I think Chalmers is right when he says that consciousness can’t be simply an emergent phenomenon. When he says this, he is referring to qualia – he says elsewhere that introspection can in principle be understood in mechanistic terms. Of course there is a long and rich philosophical history of this question – which is I think the place to go to investigate this kind of thing. Even if consciousness is an illusion, I think thought experiments like zombies and Mary’s Room show that it needs an explanation – we can’t just assume it away (as you do in the “Testing for Consciousness” paragraph). https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
    “From a point of view of understanding and simulating consciousness, I don’t think we need [to consider inputs].”
    Really? But inputs aren’t just one-way – they are a feedback loop, e.g. fearful thoughts create adrenaline which increases the likelihood of short-term thinking (fight/flight), or happy thoughts may lead to the action of a hug which causes an increase in oxytocin which reinforces happy thoughts.

    1. I totally agree with you on Chalmers. But I don’t think that was his point. He didn’t need to come up with anything we agree on, he just wanted to inspire us to think outside the box. I’m of two minds about whether consciousness is “emergent”. On the one hand it can feel like a satisfying answer, depending on your viewpoint. But on the other hand I see it as an appeal to complexity: “if it’s complex enough that we don’t understand it but we get the right behaviour, then we must have done it”. I don’t like that solution. I’m hoping it’s possible to get to the crux of what makes a consciousness, without needing to add in all the extra complexity that makes us human. Because as I see it, if it’s not possible to create a simple consciousness without that appeal to complexity, then consciousness can only be an emergent behaviour.

      1. But how could we possibly get to the crux using physical experiments of any kind, since consciousness and sensation are subjective things? Like you say, “right behaviour” alone isn’t enough – that could be a zombie.