"The hard problem of consciousness (Chalmers 1995) is the problem of explaining the relationship between physical phenomena, such as brain processes, and experience (i.e., phenomenal consciousness, or mental states/events with phenomenal qualities or qualia). Why are physical processes ever accompanied by experience? And why does a given physical process generate the specific experience it does—why an experience of red rather than green, for example?" (Hard problem of consciousness - Scholarpedia)
I think that a helpful example of the hard problem of consciousness is what I call the "Cosmic Consciousness" analogy. Imagine there was a religion that claimed that 1) every time a comet strikes the Earth, the universe feels pain, sadness, or fear (depending on which continent it strikes), and further that 2) every time a meteor strikes Mars, the universe feels joy, hope, excitement, or pleasure (depending on which area of Mars is struck). This proposition should strike the skeptic as quite implausible, since 1) there is no evidence that the universe has a conscious mind capable of feeling emotions, 2) there is no functional explanation for where this cosmic mind would reside, 3) there is no functional connection between the collision of space rocks and the universe's consciousness, 4) there is no logical explanation for why different types of collisions would produce drastically different feelings for the universe, often of polar opposite natures, and 5) there is no logical reason why the universe would need to experience these feelings at all.
This gap between collisions and consciousness is the issue highlighted by the hard problem of consciousness. When cosmic collisions are invoked, we would never assume that they produce consciousness, yet when neurotransmitters collide with neurons in the brain, some people don't bat an eye, treating it as a perfectly normal phenomenon. There is nothing normal about one set of atoms colliding with another set of atoms to magically produce feelings. Are the feelings located in the atoms at the source of the collision, or in some higher-order self floating around? How are these feelings transported to that higher-order self? Why does one set of atoms produce one feeling and another set of atoms produce another? Are hydrogen and oxygen atoms constantly feeling pain in half of their collisions, pain that eventually scales up into a neuron-neurotransmitter interaction? Is flowing water a sea of emotions in all of its atomic collisions? It seems quite absurd to assume that collisions naturally produce feelings, yet this is exactly what the existence of consciousness in the brain forces us to accept.
We have reason to believe that feelings are constructed for certain motivational purposes. Consider what an organism must do: run an image-processing algorithm on visual data while simultaneously running an audio-processing algorithm, then compare the results of both against data in its memory in order to identify objects in its environment. It then needs to check each identified object against a threat-or-opportunity function to determine whether it is a good object or a bad object. Only after all of these calculations have been done can the organism appropriately activate a feeling of "fear" or "excitement," depending on whether the object is bad or good. So, theoretically, the purpose of feelings is to provide us with intelligent motivation. This intelligent motivation is a kind of utility or benefit to the creature, helping it survive and thrive by making better decisions.
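The motivational pipeline described above (sense, then identify, then appraise, and only then feel) can be sketched as a toy program. All of the names, objects, and scores below are invented purely for illustration; this is a caricature of the idea, not a model of real neural processing:

```python
# Toy sketch of the pipeline: sense -> identify -> appraise -> feel.
# Every name and number here is a hypothetical stand-in.

def identify_object(visual_data, audio_data, memory):
    """Combine sensory channels and match the result against memory."""
    features = (visual_data, audio_data)
    return memory.get(features, "unknown")

def appraise(obj):
    """Threat-or-opportunity function: score the identified object."""
    scores = {"snake": -0.9, "berry": +0.7, "unknown": 0.0}
    return scores.get(obj, 0.0)

def feeling(score):
    """Only after identification and appraisal is a feeling activated."""
    if score < 0:
        return "fear"
    if score > 0:
        return "excitement"
    return "neutral"

# Invented "memory" mapping sensory features to known objects.
memory = {("long_thin_shape", "hiss"): "snake",
          ("small_red_shape", "silence"): "berry"}

obj = identify_object("long_thin_shape", "hiss", memory)
print(feeling(appraise(obj)))  # -> fear
```

Note that nothing in the sketch requires the final string to be *felt*; the pipeline works identically as pure symbol manipulation, which is exactly the puzzle the following paragraphs raise.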
But there is no reason to believe that biological creatures need intelligent feelings in order to make good decisions. We make many intelligent decisions unconsciously, and there is no reason why we couldn't make ALL intelligent decisions unconsciously. The phenomenon of "blindsight" is evidence of this. Blindsight occurs when the channel between the visual cortex and "consciousness" is severed while the channel between the visual cortex and the "subconscious" remains intact. The result is the astounding phenomenon of people who claim to be blind, completely unable to experience the visual qualia of sight, yet who, when queried about objects in their visual field and pushed to guess based on gut feel, answer visual questions correctly.
Additionally, this "utility" function of feelings highlights the need for organisms to produce feelings intelligently rather than randomly. Feelings can't be the result of arbitrary collisions; they must be the meticulous result of intentional processes or algorithms. If feelings were the random result of random collisions, we couldn't use them intelligently. In order to survive and thrive, we need to be able to biologically trigger feelings in an intelligent way. This means that only certain collisions can produce feelings, and that we can biologically control when those collisions occur. Further, in order to have any nuance in our feelings, we must be able to combine different elemental feelings into more complex ones. This means there is an architecture to our feelings. But knowing that we need an intelligent architect to construct our feelings doesn't explain how collisions are able to become the bricks of this architected experience.
The problem is that, from a materialist standpoint, the brain is just atoms moving around deterministically. Perhaps certain algorithms of matter movement in the brain produce intelligence, just as certain algorithms of matter movement in machines produce AI intelligence. But in a machine, at no point do electrons moving around produce a deep, rich world of powerful sensations. When an AI processes colors, it is just ones and zeros moving around; the sensation of the color red is never comprehended. Why is it that when atoms move around in our heads, sensations suddenly appear? Nothing in atoms or molecules gives us the idea that they have the capacity for consciousness. Nothing about atoms suggests any ability to communicate feelings. Yet somehow, instead of being unconscious zombies whose brain atoms operate deterministically, we are awake, feeling the information in our brains.
Colors don't even exist. In the material world, colors are just frequencies of electromagnetic energy. There is no "redness" in the external world; redness is invented by our brains. But redness is never communicated at any point in the neurology. When "red"-frequency light hits your eye, it merely triggers a chemical reaction in molecules that are sensitive to that frequency. That is just an energy exchange: the molecules reorganize their pattern in response to a certain frequency, and a bunch of electrons in your eye get energized by absorbing the light. Absorbing red light doesn't mean the atoms doing the absorbing are experiencing the color red; they merely steal the photon's energy. (We also know that your brain performs many calculations before it lets you see color, as the evidence from visual illusions shows.) Where is the redness? Your eye passes the energy from those energized electrons down a chain of reactions that eventually stimulates the visual cortex for processing. And what is being processed? A collection of atoms in the form of neurotransmitters (basically arrangements of carbon, hydrogen, oxygen, and nitrogen). Again, a mere energy transfer via chemical reactions, electric potentials, and neurotransmitter bindings. There is no redness anywhere in this process.
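The transduction chain described above can be caricatured as nothing but number-passing. The functions and values below are illustrative stand-ins (the real biochemistry is vastly more complicated), but the point survives the simplification: every stage is arithmetic on values, and "redness" appears at no step.

```python
# Toy caricature of the eye-to-cortex chain as pure number-passing.
# All function names, constants, and thresholds are invented for
# illustration; none of this models real physiology.

def photoreceptor(wavelength_nm):
    """Cone-like response: strongest near an assumed ~565 nm peak."""
    return max(0.0, 1.0 - abs(wavelength_nm - 565) / 150)

def ganglion(signal):
    """Convert the graded response into a spike rate (another number)."""
    return round(signal * 100)  # spikes per second

def visual_cortex(spike_rate):
    """Downstream 'processing' is still just arithmetic on numbers."""
    return "long-wavelength channel active" if spike_rate > 50 else "inactive"

red_light = 620  # nm, within the band we label "red"
print(visual_cortex(ganglion(photoreceptor(red_light))))
# -> long-wavelength channel active
```

At every step a number is handed along and transformed; nowhere does the chain contain, or need, anything like a felt quality of red.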
We don't even need awareness of redness. We could function perfectly successfully as meat machines without ever feeling the essence of redness. Our algorithms could trigger fear in the face of redness without needing to comprehend its pseudo-essence. We could react unconsciously to stimuli just as a robot acts unconsciously in the face of danger detected by its algorithms. 95% of our behavior is already unconscious. Why are we conscious of that 5%? How do we feel sensations when it's mere atoms and energy moving around?
In the binary code of a computer, 10001110 might be passed around to signify the color red. Yet nothing about this information conveys the essence of what it feels like to perceive red. If you told me that AI algorithms consciously perceive the color red every time 10001110 is passed around, I would think you were insane. Yet when we replace the ones and zeros with molecular configurations of atoms in neurotransmitters, suddenly people are willing to accept that configurations of atoms can produce the qualia of redness? If something as simple as color can't be explained, how do you explain pain, pleasure, love, hate, sound, taste, smell, and all the other varieties of sensation? What if we evolved a new sensation? How could that be achieved? If we evolved to see X-rays, what color would they be? How do you invent a new color for a new category of sensation? What atomic configuration is going to get you that?
Computers have sensors: they can use cameras to analyze colors and objects and compute behavioral responses to visual stimuli (as self-driving cars do). Yet never in this analysis do they actually feel the stimuli; they merely convert stimuli into numbers for processing. I don't find any reasonable argument for how consciousness can emerge from complexity alone. Software can be very complex, yet there is no justification for saying that it has conscious feelings.
The natural emergence of phenomena makes sense when natural forces create a systemic result. But consciousness is such an extreme phenomenon that I don't understand how electromagnetic forces between brain cells can produce it without imputing a panpsychic consciousness to energy in general, which seems absurd. Consciousness is a higher-order emergent phenomenon than its constituent parts would lead us to expect. Life as brain-dead zombie automatons would make sense to me as an emergent phenomenon of organized matter. Consciousness is one step beyond the mindless automaton; it does not make sense to me as an emergent phenomenon of atoms organizing.
To me, saying consciousness emerges from the organization of atoms is as absurd as saying God emerges out of the organization of energy. The problem is that we have evidence of consciousness, so I'm left utterly bewildered.
Yet, it's obvious that consciousness DOES come from the benign processes of the brain.
"We literally cut consciousness in half with a knife. You can create entirely separate units of consciousness between the two hemispheres of the brain. The right hemisphere can be aware of the word "key" and the left hemisphere is aware of the word "ring" yet neither of the hemispheres is aware of the word "keyring". We find that the personalities are different; the right hemisphere often has a different personality from the left. In one study, the left hemisphere believed in God, while the right hemisphere was an atheist. The left hemisphere likes to generate stories. The right hemisphere seems to be more in tune with reality and a little less happy." - Donald Hoffman
Neuroscience shows that consciousness clearly depends on brains. Panpsychists usually believe that all matter has consciousness, but I don't find this argument very intuitive. Using your own body as an experimental test subject, you can tell that some parts of your body are more conscious than others. Your skin, eyes, ears, brain, and injuries are very active within your consciousness. Your muscles, joints, lungs, heart, intestines, and stomach are only faintly active within it. Your senses, when stimulated, are vividly conscious, yet your hair, fingernails, kidneys, and bones are NOT conscious, and your cells are not individually conscious. When a limb or a piece of skin is cut off, all of that material loses consciousness by becoming disconnected from your brain's nerves. We can infer that consciousness depends on a connection to the brain, and on stimulation along that connection.
We can also infer that the more similar a nervous system is to our own nervous system, the more similar the conscious experience. Yet with AI software, we have no evidence to say that they are conscious, since nothing about their structure even remotely resembles our nervous system.
The integrated information theory of consciousness (IIT) posits that consciousness comes merely from complex integrations of information, and hence that perhaps AI can become conscious merely by becoming complex. But this seems wrong, since 1) we can remove complexity from the brain (through brain damage or surgery) and the brain remains conscious (even the entire cortex can be removed), 2) when we are asleep we lose consciousness yet maintain a similar level of complexity in brain activity, and 3) certain areas of the brain, such as the cerebellum, contain enormous complexity yet have zero impact on consciousness.
There is another theory that treats consciousness as a global workspace (an intentionally emergent phenomenon) that the brain produces as a place to send information. This global workspace hypothesis almost views the brain as having a little man inside, an agent who receives all the information and then makes decisions. Clinical neurologist Steven Novella believes this theory has been debunked after many years of effort to identify the location of such an agent. The resulting conclusion is that there is no central agent; rather, the entire brain functions like a committee, each part contributing an aspect to consciousness.
While the subjective contents of consciousness may come from every part of the brain, by treating patients with all types of brain damage, neuropsychologist Mark Solms claims to have identified the place in the brain where, if damaged, "the lights [of consciousness] go off". This root of consciousness seems to lie within the brainstem, specifically the reticular activating system (RAS). Perhaps complex conscious experience is produced via the interaction between algorithms in the cortex and feeling generation within the brainstem (Mark Solms's work - The Source of Consciousness - Mark Solms | TranscendentPhilosop (wixsite.com)).
The most interesting argument I have come across for explaining the hard problem of consciousness is the argument for the "geometry" of consciousness. In the "Neural Manifolds" video below, an argument is made for how the shape of the network of stimulated neurons determines the "feeling". For example, in simple terms, they argue that a circular structure of neurons in the brain is responsible for a rat's understanding of the 360 degrees of space around it. Within this circular structure, whichever neuron is stimulated indicates the direction the rat is facing: if the neuron at 0 degrees is active, it is facing "north"; at 90 degrees, "east"; at 180 degrees, "south"; at 270 degrees, "west". This argument implies that some conscious "agent" within us is able to feel the shape of electricity in our brain. It doesn't fully explain 1) where the agent comes from, 2) why electricity is feelable, or 3) how electricity can be adapted to a large array of different types of feelings, both good and bad. But it is a very interesting direction to pursue in further answering questions about consciousness.
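The circular head-direction code described above can be sketched as a toy "ring" of neurons. This is a deliberately simplified illustration (real head-direction cells form a continuous bump of activity, and the neuron count and tuning width here are invented): the index of the most active neuron on the ring encodes the heading.

```python
import math

# Toy "ring" code for head direction: N neurons arranged on a circle,
# where the index of the most active neuron encodes the heading angle.
# All parameters are invented for illustration.

N = 360  # one neuron per degree, for simplicity

def ring_activity(heading_deg, width=30.0):
    """A bump of activity centered on the neuron matching the heading."""
    activity = []
    for i in range(N):
        # angular distance from neuron i's preferred direction (wraps at 360)
        d = min(abs(i - heading_deg), 360 - abs(i - heading_deg))
        activity.append(math.exp(-(d / width) ** 2))
    return activity

def decode(activity):
    """Read the heading back out as the index of the peak neuron."""
    return activity.index(max(activity))

for heading, name in [(0, "north"), (90, "east"), (180, "south"), (270, "west")]:
    assert decode(ring_activity(heading)) == heading
    print(heading, "->", name)
```

The sketch shows how a geometric arrangement can *encode* direction, but, as the paragraph above notes, it says nothing about why a pattern on this ring should be felt rather than merely read out.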
Does the hard problem of consciousness philosophically force us into a corner? Are we obligated to choose between irrational dualism and absurd panpsychism? Is it reasonable to assume that chemical reactions, electric potentials, or neurotransmitter bindings can produce consciousness? If we make such assumptions, are we forced to admit that ALL chemical reactions, electric potentials, and neurotransmitter bindings produce consciousness? Doesn't this have remarkable ethical implications? Is a chemistry lab also a torture chamber for the molecules involved? And how can we know which types of physical phenomena produce which types of conscious phenomena? I hope we can get more answers about the nature of consciousness in the coming years.
My further thoughts on metaphysical idealism: Metaphysical Idealism Debunked | TranscendentPhilosop
For more information:
Blindsight: the strangest form of consciousness - BBC Future
https://www.bbc.com/future/article/20150925-blindsight-the-strangest-form-of-consciousness