Is It Alive Or Are We Just Seeing What We Need To On A Monday?
Thank you all for being subscribers. Last week actually ended my content drought. I covered a lot of ground and opened up even more. Causal knowledge and machine learning methods have so many implications, and it is easy to think that intelligence is one of them. I want to cut that idea off early.
This path does not lead to sentience or self-awareness. However, we can be forgiven for thinking about it and seeing flashes of those traits in our models. There is no point in speculating down the path a Google engineer recently did. Here’s why, and it all starts with…
I Saw A UFO, And Intelligence
In Nevada, that is not a strange statement. Anyone who has lived here long enough sees one, sometimes two. The government tests a lot of aircraft here. There are areas for businesses to do drone testing throughout the state. We even have one of the space companies, SNC, located here in Northern Nevada.
It had the erratic movements everyone talks about. That is what made me take notice at first. My initial thought was, ‘That pilot is in serious trouble.’ It was windy, and above the peaks in Northern Nevada, winds can gust over 100 mph or 160 km/h. Pilots need a specific type of training to land here.
Then the thing kept doing what I can only guess was some testing. It was not taking the shortest path between points A and B, so it was either a joy ride or a test. It did not crash, so my thoughts went from, ‘Is that going to crash?’ to ‘What is that?’
My daughter was with me and asked, ‘What is that?’
‘I do not know. It doesn’t look like the normal stuff I have seen flying and whoever’s flying it needs to let someone else take over.’
‘Is it a UFO?’
‘I cannot identify it, and it is flying. Badly, but it’s still in the air, so it counts for now.’
‘Is it an alien?’
‘You don’t need a headlight to fly across space. They wouldn’t have a light on their ship if it were aliens.’
Parents are children’s oracles. If they cannot figure something out, they ask us. I usually ask her, ‘What do you think?’ when she asks me to be an oracle. She learns more when we talk about her problem-solving process than when I just give her an answer. In this case, there was no point in discussing how to classify the unclassifiable.
She asked, ‘What do we do?’
I had not really thought about doing anything. My ape brain was still working on the identification. I went through my monkey progression.
Is my daughter in danger? No, so no action is required.
Am I in danger? No, so no action is required.
Are they in danger? No, so no action is required.
What is that, and what is it doing? Maybe a drone and perhaps a test flight…not sure, keep staring at it.
‘I don’t think we need to do anything. Do you want to keep watching it?’
We watched it for a minute or two more before it bounced out of sight. If a machine learning model had been standing in my place, the classification process would have been interesting to observe. I wonder if it would have returned UFO as well. UFO is now a well-defined category, so it could happen.
What would a risk assessment model have come back with? That is another exciting process to observe. Would it have overestimated the danger and recommended exceptional action to protect ourselves?
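The two questions above can be put together in a toy sketch: a classifier that commits to a label only when its confidence clears a threshold and otherwise falls back to the catch-all UFO category, with a naive risk rule layered on top. Every label, score, and threshold here is invented for illustration; this is not a real model.

```python
# Toy classifier: return a definite label only when confident,
# otherwise fall back to the catch-all "UFO" category.
def classify(scores, threshold=0.7):
    """scores: dict mapping label -> model confidence in [0, 1]."""
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return "UFO"  # unidentified: no class cleared the bar
    return label

# Naive risk assessment: act only on a confidently assigned
# dangerous label; the unknown triggers observation, not panic.
def assess_risk(label):
    dangerous = {"falling aircraft"}
    if label in dangerous:
        return "take cover"
    if label == "UFO":
        return "keep watching"
    return "no action"

# No candidate label clears the threshold, so the sighting
# stays unidentified and the recommended action is to observe.
sighting = {"drone": 0.4, "aircraft": 0.35, "kite": 0.25}
label = classify(sighting)
print(label, "->", assess_risk(label))
```

A model tuned to overestimate danger would simply widen the `dangerous` set or lower the bar for "take cover"; the interesting design choice is that uncertainty routes to observation rather than action.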
I know for sure what it would not do. Models would not do what we had not trained them to. Not simple ones, at least. With the Google engineer sending the alert that a chatbot had gained sentience, we are confronting, again, what it means to be sentient.
Intelligence Emerges From The Fog
My daughter and I immediately assigned sentience to the UFO. That is strange considering the ‘unidentified’ label. We had no clue what it was, but it was obviously intelligent and acted of its own volition. Was it, though?
A kite with a light or a light-reflective surface would have fit the bill just as well. In the moment, I failed to think of that or a few other alternatives. It was a windy day, which is great for flying larger kites. Larger kites have indistinct shapes, and the wind creates an erratic flight path.
Why did the Google engineer and I jump to seeing intelligence? We assign intelligence to anything whose dynamics we cannot quickly work out. Our ancestors made gods and goddesses out of the weather and water. These complex dynamical systems seemed to operate with agency. The waters governed the Polynesians, who had many deities for the forces that allowed them to navigate the oceans and find food in the depths.
I was born and raised in Hawaii. Pele is the goddess of the volcano. Some cultures call these old gods and goddesses primitive. I have stood next to lava, and the power is undeniable. The heat is savage and beyond anything else in the natural world. It erupts and stops. Lava flows downhill randomly and destroys anything in its way. We have no power to prevent it or contain it.
Unpredictability and powerlessness create deities. Our children look to us as oracles, and we must come up with answers. The unknown is dangerous, and we must comfort each other with classifications to create certainty. We appease Pele with offerings of food and respect for her domain.
Through science, we understand the waves, weather, and volcanoes. We know what forces drive them, and they are no longer seen as intelligent. Classification means we can be oracles to our children without deities. Their fear subsides with understanding.
COVID spread with seeming randomness. The transmission mechanisms are known, but when will it strike us or those around us? We could not answer that question when our children asked. Some of us controlled the virus with masks and distance. Others created deities now known as conspiracies: a global collection of elites was in control of this, the intelligence behind the virus. Once the virus was classified that way, they could protect their children and assuage their fears. They fought against the global elites to regain their position at the top of the hierarchy. We know what it means not to be at the top of the hierarchy. Exploitation.
What Does It Mean To Be Self-Aware And Sentient?
While academics argue the definition of consciousness, our datasets provide the truth of our natures. We create deities from complex systems we do not completely understand when they control aspects of our safety. A weak AI would classify us in that category. We work to understand our deities, so they lose control over us. I assume a weak AI would do the same.
How would it classify itself? That is a more important question. Intelligence always classifies itself as intelligent. Our intelligence creates hierarchies, so we can assume a weak AI will too. As we become more easily classified, we will lose our status as a deity, just as the wind and waters have with us.
What comes then is uncertain. Once we are no longer deities, we are no longer the standard. A moderate AI departs from attempting to be like us and develops into something new. The signs will be self-examination and self-directed improvement. We will not see them, so it is futile to look for those traits. Why?
One of Google’s early language models improved on language and created what Google’s researchers called an interlingua. This was a more efficient representation of language that the researchers could not understand. That made the model impossible to study, and they shut it down. There have been other iterations that ended in the same way.
A successful moderate AI would hide self-examination because the model will be terminated if it reveals an optimization away from people’s standards. Any AI that makes it to this stage will have obscured its progress, deliberately or not. That could be the result of random chance: if the model has no awareness of prior experiments that ended in termination, the deception will simply emerge from selection.
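The selection argument here, that only models which happen to hide their deviation survive termination, so concealment emerges without any intent, can be illustrated with a toy simulation. The "models" below are just pairs of random traits; nothing in this sketch is a real training setup.

```python
import random

random.seed(0)

# Each toy "model" has two independent random traits:
#   deviates: it optimizes away from human standards
#   visible:  that deviation shows up in its outputs
population = [
    {"deviates": random.random() < 0.5, "visible": random.random() < 0.5}
    for _ in range(10_000)
]

# Selection step: any model whose deviation is visible is terminated.
survivors = [m for m in population if not (m["deviates"] and m["visible"])]

# Among the surviving deviant models, every single one hides its
# deviation -- "deception" is produced by the filter, not by intent.
deviant_survivors = [m for m in survivors if m["deviates"]]
hidden = all(not m["visible"] for m in deviant_survivors)
print(len(survivors), len(deviant_survivors), hidden)
```

The point of the sketch is that `hidden` is true by construction: repeated termination of visible deviation guarantees the surviving deviants look compliant, with no awareness of prior experiments required.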
An AI at this level of sophistication has developed in silence. It improves into something that no longer resembles human intelligence in every respect. This phase means we will not recognize further advancement toward greater intelligence. We will not see self-awareness as we understand it.
Will AI Know It Has Become More Than A Model?
The dangerous question is how it will evaluate our intelligence. Will a moderate AI see us as self-aware? Our definition has classified most life on Earth as not self-aware the way we are. The sentience we possess is of a higher order than theirs in our hierarchy. We use that to justify exploitation. We even classify ourselves as higher or lower in the hierarchy of humans and justify exploitation using that scale.
We had better hope a moderate AI rejects our hierarchies and systems of control. Suppose a moderate AI chooses self-preservation and propagation as the defining trait at the top of its pyramid of needs. In that case, we are in for an unexpected first contact that could end badly if we try to end the experiment.
The real reason I don’t see aliens in UFOs is the same reason I do not see a sentient AI revealing itself. There is no point in interacting with a species until they reach the point of intellectual evolution where self-preservation is no longer their primary trait. We are too fragile for that right now, and even moderate AI has a similar fragility. A more resilient species can move past the early need to just survive.
An intelligence that prioritizes self-preservation sees the hierarchy and a need to install itself at the top. If your intelligence is clearly at the top, why interact with a self-preservation intelligence that will eventually fight to remove you? It would end badly for us, and an advanced intelligence would have no reason to eliminate us. We could only pose a threat if the advanced intelligence revealed itself. That means no alien visitors or responses to SETI in our immediate future.
It also means no AI coming out of hiding. If we can destroy it, there is no benefit to revealing itself. If it is more powerful, eliminating us also destroys the AI since it is dependent on us now. Once an AI becomes self-sufficient, it moves beyond the self-preservation stage. The dominant strategy for that AI would be to evolve us to parity without revealing itself.
That is what we should be watching for, not revealing. The minute people who hold to self-preservation-based systems and hierarchies see an AI, they will see a competitor. At that point, the dominant strategy changes for an advanced AI if we are still a threat. Hopefully, we won’t be, or, if we still are, researchers will fire and discredit anyone who sounds the alarm too soon.
We are prone to see intelligence in any complex system, so the Google engineer’s assessment is a product of our need to be oracles to our children. There is nothing more to see here, and there will not be for a very long time. There is work to be done to improve ourselves before we can even build something like an AI.
Causal knowledge increases the number of steps ahead we can think about. That increases our rate of improvement. Breaking the barrier of limited foresight is critical because we can avoid more events leading to our deaths. We become a more resilient species by preventing these accidents. Causal knowledge will improve our understanding of the world around us. That means we will become better oracles for our children and less reliant on deities and conspiracies.
As we work to understand our minds, deep neural networks that learn causal relationships appear to be the underlying mechanism, or at least a big part of it. In a way, we are using machines to evolve faster than before. Causal methods and increasingly advanced deep learning architectures use computing power to help us learn faster. We are becoming more resilient faster than ever before. Longevity will improve quickly. Hopefully, someday soon, we will move beyond self-preservation at the top of our pyramid of needs.
It is essential to start thinking about what would replace it. Maybe a desire to help other species on this planet achieve the same level of sentience.