Twenty-seven years ago, I sat in a maternity ward studying optics while our newborn slept contentedly in the arms of my wife, Jo. It was Melbourne, and it was where everything I ever knew had happened.
Seven days ago, I was waiting patiently in a maternity ward as that same newborn was wheeled into the room bearing her own newborn. It is Brisbane in 2020, and we are scattered across a country ravaged by Covid-19.
Closer but further apart?
We stream, we sanitise, we contact trace, we waggle elbows at each other, we post baby photos on social networks that presciently thrust the latest in breast pumps in our faces. Our teenagers repeatedly bootstrap great-grandparents onto Zoom, and meanwhile, social distancing rails against our irresistible urge to bond.
We condemn the government for inaction, and yet we derail action on privacy grounds. We don’t trust the government for fear of an authoritarian agenda, and yet our social fabric is violated through big data, Hadoop algorithms and weapons-grade psychometrics.
Are we complacent, or is it just acceptable progress that “Manifest Destiny” collides with the “Belt and Road”?
I feel a need to provide a contextual blanket around this new human as her blank page is indelibly written upon. To shield her from the world, I try to drink in all of the info-glut. Thoughts fracture as I try to filter and regurgitate meaning, but as the pandemic of Covid-19 crosses state boundaries, I sense something far worse: the contagion of the Dunning-Kruger effect.
The Dunning-Kruger effect
Defined by Psychology Today:
The Dunning-Kruger effect is a cognitive bias in which people wrongly overestimate their knowledge or ability in a specific area. This tends to occur because a lack of self-awareness prevents them from accurately assessing their own skills.
Imagine a world where we could elect a buffoon. Nope, too easy.
Imagine a leader with the mental acuity to absorb and reflect the info-glut. Someone who could stand at the helm of humanity and know, intuitively, what must be done. Nope, too hard.
Imagine a social structure. A system that incorporates areas of specialisation that advise up a chain of command. One that balances military, science, diplomacy and economics. Yep, that’s Star Trek.
So let’s get away from the easy, the hard, and the fiction.
Right now we talk to devices that automate our lives and navigate our cars, planes, ships and combine harvesters. We have expert systems that provide medical diagnoses, legal advice and stock brokerage. Sure, the artificial intelligence we have is a far cry from “Artificial Consciousness” (even if we could define it), and the harbingers of a dystopian future have voices we should heed, but here’s the rub: do we really have a choice?
The argument that AI is dangerous because it is impersonal and lacks any real empathy may be precisely the reason to embrace it. Of course, I realise I have just opened a can of worms, but let’s play out some scenarios.
AI becomes malevolent.
Most likely because it constructs a chain of logic implying that “evil-A > evil-B” and concludes that choices leading to “evil-B” (the lesser of the two evils) must be taken.
This is plausible, and obviously undesirable, not least because AI is currently fragmented into niche expert systems.
An example is an autonomous vehicle. Can it really understand *all* scenarios?
So let’s consider this a little further…
AI niche systems lack the context to make holistic decisions.
Let’s put a hypothetical “economic rationalist AI bot” in charge.
It may conclude that it is more important to keep the wheels of industry turning than to stem the blowout of Covid-19.
The supporting logic: the cost of providing pensions to the elderly could be avoided if the virus were allowed to attack that demographic.
It doesn’t take AI to make that utilitarian decision, and it rests on obvious false economies, but let’s look beyond this scenario.
AI niche systems that operate in a cooperative.
This is described by many dystopian Sci-Fi futures.
Consider “Terminator” or “The Matrix” or [enter armageddon movie], but in none of those scenarios does the AI system truly benefit from trying to rule the world.
The keyword here is “cooperative”.
It is likely to result from “Web 3.0”, where the semantic web links different expert systems to evolve a holistic understanding. I won’t go there; you can read about it elsewhere.
Why would such a system need humans? Simply: we have hands.
AI niche systems that operate in a cooperative, with robotics.
OK, so now I’m getting ahead of myself. Maybe this is where humanity becomes superfluous, but it’s a looooong way off.
By then, maybe we have defined consciousness.
By then, maybe the AI and semantic web have combined themselves into a conscious entity.
By then, maybe we are more enlightened at the levels of both international cooperation and geopolitics. And perhaps these expert systems help us get there.
By then, maybe we will be half underwater and begging for a solution.
By then, maybe we can upload consciousness into a quantum computer.
Frankly, such scenarios are more likely than any attempt to stop the freight train.
We recognise that we all suffer from the Dunning-Kruger effect, and we resolve the problem by unleashing the transcendent mind.
The movie “Transcendence” comes close.
But what if such a transition were less dramatic? Sure, it may not be a blockbuster movie, but it may be a good story.
So, this is where my thoughts have meandered since beginning to write my book “Pilgrim’s Ark”. Now, I just need to find a way to get it out there.