In short, now that we don’t need to fight to survive, we will focus on life: living longer, being happy and controlling everything. This means dropping religion and focusing on growth and power, but there are only so many resources (materials or energy), right?
Knowledge is a limitless resource
Originally humanity believed,
Religious Knowledge = Scripture x Logic
Scientific Knowledge = Data x Maths
But that lacked ethics. Now, thanks to the new ethical Humanism, we have another route in addition:
Humanist Knowledge = Experiences x Sensitivity
Humanism is not an organised religion either. There are common themes about divinity and how the human experience is above all, but the branches define the human experience differently. Yuval describes three main branches:
- The Orthodox — each human is unique and should be celebrated individually; the more liberty, the more beauty, the more meaning to each individual.
- The Socialist — The collective human experience knows best; the more you align people the better life will be.
- The Evolutionist — humankind must endure; only the best should succeed.
Imagine four scenarios that Yuval gives: hearing an opera, hearing rock and roll, hearing a chorus of hunter-gatherers in a rainforest, and finally a wolf hearing another wolf howl. Which is the most valuable?
The Orthodox Humanist will say they are all equal, except the wolf’s experience, as the wolf doesn’t experience life like a human.
The Socialist will say that the opera is the most sophisticated experience; that rock and roll celebrates breaking from norms and hence is a bad influence; that the chorus of hunter-gatherers is not organised and hence lacks that complexity; but they agree that the wolf’s is the least valuable, as no human is involved.
The Evolutionist will point out that the others are happy to say humans are better than wolves because we are more advanced, so why not apply the same logic between different humans?
This brutal example may seem a bit much, but it has been a battle over the last 100 years. Socialists berated Orthodox Humanists (liberals) for accepting injustice and being indecisive in action — “Everyone is free to starve”. Evolutionists then countered that the best humans would drown if everyone were freely able to grow.
World War 2 started, and the Orthodox joined with the Socialists to defeat the Nazi party, which had strong roots in evolutionary thinking. The victory owed much to socialism.
Next, Socialism truly grew as a winning strategy for growth. The USSR, Chairman Mao and Che Guevara all grew popular globally as they showed that aligning people truly helped growth.
Yuval states that Liberalism / Orthodox Humanism survived only under nuclear deterrence. But then, after a 60-year beat-down, it reinvented itself.
A state making all decisions proved less capable than individuals making their own, especially as industrialisation bred more variation in resources, exports and knowledge.
In fact, state direction of activities led to many awful famines and atrocities. It became apparent that the best method to grow was liberalism, and more specifically capitalism.
Only recently has a new method emerged that could challenge liberal capitalism. China is neither a democracy nor truly communist; its market is neither fully controlled nor fully free. This unnamed method is prospering and could be the birthplace of the new ‘better’ method for growth. It certainly doesn’t look likely to come from fundamentalist religious sects.
When society and science clash again
As we said earlier, religions start from ethical statements, derive factual ones from them, and then science proves the factual ones wrong.
There are a few points that will clash with liberalism:
- Free choice isn’t free: how do we decide on things?
- Is manufactured euphoria equal to euphoria?
- Our memory is not rational: how do we make decisions?
- If we aren’t needed to work, what do we do?
Free will is sold out
Science is now taking aim at Liberalism. “Humans have free will” is a core fact underpinning the ethical claim that you should respect another’s choices.
Science now tells us that all decisions are a mix of randomness and deterministic processes, not free will. Experiments can already predict some of your actions before you are aware of having made up your mind.
If we define free will as this deterministic and random occurrence which feels like it is free, then all animals have it too.
If human beings are organic algorithms, doesn’t it stand to reason that if something knew your biology and knew your actions, it would know what you will do?
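That “organic algorithm” argument can be made concrete with a toy sketch. This is entirely my own illustration, not from the book: the `decide` function, its inputs and its weights are all hypothetical. The point is only that a decision built from internal state plus reproducible noise is perfectly predictable by anyone who knows both.

```python
import random

# Toy "organic algorithm": a decision is a deterministic function of
# internal state plus some noise. Nothing here is "free": an observer
# who knows the state and the noise source predicts the choice exactly.

def decide(hunger: float, tiredness: float, seed: int) -> str:
    rng = random.Random(seed)  # biological "randomness", but reproducible
    score = 0.7 * hunger - 0.4 * tiredness + rng.uniform(-0.1, 0.1)
    return "eat" if score > 0.5 else "rest"

# The "predictor", with full knowledge of inputs and noise:
prediction = decide(hunger=0.9, tiredness=0.3, seed=42)

# The "subject", deciding under the same conditions:
choice = decide(hunger=0.9, tiredness=0.3, seed=42)

assert prediction == choice  # determinism + randomness is not freedom
```

The randomness makes the choice feel spontaneous from the inside, yet it adds no freedom: it is just one more input an outside observer could, in principle, know.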
While researching brain-computer interfaces for spinal cord injuries, researchers found they could use reward stimulation in the brain to remote-control rats. The rats feel no discomfort; instead they live in constant mental ecstasy as they complete the goals set for them. The rat is unaware it is being controlled.
Do these rats have free will? Is this ethical even if there is no suffering?
Our two selves: Experiencing and Narrating
As Science starts chipping away at Liberal beliefs we will need to reassess our morals and ourselves.
Yuval points to pain studies to show the duality, or failure, of our human awareness. In a colonoscopy study, procedures that went on longer but ended with milder discomfort were remembered as less painful than shorter ones that ended sharply. Our brain remembers an average of feelings over an event; it doesn’t sum them.
But this memory belongs to the narrating self; there is also an experiencing self. Balancing which will be dominant is hard. For example, getting up at 5am is great for the narrating self but not for the experiencing self, and each morning there is a fight.
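The gap between the two selves can be sketched in a few lines. This is a toy model of my own, with made-up pain numbers: the experiencing self endures the total, while the narrating self remembers roughly the average of the worst moment and the last one (the peak-end effect described above).

```python
# Toy model of the two selves evaluating the same painful procedure.
# Pain levels per minute (0 = none, 10 = worst); the data is made up.

def experienced_pain(levels):
    """The experiencing self: total pain actually endured."""
    return sum(levels)

def remembered_pain(levels):
    """The narrating self: average of the peak and the final moment.
    Duration is largely ignored."""
    return (max(levels) + levels[-1]) / 2

short_sharp = [4, 7, 8]            # short procedure, ends at its worst
long_gentle = [4, 7, 8, 5, 3, 1]   # same start, extended with milder pain

# The longer procedure involves MORE total pain...
assert experienced_pain(long_gentle) > experienced_pain(short_sharp)
# ...yet is REMEMBERED as less painful, because it ended gently.
assert remembered_pain(long_gentle) < remembered_pain(short_sharp)
```

Which is why the narrating self happily signs us up for another 5am start: it never felt the minutes, only the summary.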
Side note: this is why I track each hour, as it makes me think of the future and the benefit the narrating self will gain.
Our Boys didn’t die in vain.
In this reassessment other failings of individuals and society will need review.
“Our boys didn’t die in vain” is a syndrome that manifests in individuals and in society. It is essentially our tendency to commit harder to something after we experience great loss. Our psychology clings to the cause to avoid having to accept the mistake.
Yuval points out that religion does this too. Sacrificial offerings invoke it, as otherwise how could you excuse the behaviour?
He also points out we do the same in business and with money, investing more and more to complete a task instead of stopping at a sensible point (the sunk-cost fallacy).
Our brains do this to rationalise and simplify all that we experience, weaving a narrative from distinct events. Yuval points to this as why we feel we need meaning in our lives: otherwise there is no narrative to make.
Religion, communism, liberalism et al. each provide an answer. Science says this is false and gives examples of a mess of influences being experienced as free will.
With this in mind, Yuval questions the future of democracy, free markets and human rights.
Intelligence is decoupling from consciousness
In short, machines are now becoming intelligent without needing consciousness — and why would they need it, Hollywood?
We focus on efficiency, and as such human operators will be thrown out as they are more costly, devaluing humans further.
This applies to all careers in some way, and Yuval gives the example of stockbrokers. Robobrokers are algorithms that ingest all sorts of data and then invest on the stock market. They are always on, consuming more information than a human ever could. They are already prevalent and have been behind multiple crashes, largely because of how short-term stock market trading has become. This has led to the birth of Eric Ries’ Long-Term Stock Exchange.
This day in history: Hacked AP tweet about White House explosions triggers panic
Five years ago today, two explosions rocked the White House and President Barack Obama was injured, according to the…
2010 flash crash
The May 6, 2010, flash crash, also known as the crash of 2:45 or simply the flash crash, was a United States…
He proceeds to describe Lawyers, Doctors, Teachers and more all falling victim to a non-conscious but more efficient replacement taking their place. Obviously, they will not disappear overnight, but just as armies have shrunk in size while growing in efficiency, the same will happen to these professions.
Google is betting on this too and has spun up life-science studies to map humans as it has mapped the world:
Here at Project Baseline, we see the opportunity to bridge the gap between clinical research and clinical care. Imagine…
As the tech giants know us better and better, scenarios emerge where they know whom you will vote for, what will make you happy and how to make you do things.
Suddenly the narrating self will be able to affect the experiencing self with the support of these products. Imagine your virtual assistant spotting that you are browsing fast food and suggesting a nicer, healthier alternative.
Similarly though, if the virtual agent’s master is not you, imagine it spotting that you are browsing fast food and suggesting a food company associated with the agent’s brand.
You may say “It is still my decision though”, but now imagine that it times that notification for when it knows you are at your weakest, with an offer it knows will exactly convince you, having already advertised on every screen you have looked at to plant that desire subliminally. Remember, there is no free will, and the algorithms will be targeting you personally.
What will we all do?
Historically we had three main sectors: agriculture/food, industry/things, services/help.
In the industrial revolution, we scratched our heads and scrambled to find things we could do better than machines, which led a large percentage of the workforce away from agriculture.
Yuval gives three scenarios:
- Humanity will lose all its value
- Humanity’s data will be valuable but controlled and monitored
- Only the few will be Homo Deus
Humanity will lose all its value
There have always been things that humans do better than machines, but that sentence is now in the past tense. Machines best us at physical repetition, and soon they will best us at specific cognitive repetition. In the future they may best us at all cognition.
Science states that organisms are algorithms of biology, while machines are algorithms of metal.
What’s more, machines currently can’t do 99% of what we can, but most jobs don’t require 99% of what we can do either, so the barrier to entry for machines is low.
Indicators seem to point towards high-complexity, low-profit work: archaeology, art and design, and caring.
But even art and design may not be safe: computers already compose music better than most of us, and if art is meant only to please our minds, couldn’t science reverse-engineer that? Could that be applied to all jobs, given enough time?
Learning could save us?
For the short term, seeing as we have no idea what work will be available in the future, lifelong learning will be needed. You might go from bank teller, to bank-teller-robot trainer, to virtual bank designer, to then needing to find a new job.
Whether people will be able to adjust is a scary question. All these jobs will require traits that surpass what machines can do, until they can do those too.
Humanity’s data will be valuable
Perhaps we will reach a point where most jobs are needless, yet the goals of humanity (immortality, happiness and divinity) are not complete.
As such, maybe we will have all our needs satisfied, accompanied by virtual assistants: a mix of everything we enjoy, always in manufactured bliss, with every need met.
But the cost would be losing privacy and freedom. Those are required for study, computation and, ultimately, upgrading humans.
If all jobs can be done by a machine, this implies a large ‘useless’ class: unemployable, and more of a hindrance than a help if they tried to pitch in.
Would the most ethical thing be to have us playing games and taking drugs to feel content? Science doesn’t know.
Only the few
Potentially, instead of an equal roll-out of human upgrades, they will be reserved for the rich and powerful.
Liberalism couldn’t exist if there were gods among men, as Homo Deus would not be equal to Homo sapiens.
Some may argue that most medical tech filters down. That may be true for healing, but we don’t know it holds for upgrading. For example, the car helps hugely in life, but it is not what we mean when we say improving life.
Also, we don’t even know whether improving the lives of the poor will continue. In the past, the more healthy people a state had, the more productivity and growth it could expect. As machines populate the workforce this benefit in kind will disappear, and universal healthcare would merely stunt growth. If your quality of life were affected, would society still hold the same view?
Then later, will the elite see themselves as a new branch of Homo and view the rights of Homo sapiens the way we view the rights of animals — as arguable?
What will the social dogma be?
Yuval points to tech hubs like Silicon Valley as their origin, and expects two main types of ‘techno-religion’:
- Techno-Humanism, and
- Dataism.
These both align to the outcomes described above.
Techno-Humanism looks to keep humans and biology at the top.
Dataism is the other side: humans are no longer useful.
Part 5 is here:
The next step: Homo Deus or other. [part5]
In short, we will focus on life now we don’t need to just survive. Resources, energy and knowledge will be focused on…
Or skip to the summary: