Thanks to the technological advancements of the last decades, artificial intelligence has become a reality we can’t avoid. Pop culture has often highlighted its possible dangers (think for instance of The Matrix). Yet, the potential it holds is unquestionable. If done well, artificial intelligence could transform the world we live in for the better. At Tomorrow we are excited to see what doors AI will open for humanity in the coming centuries. We are laying the foundations of the future. A future in which, thanks to cryopreservation, we could possibly choose how long we would like to live.
To better understand AI’s potential, we interviewed one of our brilliant Biostasis2021 speakers. Rafael Hostettler is the founder and CEO of the humanoid telepresence robotics company Devanthro. He and his team are working on the creation of robodies: artificial bodies that have all the qualities of a human body without its fragility. The aim is to go beyond the limit set by biological death. As our body degrades, our mind could be digitally uploaded to a sort of cloud. From there, it could be connected to robodies, allowing us to be wherever we want, controlling a body with whatever capabilities we are able to build into it. Let’s have a look at Rafael Hostettler’s vision of AI.
I have always been curious about how things work. One of my life’s goals, and a motivation to stay alive indefinitely, is to be able to create a model of every possible thing. Cognition and intelligence are especially interesting in this regard for two reasons. First, I like meta – and understanding how to use intelligence to build a model of intelligence is as meta as it gets. Second, intelligence (including embodied intelligence) is a truly marvelous process, in that it comes up with the highest density of novel artifacts ever. If you look around you, everything – table, house, monitor, cup, car, but also movies, software, and even world politics – is an artifact created by our intelligence. We combine model building (e.g. understanding the physics of something) with artifact generation (applying a combination of many models to build a new thing). For a curious person who is continuously looking for novelty, the creative power of intelligence is a very tempting subject of study. There was therefore not one person who sparked my interest in intelligence, nor a specific moment in time – rather, the nature of intelligence itself has sustained my interest over time.
Muscles, skin and their insane level of integration. Muscle tissue is a fascinating material. It allows us to build arbitrarily shaped “motors” at arbitrary sizes. At their core, a fly’s muscles and a blue whale’s muscles work the same way, despite the difference in size and force generated. The muscles are also very finely innervated, allowing for fine-grained control of their contraction. In contrast, when building a classical robot, there are a lot of rigid components that need to be fitted somewhere: the motors themselves, their control electronics, cables, gearing, fasteners. This leads to a lot of rigid parts to transmit the forces generated in the motors to the joints, which in turn drives most of the design requirements and results in the boring tin-can robots of today.
If you compare this with the muscle setup in the human face, there’s simply no way of generating this set of contractions with any motor technology we can build in this small space. (All the robots with moving faces use the space in the skull where we have our brains to put motors). And then comes the integration on top. We don’t just have a complex web of muscles, but we have them covered with a smart material that senses touch, pressure, temperature and damage: skin. All that sensor data is transmitted, filtered, analysed, made sense of (not just in the brain) and then based on it, new control signals are generated for the muscles. And this setup works for every animal.
Being uploaded would create a Cambrian explosion of new experiences and possibilities. Apart from becoming timeless, we would be free to choose every aspect of the physical representation of ourselves – one day a dolphin-shaped body to explore the ocean’s depths, the next a human, and the day after a spaceship? With direct homunculus access we could transform how we perceive reality. Today, we perceive a very narrow sliver of reality: light from around 400 to 700 nm in wavelength, pressure waves in the air oscillating between 20 and 20,000 Hz; we smell around 400 types of chemicals and taste five. What if we could perceive a much wider spectrum of everything – and what about magnetic fields? We could also leverage artificial synaesthesia, feeding information from one sensing modality into the processing of another. You could smell the colors of a rainbow or hear the smell of freshly ground coffee. Furthermore, as a biological system, the ecological niche we can survive in is very slim. Unprotected, we need environmental temperatures within a few degrees of our body temperature, a breathable atmosphere with a quite specific composition, as well as a regular supply of water and a very complex mix of elements as nutrition – not very practical for surviving in space, underwater, or on faraway planets.
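The artificial-synaesthesia idea can be made concrete with a toy cross-modal mapping. The sketch below logarithmically maps the audible frequency range onto the visible wavelength range; the choice of a logarithmic scale and the helper name `frequency_to_wavelength` are illustrative assumptions, not anything described in the interview.

```python
import math

# Human sensory ranges mentioned in the text.
AUDIBLE_HZ = (20.0, 20_000.0)   # hearing: pressure waves in air
VISIBLE_NM = (400.0, 700.0)     # vision: light wavelengths

def frequency_to_wavelength(freq_hz: float) -> float:
    """Toy 'artificial synaesthesia': map an audible frequency onto a
    visible-light wavelength (hypothetical logarithmic mapping)."""
    lo, hi = AUDIBLE_HZ
    # Position of the frequency within the audible range on a log scale (0..1),
    # since human pitch perception is roughly logarithmic.
    t = (math.log(freq_hz) - math.log(lo)) / (math.log(hi) - math.log(lo))
    lo_nm, hi_nm = VISIBLE_NM
    return lo_nm + t * (hi_nm - lo_nm)
```

Any such mapping is arbitrary – the point is only that cross-wiring one modality into another is a simple transformation once perception is computational.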
Another important aspect is that we would have vastly more powerful interfaces to computational resources very unlike the brain (classical computers, quantum computers, …), which would have a massive impact on how we think.
Imagine you want to build a complex object (like a robody or a house). You see it clearly with your inner eye, but the steps necessary to go from this imagination to a detailed set of instructions involve a lot of calculations, modeling, simulations etc. These are all activities our brains are really bad at. That’s what we’ve built computers (and more recently AI) for: as support tools to expand the capabilities of our brain. But controlling these tools is extremely complex. What we want is to transfer data (the imagined object) and questions about that data (“if we build this out of wood, will it hold?”) from one computational system (your brain) to another (your PC). For this we need to translate our brain’s representation into something the computer can work with (e.g. a model in CAD software). How? By forming a closed perception-action loop of looking at physical objects (screens) and creating electric signals in other physical objects (keyboards, mice). The screens emit light in a structured way, which is captured by our eyes, then parsed in our visual cortex into shapes, symbols, text, and meaning, feeding an internal model of how we imagine the CAD software to behave given mouse motions and key presses. We then predict what action will bring the current state of the model in the CAD software closer to our imagined object, and then what motion of our body is necessary to trigger this action. We then move our hands, and this way we translate what we imagine into something the software can work with to answer our initial question. It’s a very inefficient process. An uploaded brain will just imagine and have the answers to its questions seamlessly integrated into the thinking process. So if you’re asked what the 10312th prime number is, this will be just as easy to answer as 1+1.
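The closing arithmetic example is easy to make concrete: computing the n-th prime is trivial for a machine and hopeless for unaided mental arithmetic. A minimal sketch using plain trial division (chosen for readability, not speed; `nth_prime` is a hypothetical helper name, not code from the interview):

```python
def nth_prime(n: int) -> int:
    """Return the n-th prime number (1-indexed) via simple trial division."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        # A candidate is prime if no integer in [2, sqrt(candidate)] divides it.
        if all(candidate % d != 0 for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate
```

Even this naive loop answers a question like “what is the 10312th prime?” in well under a second on ordinary hardware, while our brains take seconds just to check whether a three-digit number is prime.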
How these fundamental changes to our experience of reality will impact how we see ourselves, society and the world is very hard to predict. I’d expect the transition period – where only a few are uploaded – to be very shaky. A working upload will create a striking imbalance in individuals’ capabilities and needs that goes beyond anything we’ve known so far. It will also give answers to questions many don’t want answered, with high societal stakes attached to them... The societal questions definitely pose a wicked challenge. But I don’t see them as “cons” of upload, rather as accompanying symptoms that will need wise and foresighted planning to resolve.
The one thing that is interesting about current AI is that it yields unexpected results that were thought to be impossible before. Prior to AlphaGo, predictions for a computer program beating the best human at Go ranged from “in 20 years” to “never”. This is especially interesting, as the fundamental ideas behind AI have been known for many decades. It shows how bad we are at extrapolation. Given humanity’s track record on extrapolation, I will not voice an opinion on the matter.
However, I think we’ve yet to see the true value of the current AI approaches. I am very excited about AlphaFold – that radically changed how we see protein folding. And we will continue to see a lot of progress in such “AI augmented” science and methods. Hopefully also with breakthroughs for the current limitations regarding cryonics. Therefore, I see a lot of positive potential in AI for our society.
At the same time, there are also risks. AIs curating information (e.g. in social media) polarize society in a way: their target is to increase the time spent on a platform – and the most effective answer is creating echo chambers of small in-groups, which produces polarization as a side-effect. Or bias in training data perpetuating injustice. Or AI making certain jobs obsolete, triggering a retraining challenge for society. Given the vast number of AI boards and AI ethics endeavours, however, I think we’re seeing a quite effective self-control mechanism of our society.
I don’t think we will all reach immortality. In my experience, it’s an absolute minority who wants an indefinite health span, and that’s OK. I think we should all be able to live as long as we want, and not as long as our biology limits us to.
What the future will look like, I don’t know. There has of course been a plenitude of speculation in the form of science fiction literature, movies, series, and games – both utopian and dystopian. We’ll see which one we manage to create. However, aspiring to live indefinitely is a very strong moral driver to create a society worth living in, because who would want to live indefinitely in a dystopia? That makes me hopeful it will be one of the utopian futures.
I think we will be seeing different types of AI. Some will be very task oriented and optimize to solve a certain task well. They will be machine-like and coexistence is not unlike today: the humans dictate the goal and the AI will be optimizing for it.
Then there will be more human-like AIs, where it will get very interesting, because my current best understanding of consciousness is that it is gradual, and it will be very hard to draw a clear line between AI-as-a-machine and AI-as-a-being. Especially as, already today, we see unconscious AIs producing outcomes we are hard-pressed to detect as machine-made if presented without context.
I am not particularly worried about an AI ruling humanity. Whatever that AI is, it will require a computational substrate to run on, and as such, even if it is smarter than us and self-improving, it will still be bound by physical laws. There are bounds to its self-improvement. I am much more worried about humans abusing the power of AI to hurt other humans. For example, it is quite worrying that humanity fails to agree on banning autonomous weaponry.
For any very complex technology, the vast majority of people have a very poor understanding of how it works, and therefore of what its true limits, dangers and potentials are. These are often hard to predict even for the most seasoned experts in the technology. Because there are so many complex technologies, one can have a decent understanding of only a very few of them. So what happens is that for most technologies, everyone has to rely on extreme simplifications, hearsay, dreams/fears and salient best/worst-case scenarios communicated by others, not on their own understanding, to assess them. This leads to a public opinion shaped by those who manage to capture the most attention. This is especially pronounced in AI, as it combines an extremely complex technology with a potentially unbounded but impossible-to-predict impact and a vague definition of what “AI” is in the first place (in my experience, what is generally assumed to be “AI” is not what “AI” actually is, and a lot of misconceptions and overinflated expectations remain). Which is the perfect recipe for big dreams and fears alike.
Another aspect that prevents these misconceptions from being corrected is that the large majority doesn’t come in direct contact with AI, but at best with artifacts influenced by AI. A curated social media feed, a chatbot answering your support request, voice assistants – the influence of learning algorithms is subtle and the improvements gradual. Furthermore, the services often start sub-par; a few enthusiasts embrace them, and only with time do they become good. By then we have gotten used to them (or do you still think Amazon Alexa’s ability to understand your commands is magical?). The vast majority of AI is invisible, and we only hear about it when something goes wrong badly enough to make salient headlines.
By that dynamic, I think we will keep seeing slowly growing concern and increasing adoption and normalisation at the same time.
I think the strongest driver for equality has been digitalisation in general. No matter how rich I am, the best phone I can buy is either an iPhone or an Android phone, and we all get almost the same Netflix subscription. The internet, its ubiquitous availability, and the effortless, cost-free replication of data have shrunk inequalities. The things we spend most of our leisure time with treat everyone relatively equally. This has also led to a more diverse offering: I am discovering, to name one example, Asian sci-fi movies because an AI suggests them to me.
I am not sure how AI can create a more equal society on a fundamental level. While the general scientific progress, and this includes “AI augmented” progress, will lift the overall quality of life, AI is just a puzzle piece in this.
We’re living in unprecedented times. A pandemic, looming climate change – we’re challenged as a species. I feel that as humanity, we lack a joint narrative to look forward to. A future we want to build and live in. I would like everyone to imagine this future and then start building it, tackling and embracing these challenges, instead of ignoring them for as long as we can.
Rafael’s talk at Biostasis2021 was definitely one of the most anticipated, and perhaps most controversial, of the whole conference. He spoke about wealth management while in cryostasis. In fact, when patients are declared legally dead, they lose ownership of their possessions. What strategies could we use to accumulate wealth that could be returned to us after revival? If you want to know, watch the full speech in the video below.
Artificial intelligence is already changing the world we live in. Every field, from healthcare and transportation to agriculture and entertainment, is implementing AI solutions. We are at the very beginning of a technological revolution.
What incredible possibilities will the development of AI give us? How will it affect the field of cryonics specifically? Could it perhaps help in the reintegration of members after their revival?