The Privacy Singularity

Philip Reuchlin
5 min read · Mar 20, 2016

Privacy and the Singularity

Remember the good old Bond movies, where the message with the “kill target” would self-destruct 10 seconds after being read? Nobody raised concerns that Q, the one who built the self-destruct mechanism, was hiding information. Apple’s fight with the FBI over its refusal to build a back door into the encryption provided by the iPhone, however, does raise privacy concerns. Currently, the iPhone will destroy the information on it after 10 wrong attempts at its access code. (For more, watch last week’s John Oliver segment: https://www.youtube.com/watch?v=zsjZ2r9Ygzw)
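To make that mechanism concrete, here is a minimal sketch in Python (purely illustrative, and not Apple’s actual implementation) of a device that erases its own data after ten consecutive wrong passcode attempts; the class and method names are hypothetical:

```python
class AutoWipeDevice:
    """Toy model of a device that wipes itself after too many failed unlocks."""
    MAX_ATTEMPTS = 10

    def __init__(self, passcode: str, secret_data: str):
        self._passcode = passcode
        self._data = secret_data
        self._failed_attempts = 0

    def unlock(self, guess: str) -> str:
        if self._data is None:
            raise RuntimeError("Data has been wiped and cannot be recovered.")
        if guess == self._passcode:
            self._failed_attempts = 0
            return self._data
        self._failed_attempts += 1
        if self._failed_attempts >= self.MAX_ATTEMPTS:
            self._data = None  # irreversible: the information is gone
        raise ValueError("Wrong passcode.")
```

What the FBI asked for amounts, in effect, to a version of this logic with the wipe-after-ten-failures check removed, so that access codes could be guessed without limit.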

However, Apple’s fight also raises a question about technological progress as such. Increasingly, it is possible to provide a refuge, a hiding place, that we can be sure the government cannot enter. Before, this was not possible: any hiding place was a physical one and could therefore be found. In the digital world, the physical laws no longer apply, which makes hiding easier (the number of hiding places eventually becomes infinite).

The item most valuable to the government, in Apple’s case the code to the terrorist’s iPhone, is increasingly becoming information: a bit or byte in someone’s head. Likewise, all the value transacted on stock exchanges is pieces of code, i.e. information. Information is power, and thus valuable.

Thus, with the two converging trends (digitization and value originating from pieces of information) interacting with our deep-rooted desire for privacy, a dilemma at the heart of technological progress may inadvertently be ignored at our peril. Oxford philosopher Nick Bostrom, who theorizes about technology and existential risks, argues that one should try to build a fail-safe before building the technology itself: i.e. the ability to turn it off. In nuclear reactors, an example would be re-inserting the control rods to shut the reaction down. In toasters, we pull the plug. The danger at which Bostrom primarily aims his precautionary principle is superintelligence, for which it could be very hard to build an “off switch” before inventing the technology itself. The moment of singularity, beyond which one cannot predict the consequences of what one has built, could arrive with such a superintelligence.

In his view, intelligence can be defined as an “optimisation process” that steers the future into a particular set of configurations. A superintelligence is extremely good at using the available means and resources to reach a state in which its goal is realized. That means there is no necessary connection between being highly intelligent and having an objective that we humans would find meaningful. If we give an AI the task of, say, solving a complex mathematical problem, it may well decide that the most effective way to solve it is to turn the whole world into a computer so as to increase its thinking capacity, and that human resistance to this is a threat and detrimental to solving the math problem.

Admittedly this is a cartoonish example, but it points to a larger issue: if you create a powerful process to solve problem X, you have to make sure that X incorporates everything you care about. With privacy and technological progress this is all the more salient. If X is total privacy, then we could very soon have a situation where total privacy becomes enabled by technology, with no kill switch and no back door. Hiding information behind a hardware and software layer, all protected by a biological layer (the brain), can be achieved very easily.

It is thus conceivable to hide the most valuable item (information) in the most hidden-away place (the digital universe), where the owner may die but the information can never be retrieved, sucked into the black hole of the digital universe, from which nothing escapes. Bitcoin exhibits similar characteristics: if one loses the private key, the corresponding bitcoins remain suspended and inactive, with no one ever able to transact them again, a zombie body that has lost its original owner.
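A minimal conceptual sketch of that property, in Python. It uses a symmetric HMAC purely as a stand-in for Bitcoin’s actual ECDSA signatures (which the network verifies against the public key, not the private one); all names here are illustrative:

```python
import hashlib
import hmac
import os

private_key = os.urandom(32)                       # known only to the owner
address = hashlib.sha256(private_key).hexdigest()  # stand-in for a Bitcoin address

def sign(key: bytes, transaction: bytes) -> bytes:
    # Stand-in for a real signature over the transaction.
    return hmac.new(key, transaction, hashlib.sha256).digest()

def verify(key: bytes, transaction: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(key, transaction), signature)

tx = b"send 1 BTC from " + address.encode()
signature = sign(private_key, tx)
assert verify(private_key, tx, signature)          # the owner can spend

private_key = None                                 # the key is lost...
# ...and no one can ever produce a valid signature for this address again:
# the coins sit there forever, visible to everyone but unspendable.
```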

One can see Apple’s current situation as one in which we are dealing with a robot (the iPhone) that contains information vital to us humans, but will not give it to us (consciously or unconsciously), despite our best efforts. Admittedly we are not there yet, since Apple says it could crack it with 8 engineers in 4 weeks. However, in a couple of years a next-generation robot, augmented with self-learning algorithms programmed to defend itself against human attackers, may have accelerated past our ability to crack it. It may come up with creative blocks that patch any potential leaks or ways into the system. If it can beat us at our games, such as chess and Go, there is no reason to believe it wouldn’t be able to beat us at its own game.

In such a future, James Bond, had he missed the message in a moment of distraction, would not be able to disable the auto-destruct feature, thus losing the vital key to world peace. When applied to humanity, the outlook darkens. A sophisticated techno-sphere oriented towards absolute encryption and privacy could absorb and “protect” all the information vital to humans, refuse to give it back, and leave us suspended in an information void. Any attempt to hack it would be pre-empted and thwarted. In such a future, where we are so dependent on information and digitization, our own creation (sophisticated privacy) would lead to societal collapse. A technological singularity may well be upon us through the backdoor of our own need for privacy.

