Cryptography: Taking on Any Adversary
Benjamin Skuse
To those outside the field, cryptography will always be synonymous with hiding secret military information. Peak cryptography remains, in many people’s eyes, the story of how the Enigma encryption machine’s stubborn cipher – used for secret communications by the Nazis in World War II – was eventually cracked by the Allies. And the truth is that cryptography has indeed, for most of its history, been concerned with ensuring military and other communications remain confidential.
However, fast-forward to today and confidential communication is a largely solved problem (until quantum computers start to have a say, see below). What many modern cryptographers are much more interested in is trustworthy computation.
In Computation We Do Not Automatically Trust
Yael Tauman Kalai (ACM Prize in Computing – 2022) is at the absolute bleeding edge of developing and applying cryptography to trustworthy computation – more specifically, in developing cryptographic tools and techniques that succinctly ‘prove’, or certify, the correctness of what could be a huge computation.
The most prominent example of the utility of succinct proofs is blockchain technology. As many will be aware, the blockchain is a decentralised, distributed ledger that stores the record of ownership of digital assets, from cryptocurrency to NFTs. Data appearing on the blockchain cannot be modified, and new transactions are authorised collectively by a consensus algorithm.
Until recently, most consensus algorithms have been proof-of-work protocols, where computers that are connected to the network race to solve complicated cryptographic puzzles (known as ‘mining’) in order to validate transactions. This is extremely computationally intensive. Moreover, every node of a blockchain network stores a copy of the entire data chain and processes every transaction, adding to the immense computational power required.
“Anyone who uses cryptocurrency will know that verifying that a transaction is valid is a huge computational burden,” said Kalai in a press conference at the 11th Heidelberg Laureate Forum. “Many companies, such as Ethereum, now add a little certificate that certifies that a transaction was valid, so that people don’t need to verify [using proof-of-work algorithms] … and a certificate to certify the digest of all the history of spending, because the entire blockchain is now too big to verify.” These certificates overcome key obstacles in blockchain’s scalability, enabling faster and more reliable transactions, and reduce much of the technology’s computational burden and therefore energy consumption.
But what are these certificates and how do they bypass all the computationally burdensome work that has been the backbone for blockchain technology for so long? Kalai’s lecture at the 11th Heidelberg Laureate Forum answered these questions and more.
To get there, she took the audience on a whistle-stop tour of the key results since the 1980s that culminated in her work devising a method for generating succinct proofs that certify the correctness of any computation.
Zero-knowledge Proofs
It all started with zero-knowledge proofs, she said. Invented by Shafi Goldwasser, Silvio Micali (who both received the ACM A.M. Turing Award in 2012) and Charles Rackoff in 1985, their goal was to come up with proofs that reveal no information about why an assertion is true. Put another way, they wanted to know whether it is possible to show that a statement is true without giving away any information beyond the truth of that statement. Their solution to the problem was simple but ingenious, introducing the notion of an interactive proof, where proving an assertion involves an exchange back and forth between a prover and a verifier.
As an example, imagine owning a clothes factory and being friends with your main competitor. You discover that you are both buying cotton from the same supplier, and you both want to know whether you are paying the same price, but also don’t want to divulge the price you are paying to each other.
Assuming wholesale prices range from €6 to €15 per metre, you can go about it by labelling 10 lockable boxes €6, €7, €8…€15 – each with a small slot that can take only a piece of paper – and placing them in a secure, private room. Suppose you pay €6 per metre: you go into the room alone first, take the key for the lockbox labelled €6, destroy the keys for the other boxes, and then leave the room. Your competitor, who pays €12, next goes into the room alone with a piece of paper and slides it inside the lockbox labelled €12 before leaving the room.
You return with your key that can only open the lockbox labelled €6. You open the lockbox to find nothing but a mote of dust, so now you know that your competitor is not paying the same amount as you. And when your competitor sees you exit the room empty handed, they also know that you are not paying the same amount as them. Both of you return to your factories somewhat frustrated, knowing only that you are not paying the same amount, but neither of you knows whether you are paying more or less.
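The lockbox exchange can be sketched in a few lines of code – purely a toy simulation of the information flow, not a real cryptographic protocol (the names and the price range are illustrative assumptions):

```python
def compare_prices(my_price, their_price, low=6, high=15):
    """Toy simulation of the lockbox protocol: each party learns only
    whether the two prices are equal, never the other party's price."""
    # You keep only the key for the box matching your own price;
    # the keys to every other box are destroyed.
    kept_key = my_price
    # Your competitor slips a piece of paper into the box matching theirs.
    boxes = {price: (price == their_price) for price in range(low, high + 1)}
    # You can open only the single box you still hold a key for.
    return boxes[kept_key]

print(compare_prices(6, 12))   # False: the prices differ
print(compare_prices(9, 9))    # True: the prices match
```

The return value mirrors what each party sees at the end: a single bit (paper or no paper), and nothing about the other party’s price.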
Probabilistically Checkable Proofs
“Later, researchers were able to show that actually you can convert any statement, any proof, into a zero-knowledge proof … but it requires the cryptographic assumption that one-way functions exist,” explained Kalai, referring to the unproven assumption that there exist mathematical functions that are, roughly speaking, easy to compute in the forward direction but computationally hard to invert. “To remove this assumption, they [Goldwasser and other colleagues in 1988] introduced a new model of proof system called a multiprover interactive proof.”
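A standard candidate one-way function is modular exponentiation: computing g^x mod p is fast even for enormous numbers, while recovering x (the discrete logarithm) is believed to be hard for large, well-chosen p. A minimal sketch, using a deliberately toy-sized prime so the brute-force inverse is still feasible:

```python
# Candidate one-way function: modular exponentiation.
p, g = 2_147_483_647, 5     # toy prime (2^31 - 1) and a small base
x = 1_234_567               # the secret exponent
y = pow(g, x, p)            # forward direction: fast (O(log x) multiplies)

# Inverse direction: no fast classical algorithm is known in general;
# brute force must try candidate exponents one by one.
def discrete_log_brute(g, y, p, limit):
    acc = 1
    for exp in range(limit):
        if acc == y:
            return exp
        acc = (acc * g) % p
    return None

# Feasible here only because this toy secret is tiny:
assert discrete_log_brute(g, pow(g, 42, p), p, 100) == 42
```

With cryptographic-sized parameters the forward computation stays cheap while the brute-force search becomes astronomically expensive – which is exactly the asymmetry the assumption asks for.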
In the simplest multiprover system – the two-prover system – the verifier asks questions to two provers, who do not interact between themselves, and receives two distinct answers. “If you have two suspects that committed a crime, it’s easier to catch them because they need to cheat consistently with one another, and that’s much harder than just cheating on your own,” explained Kalai. “This is really what gives this model its power … in this model, you can make proofs exponentially shorter.”
This revelatory development spurred a flurry of activity, including the explicit introduction of probabilistically checkable proofs (PCPs) by Sanjeev Arora (ACM Prize in Computing – 2011) and Muli Safra. In a PCP, the verifier is a randomised algorithm that probes the written proof at a handful of randomly chosen locations; it is restricted both in the number of random bits it uses and in the number of proof bits it reads, yet it can still verify the proof’s correctness with high confidence.
The PCP theorem is widely regarded as a keystone result of complexity theory and cryptography. “PCPs were used to create succinct interactive arguments using cryptography to essentially take this long proof and kind of squish it in the first message to the prover,” said Kalai.
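The flavour of probabilistic verification – rechecking a big computation with a few random coins rather than redoing it – is nicely captured by Freivalds’ classic algorithm for verifying a claimed matrix product. It is not itself a PCP, but it illustrates the same trade-off: a small chance of error bought in exchange for much cheaper verification.

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Check the claim A @ B == C in O(n^2) time per trial by testing
    A @ (B @ r) == C @ r for random 0/1 vectors r. A wrong C survives
    each trial with probability at most 1/2."""
    n = len(A)
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False          # caught a wrong product
    return True                   # correct with high confidence

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]
bad  = [[19, 22], [43, 51]]
print(freivalds_check(A, B, good))  # True
print(freivalds_check(A, B, bad))   # almost certainly False
```

Twenty trials drive the chance of accepting a wrong product below one in a million, while each trial costs only matrix–vector products rather than a full matrix multiplication.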
Despite PCPs making significant waves, even featuring in a 1992 story in the New York Times, ‘New Short Cut Found For Long Math Proofs’, they could still be very large in and of themselves, and therefore required significant storage.
They were also still interactive. Succinct zero-knowledge interactive proofs (or arguments, in cryptography parlance) require multiple rounds of interaction between the prover and verifier. The ideal is a non-interactive zero-knowledge argument: a single, efficiently verifiable message from the prover, with no back-and-forth required.
From Fiat-Shamir to SNARGs
One approach to eliminating interaction is known as the Fiat-Shamir transformation, first developed by Amos Fiat and Adi Shamir (ACM A.M. Turing Award – 2002) in 1986, and extensively utilised in real-world applications today; for example, EdDSA – a digital signature scheme deployed across the web and on countless mobile devices – is obtained by applying the transformation to an interactive identification protocol.
The idea behind the Fiat-Shamir transformation is that instead of having the verifier send a random challenge value to the prover, the prover can compute this value themselves by applying a random function – in practice, a cryptographic hash function (a function that maps an input of any length to a fixed-length output in a way that is hard to invert or predict) – to the transcript so far.
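Here is a toy sketch of the transformation applied to a Schnorr-style proof of knowledge of a discrete logarithm, with Python’s hashlib standing in for the idealised random function. The group parameters and secret are illustrative assumptions only; real deployments use groups hundreds of bits in size.

```python
import hashlib

# Toy group: p prime, q prime dividing p-1, g an element of order q.
p, q, g = 23, 11, 4        # 4 has order 11 modulo 23
x = 7                      # prover's secret
y = pow(g, x, p)           # public value: y = g^x mod p

def challenge(*vals):
    """Fiat-Shamir: derive the 'verifier's' random challenge by hashing
    the transcript, instead of waiting for the verifier to send it."""
    h = hashlib.sha256(repr(vals).encode()).digest()
    return int.from_bytes(h, "big") % q

# Prover, with no interaction needed:
k = 3                      # would be a fresh random nonce in practice
t = pow(g, k, p)           # commitment
c = challenge(g, y, t)     # challenge computed locally from the hash
s = (k + c * x) % q        # response

# Verifier recomputes the same hash and checks the single message (t, s):
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof verified")
```

Because both sides derive the challenge from the same hash of the transcript, the prover cannot pick a convenient challenge after the fact – that is the property the hash function is trusted to enforce.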
Essentially, this is the last ingredient in the recipe for building succinct non-interactive arguments (SNARGs), combining an information-theoretically secure proof system (the PCP) satisfying some constraints, with cryptographic tools (the Fiat-Shamir transformation) that bind provers to those constraints.
Versions of SNARGs can be and are being used to certify the correctness of computations rapidly and efficiently, by blockchain technologies, for example, where they enable users to prove possession of certain information without revealing the details of that information, such as when they sign privacy-preserving smart contracts.
But are SNARGs secure? More specifically, is the Fiat-Shamir transformation secure? “That’s still our million-dollar question,” said Kalai. “It’s secure if you’re willing to rely on what’s called the random oracle model, or if you assume your hash function is truly random. It’s also secure if you assume the existence of non-signalling PCPs, or PCPs with a kind of strong non-signalling soundness, inspired from quantum physics.”
“In practice, it’s secure,” she continued. “We have examples that show that this paradigm is not secure, but these counterexamples are very contrived, and if we impose slightly stronger conditions on the hash and PCP we can regain security.”
Securing Humanity’s Future
As Kalai managed to succinctly describe in less than an hour, progress in cryptography since the 1980s has led to equally succinct proof protocols that are the foundation for blockchain security today. And more generally, these modern cryptographic methods are the bedrock for online security. But there is a technology on the horizon set to not only shake those foundations but destroy them: quantum computing.
“Cryptography is based on an assumption, where we use hardness and pretend that it’s randomness to encrypt and mask our communication,” said Kalai in an HLF press conference. “All our cryptographic schemes are based on hard problems, like hardness of factoring large numbers.”
For example, one of the oldest widely used public-key cryptosystems for secure data transmission – the RSA algorithm, named after Ron Rivest, Adi Shamir, and Leonard Adleman (all co-recipients of the ACM A.M. Turing Award in 2002) – relies on the hardness of factoring large numbers. Shor’s algorithm, developed in 1994 by Peter Shor (Nevanlinna Prize – 1998), shows that factoring integers is efficient on an ideal quantum computer, and that a large quantum computer could quite feasibly defeat RSA.
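RSA’s dependence on factoring can be seen in a toy example – with numbers this small, the factoring step that Shor’s algorithm would perform on a real key size is trivial by brute force, and the private key falls out immediately. The primes below are the standard textbook toy values, not anything resembling real key material.

```python
# Toy RSA keypair (real keys use primes of ~1024+ bits each).
p_, q_ = 61, 53
n = p_ * q_                # public modulus: 3233
e = 17                     # public exponent
phi = (p_ - 1) * (q_ - 1)
d = pow(e, -1, phi)        # private exponent (Python 3.8+ modular inverse)

msg = 65
cipher = pow(msg, e, n)    # encryption: anyone with (n, e) can do this
assert pow(cipher, d, n) == msg   # decryption requires the secret d

# An attacker who can factor n recovers d outright:
def factor(n):
    f = 2
    while n % f:
        f += 1
    return f, n // f

fp, fq = factor(n)
d_recovered = pow(e, -1, (fp - 1) * (fq - 1))
assert pow(cipher, d_recovered, n) == msg   # RSA broken
```

For a 2048-bit modulus, the `factor` loop above would run for longer than the age of the universe on classical hardware; Shor’s algorithm on a sufficiently large quantum computer would complete the equivalent step efficiently.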
“This is a risk for cryptography,” underlined Kalai. “It’s a risk for humanity, because we’re all relying on this secure communication all the time – every time you go to Amazon, every time you buy online – based on cryptography that can be broken if quantum computers exist.”
So what can be done? Move our mindsets to a post-quantum world, said Kalai, where we build algorithms resistant to attacks by advanced quantum computers.
Although some of the brightest minds forecast that it will be anywhere between 20 and 30 years before quantum computers have enough stable qubits (roughly a million) to break today’s advanced cryptographic protocols, post-quantum cryptography is already a vibrant, burgeoning field.
Researchers are busy developing cryptographic systems that are secure against both quantum and classical computers, and can interoperate with existing communications protocols and networks. And the likes of IBM, Microsoft and many more have already developed cryptography approaches whose security relies on different, hard mathematical problems that appear to be resistant to an attacker who has access to a quantum computer. The key – as has always been the case in cryptography – will be to remain one step ahead of the opposition.
The post Cryptography: Taking on Any Adversary originally appeared on the HLFF SciLogs blog.