Cryptography Tricks Make a Hard Problem a Little Easier | Quanta Magazine

Introduction

What’s the best way to solve hard problems? That’s the question at the heart of a subfield of computer science called computational complexity theory. It’s a hard question to answer, but flip it around and it becomes easier. The worst approach is almost always trial and error, which involves plugging in possible solutions until one works. But for some problems, it seems there simply are no alternatives — the worst approach is also the best one.

Researchers have long wondered whether that’s ever really the case, said Rahul Ilango, a graduate student studying complexity theory at the Massachusetts Institute of Technology. “You could ask, ‘Are there problems for which guess-and-check is just optimal?’”

Complexity theorists have studied many computational problems, and even the hard ones often admit some kind of clever procedure, or algorithm, that makes finding a solution a little bit easier than pure trial and error. Among the few exceptions are so-called compression problems, where the goal is to find the shortest description of a data set.

But last November, two groups of researchers independently discovered an algorithm for compression problems that’s ever so slightly faster than checking all the possible answers. The new approach works by adapting an algorithm invented by cryptographers 25 years ago for attacking a different problem. There’s just one restriction: You need to tailor the algorithm to the size of your data set.

“They’re really beautiful and important results,” said Eric Allender, a theoretical computer scientist at Rutgers University.

Defining Hardness

The new results are the latest to investigate a question first studied in the Soviet Union, well before the advent of complexity theory. “Before I was in grade school, people in Russia were formulating this,” Allender said.

The specific computational problem that those Soviet researchers studied, called the minimum circuit size problem, is akin to one that designers of computer hardware face all the time. If you’re given complete specifications of how an electronic circuit should behave, can you find the simplest circuit that will do the job? Nobody knew how to solve this problem without “perebor” — a Russian word roughly equivalent to “exhaustive search.”

The minimum circuit size problem is an example of a compression problem. You can describe a circuit’s behavior with a long string of bits — 0s and 1s — and then ask whether there’s a way to reproduce that same behavior using fewer bits. Checking all possible circuit layouts would take time that grows exponentially with the number of bits in the string.
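
To make the flavor of that search concrete, here is a toy sketch in Python; it is not from the researchers' work. The "decoder" and its 5-bit repeat count are invented for illustration, but the loop over candidate descriptions mirrors the brute-force approach just described: the number of candidates doubles with every extra bit of description length.

    from itertools import product

    def decode(description: str) -> str:
        # Toy decoder, invented for illustration: the first 5 bits encode a
        # repeat count, and the remaining bits are a pattern to repeat.
        if len(description) < 6:
            return ""
        count = int(description[:5], 2)
        pattern = description[5:]
        return pattern * count

    def shortest_description(target: str):
        # Exhaustive search: try every candidate description, shortest first.
        # There are 2**L candidates of length L, so the runtime grows
        # exponentially as longer descriptions have to be examined.
        for length in range(1, len(target)):
            for bits in product("01", repeat=length):
                candidate = "".join(bits)
                if decode(candidate) == target:
                    return candidate
        return None  # the target is incompressible under this toy decoder

    # The 12-bit string "010101010101" compresses to the 7-bit description
    # "00110" + "01" (repeat the pattern "01" six times).
    print(shortest_description("01" * 6))

Swap circuits in for descriptions and you get the minimum circuit size problem; the exponential blowup of the candidate space is the same.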

This sort of exponential growth is the defining feature of a hard computational problem. But not all hard problems are equally hard — some have algorithms that are faster than exhaustive search, though their runtimes still grow exponentially. In modern terms, the perebor question is whether any such algorithms exist for compression problems.

In 1959, a prominent researcher named Sergey Yablonsky claimed to have proved that exhaustive search really was the only way to solve the minimum circuit size problem. But his proof left some loopholes. Some researchers noticed the flaws at the time, but Yablonsky was influential enough to discourage most others from working on the perebor question.

In the decades that followed, few researchers studied compression problems, and the perebor question was known mostly as a footnote in the prehistory of complexity theory. Widespread attention to the question came only recently, after researchers discovered a curious link between compression problems and the foundations of cryptography.

One-Way Traffic

Modern cryptography uses hard computational problems to safeguard secret messages. But computational hardness is only useful if it’s asymmetric — if it’s hard to decipher coded messages but not hard to encode messages in the first place.

In every cryptography scheme, the origin of this asymmetry is a mathematical object called a one-way function, which transforms input bit strings into output strings of the same length. Given an input to a one-way function, it’s easy to compute the output, but given an output it’s hard to invert the function — that is, to reverse-engineer it and recover the input.
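
As a rough illustration (not taken from the article), a truncated cryptographic hash is one standard stand-in for such a function; whether it, or anything else, is genuinely one-way is exactly the open question discussed below.

    import hashlib

    def candidate_one_way_function(x: bytes) -> bytes:
        # A stand-in candidate: truncated SHA-256, which maps an n-byte input
        # to an n-byte output (for n up to 32). Computing the forward
        # direction is a single cheap hash evaluation.
        return hashlib.sha256(x).digest()[:len(x)]

    y = candidate_one_way_function(b"\x13\x37\xbe\xef")
    # Recovering the input from y alone is believed to require something
    # close to trying inputs one by one, but that hardness is unproven.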

“Cryptographers really would like to have very, very efficiently computable one-way functions that are really, really hard to invert,” Allender said. Many mathematical functions seem to have this property, and the difficulty of inverting these functions stems from the apparent difficulty of solving different computational problems.

Unfortunately, cryptographers don’t know for sure whether any of these functions are truly hard to invert — indeed, it’s possible that true one-way functions don’t exist. This uncertainty persists because complexity theorists have struggled for 50 years to prove that seemingly hard problems really are hard. If they aren’t, and if researchers discover super-fast algorithms for these problems, that would be disastrous for cryptography — akin to suddenly routing speeding cars in both directions on a one-way street.

Even though a comprehensive understanding of computational hardness remains elusive, cryptographers have recently made exciting progress toward a unified theory of one-way functions. One big step was taken in 2020, when the Tel Aviv University cryptographer Rafael Pass and his graduate student Yanyi Liu proved that one-way functions are intimately connected to a specific compression problem called the time-bounded Kolmogorov complexity problem.

If that one problem really is hard to solve for most inputs, then Pass and Liu’s result yields a recipe for how to construct a provably hard one-way function — one that’s guaranteed to be secure even if other computational problems turn out to be far easier than researchers expected. But if there’s a fast algorithm for solving the time-bounded Kolmogorov complexity problem, then cryptography is doomed, and any function can be easily inverted. A one-way function based on the hardness of this problem is the most secure function possible — a one-way function to rule them all.

Building on Data Structures

Pass and Liu’s discovery was the latest chapter in a long line of research that uses complexity theory to better understand the foundations of cryptography. But it also suggested a way to invert that relationship: The equivalence between the time-bounded Kolmogorov complexity problem and function inversion implies that insights about either problem can reveal more about the other. Cryptographers have been studying function inversion algorithms for decades to better understand the weak points of their encryption methods. Researchers began to wonder whether those algorithms could help answer age-old questions in complexity theory.

Like many computational problems, function inversion can be solved by exhaustive search. Given an output string, simply plug every possible input into the function until you find the one that yields the right answer.
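
In code, the exhaustive-search inverter is just a loop. The sketch below uses a hypothetical 2-byte toy function so that the loop actually finishes; everything about it is illustrative rather than drawn from the article.

    import hashlib
    from itertools import product

    N = 2  # toy input length in bytes; real inputs are far longer

    def f(x: bytes) -> bytes:
        # Stand-in function to invert (truncated SHA-256, purely illustrative).
        return hashlib.sha256(x).digest()[:N]

    def invert_by_exhaustive_search(y: bytes) -> bytes:
        # Plug every possible N-byte input into f until one yields y.
        # Worst case: 256**N evaluations of f.
        for candidate in product(range(256), repeat=N):
            x = bytes(candidate)
            if f(x) == y:
                return x
        raise ValueError("y is not an output of f")

    # Finds *some* preimage of f(b"hi"), not necessarily b"hi" itself.
    print(invert_by_exhaustive_search(f(b"hi")))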

In 1980, the cryptographer Martin Hellman began to study whether it was possible to do any better — the same question those Soviet mathematicians had asked about compression problems decades earlier. Hellman discovered that yes, it’s possible — as long as you’re willing to put in some extra work in advance, using mathematical objects called data structures.

A data structure is essentially a table that stores information about the function to be inverted, and constructing one requires computing the outputs of the function for certain strategically chosen inputs. All those computations “could take a very long time,” said Ryan Williams, a complexity theorist at MIT. “But the idea is that this is done once, once and for all.” If you’re trying to invert the same function given many different outputs — say, to decode many different messages encrypted the same way — doing this work in advance might be worthwhile.

Of course, storing that extra information requires space, so take this strategy to the extreme, and you could end up with a fast program that can’t fit on any computer. Hellman designed a clever data structure that enabled his algorithm to invert most functions slightly faster than exhaustive search without taking up too much more space. Then in 2000, the cryptographers Amos Fiat and Moni Naor extended Hellman’s arguments to all functions.
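
The sketch below is a heavily simplified, hypothetical rendering of Hellman's idea, not the actual Hellman or Fiat-Naor construction: a single table of iterated-function chains, precomputed once, lets the inverter skip most of the exhaustive search for outputs the chains happen to cover. The toy function and the parameters are invented for illustration; the real algorithms use many rerandomized tables to cover essentially all outputs.

    import hashlib
    import random

    N_BITS = 20                 # toy domain: 20-bit strings
    SIZE = 1 << N_BITS

    def f(x: int) -> int:
        # Toy length-preserving function (truncated SHA-256 of the input).
        digest = hashlib.sha256(x.to_bytes(4, "big")).digest()
        return int.from_bytes(digest[:4], "big") % SIZE

    # One-time precomputation: walk chains of repeated applications of f and
    # store only their endpoints. This table is the "data structure."
    CHAIN_LENGTH = 128
    NUM_CHAINS = 128
    table = {}                  # chain endpoint -> chain start
    rng = random.Random(0)
    for _ in range(NUM_CHAINS):
        start = rng.randrange(SIZE)
        x = start
        for _ in range(CHAIN_LENGTH):
            x = f(x)
        table[x] = start

    def invert(y: int):
        # Walk forward from y; if we reach a stored endpoint, the chain that
        # produced it passes through a preimage of y, so replay that chain.
        z = y
        for _ in range(CHAIN_LENGTH):
            if z in table:
                x = table[z]
                for _ in range(CHAIN_LENGTH):
                    if f(x) == y:
                        return x
                    x = f(x)
            z = f(z)
        return None             # y is not covered by this one small table

    # A point known to lie on a stored chain gets inverted without trying
    # anything close to all 2**20 possible inputs.
    some_start = next(iter(table.values()))
    print(invert(f(some_start)))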

After Pass and Liu’s breakthrough in 2020, these old results were suddenly newly relevant. The Fiat-Naor algorithm could invert arbitrary functions faster than exhaustive search. Could it also work for compression problems?

Out of Uniform

The first researchers to raise the question were the complexity theorist Rahul Santhanam of the University of Oxford and his graduate student Hanlin Ren. They did so in a 2021 paper proving that compression problems and function inversion were even more intertwined than researchers had realized.

Pass and Liu had proved that if the time-bounded Kolmogorov complexity problem is hard, then function inversion must also be hard, and vice versa. But problems can be hard and still admit solutions that are a bit better than exhaustive search. Santhanam and Ren showed that there is a close connection between whether exhaustive search is required for one problem and whether it’s required for the other.

Their result had different implications for two broad classes of algorithms that researchers often study, called “uniform” and “nonuniform” algorithms. Uniform algorithms follow the same procedure for every input — a uniform program for sorting lists of numbers, for instance, will work the same way whether there are 20 entries on the list or 20,000. Nonuniform algorithms instead use different procedures for inputs of different length.

The data structures used by the Fiat-Naor algorithm are always tailored to a specific function. To invert a function that scrambles a 10-bit string, you need a data structure that’s different from the one you’d need to invert a function that scrambles a 20-bit string, even if the scrambling is done in a similar way. That makes Fiat-Naor a nonuniform algorithm.
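
A hypothetical extreme of this idea, just to illustrate the distinction: in the sketch below the precomputed "advice" is a separate full lookup table for each input length, so the procedure the inverter runs genuinely depends on how long its input is. Fiat-Naor instead uses far smaller tables and more computation; the function and the chosen lengths here are invented.

    import hashlib

    def f(x: int, n_bits: int) -> int:
        # Toy length-preserving function on n-bit inputs (illustrative only).
        digest = hashlib.sha256(x.to_bytes(8, "big") + bytes([n_bits])).digest()
        return int.from_bytes(digest[:8], "big") % (1 << n_bits)

    # Nonuniform "advice": one precomputed table per input length. No single
    # table serves every length, which is what makes the algorithm nonuniform.
    advice = {n: {f(x, n): x for x in range(1 << n)} for n in (8, 10, 12)}

    def invert(y: int, n_bits: int):
        # Select the table tailored to this input length, then look up y.
        return advice[n_bits].get(y)

    print(invert(f(1234, 12), 12))   # recovers a 12-bit preimage instantly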

Santhanam and Ren’s result suggested that it might be possible to transform the Fiat-Naor algorithm into an algorithm for solving compression problems. But adapting the algorithm from one problem to the other wasn’t straightforward, and they didn’t pursue the question further.

Pass stumbled on the same idea a year later, after hearing Fiat give a talk about the classic algorithm at a workshop celebrating Naor’s contributions to cryptography. “This idea of using function inversion had been in the back of my mind since then,” he said. He later began to work on the problem in earnest with Tel Aviv University researcher Noam Mazor.

Meanwhile, Ilango was inspired to attack the problem after discussions with other researchers, including Santhanam, on a visit to the Simons Institute for the Theory of Computing in Berkeley, California. “It came out of one of these very serendipitous conversations where you’re just throwing things around,” Santhanam said. Ilango later joined forces with Williams and Shuichi Hirahara, a complexity theorist at the National Institute of Informatics in Tokyo.

The hard part was figuring out how to embed the data structure at the heart of the Fiat-Naor algorithm into a nonuniform algorithm for solving compression problems. There’s a standard procedure for doing that kind of embedding, but it would slow the algorithm down, wiping out its advantage over exhaustive search. The two teams found cleverer ways to incorporate the Fiat-Naor data structure, and obtained algorithms for compression problems that worked on all inputs and remained faster than exhaustive search.

The details of the two algorithms differ slightly. The one by Ilango and his co-authors is faster than exhaustive search even if you restrict the search to the simplest possibilities, and it applies to all compression problems — time-bounded Kolmogorov complexity, the minimum circuit size problem, and many others. But the core idea was the same for both algorithms. The techniques from cryptography had proved their worth in this new domain.

Inversion Convergence

The new nonuniform algorithms raise a natural question: What about uniform algorithms? Is there a uniform algorithm that solves compression problems faster than exhaustive search?

The recent string of results implies that any such algorithm would be equivalent to a uniform algorithm for inverting arbitrary functions — something that cryptographers have unsuccessfully sought for decades. Because of that, many researchers find the possibility unlikely.

“I would be very surprised,” Santhanam said. “It would require a completely new idea.”

But Allender said researchers shouldn’t discount the possibility. “A good working hypothesis for me has been that if there’s a nonuniform way to do something, very likely there’s a uniform way,” he said.

Either way, the work has made complexity theorists newly interested in old questions in cryptography. Yuval Ishai, a cryptographer at the Technion in Haifa, Israel, said that’s what makes it most exciting.

“I’m really happy to see this convergence of interest between different communities,” he said. “I think it’s great for science.”
