
The Computer Scientist Who Finds Life Lessons in Games



For Shang-Hua Teng, theoretical computer science has never been purely theoretical. Now 58, Teng is a professor of computer science at the University of Southern California and a two-time winner of the Gödel Prize, an annual award recognizing groundbreaking theoretical work. But he often strives to connect that abstract theory to everyday life in ways both practical and playful.

Born in Beijing on the eve of the Chinese Cultural Revolution, Teng came to the United States for graduate school planning to study computer architecture, but he soon changed direction to focus on more abstract mathematical theory. He received his doctorate from Carnegie Mellon University in 1991 for proving a theorem about how best to partition graphs — webs of points, or nodes, connected by lines, or edges.

Though theoretical, the work had practical applications — and often, he found, practical applications led to new theoretical insights. During a 1993 NASA summer fellowship, Teng joined a team simulating fluid dynamics using “finite-element” methods, which model complex structures as assemblages of many tiny pieces. These assemblages can be treated as graphs, and Teng’s task was to adapt the partitioning method from his graduate research to this new setting. But he became curious about the partitioning technique that the NASA team had used previously, and began to investigate its underlying mathematical structure together with fellow computer scientist Daniel Spielman, now a professor of computer science at Yale University. That joint research project kicked off a decades-long collaboration that won them the two Gödel Prizes.

It wasn’t the only time he saw a deep link between theory and practice. “Each time, these seemingly totally practical things had this beautiful mathematics behind them,” Teng said.

More recently, Teng has turned his attention to the beautiful mathematics behind games like tic-tac-toe, chess and Go. In such “combinatorial” games, there’s no element of chance, and both players always know everything about the state of the board. Yet combinatorial games remain challenging because the number of ways a game can play out might be dizzyingly large.

Game theory researchers like to generalize such games to ever larger boards — scaling up tic-tac-toe from 3-by-3 squares to n-by-n, for instance — and quantify the difficulty of determining which player will win given some initial board state. The different possible answers sort games into the same “complexity classes” that crop up throughout theoretical computer science.


One famous complexity class goes by the prosaic name P, for “polynomial time,” and contains the kind of problems that can be solved in a reasonable amount of time, roughly speaking. Problems in the equally famous class NP may take an unreasonable amount of time to solve, but their solutions are easy to check. For problems in another complexity class, dubbed PSPACE, even such efficient verification isn’t guaranteed. When researchers consider the “deep logic” of two-player games — “if you do X, and then if I do Y, and then if you do Z,” and so on — they often find themselves talking about PSPACE. But as Teng has helped prove, the mathematics of combinatorial games is not always straightforward.
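The gap between solving and checking that separates P from NP can be made concrete with subset sum, a standard NP problem used here purely as an illustration (it is not discussed in the article): verifying a proposed answer takes a single pass, while finding one may mean searching through exponentially many subsets.

```python
from itertools import combinations

nums = [3, 34, 4, 12, 5, 2]
target = 9

def check(certificate):
    # Polynomial-time verification: just sum the proposed subset.
    return sum(certificate) == target

def solve():
    # Brute-force search over all 2^n subsets, smallest first.
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if check(subset):
                return subset
    return None

assert check((4, 5))        # checking a given certificate is instant
assert solve() == (4, 5)    # finding it required exhaustive search
```

No polynomial-time algorithm is known for solving such problems in general, which is exactly the asymmetry the class NP captures.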

Quanta spoke with Teng recently to discuss his path to computer science, the mathematics underlying board games, and his father’s influence. The interview has been condensed and edited for clarity.

What was it like getting an education in China?

I was born slightly before the Cultural Revolution, and my father was a university department chair in civil engineering. When the revolution happened, he was in captivity on campus. Then the whole campus was sent deep into the countryside.

I used to collect garbage to sell until I was practically finishing junior high, and then suddenly China changed. If you studied you could get into college, and we had no other prospect of having any regular job. I woke up, and I said, “I need to study.”

How did you choose computer science?

I wanted to study biology after high school. I don’t know why, but my father was not very happy with that. I was doing OK in math, and he asked me whether I wanted to do math. I said no. [Laughs.] And then he said, “You know, there’s a new discipline called computer science, and it’s really good.” Somehow, he nudged me into majoring in computer science.

The education at that time was very basic. We were not exposed to most things, and computer science was not even a department; it was a major in electrical engineering. But by totally random luck we were trained as math students in calculus, and I learned a few things that were eventually useful for becoming a theoretician. Without that I probably would have had zero chance to pass. These days the kids are much more talented: From high school on they are more gifted mathematicians than I was when I came to this country.


How did those gaps in your knowledge affect your experience of graduate school?

One day [my adviser, Gary Miller,] discovered I’d never heard of NP. It was in a discussion. He said, “This problem looks NP-hard.” I said, “Uh-huh.” He said, “You don’t believe me?” And then he began to prove it, and halfway through he sharply turned to me, because I was just sitting there, and he said, “Do you know what NP-hard is?” I said no.

I thought that was my last day working with him, but he continued and told me the definition. He said, “If you don’t know, it doesn’t matter, as long as you are able to think.” He had a tremendous impact on me.

You’re primarily a theoretician, but throughout your career you’ve made forays into industry. How did this practical work connect to your theoretical research?

In my thesis I developed some geometric methods for partitioning graphs. I was able to show that this family of geometric methods gave provably good cuts for finite-element graphs.

On my mentor’s recommendation, I began to give talks at NASA and Boeing Aerospace. At Boeing, I remember the 3D model of one of the wings already had close to a million elements — they couldn’t even load that into one machine. So they wanted to cut this graph into different components, put them onto different machines with similar computational loads, and minimize communication. That’s why, mathematically, the problem is a graph cut.

In theoretical computer science, often the underlying mathematical principles are unchanged even when the appearance of the problem changes drastically, from optimization to game theory. When you’re doing the research, it doesn’t feel like a drastic change.

Speaking of game theory, I saw that you helped design a board game. How did that happen?

Oh, I love board games! There are beautiful connections with complexity theory. But mostly I’m the student of my students.

I was giving a talk at Boston University about a beautiful discrete theorem called Sperner’s lemma. It’s very simple in one dimension. You have a line segment where one end is red and one end is blue. You divide it into subsegments [with nodes at both ends] and color each new node either red or blue. Then [no matter how you color them] we know there must be a segment that has both colors.
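The one-dimensional case Teng describes is small enough to verify exhaustively. The sketch below (my own illustration, not from the talk) fixes a path with a red end and a blue end, tries every coloring of the interior nodes, and confirms not only that a two-color subsegment always exists but the stronger Sperner statement that the number of such subsegments is odd.

```python
from itertools import product

RED, BLUE = 0, 1
n = 8  # number of subsegments; small enough to enumerate every coloring

for interior in product([RED, BLUE], repeat=n - 1):
    colors = (RED,) + interior + (BLUE,)  # endpoints are fixed
    bichromatic = sum(a != b for a, b in zip(colors, colors[1:]))
    assert bichromatic >= 1      # some segment has both colors...
    assert bichromatic % 2 == 1  # ...in fact, an odd number of them do
```

Intuitively, walking from the red end to the blue end, the color must switch an odd number of times.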

In two dimensions, it’s very fascinating. You have a triangle, and now you have three colors: One corner is red, one is blue and one is green. You divide this triangle into smaller triangles, so the edges are broken into segments. Each outside edge follows the one-dimensional rule: Nodes can only use the colors of the two ends. Inside the triangle, you can do all three colors any way you like. Sperner’s lemma says that any way you divide it, if you do this coloring, there must be a triangle that has all three colors.
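The two-dimensional version can be spot-checked the same way on one tiny triangulation — here, a single subdivision of the triangle at its edge midpoints (my own toy setup, a sanity check rather than a proof). Each midpoint sits on an outer edge, so by the one-dimensional rule it may only use that edge's two corner colors; every legal coloring turns out to contain an odd number of three-color triangles.

```python
from itertools import product

RED, BLUE, GREEN = 0, 1, 2

# Triangle ABC subdivided once at the midpoints of its edges.
small_triangles = [
    ("A", "mAB", "mCA"), ("B", "mAB", "mBC"),
    ("C", "mBC", "mCA"), ("mAB", "mBC", "mCA"),
]

# Each midpoint may only use the colors of its edge's two corners.
for mAB, mBC, mCA in product([RED, BLUE], [BLUE, GREEN], [GREEN, RED]):
    coloring = {"A": RED, "B": BLUE, "C": GREEN,
                "mAB": mAB, "mBC": mBC, "mCA": mCA}
    rainbow = [t for t in small_triangles
               if {coloring[v] for v in t} == {RED, BLUE, GREEN}]
    assert len(rainbow) % 2 == 1  # odd, hence at least one
```

In this particular subdivision every one of the eight legal colorings yields exactly one three-color triangle.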

Kyle Burke was my student, working on numerical analysis at the time. He came to my office and said there could be a beautiful board game of Sperner’s lemma: Two players iteratively color a board, and whoever induces a three-color triangle will lose the game. The best board games have winners rather than a tie, and here, clearly someone will win. Why? Because Sperner’s lemma!

I called my friend David Eppstein from Irvine to talk about what makes a good board game. He said, “A good game has simple rules and a beautiful board, and it has to be PSPACE-hard.” Because if you can solve it in polynomial time, a computer would beat you all the time.

So we went through those criteria. Kyle said, “Is this game simple?” I said, “Yeah, it’s one sentence!” He said, “Is this game colorful?” I said, “By design!” Then he said, “If I prove it’s PSPACE-hard, can I get a Ph.D.?” I said yes, and he did. There are many different facets of his theorem. It reveals certain things about fixed points, which are a very beautiful concept in mathematics.


Can I play the game anywhere?

It’s available, with some tweaks, online.

What games do you like to play?

I’m a theoretician of games. [Laughs.] I play a little bit with my daughter, but I didn’t grow up playing them. Unlike my students, who have played games all their lives.

What other work have you done on the mathematics of board games?

We had a paper recently about an open question: If you put two polynomial-time-solvable games together, side by side, would that make them PSPACE-hard? At each move you can only play one of them. This is called summation of games.

What does it mean to put two games together?

In the ancient game of Go, when you put down enough stones, you get many separate arenas, so in some sense you are playing a sum of games. You have to worry about this corner and that corner. You want to win the whole thing, but that doesn’t mean you have to win every part.

It’s philosophically interesting, right? It’s like you have a war, and it has many battles, but your attention is finite. At any moment you can only make a single decision on one of the battlefields, and your opponent can either respond or double down in some other battlefield. I was trying to explain this to my father. When you play a sum of games, it really means: How do you lose strategically?

We proved it for two games, but you can put three games together and the theorem is still true: Three polynomial-time games put together can become PSPACE-hard.
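The move structure of a game sum is easy to state in code. The sketch below uses Nim heaps as the component games — a standard textbook stand-in, not the games from Teng's paper, and one where the sum happens to stay easy: a brute-force game-tree search agrees with the classical polynomial-time XOR ("Nim-sum") rule. Teng's theorem is that for other polynomial-time components, no such shortcut exists and the sum becomes PSPACE-hard.

```python
from functools import lru_cache, reduce

# Each turn: pick ONE heap and remove any positive number of stones --
# exactly the "one move in one component" rule of a sum of games.
# Normal play: the last player able to move wins.

@lru_cache(maxsize=None)
def first_player_wins(heaps):
    heaps = tuple(sorted(h for h in heaps if h > 0))
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            rest = heaps[:i] + (h - take,) + heaps[i + 1:]
            if not first_player_wins(rest):
                return True   # a move exists that leaves the opponent losing
    return False              # no move avoids handing the opponent a win

# The game-tree answer agrees with the XOR rule for Nim sums.
for heaps in [(3, 5), (1, 2, 3), (4, 4), (7,)]:
    assert first_player_wins(heaps) == (reduce(lambda a, b: a ^ b, heaps) != 0)
```

Note that winning the sum can require deliberately making a losing move inside one component — the "knowing when to lose" that Teng describes below.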


Since he pushed you toward computer science, how did your father respond to the different work you’ve done over the years?

He often asked me, “Why are you doing this?” Working in theory, often you have no result for years, and he understood that gradually. Early on I could talk about the finite-element method — they teach that in civil engineering too. But I couldn’t figure out how to talk about this recreational mathematics.

Then I thought about an idiom derived from this famous Chinese novel called Romance of the Three Kingdoms. One of the characters, Zhuge Liang, was almost a perfect strategist, and the idiom goes, “Three shoe fixers are better than Zhuge Liang.” It’s used in this lighthearted way to say that three average people can be perfect when they put their heads together. But when you look at the history of this idiom, things were pronounced differently in different regions, and “shoe fixer” had the same sound as “field general.” So it says, “Three field generals together are better than this perfect strategist.”

I said to my father that’s exactly the theorem we proved with the summation of games. The field generals represent [algorithms for solving] polynomial-time games: On each battlefield, they know how to win. But the hard part is knowing when to lose, not how to win each of the component games. If someone can play that hard game, they really are the best strategist. The field generals do not make these deep logic decisions, but somehow if you put them nicely together, they’re no worse than this perfect strategist.

I told my dad, “I finally realized this math theorem that’s equivalent to one of our famous idioms!” He was 94 at that time, very sharp, and he said, “That’s a good try.” I didn’t quite convince him. That was my last technical conversation with him; a few months later he passed. Whenever I think about explaining my work, this is my highlight.

