33

Does there exist an algorithm with which, given infinite processing power, a computer could play chess perfectly, i.e., generate a perfect move from any position?

If so, where can I find the pseudo-code for it?

Jonah
  • 9
    What do you mean by perfect chess? – Herb Dec 19 '17 at 19:53
  • 5
    @HerbWolfe I assume he means that it never makes a move that permits its opponent to force it to lose and resigns if, and only if, every possible move permits its opponent to force it to lose. – David Schwartz Dec 19 '17 at 20:36
  • 3
    Negamax/minimax don't make mistakes but they're not optimal at provoking mistakes in weaker opponents. – CodesInChaos Dec 19 '17 at 20:58
  • @CodesInChaos Notice that minimax searches are always cut down to a certain depth, so they don't really see the whole game. – gented Dec 19 '17 at 23:09
  • 5
    @DavidSchwartz - "perfect chess", of course, can't be defined. Neither can "infinite processing power". Does this mean "executes all instruction sequences in 0 time"? "Has an infinite number of processors available"? FWIW - my definition of "perfect chess" is "never loses a game". – Bob Jarvis - Слава Україні Dec 19 '17 at 23:27
  • @BobJarvis Then what happens if two such algorithms play each other? – David Schwartz Dec 19 '17 at 23:28
  • 1
    @DavidSchwartz: they take infinite time to achieve a draw. :-) – Bob Jarvis - Слава Україні Dec 19 '17 at 23:29
  • 25
    Yes, it's called brute force. With infinite processing power you don't need to do alpha-beta pruning, although you may also need a rather large amount of storage to hold your search tree. – Michael Dec 19 '17 at 23:45
  • @gented: CIC's comment still applies even if you don't cut down the search -- an algorithm that never draws/loses a winning game and never loses a drawn game may yet get worse results in practice than an algorithm that makes mistakes but is pretty good at, for example, tricking its opponent into making a losing play in a drawn game. – Dec 20 '17 at 08:24
  • 1
    If something takes infinite time (never ends), it is by definition not an algorithm. The question should say "arbitrary" processing power. – RemcoGerlich Dec 20 '17 at 09:01
  • @BobJarvis: There is an easy way to play perfect chess by your definition: simply refuse to make a move. Of course, this doesn't work with time controls. –  Dec 20 '17 at 09:13
  • 1
    If you have infinite processing power you don't need a particular algorithm, you can simply calculate every possible position – Darren H Dec 20 '17 at 11:07
  • 4
    The concept of an "algorithm" and the concept of infinite processing power don't really mix. The theory of algorithms and of computability is all based on an assumption of achieving a result in a finite number of steps. If you're allowed an infinite number of steps, the distinction between what is computable and what isn't disappears. – Michael Kay Dec 20 '17 at 23:48
  • 3
    Please don't use the term "infinite power" (infinite means so many different things); for chess you only need a finite but very, very long time. So long that it is actually not usable. – Jean-Baptiste Yunès Dec 21 '17 at 08:14
  • @BobJarvis actually infinite processing power is well defined. We just don't have the ability to implement it. See this article on Wikipedia: https://en.m.wikipedia.org/wiki/Hypercomputation – user64742 Dec 21 '17 at 20:53
  • @MichaelKay actually there are hypercomputation algorithms in existence. However it is a pretty stale thing to read about as nobody can implement them yet. – user64742 Dec 21 '17 at 20:54
  • 1
    @DavidSchwartz depends on whether or not a player in chess can force a win. – user64742 Dec 21 '17 at 20:57
  • The answer is yes for any game where all players have perfect information (as is the case in chess). In fact, most answers can be applied to any game in this category. – Jasper Dec 25 '17 at 02:04

8 Answers

67

Does an algorithm exist? Yes. According to Zermelo's Theorem, there are three possibilities for a finite deterministic perfect-information two-player game such as chess: either the first player has a winning strategy, or the second player has a winning strategy, or either player can force a draw. We don't (yet) know which it is for chess. (Checkers, on the other hand, has been solved: either player can force a draw.)

Conceptually, the algorithm is quite simple: construct the complete game tree, evaluate the leaf nodes (the game-ending positions), propagate those results back up the tree, and then either make a winning initial move, resign (if the opponent has a forced win), or offer a draw (if neither side can force a win).
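
In Python-flavoured pseudocode, a minimal sketch of that brute-force search might look like the following. The helpers legal_moves(pos), make_move(pos, move) and result(pos) are hypothetical stand-ins for a full rules engine; result returns +1 if White has won, -1 if Black has won, 0 for a draw, and None if the game is not over (the 50-move and repetition rules, which make every game finite, are assumed to be folded into it):

def value(pos, white_to_move):
    # Game-theoretic value of pos with perfect play, from White's point of view.
    # Termination relies on the draw rules making every game finite.
    r = result(pos)
    if r is not None:                      # leaf node: the game is over
        return r
    children = [value(make_move(pos, m), not white_to_move)
                for m in legal_moves(pos)]
    return max(children) if white_to_move else min(children)

def perfect_first_move(start_pos):
    # White to move at the start; pick the move whose resulting position scores best.
    scored = [(value(make_move(start_pos, m), False), m)
              for m in legal_moves(start_pos)]
    best_value = max(v for v, _ in scored)
    best_move = next(m for v, m in scored if v == best_value)
    return best_move, best_value           # 1: play and win, 0: offer a draw, -1: resign

Nothing here is chess-specific except the helpers; the impossibility is purely a matter of scale.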

The problem lies in the details: there are approximately 10^43 possible positions, and an even larger number of possible move sequences (most positions can be reached in more than one way). You really do need that infinitely powerful computer to run this algorithm, because a computer that could actually execute it either can't fit in the known universe or won't finish until sometime after the universe ends.

Mark
  • 1
    That would be a stupid algorithm, actually. It assumes that the other player is a perfect player also. Even if the supercomputer is in a "losing position" assuming a perfect opponent, that doesn't mean it will in fact lose. – Wildcard Dec 19 '17 at 22:52
  • 13
    @Wildcard No, it doesn't assume anything: it just contains all possible legal games of chess and it will pick all the ones where the player at hand does not lose. – gented Dec 19 '17 at 23:12
  • 11
    @gented, I was referring to the "resign" step of the algorithm. That's not a necessary step at all. – Wildcard Dec 19 '17 at 23:19
  • 4
    The 'perfect' computer would need to know which branches lead to 'resign' in order to avoid them – Black Dec 20 '17 at 04:25
  • 39
    The three-repetition rule bounds the search space, so the computer does not have to be infinitely powerful, merely astronomically powerful. – Hoa Long Tam Dec 20 '17 at 09:25
  • 9
    For reference, compare a lower bound for the number of possible games (10^120) to the number of atoms in the observable universe (on the order of 10^80). The simplest algorithm would have to find all those games and store their data. Storing one game per atom would take 10^40 times as many atoms as we estimate in the observable universe. – Engineer Toast Dec 20 '17 at 13:49
  • 2
    @EngineerToast that's why we use iterative deepening depth-first search instead of breadth-first search when the state space gets big. Or in this case, a good old forgetful DFS. Then you don't need ridiculous amounts of memory, just ridiculous amounts of CPU power. – John Dvorak Dec 20 '17 at 18:19
  • @JohnDvorak I am incredibly ignorant of search algorithms. Are you saying it's possible to solve chess with our current storage capacity if only we had enough time and / or processing speed? – Engineer Toast Dec 20 '17 at 18:47
  • 4
    @EngineerToast Of course it is! Just google depth-first search! For it the storage capacity needed will be approximately number_of_moves_at_any_point * longest_game_possible (excluding cycles). The longest possible game (with our present draw rule) is just shy of 6,000 moves, well within storage capacity. The problem is that this only produces an end result + next best move, not an entire strategy, and it cannot reuse already looked at positions (excluding cycles), so will end up searching through some games multiple times. But who cares - it's finite time. – Ordous Dec 20 '17 at 19:16
  • 3
    An infinitely powerful computer would be infinitely more powerful than the one we need to solve chess. Assuming we do some basic caching and merging of vertices on the game tree, a mere 10^45 Hz processor with a decent amount of RAM (10^45 bytes or so) could solve chess in just a few seconds. – Ray Dec 21 '17 at 00:30
  • 1
    @EngineerToast: yes, but when the heat death of the universe happens we won't have made visible progress yet. – RemcoGerlich Dec 21 '17 at 09:15
  • 8x8 checkers has been solved, 10x10 not yet as far as I know. – user1803551 Dec 21 '17 at 17:24
  • 6
    This answer is great until the very end when you refer to an "infinitely powerful computer". That's not what you mean, and that phrase doesn't belong in the question nor the discussion. – Don Hatch Dec 21 '17 at 22:52
  • Brute forcing is just ONE possibility. And it is maybe useless since it is not known whether there is a game strategy that allows you to win 100% of times. With infinite power you can apply other strategies; for instance you can hack the nuclear weapons controlling computers, launch one of them and then tell the other player "resign or I'll have it land on your house". Maybe hacking the PCs is a bit harder computationally, but given infinite power the two algorithms will take the same time to execute... – frarugi87 Dec 22 '17 at 13:25
  • 1
    @HoaLongTam The three-repetition rule is not needed to limit the search space - as any forced win with the rule can be played without repetition. If a player can force a repetition they could do it not only 3 times but 25 times and thus it would be a draw due to the 50-move rule. – Hans Olsson Dec 22 '17 at 15:56
  • Does the situation improve with quantum computers? – user1997744 Dec 22 '17 at 17:26
  • @Black: Wildcard is simply correct. If you have ever programmed a computer chess player before, you would understand what he is saying. – user21820 Dec 23 '17 at 04:40
  • @EngineerToast But what about subatomic particles like quarks and electrons? =) – jpmc26 Dec 23 '17 at 05:08
  • 1
    @frarugi87 A brute force algorithm can tell you which category of games chess falls into. If one player can force a win, then the brute force algorithm will discover this. If the game can be forced into a draw, the brute force algorithm will find this. And it can additionally do so with any starting position of the board; it merely needs to trace down to all the leaves. There is no way in which it would not be useful. – jpmc26 Dec 23 '17 at 05:12
  • 2
    The finiteness of the board also bounds the search space. Even if you didn't have the three repetition rule or the 50 moves rule, you would know if you found the same position again that the game must continue in the same way from there, so there is no need for further searching of the tree. – N. Virgo Dec 25 '17 at 16:44
  • @Nathaniel: From a bare computability perspective, that is true. However, if we (like some of the above commenters) are interested in trading time for space, then the 50-move rule seems to be vital for keeping the recursion depth you might need to deal with small. – hmakholm left over Monica Dec 25 '17 at 20:53
  • @HenningMakholm I'm not sure if that's correct or not. Without the 50 move rule and three repetition rule you can immediately stop searching if you re-encounter the same board position, but with those rules in place you can't, because it's actually a different game state (the move counter is different). So it makes the search tree bigger in some ways, as well as smaller in others, and I'm not sure which wins out. In any case, "small" in this context is still large compared to the physical universe, so if you really want to cut it down to a manageable size you'd need a lot more clever tricks. – N. Virgo Dec 26 '17 at 02:40
  • 1
    @Nathaniel: Actually you can stop searching when the 50-move rule applies, because two perfect players cannot both prefer to continue the game (assuming they have a weak preference of an explicit draw over an infinite game). The only situation where we may need to continue searching from a position where the rule applies is if the first player to have the option can immediately move a pawn or capture a piece (in which case it is still possible that he may force a win). So there will be progress. – hmakholm left over Monica Dec 26 '17 at 14:24
  • And "small" in this case is, according to Ordous above, about 6000 moves, which is quite manageable. Recall that here we're talking about the depth of the tree (which is what governs the space requirements of DFS), not the total number of nodes. – hmakholm left over Monica Dec 26 '17 at 14:31
26

See https://en.wikipedia.org/wiki/Endgame_tablebase.

With infinite computer power, one could build such a table for the starting position and solve chess.

In practice, only positions with up to seven "men" (pawns and pieces, counting the kings) have been solved using current supercomputers, so we are very far from solving chess. The complexity of the problem grows exponentially with the number of pieces.
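
For a sense of how such tables are built, here is a much-simplified sketch of retrograde analysis in Python. The helpers all_positions(), successors(pos), predecessors(pos) and is_checkmate(pos) are hypothetical stand-ins for a real move generator; each position is assumed to include the side to move, successors/predecessors are assumed to return each position once, and draw rules such as the 50-move rule are ignored:

def build_tablebase():
    # result maps a position to "win", "loss" or "draw" for the side to move.
    result = {}
    # For undecided positions: how many successors are not yet known to be
    # wins for the opponent.
    unresolved = {}
    frontier = []

    # Seed with the terminal positions (no legal moves at all).
    for pos in all_positions():
        succ = successors(pos)
        if not succ:
            if is_checkmate(pos):
                result[pos] = "loss"        # the side to move is mated
                frontier.append(pos)
            else:
                result[pos] = "draw"        # stalemate
        else:
            unresolved[pos] = len(succ)

    # Propagate backwards: a position is a win if some move reaches a position
    # lost for the opponent, and a loss if every move reaches a won one.
    while frontier:
        pos = frontier.pop()
        for prev in predecessors(pos):
            if prev in result:
                continue
            if result[pos] == "loss":
                result[prev] = "win"
                frontier.append(prev)
            else:                           # result[pos] == "win"
                unresolved[prev] -= 1
                if unresolved[prev] == 0:
                    result[prev] = "loss"
                    frontier.append(prev)

    # Anything never labelled can force no better (and no worse) than a draw.
    for pos in all_positions():
        result.setdefault(pos, "draw")
    return result

Real tablebase generators follow the same backward-propagation idea, working through small material configurations and typically storing distance-to-mate or similar information rather than just the bare win/draw/loss value.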

itub
  • 9
    As a side note, if you actually produced such a table, no matter what you stored the information on, it would weigh roughly 10^43 times as much as the observable universe; considering there are ~10^123 possible chess positions and only ~10^80 baryons in the observable universe. – Shufflepants Dec 19 '17 at 22:49
  • Can this answer be summarized as "retrograde analysis is an algorithm that would solve chess if given an infinitely powerful computer"? – ryanyuyu Dec 19 '17 at 23:12
  • 6
    @Shufflepants who said i was storing it using baryons? – Michael Dec 19 '17 at 23:46
  • @ryanyuyu, yes, pretty much. Isn't it amazing what one can do when resources are infinite? The universe being finite is just a technicality. :-) – itub Dec 20 '17 at 00:24
  • @Shufflepants FWIW If I calculated correctly then you could store it in a black hole with a radius in the range of 10^24 - 10^25 km. That's only a few times the size of the observable universe... – Christoph Dec 20 '17 at 10:01
  • 3
    @Christoph And assuming conservation of information, and assuming you had a detector and your super computer with infinite processing power, you could slowly over the course of something like a googolplex years read out the tablebase as hawking radiation. – Shufflepants Dec 20 '17 at 14:58
  • 3
    @Shufflepants Note that an actual winning strategy might require much less space than a full tablebase. For instance, Nim has a winning strategy that is simple to describe, there is no need to build a huge table of all possible states. – Federico Poloni Dec 20 '17 at 17:50
  • 2
    This solution as stated is not viable. The mass of such a table would form a black hole and it would be impossible to exfiltrate data from it. – emory Dec 23 '17 at 09:57
20

If you really had infinite processing power, such an algorithm would actually be trivial to write. As chess has a finite number of possible states, you could in theory just iterate through them all until you find a path of perfect play. It would be horribly inefficient, but if you have infinite processing power, it wouldn't matter.

vsz
  • That's not true. He said you have infinite processing power, but did not say anything about infinite space. – ubadub Dec 23 '17 at 20:05
  • @ubadub : We wouldn't need infinite space. The length of a game is limited due to the 50-move rule, and a rule can be made up to sort all possible moves from a position. As they can be sorted, they can be stored as an integer. This is all the memory required to walk the whole tree. And if you have infinite time, you can walk the tree as often as you want, so you don't have to store every possible chess game. – vsz Dec 23 '17 at 21:35
  • The length of the game is limited, but it is extremely large; as someone else pointed out, if you produced a table to store all such games, "no matter what you stored the information on, it would weigh roughly 10^43 times as much as the observable universe; considering there are ~10^123 possible chess positions and only ~10^80 baryons in the observable universe". – ubadub Dec 23 '17 at 23:43
  • 2
    @ubadub : That is true, but I was not talking about "a table to store all such games". There are many tree-related algorithms which don't have to hold all the nodes of the whole tree in memory. – vsz Dec 24 '17 at 09:57
  • @vsz good point – ubadub Dec 25 '17 at 02:45
13

To directly address the question: yes, there is such an algorithm. It is called minimax. (The endgame tablebases are generated by using this algorithm (backwards!), but the plain old simple minimax algorithm is all you need.) This algorithm can play any two-player zero-sum game perfectly. Find pseudocode here:

https://en.wikipedia.org/wiki/Minimax

Note that variants of this algorithm are used by modern computer chess programs.

This algorithm picks the move that is best for the side to move, assuming the opponent replies in kind: for White it would choose a move evaluated at +2 over one evaluated at +1, and for Black's reply it assumes the move evaluated at -2 is chosen over the one at -1.
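
As a rough Python rendering of that idea (a sketch, not the Wikipedia pseudocode itself), with hypothetical helpers legal_moves, make_move, is_terminal and evaluate, where evaluate scores a position from White's point of view:

def minimax(pos, depth, maximizing_player):
    # Score of pos from White's point of view, searching `depth` plies ahead.
    if depth == 0 or is_terminal(pos):
        return evaluate(pos)
    if maximizing_player:     # White to move: take the highest-scoring reply
        return max(minimax(make_move(pos, m), depth - 1, False)
                   for m in legal_moves(pos))
    else:                     # Black to move: take the lowest-scoring reply
        return min(minimax(make_move(pos, m), depth - 1, True)
                   for m in legal_moves(pos))

def best_move_for_white(pos, depth):
    return max(legal_moves(pos),
               key=lambda m: minimax(make_move(pos, m), depth - 1, False))

Searched to the full depth of the game (a depth at least as long as the longest possible game), with evaluate returning only win/draw/loss values at game-ending positions, this plays perfectly; real engines stop at a modest depth, use a heuristic evaluate, and add refinements such as alpha-beta pruning.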

chessprogrammer
5

Not only is there an algorithm to play perfect chess, it is possible to write a short program that will (given infinite resources) play any deterministic perfect-knowledge finite-duration two-player game perfectly.

The game engine does not even need to know the rules of the game it is playing. All it needs is an opaque representation of a "game state" and functions that (a) given any game state, provide a list of legal next game states and (b) given a game state, decide if it is a win for player 1, a win for player 2, a draw, or it is not an end state.

Given those functions a simple recursive algorithm "solves" the game.

This fact has been alluded to in previous answers by chessprogrammer (minimax) and by Acccumulation (who provides a version of the program in python).

I wrote such a program over 20 years ago. I tested it by playing noughts-and-crosses (tic-tac-toe if you are American). Sure enough it played a perfect game.
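
As an illustration of that approach (a reconstruction, not the original program), here is a small self-contained Python version using noughts-and-crosses; only winner and next_states are game-specific:

from functools import lru_cache

# Game-specific part: a state is a tuple of 9 cells ('X', 'O' or ' '); X moves first.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(state):
    # Return 'X', 'O', 'draw', or None if the game is not over.
    for a, b, c in LINES:
        if state[a] != ' ' and state[a] == state[b] == state[c]:
            return state[a]
    return 'draw' if ' ' not in state else None

def next_states(state):
    # All states reachable in one legal move.
    player = 'X' if state.count('X') == state.count('O') else 'O'
    return [state[:i] + (player,) + state[i + 1:]
            for i, cell in enumerate(state) if cell == ' ']

# Game-agnostic part: the solver only needs winner(), next_states() and the player labels.
@lru_cache(maxsize=None)
def solve(state, to_move):
    # Best outcome the side to move can force: 1 win, 0 draw, -1 loss.
    outcome = winner(state)
    if outcome is not None:
        return 0 if outcome == 'draw' else (1 if outcome == to_move else -1)
    other = 'O' if to_move == 'X' else 'X'
    # Whatever the opponent can then force is the negative of our result.
    return max(-solve(s, other) for s in next_states(state))

def best_move(state, to_move):
    other = 'O' if to_move == 'X' else 'X'
    return max(next_states(state), key=lambda s: -solve(s, other))

print(solve((' ',) * 9, 'X'))   # prints 0: perfect play is a draw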

Of course this will fall over quickly on any imaginable computer for any serious game. Because it is recursive it is effectively building the entire game tree on the stack, so you will get a "stack overflow" (pun very much intended) before you get anywhere near analysing the 10^123 states of chess referred to in other answers. But it is fun to know that in principle this small program would do the job.

For me this also says something interesting about AI: however much "intelligence" you think is exhibited by Deep Blue, or Go Zero, or indeed by a human playing Chess or Go there is a sense in which these games have trivial, exactly computable optimal solutions. The challenge is how to get a good though not optimal solution in a reasonable time.

gareth
  • Your algorithm only works for perfect-knowledge two-player games. It will fall over for hidden-information games such as Stratego, because any implementation of function (a) violates the game rules. It also fails for games of potentially infinite duration: for example, drop the 50-move rule from chess, and it can't tell that two kings chasing each other around the board isn't a winnable state. All it can tell is that it's not an end state. – Mark Dec 22 '17 at 18:59
  • Valid points. I will edit my answer. – gareth Dec 23 '17 at 19:36
3

I will ignore the possibilities of draws or infinite sequences of moves for simplicity. Once the algorithm is understood, it is not particularly difficult to extend it to those cases.

First, some definitions:

  1. Any move that wins the game for the player who makes that move is a winning move.

  2. Any move that loses the game for the player who makes that move is a losing move.

  3. Any move that leaves the other player with at least one winning move is also a losing move. (Since the opponent can take that move and force a loss.)

  4. Any move that leaves the other player with only losing moves is also a winning move. (No matter what move your opponent makes, you will win.)

  5. A perfect strategy means always making winning moves if any remain and resigning when one has only losing moves remaining.

Now, it's trivial to write a perfect strategy. Simply explode all possible move sequences and identify winning/losing moves. Ignoring stalemate, this will eventually identify every move as either a winning move or a losing move.

Now, the strategy is trivial. Look at all your possible moves. If any winning moves remain, take one and win. If only losing moves remain, resign, since your opponent can force you to lose.

It is not difficult to adjust the strategy to include the possibility of a stalemate.

Update: Just in case it's not clear how this identifies every move as a winning move or a losing move, consider:

  1. Every move that results in a win is a winning move.
  2. Every move that results in a loss is a losing move.
  3. Every move that results in the opponent having only winning or losing moves is either a winning or a losing move.
  4. Call n the number of moves in the longest possible chess game. (We are ignoring unbounded sequences for now, though including them is not difficult.)
  5. There are no moves with n prior moves we need to consider.
  6. Every move with n-1 prior moves is either a winning move or a losing move since n moves ends the longest game.
  7. Thus every move at depth n-2 is followed by only winning moves or losing moves and thus is itself a winning move or losing move.
  8. And so on back to the first move.

David Schwartz
  • 1
    Your definitions of winning and losing moves are not comprehensive enough. The first move, for instance, neither wins the game (#1), nor leaves the opponent with only losing moves (#4), so it isn't a "winning move". Neither does it lose the game (#2), nor leave the opponent with any winning move (#3), so it isn't a "losing move". Your strategy requires that every move is defined either as a "winning move" or a "losing move", which simply isn't the case as you've defined it. – Nuclear Hoagie Dec 19 '17 at 20:47
  • 2
    @NuclearWang It does define every move as either a winning move or a losing move. What do you think the third alternative is? Visualize the tree of all possible chess games (and remember, we're excluding ties or infinite sequences for now). Every chain ends in either a win or a loss. This percolates up through the tree eventually identifying every move as either a winning move or a losing move. – David Schwartz Dec 19 '17 at 20:59
  • 13
    @NuclearWang either the first move is a winning move for one player, or else chess is (like tic-tac-toe) a drawn game with perfect play. We don't know which because no one has ever had the computing power to run this algorithm to completion, and no one has found a more direct proof. – hobbs Dec 19 '17 at 21:04
  • 8
    There is no randomness and no hidden information in chess, which leaves no room for "maybe". Every position is won, lost, or drawn (even if we haven't managed to identify them as such). And this explanation is leaving out the "drawn" option for simplicity, but it mostly amounts to 1) a position is drawn if it's drawn according to the rules, and 2) a position is drawn if it has no winning moves, but has at least one move that leaves the opponent with no winning moves. – hobbs Dec 19 '17 at 21:22
  • @hobbs the first move could (it's unlikely to say the least.... but still) be a "losing move" for white, how can you exclude that? – Francesco Dec 20 '17 at 14:25
  • In what sense would resigning ever be a better strategy than playing an arbitrary move? Unless one knows that one's opponent is perfect and in good health, I see no reason to believe that picking a move at random would not yield some non-zero probability of winning (even if the opponent would have nothing but forced winning moves, the opponent might forfeit because of time or other considerations). – supercat Dec 20 '17 at 16:36
  • @supercat Sure, one can try to trick one's opponent into making a mistake, but there's a lack of perfection in that strategy. If you accept that sometimes you should make a move just in the hopes that it will cause your opponent to make a mistake or you might benefit from your opponent's quirks, then there is no such thing as perfect chess. – David Schwartz Dec 20 '17 at 18:04
  • 2
    @DavidSchwartz: Unless someone is in a losing position, every move that isn't perfect is bad. In a losing position, there would generally be no single "perfect" move [except in a forced-move situation] since any legal move could have some probability of being the only winning or drawing move in some conceivable (possibly highly contrived) circumstances. Resigning, however, would seem the unambiguous worst "move". Suppose the game is proven solved as a win for White with d4. Would you want to play a chess program which responded to 1. d4 with ...resigns? – supercat Dec 20 '17 at 18:27
  • @supercat And yet, it is the perfect move. – David Schwartz Dec 20 '17 at 18:28
  • @DavidSchwartz: Designing a good program to play any perfect-information abstract game is not logically forced. It is true that against a perfect player resignation of a lost position is no different from playing to the end. However, a good program will not resign until it evaluates that the opponent is likely to win. Common chess programs are like that, as was AlphaGo. From a losing position, one can choose the next move based on many factors, such as the number of moves the opponent needs to guarantee a win, or the fraction of opponent responses that are imperfect. – user21820 Dec 23 '17 at 04:50
  • @Francesco I didn't exclude it, I just used less-than-ideal phrasing. I counted "lost for white" as "won for black", even though white is to move :) – hobbs Dec 23 '17 at 09:38
3

Suppose you have three functions: win_state, get_player, and next_states. The input for win_state is a game state, and the output is -1 if White is in checkmate, 0 if it's a draw, 1 if Black is in checkmate, and None otherwise. The input for get_player is a game state, and the output is -1 if it's Black's turn and 1 if it's White's turn. The input for next_states is a game state, and the output is a list of the possible next game states that can result from a legal move. Then the following function, when given a game state and a player, should tell you which game state to move to for that player to play perfectly.

def best_state(game_state, player):
    def best_result(state):
        # Value of `state` from White's point of view (-1, 0 or 1), assuming
        # perfect play by both sides from here on.
        outcome = win_state(state)
        if outcome is not None:            # checkmate or draw: game over
            return outcome
        mover = get_player(state)
        # The side to move picks whichever continuation is best for itself.
        return max(best_result(s) * mover for s in next_states(state)) * mover

    cur_best_move = next_states(game_state)[0]
    cur_best_outcome = -2                  # worse than any real outcome
    for state in next_states(game_state):
        result = best_result(state) * player   # from `player`'s point of view
        if result > cur_best_outcome:
            cur_best_outcome = result
            cur_best_move = state
    return cur_best_move
Acccumulation
1

Use a look-up table

Yes. It's easy. You don't even need infinite processing power. All you need is a look-up table that contains, for each possible board position, the best move to play in that position. Here is the pseudo-code:

def play-move(my-color, board-position):
    return table-of-best-moves[my-color, board-position]

The catch

The only catch is that this look-up table would have to be very, very large—perhaps larger than the Milky Way galaxy—and it would take a long time to construct it—perhaps longer than the current age of the universe, unless there's some undiscovered regularity in chess that makes it much simpler than we can see right now. But if you had this look-up table, the subroutine to choose a perfect move every time could be implemented in as little as one CPU instruction.

Also, given our current knowledge of chess, there's no way to be sure that perfect play guarantees that you won't lose. For example, if perfect play guarantees a win for White, then Black would lose even if Black plays perfectly.

Ben Kovitz