AlphaGo used data from the KGS Go Server: about 160,000 games, yielding roughly 29 million board/next-move pairs. But crucially, that supervised stage was only the starting point; AlphaGo was then further trained through self-play, so its competence shouldn't be measured strictly in terms of its database.
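To make the two-stage shape concrete, here's a toy sketch (emphatically not AlphaGo's code): supervised imitation of "human" moves, followed by self-play with a REINFORCE-style update. The game, dataset, and learning rates are all made-up placeholders so the script runs end to end.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS = 3
logits = np.zeros(N_ACTIONS)          # toy policy parameters

def probs(w):
    z = np.exp(w - w.max())
    return z / z.sum()

# --- Stage 1: supervised learning on (position, human move) pairs ------
# Stand-in for the ~29M KGS board/next-move pairs: "humans" mostly pick 1.
human_moves = rng.choice(N_ACTIONS, size=2000, p=[0.2, 0.6, 0.2])
for move in human_moves:
    p = probs(logits)
    grad = -p
    grad[move] += 1.0                 # log-likelihood gradient for the human move
    logits += 0.05 * grad

# --- Stage 2: self-play reinforcement learning -------------------------
# The same policy plays both sides of a one-move game (higher pick wins);
# winning moves are reinforced, losing moves penalized.
for _ in range(5000):
    p = probs(logits)
    a, b = rng.choice(N_ACTIONS, size=2, p=p)
    if a == b:
        continue                      # draw: no learning signal
    for move, reward in ((a, 1.0 if a > b else -1.0),
                         (b, 1.0 if b > a else -1.0)):
        grad = -p
        grad[move] += 1.0
        logits += 0.01 * reward * grad

print(probs(logits).round(3))         # self-play pushes mass toward action 2
```

The point of the toy: after stage 2, the policy no longer just mirrors the "human" data it started from, which is why the database alone doesn't bound its strength.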
I'm not 100% sure how Deep Blue worked, but I think it was a mix of (1) a "book" of opening theory, (2) explicitly coded board evaluation functions driving a brute-force search, and (3) a "book" of endgames. So there isn't a "database" in the traditional "ML by big data" sense. But in any case, I would assume the bulk of the work is done by the search and evaluation function, so again its strength can't be measured in terms of its database.
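To illustrate what "explicitly coded board evaluation" means, here's a minimal hand-written evaluator: no training data, just rules a programmer wrote down. This is only the flavor of the idea; Deep Blue's real evaluator had thousands of hand-tuned features (and ran in hardware), and the board format below is invented for this sketch.

```python
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(board):
    """Score a position from White's point of view.

    `board` is assumed to be a dict mapping squares like "e4" to piece
    codes like "P" (white pawn) or "p" (black pawn) -- a made-up format
    just for this sketch.
    """
    score = 0.0
    for square, piece in board.items():
        value = PIECE_VALUES[piece.upper()]
        score += value if piece.isupper() else -value
    # A real evaluator adds many more hand-coded terms: king safety,
    # pawn structure, mobility, control of the center, and so on.
    return score

# Example: White has an extra knight, so the evaluation favors White by ~3.
position = {"e1": "K", "e8": "k", "d4": "N", "a2": "P", "a7": "p"}
print(evaluate(position))   # 3.0
```

The knowledge lives in the coded features and their weights, which the search then applies to millions of positions, rather than in any stored collection of games.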