Computing Reviews
Using evaluation functions in Monte-Carlo tree search
Lorentz R. Theoretical Computer Science 644: 106-113, 2016. Type: Article
Date Reviewed: Dec 28 2016

Recently, the deep learning paradigm became popular as a consequence of its success in Go game playing. Generally, two-player games are a good laboratory for experimenting with alternative algorithmic approaches. For decades, minimax search with alpha-beta pruning was the dominant and successful technique for game-playing algorithms [1]. In 2006, Monte Carlo tree search (MCTS) began to gain popularity and has since proliferated.

The basic difference between minimax and MCTS is that MCTS does not, by default, use an evaluation function. The core strategy of MCTS is the random playout: starting from a given position, the algorithm plays random moves to the end of the game to estimate who will win.
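As a minimal sketch of this idea (using the game of Nim as a simple stand-in for the paper's games; all names here are illustrative, not from the paper), a random playout and the win-rate estimate built on top of it might look like:

```python
import random

# Nim: players alternately remove 1-3 stones from a pile;
# whoever takes the last stone wins.

def legal_moves(stones):
    """A move removes 1, 2, or 3 stones from the pile."""
    return [n for n in (1, 2, 3) if n <= stones]

def random_playout(stones, player):
    """Play uniformly random moves to the end of the game and
    return the winner -- no evaluation function involved."""
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return player          # this player took the last stone
        player = 1 - player

def estimate_win_rate(stones, player, n_playouts=1000):
    """MCTS aggregates many such playouts to estimate a position's value."""
    wins = sum(random_playout(stones, player) == player
               for _ in range(n_playouts))
    return wins / n_playouts
```

In full MCTS, these playout statistics are propagated back up the search tree and used to decide which branches to explore further, so no hand-crafted knowledge of the game is required.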

The author selected three two-player games that are simpler than chess or Go, yet complicated enough that the “goodness” of the proposed algorithms can be thoroughly examined. Furthermore, these games provide the opportunity to assess the behavior of the proposed algorithm against a variety of opposing algorithms. The three selected games are Amazons, Breakthrough, and Havannah. There are championship competitions for all of them, and some even have human players alongside algorithms realized in software [2,3,4], allowing the author to investigate the performance of his algorithms in competitive situations.

All three original programs were developed further with early playout termination (MCTS-EPT), in which playouts are cut off before the end of the game and an evaluation function judges the resulting position. A critical issue is when to terminate, and the optimal termination point turned out to differ from game to game.
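The early-termination idea can be sketched by cutting the same kind of playout short and letting a heuristic evaluation call the result (again using Nim, with illustrative names; the paper's evaluation functions and termination points are game-specific and tuned experimentally):

```python
import random

def legal_moves(stones):
    """Nim: a move removes 1, 2, or 3 stones from the pile."""
    return [n for n in (1, 2, 3) if n <= stones]

def evaluate(stones, player):
    """Heuristic verdict: in Nim the side to move is winning iff the
    pile size is not a multiple of 4. (This toy heuristic happens to
    be exact; real evaluation functions are only approximate.)"""
    return player if stones % 4 != 0 else 1 - player

def ept_playout(stones, player, max_moves=4):
    """Play random moves, but stop after max_moves and let the
    evaluation function decide the winner, as in MCTS-EPT."""
    for _ in range(max_moves):
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return player          # game actually ended in time
        player = 1 - player
    return evaluate(stones, player)
```

The trade-off the paper examines is visible here: terminating earlier makes each playout cheaper and (with a decent evaluation function) less noisy, but terminating too early makes the result depend entirely on the quality of the heuristic.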

The paper analyzes the results of the algorithms under different parameter settings, and its final conclusion is that the MCTS-EPT approach is superior to minimax-based algorithms. There is potential for further development of MCTS extended with evaluation functions, although the experiments showed that even the current version of the selected combination outperforms its opponents.

This paper will interest researchers involved in the development and analysis of current machine learning and artificial intelligence-related algorithms.

Reviewer: Bálint Molnár. Review #: CR144980 (1703-0183)
1) Nilsson, N. J. Principles of artificial intelligence. Morgan Kaufmann, Palo Alto, CA, 2014.
2) Invader (accessed 11/28/2016).
3) Havannah (accessed 11/28/2016).
4) Little Golem (accessed 11/28/2016).
Probabilistic Algorithms (Including Monte Carlo) (G.3 ... )
Minimax Approximation And Algorithms (G.1.2 ... )
Information Search And Retrieval (H.3.3 )
Other reviews under "Probabilistic Algorithms (Including Monte Carlo)":

A probabilistic lower bound for checking disjointness of sets
Manber U. (ed) Information Processing Letters 19(1): 51-53, 1984. Type: Article
Feb 1 1985
On iterative methods in Markov modelling
Schatte P. Journal of Information Processing and Cybernetics 23(1): 49-51, 1987. Type: Article
Oct 1 1987

Reproduction in whole or in part without permission is prohibited.   Copyright 1999-2024 ThinkLoud®