
Editorial: Research trends

Since the 1950s, the AI community has been developing techniques to outperform humans at a particular game. This has been accomplished in many board games, such as Chess, Checkers, and Go. Recently, new challenges have been proposed, such as real-time strategy (RTS) games, where agents have to make decisions in real time, in environments with imperfect information, non-determinism, and a continuous action space. It seemed that achieving superhuman performance in these games would take several more years. However, developments in Game AI are moving fast. In early 2019, DeepMind announced that their agent AlphaStar, using deep neural networks, had defeated two top professional players in the RTS game StarCraft II. Does this mean that we will be done in a couple of years because we are running out of games in which to establish superhuman performance? The answer is no, as there are still many other challenges in Game AI research. One of them is automatically developing a game or generating content (game levels) for it. The first article of this issue deals with this research trend. In Using patterns as objectives for general video game level generation by Adeel Zafar, Hasan Mujtaba, Mirza Tauseef Baig and Mirza Omer Beg, a genetic algorithm is proposed that automatically generates levels for video games. Their generator finished in third place in the 2018 general video game level generation competition.
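For readers unfamiliar with the approach, the sketch below shows a genetic algorithm for level generation in miniature: a population of candidate levels is repeatedly selected, recombined, and mutated against a fitness objective. The tile set and the toy tile-frequency objective are invented for illustration; they do not reproduce the pattern-based fitness function of Zafar et al., whose objective rewards the presence of design patterns rather than raw tile counts.

    import random

    # Toy illustration of a genetic algorithm for level generation.
    # The tile types and the fitness objective below are invented
    # placeholders, not the method of Zafar et al.
    TILES = ".#eg"          # floor, wall, enemy, goal
    WIDTH, HEIGHT = 12, 6

    def random_level():
        return [random.choice(TILES) for _ in range(WIDTH * HEIGHT)]

    def fitness(level):
        # Toy objective: reward levels whose tile frequencies approach
        # a target distribution (a stand-in for matching design patterns).
        target = {".": 0.6, "#": 0.3, "e": 0.08, "g": 0.02}
        n = len(level)
        return -sum(abs(level.count(t) / n - p) for t, p in target.items())

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(level, rate=0.02):
        return [random.choice(TILES) if random.random() < rate else t
                for t in level]

    def evolve(generations=100, pop_size=50, elite=5):
        pop = [random_level() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:elite]       # keep the fittest levels as-is
            children = [mutate(crossover(*random.sample(parents, 2)))
                        for _ in range(pop_size - elite)]
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    print("\n".join("".join(best[r * WIDTH:(r + 1) * WIDTH])
                    for r in range(HEIGHT)))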

Deep neural networks have contributed to another research trend. Recently, the performance of AlphaGo Zero and AlphaZero has shown that accurate evaluation functions can be constructed using deep neural networks. A remaining challenge is to construct such evaluation functions with fewer computational resources. The next contribution, RankNet for evaluation functions of the game of Go by Yusaku Mandai and Tomoyuki Kaneko, shows that pairwise RankNet training increases the potential number of training examples and thereby reduces the number of game records required. Experimental results indicate that neural networks trained with their approach achieve better playing strength than other methods, especially when the data set is relatively small.
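To see why pairwise training helps, note that a set of N evaluated positions yields on the order of N² ordered pairs, so even a modest collection of game records provides a large number of training signals. The sketch below shows the pairwise RankNet objective (Burges et al., 2005) on a toy linear evaluator; the synthetic positions, the linear model, and the training loop are placeholders and do not reflect Mandai and Kaneko's network or data.

    import numpy as np

    # Minimal sketch of the pairwise RankNet objective on a linear
    # evaluator f(x) = w.x; the features and data are synthetic.
    rng = np.random.default_rng(0)
    dim = 16
    w = rng.normal(scale=0.1, size=dim)

    def ranknet_grad(x_better, x_worse):
        # P(better > worse) = sigmoid(s_b - s_w); minimize -log P.
        s_diff = (x_better - x_worse) @ w
        p = 1.0 / (1.0 + np.exp(-s_diff))
        # Gradient of -log(p) with respect to w:
        return -(1.0 - p) * (x_better - x_worse)

    # Synthetic "positions": true quality is a hidden linear function.
    w_true = rng.normal(size=dim)
    positions = rng.normal(size=(200, dim))
    quality = positions @ w_true

    lr = 0.1
    for _ in range(2000):
        i, j = rng.integers(0, len(positions), size=2)
        if quality[i] == quality[j]:
            continue
        b, wrs = (i, j) if quality[i] > quality[j] else (j, i)
        w -= lr * ranknet_grad(positions[b], positions[wrs])

    # Pairwise training recovers the ordering: compare rank vectors.
    pred = positions @ w
    ranks = lambda x: np.argsort(np.argsort(x))
    print("rank correlation:",
          np.corrcoef(ranks(pred), ranks(quality))[0, 1])

The final line reports a Spearman-style rank correlation between the learned scores and the hidden quality; after training on pairs alone, it should be close to 1, even though the evaluator never saw an absolute target value.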

This issue also includes several reports on Computer Chess, and there are some new trends there as well. The must-read is the report on the results of TCEC Cup 2, which Leela Chess Zero won using Monte Carlo Tree Search (MCTS) and deep neural networks. This result indicates that the days of classic chess engines, equipped with an αβ search and a handcrafted evaluation function, will soon be over.
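To make the contrast concrete, here is the classic recipe in miniature: a depth-limited negamax search with αβ pruning on top of a handcrafted static evaluation. The toy Nim-style game (take 1, 2, or 3 stones; whoever takes the last stone wins) is a stand-in chosen only so the example runs as-is; it is of course not a chess engine.

    import math

    # Minimal negamax search with alpha-beta pruning over a toy
    # Nim-style game; evaluate() plays the role of the handcrafted
    # static evaluation of a classic chess engine.

    def moves(pile):
        return [take for take in (1, 2, 3) if take <= pile]

    def evaluate(pile):
        # Handcrafted heuristic, from the side to move's point of view:
        # piles that are multiples of 4 are lost for the player to move.
        return -1.0 if pile % 4 == 0 else 1.0

    def alphabeta(pile, depth, alpha=-math.inf, beta=math.inf):
        if pile == 0:
            return -1.0    # the previous player took the last stone and won
        if depth == 0:
            return evaluate(pile)
        best = -math.inf
        for take in moves(pile):
            value = -alphabeta(pile - take, depth - 1, -beta, -alpha)
            best = max(best, value)
            alpha = max(alpha, value)
            if alpha >= beta:   # cutoff: the opponent will avoid this line
                break
        return best

    # From a pile of 9 (one more than a multiple of 4), the side
    # to move should win with perfect play.
    print(alphabeta(9, depth=9))   # -> 1.0

We will keep you posted!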

Mark Winands