Automated Video Game Testing Using Synthetic and Human-Like Agents
IEEE Transactions on Games (IF 2.3) Pub Date: 2019-01-01, DOI: 10.1109/tg.2019.2947597
Sinan Ariyurek , Aysu Betin-Can , Elif Surer

In this paper, we present a new methodology that employs tester agents to automate video game testing. We introduce two types of agents, synthetic and human-like, and two distinct approaches to create them. Our agents are derived from Reinforcement Learning (RL) and Monte Carlo Tree Search (MCTS) agents, but focus on finding defects. The synthetic agent uses test goals generated from game scenarios, and these goals are further modified to examine the effects of unintended game transitions. The human-like agent uses test goals extracted from tester trajectories by our proposed multiple greedy-policy inverse reinforcement learning (MGP-IRL) algorithm. MGP-IRL captures the multiple policies executed by human testers. These testers aim to find defects by interacting with the game to break it, which is considerably different from playing the game. We present interaction states to model such interactions. We use our agents to produce test sequences, run the game with these sequences, and check each run with an automated test oracle. We analyze the proposed method in two parts: we compare the bug finding success of human-like and synthetic agents, and we evaluate the similarity between human-like agents and human testers. We collected 427 trajectories from human testers using the General Video Game Artificial Intelligence (GVG-AI) framework and created three games with 12 levels containing 45 bugs. Our experiments reveal that human-like and synthetic agents are competitive with human testers in bug finding performance. Moreover, we show that MGP-IRL increases the human-likeness of the agents while improving their bug finding performance.
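The abstract describes a pipeline in which tester agents produce test sequences, the game is run with those sequences, and an automated oracle checks each run. The Python sketch below illustrates that control flow under assumed interfaces; the names (TestOracle, run_test_sequence, game.reset/step, agent.act) are hypothetical placeholders and not the paper's actual GVG-AI integration.

```python
# A minimal sketch of the test loop described above: a tester agent drives the
# game, and an automated oracle checks every reached state for bug symptoms.
# All names here are illustrative assumptions, not the authors' code.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class BugReport:
    step: int
    description: str


class TestOracle:
    """Checks each observed game state against a set of expected invariants."""

    def __init__(self, invariants: List[Tuple[str, Callable[[dict], bool]]]):
        self.invariants = invariants  # (name, predicate over the game state)

    def check(self, step: int, state: dict) -> List[BugReport]:
        return [BugReport(step, f"invariant violated: {name}")
                for name, holds in self.invariants if not holds(state)]


def run_test_sequence(game, agent, oracle: TestOracle,
                      max_steps: int = 500) -> List[BugReport]:
    """Let one tester agent (synthetic or human-like) pursue its test goal
    and collect every oracle violation observed along the run."""
    bugs: List[BugReport] = []
    state = game.reset()
    for step in range(max_steps):
        action = agent.act(state)        # RL/MCTS policy aimed at a test goal
        state, done = game.step(action)  # advance the game with the chosen action
        bugs.extend(oracle.check(step, state))
        if done:
            break
    return bugs
```

In the paper's setting, each agent contributes such test sequences per level and the oracle encodes the expected game rules; this sketch only fixes the overall loop, not those game-specific details.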

Updated: 2019-01-01