
Why you should empirically evaluate your AI tool: From SPOSH to yaPOSH

Publication at the Faculty of Mathematics and Physics |
2014

Abstract

The autonomous agents community has been developing specific agent-oriented programming languages for more than two decades. Some of the languages have been considered by academia as possible tools for developing artificial intelligence (AI) for non-player characters in computer games.

However, since most new AI languages developed within the agents community never reach production quality, they are seldom adopted by the games industry. As our experience has shown, it is not only the language itself that matters.

The toolchain supporting the language and its integration (or lack thereof) with a development environment can make or break the language's success in practical applications. In this paper, we describe our methodology for evaluating AI languages and their associated tools in practice, based on controlled experiments with programmers and/or game designers.

The methodology is demonstrated on our development and evaluation of the SPOSH and yaPOSH high-level agent behavior languages. We show that incomplete development support may prevent a tool from providing any benefit to developers at all.
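
For readers unfamiliar with the POSH family, the sketch below illustrates the flavor of such a high-level behavior language. It is a minimal, hypothetical reactive plan in a POSH-style lisp-like notation; the drive, trigger, and action names are invented for illustration, and the syntax is only approximate rather than the exact SPOSH or yaPOSH grammar.

    ; A drive collection: drives are listed in priority order.
    ; Each decision cycle, the first drive whose trigger holds is executed.
    (RDC bot-life (goal ((game-ended)))
      (drives
        ((survive (trigger ((health-critical))) take-cover))
        ((fight   (trigger ((enemy-visible)))   attack-enemy))
        ((explore (trigger ((succeed)))         wander))))

Because drives are re-evaluated every cycle, a higher-priority drive such as survive preempts lower-priority ones as soon as its trigger holds, which is what makes this style of plan attractive for reactive game AI.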

We also present our experience of transferring the knowledge gained during yaPOSH development to actual AI design for an upcoming AAA game.