Difference between revisions of "AI Arena"

From The Battle for Wesnoth Wiki
(extracted AI hot redeployment to separate project)
(Testing the AI)
= Testing the AI =

There was an IRC discussion on March 23-24 between Dragonking, Crab_, and Velory regarding an Arena to test AI: a specific map (later, a set of maps) to test a 'chosen' AI in a 'chosen' situation.

The elements of the proposed implementation are as follows:

<b>One shared map, usable with any AI, and a number of starting situations</b>

The first arena is to be based on Den of Onis. Dragonking picked it because it is a small map with varied terrain, including caves, lava, water, etc., so we can look for bugs when we test particular usages.

The map should be changed a bit to allow 3 players (human1, computer2, computer3). Faction leaders should start 'outside' the main map, in isolated 1-space pockets (for example, to allow a human player to be a 'spectator' in a fight between two computers, or to allow a human to fight a computer).

<b>Ability to select from multiple 'pre-defined' starting situations</b>

This will allow designing 'challenges' for the AI and selecting one of them for execution.

A challenge, created as part of that 'Arena scenario', basically creates some units on the map (and optionally, if the situation under test requires it, moves leaders from their isolated pockets to the battleground), sets their hit points and statuses, etc. Its purpose is to create an 'interesting tactical situation' for the AI to handle; it knows nothing about the AI that will try to handle it.

So, the scenario will include a number of starting situations, each with a 'name'. There will be ways (for example, by stepping a specific off-arena unit onto a trigger tile) to fire WML events that:

- clean up the arena

- create a specific tactical situation (by selecting it from a list or by typing a name in a dialog box)

<b>[[AI_HotRedeployment|Ability to hot-redeploy the AI for a specific side from the in-game console]]</b>

This will allow loading a specific AI from a specific WML configuration file (which may in turn load any .fai files included in the configuration).

<b>Ability to clean up the map</b>

This will allow returning the leaders to their starting positions and removing all units from the arena. It can be used to 'restart' the test.

<b>Ability to restart the test - a combo ability to clean up, then select the same pre-defined tactical situation, then hot-redeploy all the AIs</b>

This is basically a 'one-shot restart of the test without typing too much in the console'. It can be implemented as a tile that some off-arena unit must step on, or as a console command.

<b>Maybe - ability to loop the test, evaluate, and output the results</b>

This is for 'run this 100 times, no more than 20 turns each, and see which side wins' kinds of testing.

<b>Usage example:</b>

# Create a WML AI configuration which includes some formulas or some .fai files.
# Launch the Arena.
# Select a suitable tactical situation. Units will appear on the map.
# Redeploy the AI(s) using the in-game console.
# Test and debug the AI's efficiency in that tactical situation.
# Restart or redeploy as needed. Changes in the AI WML configuration will be picked up after either a redeployment or a restart.
# When a formula change is committed, only its testing AI configuration file (plus any referenced .fai files) needs to be committed, because the map and the list of challenges do not depend on them.

Revision as of 14:15, 31 March 2009

= AI Arena is an interactive AI testing framework =

It is implemented as the test scenario <i>ai_arena_small</i> (available since r34329, 31 Mar 2009).

To launch it, run Wesnoth with the -t parameter (which must not be the last parameter); the -d flag enables debug mode. For example:

./wesnoth-debug -t ai_arena_small -d

The scenario is located in data/ai/scenarios/scenario-AI_Arena_small.cfg

It is based on the Den of Onis.

The map includes three sides:

1: Human (AI developer)

2: Challenger AI (the side being tested, 'north team')

3: Champion AI (the side commanding the enemy of the AI being tested, 'south team')

All leaders start off-map in pocketed locations.
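
The three-side setup follows normal [[SideWML]]; as a minimal sketch only (the leader types, team names, and gold values here are invented, not copied from scenario-AI_Arena_small.cfg):

 # illustrative only - not the real scenario file
 [side]
     # side 1: the AI developer (spectator)
     side=1
     controller=human
     team_name=observer
     type=Elvish Lord
 [/side]
 [side]
     # side 2: challenger - the AI under test ('north team')
     side=2
     controller=ai
     team_name=north
     type=Orcish Warlord
     gold=10000
 [/side]
 [side]
     # side 3: champion - commands the challenger's enemy ('south team')
     side=3
     controller=ai
     team_name=south
     type=Lich
     gold=10000
 [/side]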

The human leader can access an interactive menu by stepping on the "test!" marker at (6,21).
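
In WML terms, such a trigger is a moveto event filtered on the marker hex; a minimal sketch of the idea (the actual scenario code may differ):

 [event]
     name=moveto
     # allow the menu to be opened repeatedly
     first_time_only=no
     [filter]
         # only a side-1 leader standing on the marker hex triggers it
         side=1
         canrecruit=yes
         x=6
         y=21
     [/filter]
     # open the challenge-selection dialog here
 [/event]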

There is an interactive menu which allows the AI developer to pick a challenge (each challenge has a short description and a unique number) and to pick the AI that will attempt it. The AI can be chosen from a list, or the developer can enter the location of a .cfg file containing the bare *contents* of the side tag - the [[SideWML]] and [[AIWML]] configuration without the surrounding side tag itself (see the example configuration file at the end of this page). The test is then loaded (units appear inside the Arena) and the selected AI is hot-redeployed.
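
Such a dialog is typically built from [message] with [option] tags; a rough sketch, with made-up challenge names and hypothetical CHALLENGE_* macros:

 [message]
     speaker=narrator
     message=_ "Pick a challenge:"
     [option]
         message=_ "1 - poisoner test"
         [command]
             # hypothetical macro that spawns the units for challenge 1
             {CHALLENGE_1}
         [/command]
     [/option]
     [option]
         message=_ "2 - village scramble"
         [command]
             {CHALLENGE_2}
         [/command]
     [/option]
 [/message]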

Then, the AI developer can end his turn and watch the AI's actions.

Please feel free to add more 'AI challenges' and improve existing ones. Each challenge should be AI-independent.

While creating a challenge, keep the following in mind (a sketch of a minimal challenge follows the list):

  1. Assign a unique number to your challenge. Insert it at the end of the challenge list (so the numbers stay sorted).
  2. If you are creating multiple challenges on the same topic, you can 'nest' them in the challenge selection dialog (letting the AI developer select the topic first, then the challenge).
  3. Add code that clears all your labels to the cleanup function (since there is, so far, no way to delete all labels on the map).
  4. Set each side's gold 'high enough'.
  5. If needed, use something like this to make the other side (the champion AI) passive:
[modify_side]
   side=3
   gold=10000
   # hot-redeploy an 'idle' AI for the champion side
   redeploy_ai_from_location=$test_path_to_idle_ai
[/modify_side]
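
Putting the checklist together: a minimal challenge, run from inside its menu [command] (or an equivalent event), might look like the sketch below. The unit types, coordinates, and hit points are invented for illustration; the real challenges live in scenario-AI_Arena_small.cfg.

 # hypothetical challenge: can the AI poison a wounded healer?
 [unit]
     side=2
     type=Orcish Assassin
     x=10
     y=12
 [/unit]
 [unit]
     side=3
     type=White Mage
     x=12
     y=12
     # start wounded to make the situation interesting
     hitpoints=15
 [/unit]
 [label]
     x=12
     y=12
     # remember to clear this label in the cleanup function
     text="healer"
 [/label]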

Example of an AI configuration file:

ai_algorithm=formula_ai
[ai]
   eval_list=yes
   [register_candidate_move]
       name=poisoner
       type=attack
       # the evaluation formula scores the candidate move; the action executes it
       evaluation="{ai/formula/poisoner_eval.fai}"
       action="{ai/formula/poisoner_attack.fai}"
   [/register_candidate_move]
[/ai]
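
To try it out, save such a file (plus any .fai files it references) and give its location to the arena's AI-selection dialog. A hypothetical session (the file path is illustrative):

 $EDITOR ~/my_test_ai.cfg                # the configuration shown above
 ./wesnoth-debug -t ai_arena_small -d    # launch the arena
 # then step on the "test!" marker, pick a challenge, and enter ~/my_test_ai.cfg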