Experimental AI

From The Battle for Wesnoth Wiki

Experimental AI Summary

An Experimental AI is available for use in both multiplayer (MP) maps and single-player (SP) scenarios. At the moment, this AI contains the following candidate actions:

  • New and improved recruiting
  • More aggressive village grabbing
  • Healer placement behind injured units
  • Leader castle switching: useful on certain MP maps
  • Poison spreading
  • All the default RCA AI candidate actions except for recruiting

Experimental AI Setup

In multiplayer maps, this AI is available from the game setup menu as 'Experimental AI'. In single-player scenarios, it can be included by using the following code in a [side] tag:

[ai]
    # Deprecated version (for Wesnoth 1.14 and earlier)
    {EXPERIMENTAL_AI}
[/ai]

(Version 1.15.0 and later) The EXPERIMENTAL_AI macro is now deprecated. The Experimental AI should instead be included as follows (this already works in Wesnoth 1.14):

[ai]
    # Current version (works from Wesnoth 1.14 on)
    ai_algorithm=experimental_ai
[/ai]
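
For context, a complete (though minimal) single-player [side] definition using this syntax might look like the sketch below; the side number, leader type, recruit list, and gold value are illustrative placeholders, not required settings.

[side]
    side=2
    controller=ai
    type=Orcish Warrior    # leader unit type (placeholder)
    id=Enemy_Leader
    canrecruit=yes
    recruit=Orcish Grunt,Orcish Archer,Wolf Rider
    gold=100

    [ai]
        ai_algorithm=experimental_ai
    [/ai]
[/side]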

Note that the EXPERIMENTAL_AI macro must be inside [side][ai]; it does not work with [modify_side] or [modify_ai]. If the ai_algorithm version is used, the Experimental AI can also be added as:

    [modify_side]
        side=2
        [ai]
            ai_algorithm=experimental_ai
        [/ai]
    [/modify_side]

or

    [modify_side]
        side=2
        switch_ai=ai/ais/ai_generic_rush.cfg  # Wesnoth 1.14 and earlier
        #switch_ai=ai/ais/ai_experimental.cfg # Wesnoth 1.15 and later
    [/modify_side]
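
Since [modify_side] is an action tag, in a scenario it is normally placed inside an [event]; a minimal sketch (the prestart event and side number are only examples) could look like this:

[event]
    name=prestart

    [modify_side]
        side=2
        [ai]
            ai_algorithm=experimental_ai
        [/ai]
    [/modify_side]
[/event]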

Custom Parameter Setup

(Version 1.14.10 and later; also available from 1.15.2 in the current development branch) A small number of custom parameters can be passed to the Experimental AI. These cannot be set up using the ai_algorithm syntax, but require the use of a macro in the [side][ai] tag:

[side]
    [ai]
        {CUSTOMIZABLE_EXPERIMENTAL_AI (
            high_level_fraction=0.33
            randomness=0.2
        )}
    [/ai]
[/side]

Note that only the parameters one wants changed need to be defined in the macro; those omitted will take their default values.
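
For example, to change only the randomness while leaving high_level_fraction at its default, the macro call could be reduced to the following sketch (the value 0.5 is purely illustrative):

[side]
    [ai]
        # high_level_fraction is omitted and keeps its default value of 0
        {CUSTOMIZABLE_EXPERIMENTAL_AI (
            randomness=0.5
        )}
    [/ai]
[/side]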

Custom parameters:

  • high_level_fraction=0: (non-negative number) The approximate fraction of units of level 2 or higher to be recruited. This is defined as the fraction of units on the map, not of new units recruited during a turn or over several turns (which makes a difference if there are already units on the map). The effect is also cumulative per level, starting from level 2: in the example above, roughly 1/3 of the units will be of level 2, 1/3 * 1/3 = 1/9 of level 3, and so on. The default value is zero, which leaves high-level recruiting to the default Experimental AI algorithm and usually results in very few such units being recruited.
  • randomness=0.1: (number) A random component is added to the Experimental AI recruiting score to prevent the recruitment pattern from being too predictable. A value of 0 applies no randomness, while larger values increase the random effect. Values of 1-2 make the random contribution approximately equal to the scored contribution, while extremely high values make recruitment essentially random.

Experimental AI Details

The Experimental AI uses most of the candidate actions (CAs) of the RCA AI, the one exception being the recruitment CA, which is replaced by its own recruiting CA. In addition, it adds a number of new CAs. The following are the differences between the Experimental AI and the default (RCA) AI:

Recruitment

The Experimental AI has a completely different recruiting algorithm that was designed to emulate the choices of a human player, especially in the first few turns. As such, it tries to pick units that counter both the units already on the battlefield and those the opponent could recruit. As it gains more units relative to the enemy, it picks harder-hitting units, even at the cost of fragility, in order to break through enemy lines.

Unlike the default recruitment algorithm, it also chooses where the units are recruited in order to maximize the number of villages that can be captured. If it knows the leader will be moving to a "better" keep (see below), it under-recruits at the first keep to project more forward power, recruiting just enough units to capture all villages.

It also adjusts its recruitment based on map size, favouring faster units on larger maps, trusting that the economic advantage from capturing more villages will more than offset their higher cost.

It knows about poison and avoids recruiting units that depend on it for damage if the enemy is immune.

It also looks at the expected cost of the unit over the next few turns, which prevents the recruitment of too many fast, weak units unless the additional gold from an expected village capture is enough to offset the increased cost.

Time of day is taken into account and it prefers units that will reach the current closest enemy at the favoured time of day. This allows, for example, for a slight bias towards saurians during the day and drakes at night on small maps (because they will reach the fight at the right time).

Because the AI cannot use berserk units well, it does not accurately measure the damage they do, with the result that they are almost never recruited.

There is also a small amount of randomness added, so that two units that are almost equally good according to the evaluation are chosen about equally often, instead of the first one being picked every time.

Villages

The Experimental AI prefers capturing villages over fighting. This weakens it somewhat in raw fighting, but it tends to gain enough extra gold that this is actually to its advantage against the default AI, although the current implementation is probably too focused on villages.

Healing

If a healer is not used to attack, the Experimental AI will use it to heal injured units instead of positioning it for a later attack.

Injured units retreat to villages or healers, but currently there is little attempt to identify situations in which it would be better to stay and finish off multiple enemies in an area instead.

Castle Switching

The leader will move to different keeps, providing a forward force. Sometimes this works very well, especially at the beginning of the game, but the leader keeps moving later on, which sometimes gets it killed. Possible improvements would be to stop this behaviour after turn 3-4, or to identify the best keep rather than simply picking a nearby one that is not the current keep.

Enemy Poisoning

There is also an explicit poison attack routine that tries to poison unpoisoned units instead of repeatedly poisoning the same one.

A Final Note

  • Many of the extreme biases in the Experimental AI come from the fact that it still uses a lot of the default AI code and thus cannot ask that code what the default action would be in order to compare possible moves. It has to decide whether an alternative is better without knowing the default choice.


See also: Wesnoth AI