FormulaAIandAdaptation
Latest revision as of 00:28, 12 April 2008
Personal Information
I am Alesis Novik, a computer science student. My IRC name is Alesis-Novik. I am quite good at programming C++ and at writing efficient algorithms. I hold an interest in AI, plan to work in that area in the near future, and am eager to learn. Among the more interesting things I have programmed are an algorithmic, non-brute-force Sudoku solver, a Mastermind solver and a Java 3D pipeline.
The Idea
Create a FormulaAI-based AI for Wesnoth. Write additional FormulaAI syntax if needed for a better AI. Provide non-programmer users with a simple way to define special characteristics of the AI that override the default AI preferences (for example, make a certain story unit behave in a specific way).
Details
After the discussions about the AI on IRC, I did some reading and thought of what is, in my opinion, an interesting approach to the problem. First of all, from playing the game and talking to other players (done before summer begins), we get an idea of the tactics and strategies they use while playing. We also need the AI to be able to add to the tactic and strategy base on its own, considering the outcome of each move. Obviously, with the mass of possible options in the game, the full base would never be complete, so we need another method to ensure the effect. The AI would have not only the information on the tactics, but also a large base of environment relations (such as how one unit matches up against another and how the terrain affects them). Based on the current case, the strategy and tactics base and the world information, it would compare and score how close to a successful plan it can get from the current state: a similarity engine, if you will (for example, if one powerful unit could defeat the enemy, and we have two weaker units, maybe their combined power can defeat the enemy too). This type of strategy is a valid option in TBS AI development and has been used in other games such as Civilization. Of course, there will always be some uncertainty, and that will be handled with probabilistic reasoning.
The Case (Strategy and tactics) base
This would be the plan and strategy base, updated on each action taken. The base plans and tactics would be entered from user experience; the others the AI would formulate itself, along with a probability of success. After setting the major goal, such as defence or attack, the AI could use genetic algorithms to evaluate or create the best plan. Since the number of variations is too great, an iteration limit would be set (this could also relate to difficulty: the harder the opponent, the more iterations).
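The genetic search with a difficulty-scaled iteration limit could be sketched as below. This is a hypothetical illustration, not Wesnoth code: the `Plan` representation, the toy `fitness` function (a stand-in for a real battle-outcome evaluation), and all constants are assumptions.

```cpp
// Sketch: evolve candidate plans under an iteration limit tied to difficulty.
#include <algorithm>
#include <random>
#include <vector>

using Plan = std::vector<int>;  // each gene: an abstract action id (assumption)

// Toy fitness: how many genes match an "ideal" plan for the current goal.
// In the real AI this would be a probabilistic battle-outcome evaluation.
int fitness(const Plan& p, const Plan& ideal) {
    int score = 0;
    for (std::size_t i = 0; i < p.size(); ++i)
        if (p[i] == ideal[i]) ++score;
    return score;
}

// Evolve plans for a bounded number of generations; a harder
// difficulty setting buys more iterations, as proposed above.
Plan evolve(const Plan& ideal, int difficulty, std::mt19937& rng) {
    const int iterations = 50 * difficulty;  // the iteration limit
    const int pop_size = 20;
    std::uniform_int_distribution<int> gene(0, 3);
    std::uniform_int_distribution<std::size_t> idx(0, ideal.size() - 1);

    std::vector<Plan> pop(pop_size, Plan(ideal.size()));
    for (auto& p : pop)
        for (auto& g : p) g = gene(rng);

    for (int it = 0; it < iterations; ++it) {
        // Sort best-first, keep the top half, refill by mutated copies.
        std::sort(pop.begin(), pop.end(), [&](const Plan& a, const Plan& b) {
            return fitness(a, ideal) > fitness(b, ideal);
        });
        for (int i = pop_size / 2; i < pop_size; ++i) {
            pop[i] = pop[i - pop_size / 2];  // copy a surviving plan
            pop[i][idx(rng)] = gene(rng);    // mutate one action
        }
    }
    return pop.front();  // best plan found within the iteration budget
}
```

A crossover (recombination) step between two surviving plans could be added in the refill loop; the elitist keep-top-half scheme above is the simplest variant.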
The Knowledge Base
This would hold information on every aspect of the game and would be updated with user-created content as it becomes available. For example, it would know that two given units are both archers, along with their statistics. This is quite straightforward: arrays of information on units (probably split into logical groups) with connections between them (so we can select healers from elves or elves from healers, depending on how we sort them). The knowledge base would also hold information on terrain effects on units. After seeing the source, I would say that something as simple as a vector of simplified unit_stats would be able to hold the information on the units. The information on units can be collected from the unit .cfg files and accessed easily from any other AI class (most importantly the similarity engine). These vectors could be grouped, making them specialized for certain unit groups, as mentioned above. The knowledge base would also hold information on the relations between those groups.
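A minimal sketch of such a knowledge base follows. The names (`SimpleUnitStats`, the group strings, the relation weights) are assumptions for illustration; they are not Wesnoth's actual types, and loading from the unit .cfg files is omitted.

```cpp
// Sketch: units stored in logical groups, with weighted relations
// between groups (e.g. terrain effects or matchup modifiers).
#include <map>
#include <string>
#include <utility>
#include <vector>

struct SimpleUnitStats {  // simplified unit data (hypothetical fields)
    std::string id;       // e.g. "Elvish Archer"
    int hitpoints;
    int damage;
};

class KnowledgeBase {
public:
    // A unit may belong to several groups, so we can select
    // healers from elves or elves from healers.
    void add_unit(const SimpleUnitStats& u,
                  const std::vector<std::string>& groups) {
        for (const auto& g : groups) groups_[g].push_back(u);
    }

    // All units in a logical group; empty if the group is unknown.
    const std::vector<SimpleUnitStats>& group(const std::string& name) const {
        static const std::vector<SimpleUnitStats> empty;
        auto it = groups_.find(name);
        return it == groups_.end() ? empty : it->second;
    }

    // Relation between two groups, e.g. a terrain or counter modifier.
    void relate(const std::string& a, const std::string& b, double weight) {
        relations_[{a, b}] = weight;
    }
    double relation(const std::string& a, const std::string& b) const {
        auto it = relations_.find({a, b});
        return it == relations_.end() ? 1.0 : it->second;  // neutral default
    }

private:
    std::map<std::string, std::vector<SimpleUnitStats>> groups_;
    std::map<std::pair<std::string, std::string>, double> relations_;
};
```

The similarity engine would query `group()` to enumerate substitution candidates and `relation()` to weight them.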
The Similarity Engine
This would compare the existing plans to the available resources and choose the best solution. For example, if we know that two fighters can defeat an enemy, it estimates the chances of, say, a fighter and a healer, or two rangers. It wouldn't choose what looks *closer*, but what looks *better* (in this case, say, the two rangers). A simple evaluation function could look something like (unit.hitpoints/hitpoints_in_plan * i + unit.damage/damage_in_plan * j), where i and j would be based on priorities (to survive or to do more damage).
Summary
The basic workflow of the AI engine would be something like this: the AI evaluates the situation it is currently in and checks the goals set for it (so it doesn't go on the offensive when it has to defend). After that, a possible plan is selected from the case base by looking at the position the AI is in. If the plan doesn't fit the current situation perfectly (which would rarely be the case), the AI calls the similarity engine to find the closest plan that can be produced with the resources available. This includes accessing the knowledge base and checking how closely the units resemble each other. The AI would calculate the possible outcome of the new plan and discard it if it proved totally useless. The process would then be repeated until the best possible plan is selected (with an option of recombining plans to produce a greater effect and a better plan).
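The selection loop described above could be sketched as follows. Everything here is a placeholder: the `Plan` struct, the stand-in `adapt_to_situation` (which would really call the similarity engine and knowledge base), and the discard threshold are assumptions.

```cpp
// Sketch: adapt each stored plan to the current situation, discard
// the useless ones, and keep the best survivor.
#include <vector>

struct Plan {
    double predicted_success = 0.0;  // filled in by outcome calculation
};

// Stand-in for the similarity engine: adapting a stored plan to the
// units actually available would usually cost some predicted success.
Plan adapt_to_situation(const Plan& stored) {
    Plan p = stored;
    p.predicted_success *= 0.9;  // assumed adaptation penalty
    return p;
}

// The loop from the summary: select, adapt, discard if useless, repeat.
Plan choose_plan(const std::vector<Plan>& case_base, double min_success) {
    Plan best;  // predicted_success == 0.0
    for (const auto& stored : case_base) {
        Plan candidate = adapt_to_situation(stored);
        if (candidate.predicted_success < min_success)
            continue;  // totally useless: discard
        if (candidate.predicted_success > best.predicted_success)
            best = candidate;
    }
    return best;
}
```

The recombination option mentioned above would add a second pass that merges pairs of surviving candidates before the final comparison.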
Timeline
- Until May 26: The first part would be to integrate the knowledge base into the AI, since it would need grouping and connections.
- May 26 - July 7: The second part would be to write a case base so it could store plans and create new plans, along with the probability of success. Since this would involve the most communication with players, I believe getting it working and filling it would take most of the time until the midterm.
- July 7 - August 18: The third part would be to write effective evaluation functions along with a probability engine to calculate the success rate of possible plans. This may be done using the Monte Carlo method (which is useful when the data is too large for exact, discrete calculation). After writing this, testing would begin, to see how the new AI compares with the old one.
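The Monte Carlo idea from the third milestone can be illustrated with a toy combat model. The model below (fixed strikes, a flat hit chance, a kill threshold) is an assumption for illustration, not Wesnoth's combat rules.

```cpp
// Sketch: estimate a plan's success rate by running many randomized
// simulations and taking the fraction that succeed.
#include <random>

// Toy simulation: an attack "succeeds" when the damage accumulated
// over n_strikes, each landing with probability hit_chance, kills
// a target with target_hp hitpoints.
bool simulate_attack(int n_strikes, double hit_chance, int damage,
                     int target_hp, std::mt19937& rng) {
    std::bernoulli_distribution hit(hit_chance);
    int dealt = 0;
    for (int s = 0; s < n_strikes; ++s)
        if (hit(rng)) dealt += damage;
    return dealt >= target_hp;
}

// Monte Carlo estimate: success fraction over many independent trials.
double success_rate(int trials, int n_strikes, double hit_chance,
                    int damage, int target_hp, std::mt19937& rng) {
    int wins = 0;
    for (int t = 0; t < trials; ++t)
        if (simulate_attack(n_strikes, hit_chance, damage, target_hp, rng))
            ++wins;
    return static_cast<double>(wins) / trials;
}
```

For 4 strikes at 60% to hit for 7 damage against 14 HP (at least 2 hits needed), the exact binomial answer is about 0.82, so the estimate should converge near that value as the trial count grows.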
Overall
The result should be an AI with both overall and detailed customization (e.g. overall: aggressive or defensive; detailed: guarding or assassinating), adaptive to new strategic plans and user-created content.
Commits
- An additional formula to get close enemies and functions in pathutil for close tiles selection[1]
- An additional formula to get the possible battle outcomes [2]
Other
On a side note, I'd like to say that I am really interested in game development and would gladly continue working on Wesnoth after Google Summer of Code. Also, the reason I can't contribute much right now is that I have to attend university and we are having midterm exams, so I need to study a bit.