SummerOfCodeProposal AI Improvement Crab
Hello
My name is Yurii. My nick on IRC is Crab_, my nick on the forum/wiki is Crab, and on gna - crab.
terraninfo AT terraninfo.net
Proposal Summary
I would like to reorganize the current AI subsystem to allow different people to collaborate more efficiently on creating new AIs and improving existing ones.
Also, I would like to make the Wesnoth AI better.
I propose a three-part project:
1. AI Module design phase
1.1 I will introduce a new AI development process. The main goals:
- make AI development possible without breaking the old AI.
- make AI development convenient.
- make measurement of results possible.
To do so, I propose to fork each AI into 'dev' and 'stable' variants, which will be present in the same game (so we will be able to compare them and gradually make the 'stable' variant better).
The stable version of each AI's files will be changed *only* for bugfixes or when testing shows that a change is good.
This is the only sane way I see of making AI progress without either regularly breaking it (if devs are too confident) or stalling progress (if devs are too cautious) - we need to *preserve* the old AI for long enough to see if a patched AI is good.
To do so, I've started by separating AI code from game code, by creating an AI manager class. I will introduce support for getting an AI from an ai_algorithm_type via an indirect lookup (from an AI ALIAS).
There will be several important AI ALIASES:
- CHAMPION_MP_AI - AI which is known to be good for multiplayer. It will be the default for MP scenarios.
- CHALLENGER_MP_AI - AI which is a candidate to become the "CHAMPION MP AI", and will be promoted to "Champion MP AI" if it proves itself better.
- CHAMPION_SP_AI - AI which is known to be good for singleplayer. It will be the default for SP scenarios.
- CHALLENGER_SP_AI - AI which is a candidate to become the "CHAMPION SP AI", and will be promoted to "Champion SP AI" if it proves itself better.
(Of course, naming a concrete AI via ai_algorithm_type will still be possible; nothing will break at this point. A minimal sketch of the alias indirection follows.)
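To illustrate the indirection, here is a minimal, self-contained C++ sketch of how an alias could resolve to a concrete AI. All names here (ai::manager, register_ai, register_alias, get_ai) are placeholders for illustration only, not the final API:

 // Hypothetical sketch of alias-based AI lookup inside an ai::manager.
 #include <cassert>
 #include <iostream>
 #include <map>
 #include <string>
 
 namespace ai {
 
 typedef std::string ai_ptr; // stand-in for a real factory/instance type
 
 class manager {
 public:
     // concrete AIs register themselves under their algorithm_type name
     void register_ai(const std::string& algorithm_type, const ai_ptr& ai) {
         ais_[algorithm_type] = ai;
     }
     // aliases such as CHAMPION_MP_AI point to a concrete algorithm_type
     void register_alias(const std::string& alias, const std::string& algorithm_type) {
         aliases_[alias] = algorithm_type;
     }
     // resolve either a concrete name or an alias to a concrete AI
     const ai_ptr& get_ai(const std::string& name) const {
         std::map<std::string, std::string>::const_iterator a = aliases_.find(name);
         const std::string& concrete = (a != aliases_.end()) ? a->second : name;
         std::map<std::string, ai_ptr>::const_iterator i = ais_.find(concrete);
         assert(i != ais_.end());
         return i->second;
     }
 private:
     std::map<std::string, ai_ptr> ais_;
     std::map<std::string, std::string> aliases_;
 };
 
 } // namespace ai
 
 int main() {
     ai::manager m;
     m.register_ai("ai/stable/default_ai", "stable default AI");
     m.register_ai("ai/development/default_ai", "development default AI");
     m.register_alias("CHAMPION_MP_AI", "ai/stable/default_ai");
     m.register_alias("CHALLENGER_MP_AI", "ai/development/default_ai");
     std::cout << m.get_ai("CHAMPION_MP_AI") << "\n";            // resolved via alias
     std::cout << m.get_ai("ai/development/default_ai") << "\n"; // concrete name still works
     return 0;
 }

Promoting a challenger then amounts to re-pointing one alias, without touching any scenario that refers to CHAMPION_MP_AI.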
I see two 'active' AIs in current codebase:
- default_ai
- formula_ai
These have two different approaches to implementing ai_interface::play_turn(), and each of them must be preserved and developed. But a way is needed to develop them without (A) changing too much and breaking them, or (B) changing too little and not achieving anything.
To do this, I will start with a refactoring. I will split both default_ai and formula_ai into two (later there may be more) code lines. They can be called 'stable' and 'development', or 'champion' and 'challenger'. There may be more than two - "one champion plus multiple challengers". During this refactoring, pieces which are the same for each AI must be extracted into separate library code to avoid unnecessary duplication. The files will be moved into a separate dir 'src/ai', since, IMO, 'ai' is a separate module that should be in its own directory.
We'll have the following situation:
AI Module
ai/manager.hpp | Class which manages the AI lifecycle. This is the single point of interaction between the AI module and the rest of the Wesnoth code. Individual AI implementations will *register* themselves with ai/manager, and will be available to the rest of the game engine by name or by alias.
ai/configuration.hpp | Class which deals with AI memory, configuration, effective configuration, global configuration, etc.
ai/ai_interface.hpp | An interface for the AI, from the point of view of ai/manager.
ai/game_interface.hpp | An interface for the game situation, from the point of view of the AI implementation. A class based on the current ai_interface::info, in order to encapsulate access to everything the AI needs.
ai/game_rules.hpp | The game rules functions. Game rules functions are those C++ functions which are AI-independent and which evaluate their result based on the AI's perception of the rules of the game (the AI's perception of the game rules is given to them as a parameter). Therefore, we can have only one copy of these functions: if we had two versions which yielded different results, then one of them would be guaranteed wrong, since they do not make any decisions - they simply answer a question about the game rules (taking the AI's perception of the game rules into account).
ai/formula_language.hpp | Code that evaluates formula functions. The current formula functions are AI-independent too, so they should be kept separate from the AIs (to make it possible for any AI to use them).
ai/formula_*.hpp | Various support files for the formula language, such as the parser.
ai/components/*.hpp | Various components which make *decisions* (they can be reused by various AIs, but we need to keep many versions around, because it is not immediately clear whether a decision is good or bad).
ai/development/default_ai.hpp | Development version of the default AI.
ai/development/formula_ai.hpp | Development version of formula_ai.
ai/development/*_ai.hpp | Development versions of other AIs.
ai/stable/default_ai.hpp | Stable version of the default AI.
ai/stable/formula_ai.hpp | Stable version of formula_ai.
ai/stable/*_ai.hpp | Stable versions of other AIs.
...
Each of these AIs will be backed by its own .cfg/.fai files, containing the AI's default configuration and decision-support scripts.
NOTE: I have already written ai_manager and ai_configuration, which made me very familiar with the current AI lifecycle and with the integration points between the team and the AI, and between the game and the AI.
To remove code duplication between stable/development, as much as possible will be extracted into an AI-independent part. For example, most of the FormulaAI functions in formula_ai.cpp do not really belong there - they are not related to AI decision-making, are not related to formula_ai, and are not related to ai_interface (for example, nearest_keep is the same for each AI - so it should not be a part of formula_ai, it should be part of a common support library). Formula AI functions should be usable by ALL AIs that want to use them. A minimal sketch of such an extracted service function follows.
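As an illustration of this extraction, here is a minimal sketch of a nearest_keep-style function living in a shared, AI-independent library. The types (map_location, game_view) and the distance metric are simplified stand-ins, not the real Wesnoth code:

 // Hypothetical sketch: an AI-independent service function in a shared library.
 #include <cstdlib>
 #include <iostream>
 #include <vector>
 
 struct map_location {
     int x, y;
 };
 
 // minimal read-only view of the game state that the service function depends on
 struct game_view {
     std::vector<map_location> keeps;
 };
 
 // distance helper; the real game uses hex distances, this is a simplification
 static int distance(const map_location& a, const map_location& b) {
     return std::abs(a.x - b.x) + std::abs(a.y - b.y);
 }
 
 // AI-independent: any AI (default_ai, formula_ai, ...) can call this,
 // so it does not belong inside formula_ai.
 const map_location* nearest_keep(const game_view& view, const map_location& from) {
     const map_location* best = 0;
     int best_distance = 0;
     for (std::vector<map_location>::const_iterator i = view.keeps.begin();
          i != view.keeps.end(); ++i) {
         const int d = distance(from, *i);
         if (best == 0 || d < best_distance) {
             best = &*i;
             best_distance = d;
         }
     }
     return best; // 0 if there are no keeps
 }
 
 int main() {
     game_view view;
     map_location k1 = {3, 4};
     map_location k2 = {10, 2};
     view.keeps.push_back(k1);
     view.keeps.push_back(k2);
     map_location unit = {9, 3};
     const map_location* keep = nearest_keep(view, unit);
     if (keep) std::cout << "nearest keep at " << keep->x << "," << keep->y << "\n";
     return 0;
 }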
IMPORTANT NOTE: by the way, a question may arise at this point: what is formula_ai then, if 'formula AI functions' are NOT part of it? Answer: formula_ai is code that makes a turn by evaluating some formulas in a specific order. It is that order of evaluation (the turn sequence) that makes this AI unique. Even default_ai can evaluate a formula if it wants to. It may even be better to rename formula_ai.?pp to avoid confusion between 'formula AI functions' (which are NOT part of formula_ai) and 'formula_ai'. It also immediately becomes clear that no AI can be written using just 'formula AI functions' without tuning the C++ part that controls the evaluation sequence.
So, I will be able to remove a lot of code from formula_ai before splitting it into two versions. To further remove code duplication between stable/development, modularity in the AIs will be increased. For example, if we have two code paths A,B and A,C, I'll refactor them to A(X) and make both B and C 'provide X'. This will allow us to keep one copy of code fragment A instead of two. It will have the side benefit of allowing us to plug in another implementation of X later, if such an idea appears.
The primary candidate for such a refactoring is the formula_ai turn sequence I've spoken about earlier. The turn sequence is the critical part of the AI, and by making it pluggable we will allow ourselves to test a lot of different ideas and possibilities. Some of the ideas about the turn sequence include multi-unit evaluations, unit reservations, formula exception handling and flow control interruption, profiling, etc. To be able to test all of them, I will make the turn sequence pluggable (a minimal sketch follows). Also, recruitment is a candidate for such a refactoring.
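A minimal sketch of what a pluggable turn sequence could look like. The names (stage, turn_sequence, the concrete stages) are hypothetical, used for illustration only, not a committed design:

 // Hypothetical sketch of a pluggable turn sequence: the AI owns an ordered
 // list of "stages", and alternative implementations of a stage can be swapped
 // in without touching the rest of the AI (the A(X) refactoring above).
 #include <iostream>
 #include <vector>
 
 // shared part "A": the AI drives a fixed framework of stages
 class stage {
 public:
     virtual ~stage() {}
     virtual const char* name() const = 0;
     virtual void play(/* a game_interface& would be passed here */) = 0;
 };
 
 // pluggable parts "X": alternative implementations of a stage
 class simple_recruitment : public stage {
 public:
     const char* name() const { return "simple recruitment"; }
     void play() { std::cout << "recruiting cheapest useful units\n"; }
 };
 
 class formula_recruitment : public stage {
 public:
     const char* name() const { return "formula recruitment"; }
     void play() { std::cout << "recruiting according to a .fai formula\n"; }
 };
 
 class combat_stage : public stage {
 public:
     const char* name() const { return "combat"; }
     void play() { std::cout << "evaluating and executing attacks\n"; }
 };
 
 // the turn sequence itself is just the ordered list of stages
 class turn_sequence {
 public:
     void add_stage(stage* s) { stages_.push_back(s); }
     void play_turn() {
         for (std::vector<stage*>::const_iterator i = stages_.begin();
              i != stages_.end(); ++i) {
             std::cout << "[stage] " << (*i)->name() << "\n";
             (*i)->play();
         }
     }
 private:
     std::vector<stage*> stages_; // not owned in this sketch
 };
 
 int main() {
     formula_recruitment recruit; // swap in simple_recruitment to compare the two
     combat_stage combat;
     turn_sequence champion;
     champion.add_stage(&recruit);
     champion.add_stage(&combat);
     champion.play_turn();
     return 0;
 }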
And, as the last step here, I'll point CHAMPION_MP_AI and CHAMPION_SP_AI to "ai/stable/default_ai", and CHALLENGER_MP_AI and CHALLENGER_SP_AI to "ai/development/default_ai".
This is, in itself, only a refactoring.
1.2 The next part of the AI quest is to make any AI change testable, preferably with the results recorded as battle-proven and web-accessible numbers.
We need different kinds of tests.
1.2.1 Firstly, we need unit tests for C++ service functions. These will be built on the basis of the current unit testing framework (src/tests).
1.2.2 Secondly, we need unit tests for formulas. These, too, will be built on the current unit testing framework (src/tests). A minimal test sketch follows.
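A minimal sketch of what such tests could look like, assuming the src/tests framework is the Boost unit test framework. The function under test and the formula evaluator below are simplified stand-ins, not the real Wesnoth API, and in the real src/tests the test module would likely be defined centrally rather than per-file:

 // Sketch of unit tests for a C++ service function and for formula evaluation.
 #define BOOST_TEST_MODULE ai_service_function_tests
 #include <boost/test/included/unit_test.hpp>
 
 #include <string>
 
 // stand-in for a C++ service function that would live in the shared AI library
 static int hex_distance_stub(int ax, int ay, int bx, int by) {
     const int dx = ax > bx ? ax - bx : bx - ax;
     const int dy = ay > by ? ay - by : by - ay;
     return dx > dy ? dx : dy; // simplification; real hex distance differs
 }
 
 // stand-in for evaluating a .fai formula to an integer result
 static int evaluate_formula_stub(const std::string& formula) {
     if (formula == "1 + 2") return 3; // placeholder; real code would parse and evaluate
     return 0;
 }
 
 BOOST_AUTO_TEST_CASE(test_service_function_distance) {
     BOOST_CHECK_EQUAL(hex_distance_stub(0, 0, 0, 0), 0);
     BOOST_CHECK_EQUAL(hex_distance_stub(1, 1, 4, 2), 3);
 }
 
 BOOST_AUTO_TEST_CASE(test_formula_evaluation) {
     BOOST_CHECK_EQUAL(evaluate_formula_stub("1 + 2"), 3);
 }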
1.2.3 Thirdly, we need interactive testing. See AI_Arena - I've created a test scenario which is AI-independent and will allow testing multiple variants of AIs in specific situations. This test scenario also allows a shorter formula development cycle (by using the AI hot-redeployment functionality I've implemented). I will expand this test framework by creating test situations to cover all basic aspects of gameplay and all unit abilities available in Wesnoth. I will expand it to allow 'test runs' of multiple tests and multiple AIs per one developer command (with storage of the results in the log). I will also expand it to allow easier 'parallel' testing of 'champion AIs' and 'challenger AIs' - to detect regressions more easily.
1.2.4 Fourthly, we need non-interactive (batch) testing. In addition to interactive testing, we need to detect degradations and test ideas without wasting valuable developer time. A good example of such a test is a full-scale battle of CHALLENGER AI vs CHAMPION AI. This also includes unfair battles (with one side having a distinct advantage) and pre-defined scenarios (from the AI Arena). This testing has a very nice property - it can run 24x7, run many times, and store the results in a database, which will have a web-accessible frontend with 'interesting numbers' (later, it could even be possible to trick one of those GSoC students who will be working on stats.wesnoth.org into making some nice graphs). This will act as a limited safety net for the AI developers (if they break the development AI, there is a good chance it will be noticeable during such testing), and it can also be used as an indicator for replacing the CHAMPION AI with a better one. A minimal sketch of the batch-run bookkeeping follows the note below.
NOTE: a question may arise: where to get this 24x7-working machine to make such testing possible? This will not be a problem, since I've got my own server at a colo, and it will only need a small memory upgrade to compile Wesnoth quicker, which is easily doable.
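A minimal sketch of the bookkeeping for such a batch run. The scenario name and play_one_game() are placeholders; actually launching headless games and writing the results to the database are outside the scope of this sketch:

 // Hypothetical sketch: run many CHAMPION vs CHALLENGER games and aggregate
 // win counts. play_one_game() stands in for driving a real headless game;
 // in the real setup the results would go into a database behind a web frontend.
 #include <cstdlib>
 #include <ctime>
 #include <iostream>
 #include <string>
 #include <vector>
 
 struct game_result {
     std::string scenario;
     std::string winner;   // "champion", "challenger" or "draw"
     int turns;
 };
 
 // placeholder: a real runner would start a headless game and parse the outcome
 static game_result play_one_game(const std::string& scenario) {
     game_result r;
     r.scenario = scenario;
     r.winner = (std::rand() % 2 == 0) ? "champion" : "challenger";
     r.turns = 20 + std::rand() % 30;
     return r;
 }
 
 int main() {
     std::srand(static_cast<unsigned>(std::time(0)));
     const int games = 100;
     std::vector<game_result> results;
     int champion_wins = 0, challenger_wins = 0;
     for (int i = 0; i < games; ++i) {
         const game_result r = play_one_game("ai_arena_test_scenario");
         results.push_back(r);
         if (r.winner == "champion") ++champion_wins;
         if (r.winner == "challenger") ++challenger_wins;
     }
     // these are the 'interesting numbers' a web frontend could graph over time
     std::cout << "champion wins: " << champion_wins
               << ", challenger wins: " << challenger_wins
               << ", total: " << games << "\n";
     return 0;
 }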
1.3 During the aforementioned refactoring, the base formula functions will be pushed from formula_ai.?pp into a separate file. This separate file will become a service library of various functions. I will make them aware of the basic AI tuning options.
- I will make a library of base FormulaAI functions which will be aware of the basic AI tuning options and honour them. I will implement this as a thin layer between the 'game rules' and these functions, called the 'perception about game rules'. It will allow SP scenario designers to fool the AI in a very elegant and declarative (no code to write, only declarations) way.
Example of proposed WML syntax:

 [ai]
     [perception]
         [filter]
             [filter_location]
                 terrain=Gs^Fp
             [/filter_location]
         [/filter]
         my_defence_bonus=-40%
     [/perception]
 [/ai]
and the AI will, in all its decisions (including formula evaluation), *think* that those forests give its units a -40% defence modifier relative to the standard value (so, naturally, it will prefer to use non-forested tiles). This will be a very good, easy-to-use and easy-to-understand tool for SP designers. A minimal sketch of such a perception layer follows.
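A minimal sketch of how such a perception layer could sit between the game rules and the service functions. The class, the function names and the defence values are hypothetical, for illustration only:

 // Hypothetical sketch of the 'perception about game rules' layer: the AI asks
 // for terrain defence through a thin wrapper, which applies any [perception]
 // overrides declared in WML before the service functions see the value.
 #include <iostream>
 #include <map>
 #include <string>
 
 // game rules layer: the real defence value on a terrain (illustrative numbers)
 static int true_defence(const std::string& terrain) {
     std::map<std::string, int> defence;
     defence["Gs^Fp"] = 50; // e.g. 50% defence in forest
     defence["Gg"] = 40;    // e.g. 40% defence on grassland
     return defence[terrain];
 }
 
 // perception layer: what the AI is told, after applying declared overrides
 class perception {
 public:
     void add_defence_bonus(const std::string& terrain, int bonus) {
         defence_bonus_[terrain] = bonus;
     }
     int perceived_defence(const std::string& terrain) const {
         int value = true_defence(terrain);
         std::map<std::string, int>::const_iterator i = defence_bonus_.find(terrain);
         if (i != defence_bonus_.end()) {
             value += i->second; // the AI *thinks* the terrain is better or worse
         }
         return value;
     }
 private:
     std::map<std::string, int> defence_bonus_;
 };
 
 int main() {
     perception p;
     p.add_defence_bonus("Gs^Fp", -40); // from the [perception] WML above
     std::cout << "AI thinks forest defence is " << p.perceived_defence("Gs^Fp")
               << "%, grassland is " << p.perceived_defence("Gg") << "%\n";
     // the AI will now prefer grassland, even though forest is really better
     return 0;
 }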
This is the first, and an important, milestone for my proposed project. A short overview: refactoring, a code split to prevent breakage, a testing framework, and tuning-parameter awareness as a thin layer between game rules and service functions. This milestone will allow me to go forward with improving the formula language and the AI (remember, the formula language is NOT formula_ai, and is not a part of formula_ai; so, after decoupling them, we can easily experiment with, for example, adding formula recruitment to development/default_ai - and we will have test numbers about the efficiency of the result, thanks to the constructed test system).
2. Formula language capabilities phase
2.1 I will increase FormulaAI capabilities by introducing:
2.1.1 .fai library support
2.1.2 .fai namespace support
2.1.3 .fai numeric type (example: 3.14157; a sketch of one possible representation follows this list)
2.1.4 .fai exception handling
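As an illustration of 2.1.3, one possible representation of a .fai decimal is a fixed-point value (scaled by 1000) so that no floating point enters the game logic. The sketch below is only an illustration of the idea, not a committed design; the class and its methods are hypothetical:

 // Hypothetical sketch of a fixed-point 'decimal' value for the formula language.
 #include <cstdlib>
 #include <iostream>
 #include <string>
 
 class fai_decimal {
 public:
     // store the value scaled by 1000, e.g. 3.141 -> 3141
     explicit fai_decimal(long scaled) : scaled_(scaled) {}
     static fai_decimal parse(const std::string& literal) {
         // negative values and error handling are ignored for brevity
         std::string::size_type dot = literal.find('.');
         long whole = std::atol(literal.substr(0, dot).c_str());
         long frac = 0;
         if (dot != std::string::npos) {
             std::string f = (literal.substr(dot + 1) + "000").substr(0, 3);
             frac = std::atol(f.c_str());
         }
         return fai_decimal(whole * 1000 + frac);
     }
     fai_decimal operator+(const fai_decimal& o) const { return fai_decimal(scaled_ + o.scaled_); }
     double as_double() const { return scaled_ / 1000.0; }
 private:
     long scaled_;
 };
 
 int main() {
     fai_decimal pi = fai_decimal::parse("3.141");
     fai_decimal one_half = fai_decimal::parse("0.5");
     std::cout << (pi + one_half).as_double() << "\n"; // prints 3.641
     return 0;
 }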
2.2 I will also create a FormulaAI debugger/profiler, to allow stepping through formulas, setting breakpoints, tracing and profiling, etc.
This is a second milestone: it is very small compared to the first because all the heavy lifting is supposed to be already done.
3. C++ ideas about AI coding phase
As I expect (1)-(2) to be finished early (by July), I will use the rest of the SoC time to implement my ideas about a new C++ AI. I will fill in this wiki section in more detail on April 4th (I'm trying to implement an idea; if it works by 03.04, I'll go with it, and if it doesn't work, I'll change my plans to account for that).
Some thoughts about the AI:
- I think that writing a good AI in FormulaAI is not possible without improvements in the underlying C++ code (in particular, it should allow a pluggable C++ 'candidate move testing and execution' implementation).
- I think that the AI must know the rules of the game and the consequences of its actions.
- I will try to construct AI components which will use an extendable decision loop (SEE what the situation is -> EVALUATE it -> DECIDE what to do -> EXECUTE and EXPECT some results -> return to SEE what actually happened -> ...). A minimal sketch of this loop follows.
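A minimal sketch of this decision loop as a C++ interface. All types are placeholders, and the toy implementation exists only to make the sketch runnable:

 // Hypothetical sketch of the extendable decision loop:
 // SEE -> EVALUATE -> DECIDE -> EXECUTE & EXPECT -> back to SEE.
 #include <iostream>
 #include <string>
 
 struct situation {            // what the AI SEEs
     int own_units;
     int enemy_units;
 };
 
 struct decision {             // what the AI DECIDEs to do
     std::string action;
     int expected_enemy_losses; // what the AI EXPECTs to happen
 };
 
 class decision_loop {
 public:
     virtual ~decision_loop() {}
     virtual situation see() = 0;
     virtual double evaluate(const situation& s) = 0;
     virtual decision decide(const situation& s) = 0;
     virtual situation execute(const decision& d) = 0;
 
     void run_one_cycle() {
         const situation before = see();
         std::cout << "evaluation before: " << evaluate(before) << "\n";
         const decision d = decide(before);
         const situation after = execute(d);
         // compare the expectation with what actually happened, and learn from it
         const int actual_losses = before.enemy_units - after.enemy_units;
         std::cout << d.action << ": expected " << d.expected_enemy_losses
                   << " enemy losses, got " << actual_losses << "\n";
     }
 };
 
 // a trivial implementation, just to make the sketch runnable
 class toy_ai : public decision_loop {
 public:
     toy_ai() { state_.own_units = 6; state_.enemy_units = 5; }
     situation see() { return state_; }
     double evaluate(const situation& s) { return double(s.own_units) / (s.enemy_units + 1); }
     decision decide(const situation&) {
         decision d; d.action = "attack weakest enemy"; d.expected_enemy_losses = 1; return d;
     }
     situation execute(const decision&) { state_.enemy_units -= 1; return state_; }
 private:
     situation state_;
 };
 
 int main() {
     toy_ai ai;
     ai.run_one_cycle();
     return 0;
 }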
The task will be slightly easier because many trivial parts of it can be copy-pasted from the current codebase.
I'm preparing a small demonstration of my AI ideas at the moment (I'm highly specializing it in order to finish it quickly).
Timeline
+ | Mar 19 - Mar 22 | Familiarize myself with important parts of the current Wesnoth AI code and the Wesnoth way of doing things.
+ | Mar 23 - Mar 27 | Discuss implementation ideas and details. Fix some more bugs. Gain SVN commit access.
+ | Mar 28 - Mar 31 | Improve the AI lifecycle handling to allow AI hot-redeployment. Clean up the current mess with AI parameter handling in team.cpp (without breaking anything, that is). Create a test scenario for interactive testing of the AIs.
now | Apr 1 - Apr 3 | Write a demonstration of the AI ideas I am trying to implement.
April 4 - April 20 | Step 0: Write an extended description of the proposed AI redesign and create the AI testing framework.
April 21 - June 10 | Step 1: Redesign the current AI and FormulaAI code, split them into layers with defined responsibilities, make the code cleaner, modular, and simpler, and write a library of formula AI functions which respect the basic AI tuning parameters.
June 11 - July 10 | Step 2: Increase .fai capabilities: namespaces, exception handling, numeric datatype, libraries. Write a good Formula_AI debugger.
July 11 - August 10 | Step 3: Implement my AI ideas in C++.
August 11 | "'Pencils down' date. Take a week to scrub code, write tests, improve documentation, etc." (c) Google
Afterwards | Continue to improve the AI.
I've got a unique opportunity to improve Wesnoth because I am a last-year student and my exams are already over (they were in February 2009), and I have no lectures to attend and almost no university work to do - that means I am 90% free to improve Wesnoth (the remaining 10% being my teaching practice, coordinating the work of several student developers, and working towards my diploma). I'm able to work on improving Wesnoth 6 days a week (the remaining day being my Sunday D&D session - I've been a D&D DM for 9+ years), and hacking on code for 12+ hours per day is the thing I love :) So, I can easily devote 48+ hours to Wesnoth each week, even now.
I will need this time because AI is quite hard to do right. But I am confident in my abilities and my determination.
Patches and commits
I've earned SVN write access to the Wesnoth source repository on Mar 27.
I've already contributed some patches to make debugging easier and to fix some crashes related to FormulaAI.
[Bug 13218] - I've spotted and fixed an infinite loop (which hangs the game) in the AI when it persistently tries to move a unit to an occupied square, with a hint from Sirp and some help from Dragonking.
[Bug 13230] - I've implemented one of the FormulaAI debugging-related suggestions on the EasyCoding page, under the guidance of boucman.
[Bug 13229] - I've fixed two small 'past-end-of-collection-dereference' bugs in FormulaAI.
[Patch 1136] - I've submitted a patch to fix a segfault in the FormulaAI formula parser.
[Patch 1137] - I've implemented the FormulaAI function run_file, which adds the ability to run a .fai file directly from the in-game console and allows efficient debugging of .fai files.
[r34200] - I've fixed incorrect handling of poisoning attacks when suggesting best attack in user interface.
[r34298] - I've added basic history and hot-redeployment capabilities to the formula console, and reworked AI lifecycle management and AI configuration management. That patch alone was more than 2000 lines long.
[r34329] - I've used those new capabilities to create an AI Arena - a framework to easily test the AIs in specified situations
[r34378] - I've fixed the enemy_units formula to not include incapacitated units, such as stoned units in Caves of the Basilisk.
My immediate work plans:
- Show a small working demonstration of my C++ AI ideas.
- Make the game get the multiplayer AI list from the AI configuration files in the data/ai/ais/ directory, to allow the 'AI is an addon' paradigm to work.
Answers to those questions
Basics
1.1) Write a small introduction to yourself.
Hello. My name is Yurii Chernyi. I am 23, and I live in Kiev, Ukraine. I study applied math at the Faculty of Cybernetics, Kiev University.
1.2) State your preferred email address.
terraninfo AT terraninfo.net
1.3) If you have chosen a nick for IRC and Wesnoth forums, what is it?
IRC: Crab_
Wesnoth forums: Crab
Wesnoth wiki: Crab
GNA: crab
1.4) Why do you want to participate in summer of code?
I would like to make the world better and I've got some free time for this. I think that GSoC is a good opportunity to try to solve interesting problems.
1.5) What are you studying, subject, level and school?
Applied Mathematics, 6th year, Faculty of Cybernetics, Kiev University, Ukraine.
1.6) If you have contributed any patches to Wesnoth, please list them below. You can also list patches that have been submitted but not committed yet and patches that have not been specifically written for Wesnoth. If you have gained commit access to our SVN (during the evaluation period or earlier) please state so.
Yes, I've contributed some patches and fixed some bugs. I've earned commit access to the Wesnoth SVN on March 27. See the 'Patches and commits' section above.
Experience
Good practical knowledge of C++, Java, computer science concepts, networking, and system and database administration.
2.1) What programs/software have you worked on before?
Mostly, I've done website development using Java EE. I've also done various C++ projects regarding numeric computations (for example, in computational fluid dynamics).
2.2) Have you developed software in a team environment before? (As opposed to hacking on something on your own)
Yes, I'm familiar with team development, and, at present, I coordinate a team of student developers on a small university project.
2.3) Have you participated in the Google Summer of Code before? As a mentor or a student? In what project? Were you successful? If not, why?
I haven't participated in the GSoC before. This is my first and last (I graduate this year) chance to participate.
2.4) Open Source
2.4.1) Are you already involved with any open source development projects? If yes, please describe the project and the scope of your involvement.
I use a lot of open source, but Wesnoth is the first open source project I am involved with.
2.5) Gaming experience - Are you a gamer?
Yes, I am a gamer )
2.5.1) What type of gamer are you?
I'm not really sure what this question is about, but I'll try to answer: I prefer to think out a plan, and then execute it. So, according to http://www.quizilla.com/quizzes/result/532090/375850/, I'm a 'strategic gamer'.
2.5.2) What type of games?
I prefer strategy games, both turn-based and realtime.
2.5.3) What type of opponents do you prefer?
Those who play well and are challenging to beat. They can be either human or computer players.
2.5.4) Are you more interested in story or gameplay?
It depends on the game. Some games are worth playing once - to see the story unfold. Some games are interesting to play even without the story :)
2.5.5) Have you played Wesnoth? If so, tell us roughly for how long and whether you lean towards single player or multiplayer.
We do not plan to favor Wesnoth players as such, but some particular projects require a good feeling for the game which is hard to get without having played intensively.
Yes, I've played Wesnoth since version 1.4.2. I've finished all single-player campaigns that were in the base install, some of them several times on different difficulty levels. I've played a bit of multiplayer in hot seat mode.
Communication skills
3.1) Though most of our developers are not native English speakers, English is the project's working language. Describe your fluency level in written English.
I think I can read and write English quite well. This wiki page can serve as an example :)
3.2) Are you good at interacting with other players? Our developer community is friendly, but the player community can be a bit rough.
That is yet to be seen, but I expect no problems. Being a player myself, I understand their desires )
3.3) Do you give constructive advice?
Yes, I personally think that I can give constructive advice.
3.4) Do you receive advice well?
Yes, especially if it is constructive :)
3.5) Are you good at sorting useful criticisms from useless ones?
Yes, I am.
Project
4.1) Did you select a project from our list? If that is the case, what project did you select? What do you want to especially concentrate on?
Yes, I looked through that list and I think it will be really hard and interesting to improve the AI (and the AI needs that improvement :) ).
4.2) If you have invented your own project, please describe the project and the scope.
My project is within the scope of 'improving AI and making it easier to tune its behavior' idea.
4.3) Why did you choose this project?
I want to make the AI feel more intelligent and I want it to be more fun to compete with. I would be happy to see Wesnoth with a better AI, and I think I can make it better. Also, I want tweaking the AI for specific scenarios to be a fun job, not a nightmarish formula-debugging experience for scenario developers without a programming background.
4.4) Include an estimated timeline for your work on the project. Don't forget to mention special things like "I booked holidays between A and B" and "I got an exam at ABC and won't be doing much then".
See Timeline
4.5) Include as much technical detail about your implementation as you can
See My proposal summary
4.6) What do you expect to gain from this project?
Knowledge that I have done a good thing, satisfaction from playing a better Wesnoth, and a bit of money from Google (I could probably earn more working elsewhere as a full-time developer, but programming for Wesnoth is much more fun :) ).
4.7) What would make you stay in the Wesnoth community after the conclusion of SOC?
I will stay and continue to improve the game.
Practical considerations
5.1) Are you familiar with any of the following tools or languages?
* Subversion (used for all commits)
Yes, both with the command line and a GUI.
* C++ (language used for all the normal source code)
yes
* Python (optional, mainly used for tools)
No, although I've worked with Python code on several occasions and I can read it quite well. I've had no problems patching wmllint when I needed some extra checks (to check mainline campaigns for usage of undocumented ai parameter syntax).
* build environments (eg cmake/autotools/scons)
'make' - yes; 'scons' - I learned a bit while looking at the Wesnoth build system; 'autotools' - no, but I know its purpose and I think it will be easy for me to learn to use it if it becomes necessary. Using scons is all that I have needed so far to compile Wesnoth.
5.2) Which tools do you normally use for development? Why do you use them?
vi, gdb, find, fgrep, small shell scripts, svn, git, Eclipse under GNU/Linux; Visual Studio and Eclipse under Windows. My work machine is a notebook running Debian GNU/Linux.
5.3) What programming languages are you fluent in?
I am fluent in Java (4+ years of experience) and C/C++ (4+ years of experience).
I also know a bit of PHP/Perl/sh/sed/awk/JavaScript/Pascal.
5.4) What spoken languages are you fluent in?
I can speak Ukrainian (native), Russian (native), and English.
5.5) At what hours are you awake and when will you be able to be in IRC (please specify in UTC)
I'm in the UTC+2 timezone. I am normally awake from 10:00 UTC to 02:00 UTC, and most of that time I'm at home, so I'm able to be on IRC.
5.6) Would you mind talking with your mentor on telephone / internet phone? We would like to have a backup way for communications for the case that somehow emails and IRC do fail.
Of course, we can talk by phone.