Evolving an AI for Spring
Evolving an AI for Spring
Hi guys, I would appreciate your opinion on something. I want to try to create an AI for TA Spring using purely evolutionary means. That is, write the code for the AI in the form of a genetic algorithm and have it compete among its siblings for the opportunity to reproduce, with the possibility of mutation. Obviously the probability of reproduction must depend on how the AI performs in the game, with victory being the most favourable outcome. It would also be good to include a positive reward for kills, buildings constructed and, perhaps most importantly, the CPU time required or the length of the code.
The whole process of AIs playing games against each other, evolving, recompiling and so on would have to be automated for it to be at all feasible. I think that instead of starting from scratch it would be much easier to start from a working AI that is actually capable of doing something. My biggest concern is how many generations it would take before reasonable results were achieved; obviously months or years would be excessive. This is partly dependent on the number of C++ functions that are allowed as mutations.
I am not really interested in making an AI that is "fun" to play against, in the sense that it puts up a good fight before ultimately succumbing; I am interested in making the AI play as well as possible. It's more of an experiment than anything, it's just that Spring provides such a good platform to try it.
Is there something that makes the idea totally unfeasible?
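(For the sake of discussion, here is a minimal C++ sketch of the selection loop being proposed: one generation of candidates, a fitness function combining victory, kills, buildings and CPU time as described, and mutated copies of the survivors. `runMatch`, `Candidate` and all the weights are hypothetical placeholders, not part of any Spring AI interface.)

```cpp
#include <vector>
#include <random>
#include <algorithm>

// Hypothetical per-game statistics returned by an automated match runner.
struct MatchResult { bool won; int kills; int buildings; double cpuSeconds; };

struct Candidate {
    std::vector<double> genome;   // whatever parameters/rules get mutated
    double fitness = 0.0;
};

// Fitness as described in the post: victory dominates, with smaller rewards
// for kills and buildings and a penalty for CPU time used. Weights are made up.
double fitness(const MatchResult& r) {
    return (r.won ? 1000.0 : 0.0) + 2.0 * r.kills + 1.0 * r.buildings
           - 0.1 * r.cpuSeconds;
}

// One generation: evaluate everyone against a random sibling, keep the best
// half, refill the population with mutated copies of the survivors.
void step(std::vector<Candidate>& pop, std::mt19937& rng,
          MatchResult (*runMatch)(const Candidate&, const Candidate&)) {
    std::uniform_int_distribution<size_t> pick(0, pop.size() - 1);
    for (auto& c : pop)
        c.fitness = fitness(runMatch(c, pop[pick(rng)]));  // one match each

    std::sort(pop.begin(), pop.end(),
              [](const Candidate& a, const Candidate& b) { return a.fitness > b.fitness; });

    std::normal_distribution<double> noise(0.0, 0.05);
    for (size_t i = pop.size() / 2; i < pop.size(); ++i) {
        pop[i] = pop[i - pop.size() / 2];                  // copy a survivor
        for (double& g : pop[i].genome) g += noise(rng);   // mutate it
    }
}
```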
If you want it to recompile then yes, it is unfeasible. Recompiling mutated C++ code would probably result in only one in a billion mutations producing code that doesn't crash.
Instead you're going to have to provide a means of loading and executing a genome, or produce an AI that follows such a genome.
However, what you're proposing is a lot of work and would require a vast amount of time and processing power compared to the same end result produced by an AI written manually.
On top of that, your AI would become very mod specific, and at the same time it could learn horrible errors and start evolving down a dead-end path, resulting in starting again from scratch.
And there are so many variables to take account of.
This sort of system is supposed to be complementary; it was never intended for something as large as this.
You need way too many iterations, and iterations are way too slow.
Never mind the complexity of evolving code, if that is what you're talking about (as opposed to e.g. ANNs or 'just' evolving the right values for some constants).
The only thing that could work reasonably well, if you give it sufficient thought and time, is the last option: tuning a number of constants in your AI. But even then, you'll probably train it on the wrong set of input data, or on a range of input data that is too broad or too narrow.
I'd start by evolving a hello world program using evolving code, and once you get that fast enough you may start thinking about an AI.
Seriously though, I have written some code in the past that simulated ants driven by artificial neural networks. Using genetic algorithms it trained them to find food. The number of neurons in the network was approx. 20 IIRC, and this worked reasonably well. However, once I made the problem even a little more complex (find food and bring it to the base), the number of iterations increased dramatically, or it didn't even seem to converge to a solution no matter how big I made the neural network.
This messing around also showed that each generation needed at least about 50 members to keep sufficient diversity, and that hundreds of generations were needed to evolve anything remotely close to a solution you could code by hand in 5 minutes. In Spring terms, say 1 game takes 1 hour and each member of a generation plays every other member exactly once (which isn't much, given all the random happenings that can kill off a good mutation if you allow it only one chance to prove itself): that gives 50*49/2 games per generation * 100 generations = 122,500 hours, roughly 14 years of *minimum* time you need for it to evolve.
In short, don't waste your time on evolving stuff unless it's in your research field
EDIT: also add all the points AF named, because they're very valid too
(I already assumed you could evolve code that never crashes etc., which I doubt you can in reality. With executing a genome you have to put really much thought in it to not make the solution space way too big, but also to not limit it too much.)
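(For concreteness, a minimal sketch of the kind of ant controller described above: a tiny fixed-topology feed-forward net whose flat weight vector is the genome. The selection and mutation loop would look much like the one sketched earlier in the thread, with fitness being the amount of food collected; the layer sizes and sensor count here are made up.)

```cpp
#include <vector>
#include <cmath>

// A tiny fixed-topology feed-forward net: inputs could be local food/home
// direction sensors, outputs a movement decision. The flat weight vector `w`
// is the genome a GA would mutate and select; fitness = food collected.
struct TinyNet {
    static constexpr int IN = 4, HID = 5, OUT = 2;
    std::vector<double> w;                       // IN*HID + HID*OUT weights

    TinyNet() : w(IN * HID + HID * OUT, 0.0) {}

    // Forward pass; assumes x has at least IN entries.
    std::vector<double> run(const std::vector<double>& x) const {
        std::vector<double> h(HID), y(OUT);
        for (int j = 0; j < HID; ++j) {
            double s = 0.0;
            for (int i = 0; i < IN; ++i) s += x[i] * w[j * IN + i];
            h[j] = std::tanh(s);
        }
        for (int k = 0; k < OUT; ++k) {
            double s = 0.0;
            for (int j = 0; j < HID; ++j) s += h[j] * w[IN * HID + k * HID + j];
            y[k] = std::tanh(s);
        }
        return y;
    }
};
```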
I've recently been thinking about how an evolutionary, self-learning AI could work for Spring.
I think it is possible to get some decent results, i.e. an AI that does things that make some sense, if you regard the units as individuals who want to make as much money as possible in your local TA economy.
economy
Each unit can use its money to buy energy, metal, storage or factory time to make 'offspring'. Conversion values of resources vs money would be based on current supply and demand, similar to how real stocks work.
genetic algorithms
When offspring is created, the typical evolutionary algorithm is used: copy the set of rules from the parent and generate some mutations. Rules for units could be like 'if <conditions> then <action>'.
- condition could be based on current unit money, local 'threat' map value, and a set of surrounding entities
- action could be 'move to' + relative position, 'move away from' + entity, etc. The rule system is probably the hardest part to get right. There should be a size limit to prevent rules becoming too complicated, and sometimes mutations would remove parts of a rule to help abstract it (a minimal rule-genome sketch follows after the Details list below).
Rule-based learning might be applied to further help the individual units learn: when walking around, all used rules are stored in a list. When the unit has success in generating income, the rules in the list might be reinforced by increasing the chance that these rules are used again.
Details
- To prevent lots of units just running away from the enemy in chaos, instead of defending as a collective, there could be an overruling entity that collects taxes and uses it to build an offensive army.
- To support the idea of defenses, the economy system should make it possible for units to buy protection from other units.
- To allow for more complexity in the economy, and to allow expensive units such as a fusion plant to be built, the rule system could include getting loans from other units. Interest rates would again be based on supply and demand, just like in reality. This way one unit might decide at some point to buy a fusion plant using loans, and pay the money back later.
- To make the global attacking system work as efficiently as possible, attack orders could be 'outsourced' to individual units who receive government money for doing the job as efficiently and quickly as possible. That way the attack system also gets the advantages of the unit learning system.
- Maybe a hierarchical 'slave' system where offspring pay their parent could help too. After all, this is not about units being happy, but about efficiency.
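(A minimal sketch of the rule-genome idea from the 'genetic algorithms' section above: a unit's 'if <conditions> then <action>' rules are copied from the parent and lightly mutated when offspring is built. The condition and action vocabularies here are made up purely for illustration.)

```cpp
#include <vector>
#include <utility>
#include <random>

// Made-up condition and action vocabularies for illustration.
enum class Cond { MoneyAbove, ThreatAbove, EnemyNearby, AllyNearby };
enum class Act  { MoveToward, MoveAwayFrom, BuyEnergy, BuyMetal, BuyFactoryTime };

struct Rule {
    std::vector<std::pair<Cond, double>> conditions;  // condition + threshold
    Act action;
    double weight = 1.0;   // bumped up when the rule preceded an income gain
};

using Genome = std::vector<Rule>;   // one unit's rule set

// Offspring copies the parent's rules and applies small mutations:
// tweak a threshold, swap an action, or drop a condition to keep rules simple.
Genome reproduce(const Genome& parent, std::mt19937& rng, size_t maxRules = 16) {
    Genome child = parent;
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::uniform_int_distribution<int> actPick(0, 4);   // 5 actions, indices 0..4
    for (Rule& r : child) {
        if (u(rng) < 0.2 && !r.conditions.empty())
            r.conditions[0].second += (u(rng) - 0.5);            // tweak a threshold
        if (u(rng) < 0.1)
            r.action = static_cast<Act>(actPick(rng));           // swap the action
        if (u(rng) < 0.05 && r.conditions.size() > 1)
            r.conditions.pop_back();                             // abstract the rule
    }
    if (child.size() > maxRules) child.resize(maxRules);         // enforce size limit
    return child;
}
```

The `weight` field would carry the rule-based learning described above: whenever the unit earns income, the rules it recently fired could have their weights increased so they are chosen more often.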
The great thing is that if such an AI worked, you could apply it to literally every game out there by just modifying the interface code.
If anyone starts a project like this, please let me know because I might be interested in helping out

btw, part of these ideas come from the book 'The Origin of Wealth', which is about 'complexity economics' (the study of economics as an evolutionary complex adaptive system). It could have been written more compactly, but it's still an interesting read.
I personally have higher expectations of (concurrent) hierarchical reinforcement learning than of GAs. SMDP theory is more solid and most of the algorithms (such as MAXQ) are online, so they learn and adapt much faster than evolutionary approaches ever will (which IMO are unsuited for realtime game AI; the gene pool population alone would number in the thousands, even assuming you could find a good solution encoding scheme). The main problem is compressing the state space enough through abstraction to make RL methods feasible; they can still only deal with toy learning tasks compared to the complexity of a full-fledged RTS. It's the direction I would take, though, if my (primary) goal was efficiency.
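(To make the RL direction concrete, here is a plain tabular Q-learning sketch; this is not MAXQ or anything hierarchical, and it assumes the hard part mentioned above, compressing the state and action space into small integer ids, has already been done.)

```cpp
#include <vector>
#include <random>
#include <algorithm>

// Plain tabular Q-learning over an already-abstracted state/action space.
// In an RTS, `state` and `action` would be coarse abstractions
// (e.g. economy tier + threat level / expand, attack, turtle).
struct QLearner {
    int nStates, nActions;
    double alpha, gamma, epsilon;
    std::vector<double> q;                      // nStates * nActions table
    std::mt19937 rng{12345};

    QLearner(int s, int a, double lr = 0.1, double disc = 0.95, double eps = 0.1)
        : nStates(s), nActions(a), alpha(lr), gamma(disc), epsilon(eps),
          q(s * a, 0.0) {}

    // Epsilon-greedy action selection.
    int act(int state) {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        if (u(rng) < epsilon)                   // explore
            return std::uniform_int_distribution<int>(0, nActions - 1)(rng);
        int best = 0;                           // exploit: argmax over actions
        for (int a = 1; a < nActions; ++a)
            if (q[state * nActions + a] > q[state * nActions + best]) best = a;
        return best;
    }

    // Online update after observing (state, action) -> (reward, nextState).
    void update(int state, int action, double reward, int nextState) {
        double bestNext = q[nextState * nActions];
        for (int a = 1; a < nActions; ++a)
            bestNext = std::max(bestNext, q[nextState * nActions + a]);
        double& qa = q[state * nActions + action];
        qa += alpha * (reward + gamma * bestNext - qa);
    }
};
```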
If I were to ever try something like this, just because it sounds like a fun experiment to play with, I would write a large number of different high-level commands that would control an already stable AI (or write a simple one). Also, for the "gene" code that controls the AI I would use some kind of assembly-style binary code, so that mutations can easily just be randomized in a data array and no data will lead to a crash. The most important part is the training/selection algorithms; they would be really tricky to code, as they would have to take away the luck factor and things like that.
I would probably fail to make a good or even working AI this way but I would have fun trying. So glhf
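(A minimal sketch of the 'assembly-style binary code' idea: the genome is a flat byte array and every byte decodes, via modulo, to some valid high-level command handled by an underlying hand-written AI, so random mutations can never produce a program that crashes. All command names are made up.)

```cpp
#include <vector>
#include <cstdint>
#include <random>

// Hypothetical high-level commands an underlying stable AI already knows how
// to execute; the genome only decides *which* of them to issue and when.
enum class Cmd { Expand, BuildDefense, BuildFactory, AttackNearest, Scout, Idle };
const int NUM_CMDS = 6;

// Decoding uses modulo, so every possible byte value maps to a valid command:
// random mutation can never produce an invalid program.
Cmd decode(uint8_t byte) { return static_cast<Cmd>(byte % NUM_CMDS); }

struct Genome {
    std::vector<uint8_t> bytes;   // assumed non-empty

    // Point mutations: flip a few random bytes to random values.
    void mutate(std::mt19937& rng, int flips = 3) {
        std::uniform_int_distribution<size_t> pos(0, bytes.size() - 1);
        std::uniform_int_distribution<int> val(0, 255);
        for (int i = 0; i < flips; ++i)
            bytes[pos(rng)] = static_cast<uint8_t>(val(rng));
    }
};

// Every N frames the AI reads the next byte and issues the decoded command
// through its normal, hand-written command handlers.
Cmd nextCommand(const Genome& g, size_t step) {
    return decode(g.bytes[step % g.bytes.size()]);
}
```

Because decoding is total, the selection algorithm only ever has to judge how well a genome plays, never whether it runs.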

Re: Evolving an AI for Spring
Hi!
I am currently starting to develop KPAI - a Kernel Panic AI. I ran across this thread and thought that Kernel Panic would be a very good mod for this. The genetic algorithms part of the AI would concern the build order of units, e.g. [Bit, Bit, Bit, Bit, Assembler, Byte, Byte, Pointer, Pointer] would be an example build order and would be the chromosome for the AI. This is perfect because the home base is the only thing where you choose what to build (besides the Assembler, the worker unit).
I think that this would be a good application for genetic algorithms; however, I'm not exactly sure where to go with this.
Maybe if anyone wants to expand on the idea....
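(A minimal sketch of such a build-order chromosome, with one-point crossover and point mutation; the unit names mirror the example above, but the code itself is only illustrative and assumes build orders of length at least two.)

```cpp
#include <vector>
#include <random>
#include <algorithm>

// Kernel Panic unit types mentioned in the post (illustrative subset).
enum class KPUnit { Bit, Byte, Pointer, Assembler };
const int NUM_UNITS = 4;

// The chromosome is simply the ordered list of what the home base builds.
using BuildOrder = std::vector<KPUnit>;

// One-point crossover: the child takes a prefix from one parent and the
// suffix from the other. Assumes both parents have at least two entries.
BuildOrder crossover(const BuildOrder& a, const BuildOrder& b, std::mt19937& rng) {
    std::uniform_int_distribution<size_t> cut(1, std::min(a.size(), b.size()) - 1);
    size_t c = cut(rng);
    BuildOrder child(a.begin(), a.begin() + c);
    child.insert(child.end(), b.begin() + c, b.end());
    return child;
}

// Point mutation: replace one entry with a random unit type.
void mutate(BuildOrder& bo, std::mt19937& rng) {
    std::uniform_int_distribution<size_t> pos(0, bo.size() - 1);
    std::uniform_int_distribution<int> unit(0, NUM_UNITS - 1);
    bo[pos(rng)] = static_cast<KPUnit>(unit(rng));
}
```

Fitness could simply be the game result, or time survived / damage dealt, when the initial build order is played against a fixed opponent.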
Re: Evolving an AI for Spring
Sounds interesting, but you will probably find it a problem that build orders aren't static learning targets (they depend on the map, and obviously on what your AI's opponent happens to be doing at any given moment). GAs don't perform well when the solution space is dynamic; in general they converge too slowly to make them viable for online learning tasks. Still, good luck!
Re: Evolving an AI for Spring
Well I think it could at least rule out the BAD ones..
Like.. building only a bunch of assemblers (the builder units, they don't attack).
Regardless of what I do with it, I'm still going to represent the (initial) build order in a chromosome-compatible structure, so I guess I could try different things with it.
Re: Evolving an AI for Spring
I don't want to discourage you, but building an AI with evolutionary algorithms is difficult and the outcome is seldom satisfactory.
During my Artificial Intelligence studies I took a course called "Game A.I. Design". One lecture was about evolutionary game A.I.
Let me tell you what the best A.I. looked like at the end of thousands of runs: rush cheap offensive units! I've seen a couple of experiments and they all converge to this A.I.
If you still plan on writing a learning A.I., use Dynamic Scripting. For more information, go to the library and get the A.I. Wisdom books (parts 1, 2 and 3). Dr. P. Spronck in particular (google him) has done a lot of research on learning A.I. in several game genres.
http://www.cs.unimaas.nl/p.spronck/Pubs ... onsenM.pdf
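(For reference, the core of Dynamic Scripting is a rulebase of hand-written tactics with weights: a script is drawn from the rulebase before each encounter, and the weights of the rules that were used are adjusted afterwards depending on the outcome, with the total weight kept constant. The sketch below is a rough illustration under those assumptions, not code from Spronck's paper; for simplicity it draws rules with replacement.)

```cpp
#include <vector>
#include <random>
#include <algorithm>

struct DSRule { int id; double weight; };   // a hand-written tactic/rule

// Draw `scriptSize` rules, with probability proportional to weight.
std::vector<int> selectScript(const std::vector<DSRule>& rulebase,
                              size_t scriptSize, std::mt19937& rng) {
    std::vector<double> w;
    for (const auto& r : rulebase) w.push_back(r.weight);
    std::discrete_distribution<size_t> pick(w.begin(), w.end());
    std::vector<int> script;
    while (script.size() < scriptSize) script.push_back(rulebase[pick(rng)].id);
    return script;
}

// After the encounter, reward or punish the rules that were in the script,
// then rescale so the total weight in the rulebase stays constant.
void updateWeights(std::vector<DSRule>& rulebase, const std::vector<int>& script,
                   bool won, double delta = 1.0, double minW = 0.1) {
    double before = 0.0;
    for (const auto& r : rulebase) before += r.weight;

    for (auto& r : rulebase)
        for (int id : script)
            if (r.id == id) { r.weight += won ? delta : -delta; break; }

    for (auto& r : rulebase) r.weight = std::max(r.weight, minW);

    double after = 0.0;
    for (const auto& r : rulebase) after += r.weight;
    for (auto& r : rulebase) r.weight *= before / after;    // keep total constant
}
```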
Re: Evolving an AI for Spring
One of my colleagues has run off with my copy of AI wisdom 1
Re: Evolving an AI for Spring
Right, Genetic Programming seems like the best approach - Evolving machine code.
I think if implemented right it may be able to produce a somewhat satisfactory result...
Re: Evolving an AI for Spring
AF wrote: and at the same time could learn horrible errors and start evolving down a dead end path resulting in starting again from scratch.

Don't we now already have Spring AIs that do that?

Jonanin wrote: Well I think it could at least rule out the BAD ones.. Like.. building only a bunch of assemblers (the builder units, they don't attack).

Sometimes, I'm under the impression it's the BEST one, at least if you're not attacked too early and the map is rich with geos, because past the first minute or two, having gone all cons gives you a huge and increasing advantage in unit count.

But otherwise, indeed a simple and small mod like KP (both in unit type count and average game time) is best for any evolving AI (not that it wouldn't be too complex already).
Re: Evolving an AI for Spring
Rushing cheap offensive units? Even the human brain, a powerful intelligence system, has converged on this strategy. It's commonly referred to as "flash spaem".
Re: Evolving an AI for Spring
Note that said flashing strategy took -years- of non-stop human multiplayer games to emerge as the dominant tactic in OTA. The fact that we've had it in the main *A mods here from the start is just an inheritance from OTA.
Re: Evolving an AI for Spring
Really, spamming cheap units sounds like a pretty standard thing to do; people do it in every RTS..