Unit Learning Algorithms
So you're saying you can come up with the scripting system I mentioned above?
Because unless you can factor in every possible situation, it will only ever be as good as a learning system of similar complexity, except you have to do all the work yourself.
I'm glad I got that out of the way straight away when making KAI...
One algorithm can't fit every possible situation.
Allowing the whole thing to be scripted lets modders specify things that work in their mods but may not work in other mods.
And there are other reasons and advantages behind the approach I've taken, which is the one Argh is saying we should follow.
At the moment I can't think of many advantages to unit learning algorithms in the way they are currently used for unit choice selection.
However, they can apply to things such as threat matrices, and combined with scripting they can be very useful.
Scripting alone has lots of benefits, but combined with these 'unit learning' algorithms it can be more powerful.
However, these algorithms alone are hopeless at getting where you're all trying to get.
When you play the game, you're not considering millions of possibilities at once, and there is accepted wisdom as to what a unit should do most of the time.
However, the AI developer and the modder may not agree, so why should the whole thing be dictated by the AI developer? Regardless of the AI developer's skill at actually playing the game, for that mod the rules governing the AI may be totally inadequate, and the AI may go in completely the wrong direction.
*sees that no matter how hard he tries he'll never rid krogothe of this misconception of what AF is trying to say, may even need to start hinting at Epic but dares not do so, and hopes krogothe continues barking up the wrong tree*
If you won't take my word for it and read my explanations and try to see my point of view, then you'll never understand what I'm trying to say, and you'll just have to wait until I release Epic, and then we'll see who is right and who is wrong.
Not that I am saying I am right, just that I see something terribly wrong with what you're all saying and none of you see it. I'm not going to spend my time any more trying to push you all in that direction so that you can arrive there yourselves and see it for yourselves. I have my own ideas, my own designs, and my own ability to implement them, so let's implement these things and then I'll show you why, I'll show you what, and I'll show you how.
So far Argh is the closest, but he just doesn't realize that there's something in what he's rubbishing, something drowned in the false concepts he's attacking, that needs saving and folding into the things he's telling us. Of all the things said here, what he has said has been of most use to me, and I'm horrified that most if not all of you are actively trying to quash him and haven't truly taken on board what he's saying.
I'm more and more getting the feeling that this section of the community is converging on a "this is how we do it, these are common ideas, and this is the general direction we're moving towards" mentality.
It's been such a long time since anybody posted anything that was interesting and related to AIs that wasn't simply someone else's research.
Well, that's why I'm the only active AI developer that I know of with a star, plus I am in the best Spring clan, so I'm confident enough in my skills, thank you!
I see your point, but I simply am a disbeliever, that's all... I'll eagerly wait for Epic, AF, and hope you're right, because so far only OTAI is holding up against KAI's mighty two-Weasel attack force!
*puts on asbestos suit, sprays anti-flame and ducks into his secret KAI bunker*
First off, that was a joke, kind of.
Second off, how hard is it to deliver an AI that makes decisions based on this:
[THREAT]
{
    UnitName=Brawler;
    ThreatLevel=1.5;
    GroupMultiplier=1.25;
    BestCounters=Jethro, Defender, Chainsaw... etc.
    LeadTime=10;
    HeightFactor=0;
}
And there ya go. All of the razzmatazz in a very concise package.
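To make that concrete, here is a minimal C++ sketch of the data structure an AI could load such a [THREAT] block into. It only mirrors the fields above; the struct name and the idea of a separate config parser are my own assumptions, not part of any existing AI.

// Minimal sketch: one parsed [THREAT] entry. The TDF parser that would
// fill this in is assumed to exist elsewhere.
#include <string>
#include <vector>

struct ThreatEntry {
    std::string unitName;                   // UnitName=Brawler
    float threatLevel = 1.0f;               // ThreatLevel=1.5
    float groupMultiplier = 1.0f;           // GroupMultiplier=1.25
    std::vector<std::string> bestCounters;  // BestCounters=Jethro, Defender, ...
    float leadTime = 0.0f;                  // LeadTime=10 (seconds)
    float heightFactor = 0.0f;              // HeightFactor=0
};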
What matters?
Five simple things actually matter, and none of them are directly tied to unit stats (or, if they are, the relationships are so complex that none of us have the time to sort them out).
What is the real threat a given unit poses?
This is a somewhat subjective judgement, and it will not be totally right under every possible scenario, but that's where fudging comes in. A Knight in NanoBlobs has a ThreatLevel of about 5 vs. Wolves, for example, about 4 vs. Archers, 1 vs. Knights, 0.75 or so vs. SpireRooks, 0.33 vs. SquareRooks, 0.25 vs. Demons (theoretically; this is still being balanced), 2 vs. Holders, 1 vs. a Lord, and... 1000000000000 vs. Sheep.
Therefore, depending on how we want an AI to respond to the Knight, we can give it an overall ThreatLevel. Given the Knight's unique abilities (it simply cannot be killed by small numbers of the weaker units), I'd give it a generic ThreatLevel of about 3, to make sure that the AI counters it when seen.
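As a rough illustration of how such a generic number relates to the per-matchup figures, here is a small C++ sketch that takes the Knight values quoted above and averages them, weighted by how often each opponent is expected to be met. The weights are invented for the example; the point is that it lands near the "about 3" figure a modder would write down anyway.

// Sanity-check sketch: usage-weighted mean of hand-estimated matchup threats.
// Matchup numbers are the NanoBlobs Knight figures from the text; the weights
// (how often each opponent is actually met) are made up.
#include <cstdio>

struct Matchup { const char* versus; float threat; float weight; };

float weightedThreat(const Matchup* m, int n) {
    float sum = 0.0f, wsum = 0.0f;
    for (int i = 0; i < n; ++i) { sum += m[i].threat * m[i].weight; wsum += m[i].weight; }
    return wsum > 0.0f ? sum / wsum : 0.0f;
}

int main() {
    const Matchup knight[] = {
        {"Wolf", 5.0f, 0.30f}, {"Archer", 4.0f, 0.25f}, {"Knight", 1.0f, 0.20f},
        {"SpireRook", 0.75f, 0.15f}, {"SquareRook", 0.33f, 0.10f},
    };
    std::printf("generic ThreatLevel ~ %.2f\n", weightedThreat(knight, 5)); // ~2.85
    return 0;
}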
How much does massing the unit increase its per-unit effectiveness?
Some units are far more effective if they are grouped than others.
SpireRooks, for example, are much more powerful in groups than they are as singletons, whereas Knights gain much less from being grouped, though they do gain some power (any attack unit gains some, because a group presents more targets to a defender, if nothing else). So a Knight might have a GroupMultiplier of only 1.1, whereas a SpireRook might have a GroupMultiplier of 2.0.
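There is no single right way to apply a GroupMultiplier at runtime; here is one plausible C++ sketch, assuming the per-unit threat ramps from ThreatLevel up to ThreatLevel * GroupMultiplier as the group approaches some "full" size. That fullGroup knob is an invented tuning parameter, not part of the proposed config.

// Hedged guess at applying GroupMultiplier: per-unit threat blends from
// threatLevel toward threatLevel * groupMultiplier as the group grows.
#include <algorithm>

float groupThreat(float threatLevel, float groupMultiplier,
                  int count, int fullGroup = 8) {
    float blend = std::min(1.0f, float(count - 1) / float(fullGroup - 1));
    float perUnit = threatLevel * (1.0f + (groupMultiplier - 1.0f) * blend);
    return perUnit * float(count);
}
// e.g. 6 SpireRooks (ThreatLevel 1, GroupMultiplier 2.0) -> ~10.3,
// while 6 Knights (ThreatLevel 3, GroupMultiplier 1.1) -> ~19.3.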
What units best counter the unit?
Some units are countered by other units, whether by specifics in the game design (NanoBlobs is meant to demonstrate this on a variety of levels, from the simple rock-paper-scissors of Knight/Archer/SpireRook to the complex relationships between SquareRooks, Archers and Wolves) or by accident (the OTA relationship between Jethros and Peewees).
Modders can and should be able to explicitly lay out these relationships. We know the hidden weaknesses of our units- and these relationships are often very subtle. For example, in NanoBlobs... Wolves actually own SpireRooks, by costs-over-time, but only if gathered into effective swarms (Wolves are meant to demonstrate a unit that has more than one effective state- they are scouts, but they are also effective attack units, if micro-managed).
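If the BestCounters list is treated as the modder's preference order, the runtime use can be as dumb as a first-match lookup against whatever the AI can currently field. A minimal sketch; what counts as "available" (already built, or buildable in time) is deliberately left to the caller:

// Sketch: walk the modder-supplied counter list in order and take the first
// counter the AI can actually field right now.
#include <set>
#include <string>
#include <vector>

std::string pickCounter(const std::vector<std::string>& bestCounters,
                        const std::set<std::string>& available) {
    for (const std::string& c : bestCounters)
        if (available.count(c))
            return c;   // list is already in the modder's preference order
    return "";          // nothing on the list is available; fall back elsewhere
}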
How much time would it take to build/deploy the best possible counter?
Some units, like Krogoths from OTA, are slow enough that, on a large map, it is practical to counter them by actually building units once they are spotted. Others, like Hawks or Brawlers, can only be countered with what's actually available, because building to counter them right now is impractical.
It is totally silly to have an AI build to meet yesterday's challenges. But, if a LeadTime has been established that makes it practical to counter a given threat, then the AI might want to build the counters. This is much smarter than having the AI get owned by Scissors, build Rocks, then get owned by Paper a few minutes later. Humans aren't that dumb, and build mixed forces to compensate (or risk everything by building a unit that's great at one thing, lousy for another). AIs can build mixed forces that make sense, given human controls - I've already talked about buildtrees a lot, and NTAI shows how they can be done in a way that's reasonably elegant and useful. But when it comes to attack / defense behaviors, it is sometimes possible, even "smart" to override stock buildtrees in favor of something else.
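The LeadTime test described here could be as simple as comparing two time estimates. This sketch assumes LeadTime is read as the extra margin the AI needs before building-to-counter is worthwhile at all; that reading of the config value is mine, not a stated definition, and both time estimates are assumed to come from the AI's own pathing and economy code.

// Sketch: only queue new counters if the threat will take longer to arrive
// than the counters take to produce, plus the modder's LeadTime margin.
bool shouldBuildCounter(float secondsUntilThreatArrives,
                        float counterBuildSeconds,
                        float leadTime) {
    return secondsUntilThreatArrives >= counterBuildSeconds + leadTime;
}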
Does height make a real difference in unit performance?
Some units are greatly affected by height- both in positive and negative senses. For example, units with Starburst missiles as their main weapons are height-neutral- while height might make a small difference in time-to-impact for their weapon systems, it's not terrifically important, compared to the overall time delay imposed. Units with LineOfSight weapons are somewhat negatively affected by being at lower heights, so they should have a HeightFactor of 1.1 or so, depending on a wide variety of other factors which modders can weigh for themselves (such as shot speeds, range, randomized accuracy, etc.). Ballistic weapons are the biggest gainers from being high up, so they should usually get a fairly high HeightFactor. However, this is not always the case, and it should be weighed by modders as they are tweaking AI behaviors.
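Applying HeightFactor might look something like the sketch below: scale a unit's threat up or down by its height advantage over the target. The 100-elmo normalisation and the clamp are invented constants; HeightFactor=0, as in the Brawler entry above, leaves the threat untouched.

// Guess at applying HeightFactor when scoring an engagement.
float heightAdjustedThreat(float baseThreat, float heightFactor,
                           float unitHeight, float targetHeight) {
    float advantage = (unitHeight - targetHeight) / 100.0f; // + above, - below
    float scale = 1.0f + heightFactor * advantage;
    if (scale < 0.1f) scale = 0.1f;   // never let height zero out a real threat
    return baseThreat * scale;
}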
So... for attacking behaviors, we have five factors that actually matter. Many of them are actually shorthand for very, very complex considerations that are not amenable to execution on a realtime basis. And none of them can be directly tied to any one statistic, nor are they likely to be arrived at by iterative counting processes.
... and having caught up with the rest of the cross-flaming...
First off, I have demonstrated that I know how to play, as well as build mods. In fact, I am an above-average (but not stellar) player- quite beatable by experts, mind you, but no newbie. I designed NanoBlobs for expert players, and have been both amused and dismayed that the inherent difficulty has proven a real stumbling block (short version- newbies get thrashed).
If anybody has any doubts as to whether I know what I am doing, they are welcome to cross digital swords with me.
Secondly, what I am trying to demonstrate, with a certain amount of sharp critique, is that there is a huge gulf between the theoretical egg-heads who study "machine learning" in RTS games and the practical realities of building an effective, real-world-useful AI. Kohan II should be your guide, folks- not some crappy paper by a PhD candidate who isn't good enough to work in the Industry, and who has few incentives to build AI that can win, as opposed to play "intelligently". I have been a loud and harsh critic here because so much of the "debate" here comes from articles written by such egg-heads, instead of concrete examples from real-world success stories.
Where is the open and frank discussion of the strong/weak points of OTA's AI design? What about StarCrap? AoE I, II and III have all taken similar but divergent approaches- why are these not being dissected and properly understood?
If you want to write good books, read Dostoevsky, Simmons, Drake or Faulkner.
If you want to study game design, study the works of Wright, Meier, Carmack, Miyamoto, Molyneux, and Costikyan.
If you want to study RTS game AI, study the final products, because they are a totality, not just a theory. Game designs are totalities. I did not sit down and write NanoBlobs all at one go, with a magical Theory of Cool in my head. It doesn't work like that- everything has to mesh. It takes time, it's inevitably messy, and it's never completely clear even to the designer why it's right when it's right, when you're dealing with something as complex as a modern RTS.
Study Magic Carpet, Command and Conquer, StarCraft, WarCraft, TA and Black and White. Study the difference between strategic relationships in Civilization, where the AI has the leisure to examine very large numbers of variables (but ultimately still makes some very poor choices) compared to the tactical "now" of StarCraft, where strategic decisions are actually run in a very crude fashion, and one mainly sees a good game from the AI when it is cheating. I could go on and on here, but I hope that my point is taken- if we are to create a better future, we must understand the past.
I have played many, many games over the years, and I have been in your collective faces for good reasons- I've been watching you re-invent the wheel several times, or spin in circles when there's an enormous "GO HERE" painted on the door overhead. Before trashing what I preach, at least take the time to read about these games- go find post-mortems, or something, if you cannot be bothered to pirate them and play them for an hour.
Argh, you could have made your point in about a quarter of the text...
Also, most commercial games, if not all of them, use cheating AIs... which brings me to the question: do you want a cheating AI?
The success stories aren't examples of non-cheating AIs, so how do they prove that they are better in a real game situation?
And what do books have to do with it? (yes, rhetorical)
@Zaphod:
Yes, I could've made my point shorter; sorry, I am always long-winded as well as hard-headed.
I've already said that I'm in favor of allowing AIs to cheat. I do not necessarily think that it is required, and while I think it is probably the best option for arriving at an AI that poses a serious threat in larger, more complex game systems, I think that a non-cheating AI can give players a halfway-decent game if it evaluates things correctly.
Kohan II's AI does cheat, but I think it's still brilliant. AoE's AI cheats, but it's also quite brilliant. The cheating is just another way to ramp things up, and I think that it would be healthy for AI developers to look at cheating as just another tool within the totality.
@krogothe: if it's cheating the way that I think it is (I followed the two-Weasel thing to its logical destination) I hope that Spring's developers close that particular loophole, as it poses a direct threat to fair play. No offense intended- your demonstration with the Commander was brilliant work, but it is a loophole that should be closed, imho.
The problem I see with dissecting those games is that, at the most basic level, AIs have never really been designed to win. It certainly seems that way at the start, while you're still learning the game, but as you progress and become better at it, you realize it's impossible for them to win. I am, tactically, a very poor player. I freely admit it. But I can crush most AIs underfoot without thinking twice about it.
Every AI has tactical limitations and loopholes that are game and often race specific. AoE2, for example, has a very aggressive AI. If, however, you understand that it is sorely lacking as a naval AI, you can put ships in the water to gain a safe source of food and a largely undefended route of attack, wall off your base against ground attacks, reinforce it with patrolling archers and gunners, and build a nicely sized army to move out and smash the opposition.
StarCraft has a good AI, but it is simple enough to defeat if you bring the correct sacrificial forces along: melee to deal with ranged, and vice versa. With heavy units such as Siege Tanks, Ultralisks, or Reavers you can demolish its base and production while it's busy trying to defend against your sacrifices. As far as I can tell, WarCraft suffers from the same problem, though I've only played that once or twice.
But the simple truth is that each of these AIs suffers the same flaw: if you don't let them play to their strengths, they stall completely. Most human players don't do that. Keep an ARM player from raiding and rushing and he'll switch tactics and push to the next tech level so he can counter your Goliaths with Bulldogs as they're coming out of the factory. Force a CORE player onto the offensive early, before he can tech up, and he'll switch tactics and start raiding as effectively as he can.
In AoE2, one of the most dangerous civilizations to play against is the Vikings. They have a powerful naval unit and a powerful melee attacker. The simple tactic that keeps them from being unstoppable is to keep their castles and docks down; even one left standing opens an avenue of attack.
So how do you make an AI capable of learning how to do that? To destroy castles or docks and take advantage of the hole in the opponent's tactics? To realize that if it can crush the enemy's raiding, it can force him to defend until the next tech, or, if its own raiding is crushed, that it should start pushing for the next tech level? How do you teach an AI that if it's CORE it has to get to the next tech level first at all costs?
*_*
First off, go look at the Glest and Stratagus AIs; Glest's AI is simple and easy to understand (and I will use that for something soon).
As for all of this, I must remind you that what you're all discussing in terms of 'unit learning' algorithms is entirely implementation dependent, yet you're talking as if it's a generic thing and all the AIs use a very similar system.
Either way, I would have talked about other AIs, but the atmosphere here gave me the impression I'd be wasting my time and be dismissed. Every thread I've started discussing theory or ideas has led to a common "yes, that sounds good" sort of reply. The NTAI X thread didn't have nearly as much useful discussion in its replies as I'd hoped; the most useful comment garnered from threads like that was weaver replying on darkstars to something, something you lot probably never read and which would be of great use to you.