If you have been following my series of arguments with the AI development community, and their outcomes (which, now that I have an AI I can configure with predictable results, are actually getting somewhere interesting)... then you can probably skip this, because I've said most of it before, elsewhere. However, since you're new to this, here's my rant on the topic. Disregard it if you wish, but I'm a modder and you're an AI developer. Want your AI to be promoted? Want your AI to give players a good game? Then pay attention.
Rant-mode ON
Personally, I think that all of these numeric approaches are utter crapola. Want a demo? See how NanoBlobs plays with NTAI 7.5XE if you doubt that modders can and do understand the relative strengths/weaknesses of their units better than a piece of code ever will.
You want your nifty new AI to own the humans in a fair fight? Fine. Give me a fully-configurable AI interface that allows me to script what the AI should do when it spots a given unit, cluster of units, or a stationary defense, and give me smarter sub-behavioral scripting that I can call given conditions. Give me the ability to tell the AI to gather 20 Peewees, 10 Flashes, and 10 Mortys if it spots an HLT and a Guardian in the same sector. Give me the ability to tell the AI not to attack under certain (modder-defined) conditions.
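To make the request concrete, here is a minimal sketch of the kind of modder-facing trigger script I'm describing. All the names and the rule format are hypothetical illustrations of the idea, not actual NTAI (or any other AI's) syntax:

```python
# Hypothetical modder-facing trigger rules -- illustrative only,
# not real NTAI configuration syntax.
from dataclasses import dataclass, field

@dataclass
class Rule:
    spotted: set                                # unit types seen in one sector
    gather: dict = field(default_factory=dict)  # force to assemble first
    action: str = "attack"                      # "attack" or "hold"

# "If the AI spots an HLT and a Guardian in the same sector,
#  gather 20 Peewees, 10 Flashes, and 10 Mortys before attacking."
rules = [
    Rule(spotted={"HLT", "Guardian"},
         gather={"Peewee": 20, "Flash": 10, "Morty": 10}),
    # A modder-defined "do not attack" condition:
    Rule(spotted={"Annihilator"}, action="hold"),
]

def respond(sighted_in_sector, rules):
    """Return the first rule whose trigger units were all sighted."""
    for rule in rules:
        if rule.spotted <= set(sighted_in_sector):
            return rule
    return None

match = respond(["HLT", "Guardian", "Peewee"], rules)
```

The point of the sketch is that the modder, not the AI, decides both the trigger and the response; the AI just pattern-matches sightings against the script.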
Quit assuming you know jack about how to play, about how mods are designed, or about game balance! If you did, and had the billion-and-one other skills required, you'd be mod designers. Instead, look at what you're doing this way: you're basically building a piece of engineering. If you're doing a good job, then you're building software that allows modders to do their job, which is game design. Which, in the professional world, includes specifying parameters and controls for the AI. Good modders who really care about their end product can and will design good AI scripts, if you give us tools that are worth using and produce useful outcomes.
It's simply stupid/overweening to assume that an AI is learning anything useful if, say, it gets 50 Peewees owned by one good player who has placed two HLTs on small hills to either side of a pass that the AI is dumb enough to keep using.
C'mon. Don't kid yourselves. What is it "learning" that is worth "knowing"? That HLTs always > Peewees? What rubbish! What a human would learn is not easily transformed into numeric logic. A human who knows how to play would learn: "Hey, if I send future Peewees this way, they're going to get owned. I should attack the HLTs first, even if all I have is Peewees at the moment, or back off until I have the proper force concentration to attack through this defensive line."
Go on. Break that down into numbers. I dare you. Your brain will melt. Or your processor. Either way, it's not happening. And you're never going to arrive at that insight by simply throwing iterations at an "efficiency calculator".
Instead, solve the problem by giving modders, who actually understand game design, the tools to address specific situations. For example, in NTAI 7.5XE, I can now assign "target weights", and NTAI will tend to attack certain targets before others. Now, the way it's currently coded doesn't work very well for NanoBlobs (mainly because NanoBlobs is already flooding the network once you're past the early game), but it will work very well for slower-paced mods that don't have hundreds of units dying per minute. Does this "target weight" feature involve the AI making "intelligent decisions" or using any kind of "efficiency calculation"? Hell no! I want that AI to attack the Lord (NanoBlobs' Commander-equivalent) on sight, period. If the Lord is spotted, every attack unit should re-route to its location and try to stomp on it, because that's what a human would do (at least, an intelligent one).
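The target-weight idea really is that dumb, and that's the point. A sketch of the logic, with illustrative names and weights rather than NTAI's real configuration format:

```python
# Sketch of "target weights": the modder assigns static priorities per
# unit type, and the AI simply attacks the highest-weighted visible
# target. Names and weights are illustrative, not NTAI's actual config.

TARGET_WEIGHTS = {
    "Lord": float("inf"),   # Commander-equivalent: always attack on sight
    "HLT": 50,
    "Guardian": 40,
    "Peewee": 5,
}

def pick_target(visible_units):
    """No learning, no efficiency math: just the modder's priorities."""
    return max(visible_units,
               key=lambda u: TARGET_WEIGHTS.get(u, 0),
               default=None)

target = pick_target(["Peewee", "HLT", "Lord"])
```

With an infinite weight on the Lord, every attacker re-routes to it the moment it's spotted, which is exactly the human behavior described above.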
The approach being used, where various numbers about outcomes are somehow supposed to result in smart behaviors, is a complete dead end, and I've been watching various people beat it to death for months now with some amusement. If you want smart behaviors... program 'em, and give modders ways to access them that make sense. Then let modders who actually care about AI support configure a script. A common interface, and agreement on the rough meaning of standard parameters and commands, would also help quite a bit, because then y'all wouldn't be constantly re-inventing the wheel. AF at least has a plan for an interface, but I think it's equally crucial that people get their heads wrapped around a very simple idea: if your AI requires me (a game designer) to learn a completely different set of configuration commands than another AI, and that other AI is more advanced and polished... why should I waste my valuable time learning anything about your AI? Think, people. Modders are busy and we want stuff now. I am not learning 5 different sets of configuration scripts; I am going to learn the one that works best and is easiest to get done during the playtesting cycle. Making up your own (if you even bother... and if you don't, forget it, I'm not touching your AI) just to be "special" is a waste of everybody's time. Instead, announce that you're extending the common standard, explain why, and document your code. That way, you're not wasting anybody's time if your idea is useful. It's straight-up Open Source common sense.
Even AAI doesn't use a purely numeric "efficiency approach" at this point; otherwise Submarine wouldn't have broken compatibility with mods that aren't very OTA-like. Or maybe it's the assumption that everybody thinks mexes are vital to good gameplay. Or whatever. Which is too bad, since AAI at least gave everybody else a useful benchmark to test against, and if it had been kept stable and universal, it'd be more useful than it actually is. What I find ironic is that Submarine isn't supporting the current NanoBlobs, when it was about the only mod that gives AI developers a good starting place for evaluating the design philosophies behind efficiency calculation in a rational way, with a real, live testing environment.

Again, thanks for missing the point...
To sum up a somewhat-rambling post (sorry, I keep going on side-jaunts cuz I'm at work and keep getting interrupted, plus this IS a rant)...
AF's work with me on NTAI's attack patterns just confirmed for me that "learning" is completely over-rated. The NanoBlobs configuration, which plays a very mean game, uses no "thinking" whatsoever. The AI is purely scripted and state-driven, and really it just follows simple linear instructions, for the most part.
Instead, what we really need are AIs that will do smart (i.e., human-designed) things: for example, the way that NTAI 7.5XE will gather units of a given Attackers group in a random place along a patrol path, then send them in as a group. This works wonders for attack effectiveness, and involves zero "learning". Due to the way it was coded, it usually involves a semi-random attack vector, too (although I personally think it needs to use a wider seed value for the patrol paths, as it doesn't hit the side flanks quite well enough yet). This, again, is not anything we're going to see AIs "learn".
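The rally-then-attack pattern above is just a tiny state machine. A sketch of the logic, assuming hypothetical names rather than NTAI's actual code:

```python
# Sketch of the rally-then-attack pattern: pick a semi-random rally
# point along a patrol path, wait until the whole Attackers group has
# assembled there, then launch everyone together. Purely scripted and
# state-driven -- no learning. All names here are illustrative.
import random

def rally_point(patrol_path, rng=random):
    """Semi-random attack vector: any waypoint on the path may be chosen.
    A wider spread of candidate points would hit the flanks harder."""
    return rng.choice(patrol_path)

def step(group, rally, target, arrived):
    """One tick of a two-state machine for an attack group."""
    if len(arrived) < len(group):
        return ("gather", rally)    # still assembling at the rally point
    return ("attack", target)       # everyone's here: go as one group

state, dest = step(group=["a", "b", "c"], rally=(10, 20),
                   target=(90, 90), arrived={"a", "b"})
```

Because the group only transitions to "attack" once fully assembled, units arrive at the target in force instead of trickling in one at a time, which is where the effectiveness gain comes from.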