How should the AI handle its army?
Re: How should the AI handle its army?
It's always a safe assumption that the AI will attack you, that there are only a handful of vectors for it to attack along, and that it will usually just send a large blob of units. Some AIs might make that blob weave in and out in a way that's tricky for you to micro, and some blobs are bigger than others. There will be several blobs.
I think that means all our AIs are pretty predictable
Hint: Almost all the AIs are thwarted by a wall across the map or key chokepoints. If you're playing BA, make sure to build a T2 wall, else T2 units will crush the first one and a whole game's worth of AI units will come swarming in.
- Forboding Angel
- Evolution RTS Developer
- Posts: 14673
- Joined: 17 Nov 2005, 02:43
Re: How should the AI handle its army?
In Evo the walls are units, and units love to attack them.
Seems like I dodged a lot of bullets and irritation by making my walls alive and self-regenerating.
Re: How should the AI handle its army?
There is always the tech-to-nuke method =p Build an army, just don't bother sending it out.
Re: How should the AI handle its army?
Forboding Angel wrote: In evo the walls are units, and units love to attack them
Seems like I dodged a lot of bullets and irritation by making my walls alive and self regenerating.
I dodged that bullet as well, and then made a bullet that can hit dodging moddevs (lava builds walls of features from the mountaintops).
Re: How should the AI handle its army?
AF wrote: I think that means all our AIs are pretty predictable
Not exactly. It is a good idea to implement hardcoded flank attacks and unit formations when a given condition is met (i.e. splitting enemy groups and counting which side is weaker or has fewer units). Pick up some books used in military schools about vehicle battalion tactics, especially in hilly battlegrounds.
Re: How should the AI handle its army?
100Gbps wrote:
AF wrote: I think that means all our AIs are pretty predictable
Not exactly. It is a good idea to implement hardcoded flank attacks and unit formations when a given condition is met (i.e. splitting enemy groups and counting which side is weaker or has fewer units). Pick up some books used in military schools about vehicle battalion tactics, especially in hilly battlegrounds.
Easier said than done. You need to define exactly what hilly ground is, where it is, and where it begins and ends, then test extensively in multiple games, where terrain might be considered hilly in one game but not in another. Then you need to figure out exactly how to flank, i.e. where do you issue the units' move order? How do you deal with choke points, and with walls next to them that might restrict your movement? Etc. etc.
Re: How should the AI handle its army?
Just split the map into quadrants and compute the average height of each one for the global choice of tactics (within a given height range; for example, a difference below 400m means the map is flat, so flat-map tactics will be used). Save height-point coordinates in a sub-quadrant array (marking unreachable ones) and use them for a single-unit move algorithm.
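A minimal sketch of the quadrant idea, assuming the heightmap is a plain 2D list of heights. The 400-unit flatness threshold comes from the post; applying it per quadrant (rather than to the whole map) is my interpretation.

```python
# Classify each map quadrant as flat or hilly from a 2D heightmap,
# so a flat-map or hilly-map tactic can be chosen per quadrant.

def quadrant_stats(heightmap):
    rows, cols = len(heightmap), len(heightmap[0])
    half_r, half_c = rows // 2, cols // 2
    quads = {
        "NW": (0, half_r, 0, half_c),
        "NE": (0, half_r, half_c, cols),
        "SW": (half_r, rows, 0, half_c),
        "SE": (half_r, rows, half_c, cols),
    }
    stats = {}
    for name, (r0, r1, c0, c1) in quads.items():
        cells = [heightmap[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        stats[name] = (min(cells), max(cells), sum(cells) / len(cells))
    return stats

def pick_tactics(heightmap, flat_threshold=400):
    # If the height spread inside a quadrant stays below the threshold,
    # treat that quadrant as flat and use flat-map tactics there.
    return {
        name: "flat" if hi - lo < flat_threshold else "hilly"
        for name, (lo, hi, _avg) in quadrant_stats(heightmap).items()
    }
```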
How to flank is usually determined by calculating the unit power (a hard-coded table which holds data on how many units of type A can be killed by a single unit of type B) of the sub-groups in the battalion. Choose the weaker ones as the target.
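A sketch of that hard-coded power table. The unit types and numbers are invented for illustration; `POWER[(A, B)]` says how many units of type B one unit of type A can kill.

```python
# Invented example table: one tank kills four infantry, etc.
POWER = {
    ("tank", "infantry"): 4.0,
    ("tank", "tank"): 1.0,
    ("infantry", "tank"): 0.25,
    ("infantry", "infantry"): 1.0,
}

def group_strength(attacker_type, group):
    # Strength of an enemy sub-group as seen by one attacker type:
    # the fewer of them one attacker can kill, the stronger they count.
    return sum(1.0 / POWER.get((attacker_type, enemy), 1.0) for enemy in group)

def pick_flank_target(attacker_type, subgroups):
    # Flank the weakest sub-group of the enemy battalion.
    return min(subgroups, key=lambda g: group_strength(attacker_type, g))
```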
Re: How should the AI handle its army?
Hardcoded is always bad. Even worse is that my game transforms the map, so the heightmap is changing all the time. knorke's idea produces fast results and is immune to most advanced AI troubles. Quite a feat. Also maintainable, even if the dev drops out.
Re: How should the AI handle its army?
By 'hard-coded' I meant exactly knorke's logging ... umm ... workaround. From the game client's point of view, the per-player gathered data could be distributed p2p among all logged-in players. An encrypted SQLite db will probably do the job for local storage. Updates to the 'unit efficiency' data will work perfectly even in the middle of a heavy battle, or can be done between two battles.
About changing map topography - imho it's not a big deal, since the in-game array holding it is (or should be) updated in real time, and probably affects the algorithms in a 'no-troubles' manner.
Re: How should the AI handle its army?
no-troubles.. that's two words making a nice couple. But yeah, I'm living below them - they fight every evening, he drunk beyond recognition, she sobbing - both throwing stuff.
Re: How should the AI handle its army?
100Gbps wrote: Under 'hard-coded' I meant exactly the knorke's logging ... ummh ... workaround. From the game client's point of view, per player gathered data is possible to be distributed p2p among all logged players. Probably an encrypted SQLite db for local storing will do the job. Updates for 'unit efficiency' will work perfectly even in the middle of a heavy battle. Or maybe done between two battles.
About changing map topography - imho it's not a big deal, since in-game array holding it is (should be) usually in real-time and probably affect algorithms in 'no-troubles' manner.
All the hard parts of your suggestions were done or co-opted into AIs a long time ago.
All the easy parts of your suggestions that you gloss over are actually much harder to build, so it was much easier to grab low-hanging fruit that yielded far greater results.
A lot of what you're suggesting needs some kind of spatial awareness, which has never been a strong point of computers as it is.
Sure, we could approximate it: do a nearest-neighbours type thing on the nearest units, approximate a centroid and a radius, then move around that radius on either side. That seems obvious and easy, until the units path into the enemy units, walk into walls, attempt to walk on water, or the enemy units start moving before the AI units can finish their maneuver.
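That centroid-and-radius approximation can be sketched in a few lines. Coordinates are plain (x, y) tuples; this deliberately ignores pathing, terrain, and everything else the caveats below warn about.

```python
import math

def centroid(points):
    # Average position of the enemy blob.
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def flank_waypoint(own_pos, enemy_positions, side=1, margin=1.2):
    cx, cy = centroid(enemy_positions)
    radius = max(math.dist((cx, cy), p) for p in enemy_positions)
    # Direction from us towards the blob, rotated 90 degrees to one side.
    dx, dy = cx - own_pos[0], cy - own_pos[1]
    length = math.hypot(dx, dy) or 1.0
    px, py = -dy / length * side, dx / length * side
    # The waypoint sits just outside the blob's radius, on the chosen side.
    return (cx + px * radius * margin, cy + py * radius * margin)
```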
Suddenly flanking without AI stupidity and complaining players now has these dependencies:
- Figure out if there is an open path between the AI unit and the target
- Determine what to do if that path is open for some AI units in the group but not others
- Determine if the group is small enough to fit in said path
- Determine if there are no obvious traps like mines, or flanking the enemy group into an obvious hole and then being trapped and blown up
- Determine if by conducting this maneuver vital structures are made vulnerable (you don't want a defending force to try to flank an army, only for that army to immediately walk straight into an undefended base)
- Determine the proportion of the group to flank with: all, none, half, a quarter, etc.
- Code to detect flanking motions
- A data structure to save and record those flanking motions
- A metric of measuring how effective the flanking was
- Metrics for which flanking motion was effective on which map in which area
- A system to re-run the replays and record the data
- Pattern recognition to detect a flanking motion and match it to an existing one
- Code to determine if a flanking operation is viable in the AI
- Code to figure out the most successful flanking pattern available
- Code to determine which flanking patterns are viable given a collection of AI units, and what is the maximum deviation from the recorded flank before the statistics for it are no longer valid
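A few of the bookkeeping items above (a data structure for recorded flanks, an effectiveness metric, per-map and per-area statistics) could be sketched like this. The scoring formula and names are invented, not a real AI's method:

```python
from collections import defaultdict

class FlankLog:
    """Record flanking attempts keyed by (map, area) and rank them."""

    def __init__(self):
        self.records = defaultdict(list)  # (map, area) -> [score, ...]

    def record(self, map_name, area, value_lost, value_destroyed, value_committed):
        # Made-up effectiveness metric: value destroyed minus value lost,
        # per unit of value committed to the maneuver.
        score = (value_destroyed - value_lost) / max(value_committed, 1)
        self.records[(map_name, area)].append(score)

    def best_area(self, map_name):
        # Which recorded area on this map has flanked best, on average?
        areas = {k: sum(v) / len(v) for k, v in self.records.items()
                 if k[0] == map_name}
        return max(areas, key=areas.get)[1] if areas else None
```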
Having said that, someone may devise a half-competent flanking heuristic that improves AI offensives; if anybody does, please make a lot of noise about it, I'd very much like to see it.
For the moment, I think that time would be better spent on preventing shipyards in ponds and kbot labs on tiny islands, and on more adaptive economics.
Re: How should the AI handle its army?
Zones of interest:
Each mex is interesting.
The closer it is to other mexes, or the more metal it has, the more interesting it is (dual mexes, triplets).
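One way to sketch that interest score; the weights and the cluster radius are invented, positions are (x, y), and income is metal per second:

```python
import math

def mex_interest(mex, all_mexes, cluster_radius=600.0):
    x, y, income = mex
    interest = income
    for ox, oy, oincome in all_mexes:
        if (ox, oy) == (x, y):
            continue
        d = math.hypot(ox - x, oy - y)
        if d < cluster_radius:
            # Nearby mexes add their income, scaled down with distance,
            # so dual-mexes and triplets score highest.
            interest += oincome * (1.0 - d / cluster_radius)
    return interest
```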
The AI should try to kill either the metal economy, the energy economy or the build-power economy, depending on the mod.
Killing constructors in Evo RTS is useless, while killing nano towers in Zero-K and *A games is good (no wreckages, very valuable, low health, chain-reacting).
The AI could use units with a low projectile speed against static targets, as such units are usually meant to kill statics. Against mobile units, it can try to use the best units by HP*DPS*maxVelocity*range. If the units don't make their cost back, use the opposite unit types or choose another strategy/target.
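A minimal sketch of this selection rule. The unit names and stats here are invented placeholders, not real mod values, and the projectile-speed cutoff is arbitrary:

```python
def unit_score(unit):
    # The post's heuristic for anti-unit quality.
    return unit["hp"] * unit["dps"] * unit["maxVelocity"] * unit["range"]

def best_anti_unit(units):
    return max(units, key=unit_score)

def anti_static_candidates(units, max_projectile_speed=200.0):
    # Slow projectiles rarely hit moving targets, so such units are
    # usually meant for statics.
    return [u for u in units if u["projectileSpeed"] <= max_projectile_speed]
```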
It should not repeat the same recent strategies if there are other strategies it can try, unless the last one was very successful (made many times its cost by killing enemy stuff).
If threads belong to the same group, e.g. (scouting, attacking) or (eco-building, unit-building), the weights between these entries could change based on success (resources spent vs. resources destroyed/gained).
There could be two scouting threads - one which tries to coordinate defense, and one which tries to find attack targets.
The more the AI is split into modules, the easier it will be to debug and code.
Players often make statics to defend the economy against scouts, but the AI only needs statics if static defense is much better AND there are stealthy units (what does the enemy do, what does the mod provide?).
Otherwise the AI can make anti-scout units with high speed and good range (if there are good units for that purpose) to react to radar blips faster than players can.
If there is a good point to defend, try to get static defense and units - the ratio depends on how good defense is in this mod and how many control points the AI needs to defend.
If the control points are too far away from each other to react to radar blips properly, prefer raiding a bit more over defending.
If you want a human-like AI, limit the number of strategies which can be processed at once, limit the number of reactions per second to the highest-prioritised threats, etc.
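A sketch of that per-second reaction throttle, assuming threats arrive with a numeric priority (the priorities and threat descriptions are invented):

```python
import heapq

class ReactionLimiter:
    """Allow at most max_per_second reactions, highest priority first."""

    def __init__(self, max_per_second):
        self.max_per_second = max_per_second
        self.queue = []  # heap of (negated priority, threat)

    def report(self, priority, threat):
        heapq.heappush(self.queue, (-priority, threat))

    def react_this_second(self):
        # Pop up to the per-second budget, highest priority first;
        # everything else waits for a later tick, like a human would.
        picked = []
        for _ in range(min(self.max_per_second, len(self.queue))):
            picked.append(heapq.heappop(self.queue)[1])
        return picked
```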
The simplest human-like AI contains a memory which stores relations, e.g. (player => (strategy1(success%), strategy2(success%))) or (economy => (buildRadars(), defendAgainstScouts())).
Whenever a task is running, it calls all related tasks; whenever something is successful, it repeats itself with some chance.
Each watchRadar() thread starts a buildRadar() thread which fills gaps in your LOS, as long as you have enough builders and don't spam radars where you cannot protect them.
If watchRadar() is successful (gives you important information), it repeats or even multiplies itself with an increased chance, to get more watchRadar() threads and more buildRadar() threads.
buildRadar() either requests an idle constructor or starts a buildConstructor() thread, which increases the weight of constructors in your build queues.
If you have enough relations and good task.repeatChance settings, the most efficient things get used most often.
Some tasks decay after a time, and some tasks end when they fail or succeed at their goal.
Other tasks run only once and never end - they are there to ensure that the important tasks keep getting started from time to time.
If you want, you can let the AI forget the least useful tasks if there are too many tasks running.
Code: Select all
# Avoid infinite task instances (generate fewer new instances if you
# already have many instances running). Rendered as runnable Python;
# the original post gave this as pseudocode.
class Task:
    def __init__(self, instances, repeatChance):
        self.instances = instances        # allowed concurrent instances
        self.repeatChance = repeatChance  # base chance to repeat on success
        self.counter = 0
        self.repeatCounter = 0.0

    def startNewInstance(self):
        pass  # game-specific: spawn another running copy of this task

def runTask(task):
    task.counter += 1
    if task.counter > task.instances:
        task.counter = 0
        task.startNewInstance()

def endTask(task, successRatio):
    # Success feeds the repeat counter; each full point triggers a rerun.
    task.repeatCounter += task.repeatChance * successRatio
    while task.repeatCounter > 1:  # 100%, ...
        task.repeatCounter -= 1
        runTask(task)
- Forboding Angel
- Evolution RTS Developer
- Posts: 14673
- Joined: 17 Nov 2005, 02:43
Re: How should the AI handle its army?
NeonStorm wrote: Killing constructors in Evo RTS is useless, while killing nano towers in Zero-K and *A games is good (no wreckages, very valuable, low health, chain-reacting)
Evo is all about killing the econ. Mainly power.