Could AI copy PlayerStyle out of a Replay

Here is where ideas can be collected for the skirmish AI in development

Moderators: hoijui, Moderators

User avatar
PicassoCT
Journeywar Developer & Mapper
Posts: 10454
Joined: 24 Jan 2006, 21:12

Could AI copy PlayerStyle out of a Replay

Post by PicassoCT »

It would be really cool if an AI were able to read a replay and then play in the style of one of the replay's players.
User avatar
unpossible
Posts: 871
Joined: 10 May 2005, 19:24

Post by unpossible »

I've asked before: is there something about the replay format that precludes the AI from observing games?
If it were possible, I'm sure there are aspects of the game (like unit choices) that would be well suited to being learned from 'good' replays.
The trouble is it would become very map-specific if the AI were allowed to observe anything more than the simplest choices the user makes...

[hints]anyone looking for a uni AI project? :| [/hints]
User avatar
krogothe
AI Developer
Posts: 1050
Joined: 14 Nov 2005, 17:07

Post by krogothe »

If someone could actually do that fully (e.g. not just copying unit selection, but tactics and style), they would earn a few million worth of Nobel prizes and probably take over the world... Learning a playing style by watching replays is an incredibly complex task, harder than creating an AI itself by a long shot!
User avatar
AF
AI Developer
Posts: 20687
Joined: 14 Sep 2004, 11:32

Post by AF »

See the Epic think tank documentation for those of you with access.
Tobi
Spring Developer
Posts: 4598
Joined: 01 Jun 2005, 11:36

Post by Tobi »

I guess it's very well possible to extract certain statistics from human games: which units are used on a given map and mod, which units they use to counter which other units, where most battles take place (probably chokepoints), which units are used in which places on the map, at what time (or at what M/E income) players tech up, what build tree the humans use, etc.

All that isn't too hard IMHO (drawing the right conclusions, generalizing it, and linking events together is, though.. :P )

Note, though, that it's impossible for an AI to learn from a replay file alone. It could be possible by running the AI as a spectator in a replay.
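Something like this minimal sketch is roughly what collecting those statistics while spectating could look like (the event names, fields, and unit names are invented for illustration, not the real Spring AI interface):

[code]
# Hypothetical sketch: aggregating simple statistics while an AI spectates a
# replayed game. The event format is invented; a real Spring AI would receive
# comparable events via the engine's AI callback interface.
from collections import Counter, defaultdict

class ReplayStats:
    def __init__(self):
        self.units_built = Counter()           # unit type -> how often built
        self.kills = defaultdict(Counter)      # killer type -> victim type -> count
        self.battle_positions = []             # positions of destroyed units (chokepoint hint)

    def on_unit_created(self, unit_type):
        self.units_built[unit_type] += 1

    def on_unit_destroyed(self, victim_type, killer_type, pos):
        if killer_type is not None:
            self.kills[killer_type][victim_type] += 1
        self.battle_positions.append(pos)

    def preferred_counter(self, enemy_type):
        """Which of our unit types destroyed this enemy type most often?"""
        best, best_count = None, 0
        for killer, victims in self.kills.items():
            if victims[enemy_type] > best_count:
                best, best_count = killer, victims[enemy_type]
        return best

# Usage: feed events as the spectated game plays out.
stats = ReplayStats()
stats.on_unit_created("flash")
stats.on_unit_destroyed("stumpy", "flash", (512.0, 1024.0))
print(stats.preferred_counter("stumpy"))   # -> "flash"
[/code]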
User avatar
Triaxx2
Posts: 422
Joined: 29 Aug 2004, 22:24

Post by Triaxx2 »

The trouble isn't so much learning from the replay, because the standard style of learning is 'did the unit die?', 'was it effective?'; the trouble is learning tactics.

Most of the stuff can be learned as simple statistics, such as grabbing the incoming metal/energy figures at the time the first advanced plant is constructed.
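As a minimal illustration of that milestone statistic (the event format and field names are assumptions, not taken from the actual replay format):

[code]
# Record the metal/energy income at the moment the first advanced plant appears.
def record_adv_plant_milestone(events):
    """events: iterable of dicts like
       {"frame": 4500, "type": "unit_created", "unit": "advfactory",
        "metal_income": 12.5, "energy_income": 230.0}"""
    for e in events:
        if e["type"] == "unit_created" and e["unit"] == "advfactory":
            return e["frame"], e["metal_income"], e["energy_income"]
    return None  # the player never teched up in this replay

sample = [
    {"frame": 4500, "type": "unit_created", "unit": "advfactory",
     "metal_income": 12.5, "energy_income": 230.0},
]
print(record_adv_plant_milestone(sample))  # -> (4500, 12.5, 230.0)
[/code]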
User avatar
unpossible
Posts: 871
Joined: 10 May 2005, 19:24

Post by unpossible »

Well, obviously you'd have to try it with a simpler system... but surely simplifying battles into a set of 'moves' would let the AI approximate the behaviour of a player. It'd just be the parameters of the moves that would be observable/learnable. It's still horrendously complex... imagine microing something to keep it out of range of multiple units...
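One possible encoding of that 'set of moves' idea, as a rough sketch (the move names and parameters are assumptions, only meant to show the shape of the data):

[code]
# Each high-level move is a named template; only its parameters are the
# observable/learnable part extracted from replays.
from dataclasses import dataclass

@dataclass
class Move:
    name: str          # e.g. "raid", "kite", "expand"
    params: dict       # the observable/learnable part

observed = [
    Move("expand", {"mex_count": 3, "by_minute": 4}),
    Move("raid",   {"unit": "flash", "group_size": 5, "target": "enemy_mex"}),
    Move("kite",   {"unit": "samson", "keep_range": 650}),
]
for m in observed:
    print(m.name, m.params)
[/code]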
Tobi
Spring Developer
Posts: 4598
Joined: 01 Jun 2005, 11:36

Post by Tobi »

Indeed, that's what I meant with
Tobi wrote:drawing the right conclusions & generalizing it & linking events together is tho..
Collecting the statistics is easy, and using them without any intelligence is pretty easy too, but if you want a really smart AI it has to generalize (filter the information out of the enormous stream of data) and it has to be able to apply the relevant information to its own tactical decisions, preferably without exactly copying the player (because that usually won't work due to differing circumstances). That is the hard part.
User avatar
PicassoCT
Journeywar Developer & Mapper
Posts: 10454
Joined: 24 Jan 2006, 21:12

Post by PicassoCT »

It could keep prepared script pieces of extremely successful moves in the back of its memory, and if a nearly identical situation repeats on the same map, load the successful script, carry out the moves with whatever is available, and return to duty as usual. But you are right, such copy-and-paste behaviour would not be really "intelligent".
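A loose sketch of that script-piece idea, assuming a coarse situation descriptor as the lookup key (the descriptor and the recorded orders are both invented for illustration):

[code]
# Store action sequences that worked, keyed by map and a coarse situation
# descriptor, and replay the closest match later.
cases = {}   # (map_name, situation_key) -> list of recorded orders

def situation_key(minute, own_army, enemy_army):
    # very coarse bucketing so "nearly equal" situations hash to the same key
    return (minute // 5, own_army // 1000, enemy_army // 1000)

def remember(map_name, minute, own_army, enemy_army, orders):
    cases[(map_name, situation_key(minute, own_army, enemy_army))] = orders

def recall(map_name, minute, own_army, enemy_army):
    return cases.get((map_name, situation_key(minute, own_army, enemy_army)))

remember("SmallDivide", 12, 2400, 1800,
         ["group tanks at north ramp", "attack enemy mexes"])
print(recall("SmallDivide", 14, 2100, 1500))  # same buckets -> replay the script
[/code]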
Tobi
Spring Developer
Posts: 4598
Joined: 01 Jun 2005, 11:36

Post by Tobi »

It sounds easy, but actually matching the current situation to the best match in memory is fairly hard, let alone selecting which game situations (and, within them, which units are relevant to that situation) to remember. Especially if it has to stay fast when the number of situations in memory is huge.

E.g. you don't want it to remember a solar just behind the enemy lines as a specific component of some large battle. You do want the dominator standing next to it, though. But if you're attacking the dominator and it moves behind the solar to seek cover, then you do want it to remember the solar, not as a solar but as generic cover.

You can come up with many more examples like that... answering the question of which units/things/properties really matter in a certain tactical move is extremely hard, and just storing everything gives you far too many combinations to manage...
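A rough sketch of that kind of abstraction, assuming a hand-made role table and a simple set similarity (both are assumptions, not a worked-out design):

[code]
# Before storing or matching a situation, map concrete units to generic roles,
# so a solar used as cover is remembered as "cover", not as "solar".
ROLE = {"solar": "cover", "wind": "cover", "dominator": "threat",
        "llt": "static_defence", "flash": "raider"}

def abstract(situation):
    """situation: list of (unit_name, distance) seen near the engagement."""
    return frozenset(ROLE.get(name, "other") for name, dist in situation
                     if dist < 300)          # ignore far-away clutter

def similarity(a, b):
    inter, union = len(a & b), len(a | b)
    return inter / union if union else 0.0   # Jaccard similarity of role sets

stored  = abstract([("dominator", 120), ("solar", 140), ("solar", 900)])
current = abstract([("dominator", 100), ("wind", 150)])
print(similarity(stored, current))           # -> 1.0: both reduce to {threat, cover}
[/code]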
User avatar
PicassoCT
Journeywar Developer & Mapper
Posts: 10454
Joined: 24 Jan 2006, 21:12

Post by PicassoCT »

Me < AI newb waving white flag, but being crushed by Terminator Truth!! :oops:
bamb
Posts: 350
Joined: 04 Apr 2006, 14:20

Post by bamb »

Generalized babble coming up. :oops:

I believe it's too hard to code all the contexts manually beyond a certain limit; you have to rely on something like neural networks and emergence, i.e. feed it lots of data and it sorts out the important things by itself. I don't know, maybe support vector machines suit computers better and can be just as good.

One important question is: is a hand-tuned simple AI still "better" than a massive self-organized AI?

And is the required neural network just too damn huge to ever be practical?

The best would probably be some compromise where the data is reduced hugely by hand-tuned pre- and post-processing.

Massive amounts of learning data would be available as replays, and you could run Spring in batch mode without the graphics interface at, say, 100x speed. :) It'd be a chore to get it to work with even just a few units and their tactics.

There might actually be a need for a super-simple mod with super-simple maps just for AI theory testing. Say, just a few decisions and actions available to the AI and only a few possible results.
Tobi
Spring Developer
Posts: 4598
Joined: 01 Jun 2005, 11:36

Post by Tobi »

Yes, if you're going to try neural networks you need an extremely simple mod.

Even then I don't think you can manage to train it, at least not fully autonomously. I experimented with ANNs a long time ago, and when training them through genetic algorithms the learning time grows something like exponentially with the problem complexity...

(For the insiders: human-supervised learning through backpropagation would probably work, but it would be a hell of a lot of boring work to supervise it.)

IMHO a hand-tuned AI, possibly a finite state machine, is far more efficient (considering both CPU & memory usage and development time) than any approach using neural networks.
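For comparison, a hand-tuned FSM can be almost trivially small; a minimal sketch, with invented states and thresholds:

[code]
# Minimal hand-tuned finite state machine for high-level strategy.
# The states and thresholds are assumptions, purely for illustration.
class SimpleStrategyFSM:
    def __init__(self):
        self.state = "EXPAND"

    def update(self, own_army_value, enemy_army_value, under_attack):
        if under_attack:
            self.state = "DEFEND"
        elif own_army_value > 1.5 * enemy_army_value:
            self.state = "ATTACK"
        else:
            self.state = "EXPAND"
        return self.state

fsm = SimpleStrategyFSM()
print(fsm.update(own_army_value=3000, enemy_army_value=1500, under_attack=False))  # ATTACK
[/code]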
bamb
Posts: 350
Joined: 04 Apr 2006, 14:20

Post by bamb »

I'd like to say I've done some courses at school with neural networks. Their strength is essentially recognizing what's relevant and storing that in the structure of the network. One exercise had a neural network recognizing textures, and it worked very well as long as the network had enough neurons and stages.
The textures were easy for a human to differentiate but hard for a computer: there are lots of intertwined variables, and it can't know which ones are relevant unless you tell it (which is tedious) or make it learn (where you only need to give the correct answers for some training cases, not tell it what to watch).
There is a very good neural networks toolbox available for Matlab, which I recommend if you have access to Matlab at university or anywhere else. Matlab is a very pleasant development environment anyway: vector and matrix handling is a breeze, bugs are easy to catch, and data plotting is very handy; it takes about one hour to write a bug-free program in a Matlab m-file that would take 50 in C. Data import and export is very easy, the documentation is excellent, and the core has very few bugs. It can also interoperate with C through a DLL, so you can input/output data from a C program on the fly.
User avatar
PicassoCT
Journeywar Developer & Mapper
Posts: 10454
Joined: 24 Jan 2006, 21:12

Post by PicassoCT »

Neural networks have problems when they are expected to be fully 'conscious' and learning. It's like with babies: in the beginning it's just a 'find out which output leads to positive input' phase, and that's where the network would need the help of 'parents'. After that phase it would develop further, but it would need extremely big resources. Neural networks won't work unless all Spring users are willing to sacrifice CPU cycles and bandwidth. Just imagine sitting with a dumb, sheep-minded creature like in Black & White, beating it every time it sends its tanks into artillery. And the restart, if it turns into an ugly psycho creature like in B&W.

SB or AF, make a cradle.

Another idea would be a hive AI, with a central queen (that controls the activity levels of the sub-AIs) and specialised worker AIs (for base building, resource collecting, attacking, repair, defense); a rough sketch follows at the end of this post. The problem would be communication between the sub-AIs, because they would need fine balance and awareness of each other. For example, an attack AI should not hand over a horde of Bulldogs to a defense AI just because a horde of Fleas is coming closer; that is a reachable target.

Sorry, but I vote for bees. Insects survive nukes.
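A rough sketch of the queen/worker structure, purely illustrative (the roles and the activity-level scheme are assumptions, not an existing Spring AI design):

[code]
# A queen that sets activity levels and specialised worker sub-AIs that act on them.
class WorkerAI:
    def __init__(self, role):
        self.role, self.activity = role, 0.0
    def update(self):
        if self.activity > 0.5:
            print(f"{self.role}: acting (activity {self.activity:.1f})")

class QueenAI:
    def __init__(self, roles):
        self.workers = {r: WorkerAI(r) for r in roles}
    def set_activity(self, levels):          # e.g. raise Defense when raided
        for role, level in levels.items():
            self.workers[role].activity = level
    def update(self):
        for w in self.workers.values():
            w.update()

queen = QueenAI(["BaseBuilding", "ResourceCollecting", "Attacking", "Repair", "Defense"])
queen.set_activity({"Defense": 0.9, "Attacking": 0.2})
queen.update()                               # only Defense acts this frame
[/code]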
Last edited by PicassoCT on 30 Aug 2006, 23:26, edited 2 times in total.
bamb
Posts: 350
Joined: 04 Apr 2006, 14:20

Post by bamb »

I don't know if it's too unspecific, but could backpropagation somehow work if you generate the teaching "answers" from demo data, like units doing lots of damage getting a high score and units doing less getting a lower one? Ultimately (which might not be feasible) you could always have 1v1 demos with data like the map and the players' actions. The winning team would then have output "1" and the losing team output "0". Computer, find out the relevant factors in the data which made this team win and that team lose! :-)
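A toy, single-neuron version of that labelling idea, as a sketch (the features and numbers are invented; a real multi-layer network trained with backpropagation would use the same outcome labels):

[code]
# Label each player's feature vector with the game outcome (1 = won, 0 = lost)
# and fit a model by gradient descent.
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# per-player features: (metal income / 10, army value / 1000, factories) -> won?
data = [((0.8, 1.2, 1), 0), ((1.4, 2.6, 2), 1),
        ((1.0, 1.8, 1), 0), ((1.6, 3.1, 3), 1)]

random.seed(0)
w = [random.uniform(-0.1, 0.1) for _ in range(3)]
b = 0.0
for _ in range(2000):                        # plain stochastic gradient descent
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - y                          # gradient of the log-loss w.r.t. the logit
        w = [wi - 0.1 * err * xi for wi, xi in zip(w, x)]
        b -= 0.1 * err

# an unseen player with a strong economy should score close to 1 (likely winner)
print(sigmoid(sum(wi * xi for wi, xi in zip(w, (1.5, 2.9, 2))) + b))
[/code]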

I don't know how to turn it around, though, so that given some of the data like the map, you could invert the network and get the winning actions from the desired "1" result.

Ugh my head hurts.

It'd be a blast to test this with a very simple mod.
Tobi
Spring Developer
Posts: 4598
Joined: 01 Jun 2005, 11:36

Post by Tobi »

Hm true, you could even automate the supervising that way. It will still take a lot of CPU time tho, but that's far cheaper than human brain time IMHO :P

Still I doubt 1 ultra big network can ever handle all aspects of a strategy game (within reasonable memory/CPU time constraints).

I don't feel like doing an example calculation of numbers of neurons now, but if I remember those calculations from a long time ago and scale them to Spring, I'm pretty sure the network is just too insanely huge to handle.
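For a rough feel of the scale, a back-of-envelope sketch with made-up layer sizes (these are assumptions, not the original calculation):

[code]
# Suppose the input layer encodes 500 units x 8 features plus a coarse 64x64 map grid.
inputs  = 500 * 8 + 64 * 64                 # 8096 input neurons
hidden  = 8000                              # neurons per hidden layer (assumption)
outputs = 100                               # order/decision outputs (assumption)
weights = inputs * hidden + hidden * hidden + hidden * outputs
print(inputs, weights)                      # ~130 million weights to train
[/code]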

Splitting it up could help..

Anyway, I would be interested in the results if someone tried it out, but I don't consider it worth my time anymore. The success-to-failure ratio is far too low IMHO...
User avatar
Peet
Malcontent
Posts: 4384
Joined: 27 Feb 2006, 22:04

Post by Peet »

http://www.cray.com/products/xt3/index.html

Let's invest in one of these... it probably could do it!
..anyone got $20 million handy? No?

Seems to me there's some learning AI in a game somewhere...but I can't remember any specifics.
bamb
Posts: 350
Joined: 04 Apr 2006, 14:20

Post by bamb »

Yes, you would have to divide the network somehow (a rough sketch follows at the end of this post), i.e. some parts would handle very specific unit control data (a huge amount: every unit's type, position and health at least) and terrain info (both on the first level), while some other part would need far less specific info, just on the level of "my relatively strong skirmish army somewhere around here" and "some of my anti-air guys over there" (say, the tenth level of the network). That part could then find the relevant pattern: OK, if I attack, I should combine them, because attacks at this stage of the game usually get picked off by gunships (it has seen that in the demos), or because it has specifically seen the enemy's air units. Knowing the enemy has air units is a tiny bit of data, but of huge importance, and neural networks are good at finding such things in heaps of data.
That decision should then propagate back down to some lower level to actually move the units together and get them to attack. This is extremely hard to do, as the possible number of actions (degrees of freedom) is immense.
One AA replay at the moment is about a megabyte, which is far too much data; it needs to be simplified a lot before a neural network can start looking for the important factors.

But it'd be easier to test with a huge number of quick demos from a very simple mod: say only one or two types of unit, only metal as a resource, and just passable/impassable terrain. :) Say, a conbot that can build mexxes, conbots or AKs, all taking equal time and metal. But I don't know whether such an overly simple mod would have only one way of winning, making it irrelevant, or whether it would come down to luck, guessing the right time to attack and whether your enemy has defence or not. That would be quickly determined by better players than me.

One helper is of course including the unit stats in the data; I bet they gain relevance very quickly, knowing roughly which unit counters which (I imagine KAI already does this).
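A minimal sketch of that layered division, with invented thresholds and labels: a low level condenses raw per-unit data into a few coarse facts, and a higher level decides using only those facts:

[code]
def low_level_summary(units):
    """units: list of dicts like {"type": "gunship", "side": "enemy"}"""
    enemy_air = any(u["side"] == "enemy" and u["type"] in ("gunship", "fighter")
                    for u in units)
    own_tanks = sum(1 for u in units if u["side"] == "own" and u["type"] == "tank")
    own_aa    = sum(1 for u in units if u["side"] == "own" and u["type"] == "aa")
    return {"enemy_has_air": enemy_air, "own_tanks": own_tanks, "own_aa": own_aa}

def high_level_decision(summary):
    if summary["enemy_has_air"] and summary["own_aa"] == 0:
        return "delay attack, build AA first"
    if summary["own_tanks"] >= 10:
        return "attack with combined group"
    return "keep massing"

units = [{"type": "tank", "side": "own"}] * 12 + [{"type": "gunship", "side": "enemy"}]
print(high_level_decision(low_level_summary(units)))   # -> "delay attack, build AA first"
[/code]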
User avatar
AF
AI Developer
Posts: 20687
Joined: 14 Sep 2004, 11:32

Post by AF »

I think you're all falling into traps here in your trains of thought.

This can be done without neural networks and genetic algorithms.

And if you did use them, it wouldn't be a single gigantic network, as that would be a silly implementation.

I can't wait to see what Argh has to say about all this.

You're all forgetting another thing: human babies' brains aren't randomly wired meshes that are trained into people; they are prewired and constantly rewiring until death.