Could AI copy PlayerStyle out of a Replay
AF wrote: I think you're all falling into traps here in your thought trails.

Nah, we just like discussing it (at least I did).

Yes, as I pointed out earlier (implicitly, though; I didn't mention the implementation at all), before the talk about ANNs started: this can be done without neural networks and genetic algorithms.

AF wrote: You're all forgetting another thing. Human babies' brains aren't randomly wired meshes that are trained into people; they are prewired and constantly rewiring until death.

That makes it even harder: a self-reorganizing ANN. Let alone the boring work of prewiring the network.

Either way, my point is:
Tobi wrote: IMHO a hand tuned AI, possibly a finite state machine, is far more efficient (considering both CPU & memory usage and development time) than any approach using neural networks.
P3374H wrote: Seems to me there's some learning AI in a game somewhere... but I can't remember any specifics.

I think Arena Wars has one. It has 4 units and a small maximum number of units based on money, something like $1000, which equates to about 5 units. I've also heard that Superpower has one and it didn't really work. Not sure about that, though.
Erm... I can't wait to see what Argh has to say about all this.
I have beaten all of this stuff to death. To DEATH. Over and over again. I see no point in rehashing all of the fundamental arguments I raised. For those who can't be bothered to read my previous rants, here's a very short synopsis. Yup, this is the short version...
1. What is "learning", from a machine's perspective? How do we turn that into any practical application? I have yet to read anybody's take on this that cannot be quickly and logically disproven, given different starting conditions.
2. What is worth "knowing" is a lot different than what is likely to emerge from "learning". What humans know is really just a gestalt of pattern recognition- this is an acceptable argument. However, what humans are observing, and drawing conclusions from, is quite a bit more complicated than it first appears.
3. Statistics are a pleasant but ultimately futile trap. Statistical approaches inevitably make so many fundamental assumptions about starting conditions that they are almost entirely useless. Humans do not really store statistical anything in their minds. They do store odds, but only in a general way. And, as experiment after experiment has shown, they store data that is specific to a problem being solved. For example, chess grandmasters aren't any better than average test subjects at feats of memorization... however, they are very good at remembering specific combinations of complex structures from chess. The same has been shown with many other specialists.
4. AIs that are going to defeat humans should cheat. Period.
5. No AI designer is going to possibly understand a game design as well as the designers themselves. The best possible AI is something that modders can tune to the specific needs of their game design, not something that magically "figures out" how a game design actually works.
6. Show me a good "learning" AI that plays a decent game of AA, and I will show you a "learning" AI that will suck at NanoBlobs. I have watched the "learning" approaches come and go here for months- they have all died of their own hubris, and the authors inevitably fade away, having viewed the great mountain and come away humbled. AAI's painful demise was especially interesting, if not terribly satisfying- I had hopes that Submarine would stay true to his initial design goals, so that we'd have a pure "learning AI" to compare all other design strategies to... but, as the competition grew fiercer, he couldn't resist adding more specificity, which made it harder to support other mods, and so forth, until AAI was, basically, just an AI designed to play AA with mediocre ability, and play anything else badly. Or crash.

7. If your AI can't beat a human at NanoBlobs, then it's not likely to beat a human at anything more complex. I challenge any would-be AI designer to design an AI that gives me more than a moderately-challenging game. The closest I have gotten, in that regard, has been NTAI, now that it's tuned decently (it will kill you, if you're a newbie), and KAI when NanoBlobs was a lot less subtle and allowed for more raw speed than it currently does. I was fascinated by how just slowing down the speed at which things could be built made such a huge difference in how well KAI could play, though- once things were slowed down to a level a human could deal with, it was very easily defeated, all other factors being equal.
At this time, NTAI is the only AI that can even play the mod that was designed to give AI developers a very focused, problem-specific and balanced testbed, one meant to give humans and AIs roughly even chances.
I think this is a real shame. If would-be AI developers focused on making their AIs capable of playing something small and tight like that, they'd get a really close-up view of the problems they face.
If you cannot design an AI to beat a human at a mod that's designed to give AIs specific advantages in the areas where they exceed human abilities (such as multitasking, concentrating on multiple points of observation, and cheating methods such as map-hacking)... then what are your chances of defeating a human at anything complex? Zero.
I have played every AI for Spring at least once. Most weren't even worth a second glance, because they keep going down the same roads:
1. They are severely state-driven, and have no dynamic command structures to give them flexibility and randomness. This makes them extremely hard to re-task for new mods, and they fail utterly when confronted with truly new game designs.
NTAI had this problem until XE8, and tbh, when I finally got this latest buildtree done, I yelped with glee, because I'd been simulating flexibility with very mushy buildtrees, and now my simulated flexibility is statistically accurate and presents a balanced long-term set of curves to work with (see the sketch after this list). In the long term, NTAI now builds armies that make at least some sense, even against humans. Perfect? No. There are details that aren't yet there. But it's still 100% better than before.
2. They fail to understand that units are not just a collection of statistics in a TDF file, but also contain many pieces of information that aren't available directly through the AI Interface code, such as the rate of turn on turrets and the other "small details" that contribute HUGELY to the relative balance of units!
3. They don't spend nearly enough energy on understanding the role that maps play in games. KAI's advanced topography/chokepoint scanner is probably the single most interesting piece of new AI technology developed for Spring, and I am hoping he'll be nice enough to give us the source, just so that that can become part of the body of knowledge.
4. They don't provide decent, or even any, support for modders. AAI and NTAI were competing on this front for awhile, but NTAI pulled waaaaay ahead several versions ago, and has continued to improve.
5. They're built by people who come at the problems of AI in prejudiced ways, instead of studying the issues like proper scientists. It's one thing to think the Bertha is currently OP vs. Tim in AA... it's another thing entirely to bring that sort of fuzzy, not-necessarily-accurate worldview and impose it on a piece of state-driven logic.
6. They forget that an AI that keeps CPU usage to a bare minimum while being stable and relatively decent is a lot more useful, and will attract a much greater audience, than an AI that is theoretically going to start improving upon Plato and Kasparov in 2100 but requires a super-computer to move gigs of datasets around, crashes constantly, and is generally uncongenial.
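The sketch promised back in point 1: a minimal illustration of what probability-weighted build selection can look like, so the long-term build proportions follow a designed curve instead of a fixed, predictable order. This is a generic Python sketch; the unit names and weights are made up, and it is not NTAI's actual buildtree format.
Code: Select all
import random

# Hypothetical long-run build proportions; not NTAI's real format.
BUILD_WEIGHTS = {"scout": 1, "raider": 4, "tank": 3, "artillery": 2}

def next_build(weights=BUILD_WEIGHTS):
    """Pick the next unit at random, biased by the weights, so the
    long-term mix of the army tracks the designed ratios."""
    units = list(weights)
    return random.choices(units, weights=[weights[u] for u in units])[0]

builds = [next_build() for _ in range(10000)]
print({u: builds.count(u) for u in BUILD_WEIGHTS})  # roughly 1000/4000/3000/2000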
... let's see... did I miss any of my traditional ranting points on this subject? Hmm....
To sum up...
AIs should be flexibly designed, because Spring mods are constantly being altered, and new game designs continue to emerge. Some of the game designs I've checked out recently around here have been pretty neato new things, conceptually speaking, and there's exactly one AI that can handle them to some extent.
AIs should not attempt to "learn" until somebody comes up with a proof of what "learning" would mean that is both useful and defensible. Personally, I think that road is a waste of time, and I hate reading the same set of tired arguments brought forth. No amount of repetitious random-bashing of all possible units on all possible terrains will equal a decent human player's total gestalt. Period. This is not chess, where the moves are ultimately calculable. There are too many random factors and possible choices to predict ahead in time with decisive results, once your simulation becomes complex enough. While it may be possible to make a Weasel kill a Commander by obeying some simple state-driven rules, it is not possible for that Weasel to always win.
Lastly... AIs that can't work with NanoBlobs will not work with anything else that's radically different from AA/XTA. AI designers who want to be involved with the modding community should keep in mind that the vast majority of us developer-types... are working on things that are fundamentally different from OTA in various ways. Some mods keep OTA's economic assumptions intact... I strongly suspect that will change as soon as we have more options. And very few of them preserve OTA's gameplay assumptions, because we modders have already been there and done that.

Tobi wrote: IMHO a hand tuned AI, possibly a finite state machine, is far more efficient (considering both CPU & memory usage and development time) than any approach using neural networks.

More efficient by at least six orders of magnitude in dev time, and likely much, much more for computer requirements, IMO.
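For a sense of scale: a hand-tuned finite state machine of the kind Tobi describes can fit in a couple of dozen lines. The states and thresholds below are a hypothetical Python sketch, not any particular Spring AI.
Code: Select all
from enum import Enum, auto

class State(Enum):
    BUILD_ECONOMY = auto()
    BUILD_ARMY = auto()
    ATTACK = auto()

class FSMBot:
    """A hand-tuned state machine; every threshold is a knob a modder can tweak."""

    def __init__(self):
        self.state = State.BUILD_ECONOMY

    def update(self, metal_income, army_size):
        # Transitions are cheap comparisons: no training phase, no datasets.
        if self.state is State.BUILD_ECONOMY and metal_income >= 10:
            self.state = State.BUILD_ARMY
        elif self.state is State.BUILD_ARMY and army_size >= 20:
            self.state = State.ATTACK
        elif self.state is State.ATTACK and army_size < 5:
            self.state = State.BUILD_ECONOMY  # regroup after heavy losses
        return self.state

bot = FSMBot()
print(bot.update(metal_income=12, army_size=0))  # State.BUILD_ARMY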
It's all about the abstraction levels (correct word? I'll use it anyway!). If you make an AI with pretty much no abstraction, you'll have to code thousands of scripts, buildtrees and cfg files. It's an incredible amount of work, but really easy work. If you do the opposite, you have a very general AI that can play on any setting and mod, and adapt by itself to engine changes or anything else you throw at it. It takes a lot less work (i.e. a finite amount of work for infinite compatibility), but it is almost impossible for a human mind to conceive and program, hence it's not been done yet. For KAI I tried the middle ground, so it doesn't take too long to build up broad support, yet I am not bored/overwhelmed by the difficulty of the project.
Argh, I would very much like you to read the Epic think tank, as it puts a whole new spin on everything.
Learning AI is what people say they want. Learning AI is what they work towards. What they actually want is Adaptive AI. Learning AI needs context and concepts applied, and it needs its learning to permeate everything it does, which, point blank, it doesn't.

AAI needs teaching that lvl 2 > lvl 1. A human knows this already, as soon as they see that there's a lvl 2 tier. They also have a vague sense that higher prices mean higher firepower.

In Epic I believe I have solved all these problems, including a core engine that takes adaptivity to the extreme but is not based on learning. Although it may have some vague sense of knowledge, it is not statistical knowledge, and it is not the be-all-end-all decision maker that it is in the existing learning systems. The system focuses on being situationally aware and multifaceted through consequence.

For those of you in the Epic think tank: there's a poll ongoing in the Epic forum asking whether it should be opened up, yet only lindir has voted. This means that by the end of the week the population of that forum could fall to 3-4 people, rather than the 10-15 people who currently have access.
I will take the time to view and digest whatever you've got in terms of theory for Epic when NanoBlobs 0.6 is released. Right now, I'm concentrating almost entirely on getting it ready for Spring 0.73b, and that's taking all of my free time atm. I have a new unit being added to the mix (an animation-concept demo for newbies, mainly), and getting it in, along with all of the balancing that a new unit always requires, and getting the new special FX stuff done... is pretty much all I can get to for a while. I intend to finish skinning/scripting the new thing this weekend... after that, it all kind of depends on how many more things I'm going to need to find/hunt/kill in Spring to help this next release be the best it can possibly be... 0.73b is awesome in potential, and just keeps looking better and better as it advances to a final state...
IMHO people don't want AIs that learn everything. It's fun, yes, if you own the AI with a certain tactic and it later owns you with the same tactic. I had that once in Tiberian Sun. I obviously can't be sure it was learning; it may have been a random sequence of events.
What happened was I owned the AI by sending a subterranean APC loaded with 5 engineers to the middle of his base, near his construction yard. I unloaded it there & captured & sold his construction yard.
What happens ten minutes later in the game? Indeed, I get owned, because the AI sent a subterranean APC loaded with 5 engineers to the middle of my base, near my construction yard. (Yes, I should have built pavement or sensor arrays...) He unloaded it there & captured & sold my construction yard.
Coincidence or learning?
I suspect coincidence, but I was still impressed, because it looked like learning.
Either way, my point is: people don't want AIs that have to learn everything. They want AIs that play a good game and don't cheat too obviously (I always hated the C&C/RA1/RA2/TS AIs because they had buildTime=0, used a maphack, etc.), and if it occasionally learns a bit, that's a nice addition... nothing more, nothing less...
E: AI devs do want learning AIs, though. I know the feeling when you write an AI and it actually outperforms you with a tactic you didn't hardcode into it.
Well, you'd only use neural networks to sort out what data is essential in what context and what isn't, so you don't have to do that by hand; you only do the preprocessing yourself.
Say a neural network recognizes grayscale textures, always from a 100x100 pixel sample (it was taught four textures and it has four outputs: it will give [1 0 0 0] if the given texture is the first, [0 0 0 1] if it's the last, etc.). But you don't feed the network all 10,000 or more bytes of data; you preprocess it, calculate various things, and maybe give it only 50 bytes of data. And it works: it really recognizes the textures! It would be extremely painful to search for the relevant variables and their combinations by hand.
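A minimal sketch of that kind of hand-written preprocessing, in Python with numpy. The choice of features (block means plus a coarse histogram) is made up for illustration; the point is only that 10,000 pixels become 50 numbers before the network ever sees them.
Code: Select all
import numpy as np

def preprocess(img):
    # Reduce a 100x100 grayscale texture (values 0-255) to 50 numbers:
    # 25 block means (a 5x5 grid of 20x20 blocks) plus a 25-bin histogram.
    img = img.astype(np.float64) / 255.0
    blocks = img.reshape(5, 20, 5, 20).mean(axis=(1, 3)).ravel()  # 25 features
    hist, _ = np.histogram(img, bins=25, range=(0.0, 1.0))
    return np.concatenate([blocks, hist / img.size])              # shape (50,)

sample = np.random.randint(0, 256, size=(100, 100))
print(preprocess(sample).shape)  # (50,)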
There have even been experiments where neural networks were taught to tell the difference between a man and a woman from a passport photo, and afterwards they were as good as, if not better than, humans at recognizing gender from photos.
Training works with the backpropagation algorithm, which kind of starts from the answer the network gives: if it was right, it goes back and reinforces the weights that emphasized the given answer and weakens the ones that pushed in the opposite direction; if the answer was wrong, it does the opposite. Backpropagation is kind of the reverse of how the network forms its answer.
No genetic algorithm involved.
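And a toy version of the training itself: a 50-input, 4-output network with one hidden layer, trained by plain backpropagation on a single made-up "texture" feature vector. The layer sizes, learning rate, and iteration count are arbitrary choices for the sketch.
Code: Select all
import numpy as np

rng = np.random.default_rng(0)

# 50 inputs -> 16 hidden -> 4 outputs; sizes match the texture example.
W1 = rng.normal(0, 0.1, (50, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 4));  b2 = np.zeros(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target, lr=0.5):
    """One backpropagation step: forward pass, then push the error
    backwards and nudge every weight against its error gradient."""
    global W1, b1, W2, b2
    # Forward pass: how the network forms its answer.
    h = sigmoid(x @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backward pass: start from the answer and propagate the error back.
    dy = (y - target) * y * (1 - y)   # output-layer error signal
    dh = (dy @ W2.T) * h * (1 - h)    # hidden-layer error signal
    W2 -= lr * np.outer(h, dy); b2 -= lr * dy
    W1 -= lr * np.outer(x, dh); b1 -= lr * dh
    return y

x = rng.random(50)                        # a stand-in "texture" feature vector
target = np.array([1.0, 0.0, 0.0, 0.0])   # teach it: this is texture number one
for _ in range(200):
    y = train_step(x, target)
print(np.round(y, 2))  # approaches [1 0 0 0]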
I think that a learning AI has to go through the same steps of evolution that humans did. The first step we know is making an "animal" that reacts, unreflected, to certain states, having the primary goal of keeping its system intact: copy the working system several times, protect the copies of its surviving strategies, and let one third of them have a mutation in their "vars". The rest is selection, simply by surviving games in its "body" (units and buildings). If I got Argh right, it is impossible to do something human-like, but something on the animal level will always have stupid phases until it finds its way into a new "body".
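A rough Python sketch of that generational scheme, with a placeholder fitness function standing in for "surviving games"; all names, numbers, and the fitness itself are hypothetical.
Code: Select all
import random

def fitness(vars_):
    # Placeholder for "surviving games": reward vars near an arbitrary optimum.
    return -sum((v - 0.7) ** 2 for v in vars_)

def evolve(population, mutation_rate=1/3, sigma=0.1):
    population.sort(key=fitness, reverse=True)
    survivors = population[: len(population) // 2]   # selection by "survival"
    children = []
    for parent in survivors:
        child = list(parent)
        for i in range(len(child)):
            if random.random() < mutation_rate:      # roughly "one third mutate"
                child[i] += random.gauss(0, sigma)
        children.append(child)
    return survivors + children

population = [[random.random() for _ in range(8)] for _ in range(20)]
for _ in range(50):
    population = evolve(population)
print(round(fitness(population[0]), 4))  # climbs toward 0 over the generations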
That has its problems too.
Every single new stage would need to be more complex in order to exhibit more complex behaviour, and for such a generational system to be used you'd need multiple offspring to evaluate. Simply modifying values won't lead to any more complex behaviour unless there are trillions of these values, at which point the number of offspring that can be evaluated at a time is 1, and that's an issue: the number of offspring needed quickly becomes too great for progression to continue at a feasible rate.

It isn't very good at reacting to situations either. What it learns makes it perfect for the same game over and over again; stick it in a new situation and it fails miserably, which shouldn't happen in a true learning AI.
pacman AI
Code: Select all
if (pacman is in LOS) {
    move towards pacman
} else {
    randomly pick a direction to move in
}