Adding motivations and drives?

Here is where ideas can be collected for the skirmish AI in development

hughperkins
AI Developer
Posts: 836
Joined: 17 Oct 2006, 04:14

Adding motivations and drives?

Post by hughperkins »

This could be a little "way out there". It's half serious and half a thought experiment. It's too early to say how practical this is for use within a Spring AI, but I'm not ruling it out either ;-)

The future of AIs could be intuition, feelings, emotions. It could be useful to integrate a naive version of this into Spring AIs.

We give the AI a number of drives/hungers/motivations, and it seeks to maximize or minimize them, according to whether they are good or bad.

Examples of these drives are:
- hunger, for metal. Seeks to increase metal income, and total metal
- thirst, for energy. Seeks to increase energy income, and total energy
- greed, for material things. Wants to build and expand
- nakedness. This drives defense building. Buildings make the AI feel less naked, but the nakedness increases roughly linearly with time, and also in response to the enemy. Seeing a nuclear silo in the middle of the enemy base makes the AI feel intensely vulnerable and naked until either the silo is destroyed or an anti-nuke is built.
- appetite for destruction. The AI likes destroying enemy units. The more valuable the unit the better. Value is subjective and varies from AI to AI
- curiosity. The AI likes to explore, and to see what the other AIs are doing
- domination. Drives destroying the other AI and taking their land

Clearly, the relative weighting of these drives is arbitrary. We could have multiple AIs with different configurations. A cowardly AI will overrate nakedness. An aggressive AI will overrate appetite for destruction. A greedy AI will privilege metal, energy and expansion at the expense of all else.

One could imagine a genetic code assigned to each AI, which encodes the relative importance of the drives, and the values of each unit or unit property, eg weapon range.

We could generate a bunch of genetic codes, play them off against each other and either apply a genetic algorithm or dump the results into an SVM.
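The generate-and-play-off scheme above could be sketched roughly as follows. Everything here is invented for illustration (drive names, weights, the toy fitness function); a real version would score each genome by playing actual Spring games rather than evaluating a lambda:

```python
import random

# Each AI's "genetic code" is a weight per drive, in [0, 1].
DRIVES = ["hunger", "thirst", "greed", "nakedness",
          "destruction", "curiosity", "domination"]

def random_genome(rng):
    """One candidate AI personality: a weight per drive."""
    return {d: rng.random() for d in DRIVES}

def mutate(genome, rng, sigma=0.1):
    """Perturb each weight slightly, clamped back into [0, 1]."""
    return {d: min(1.0, max(0.0, w + rng.gauss(0.0, sigma)))
            for d, w in genome.items()}

def evolve(fitness, generations=20, pop_size=8, seed=0):
    """Play genomes off against each other and keep the fittest.

    `fitness` stands in for running real games; here it is any
    callable mapping a genome to a score.
    """
    rng = random.Random(seed)
    pop = [random_genome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the top half
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

# Toy fitness: pretend destructive, curious AIs win more often.
best = evolve(lambda g: g["destruction"] + 0.5 * g["curiosity"])
```

The SVM alternative would instead treat each genome as a feature vector and the game outcome as its label, then train a classifier to predict which personality regions win.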
Peet
Malcontent
Posts: 4384
Joined: 27 Feb 2006, 22:04

Post by Peet »

I say we kill him before he founds the Core.
Kloot
Spring Developer
Posts: 1867
Joined: 08 Oct 2006, 16:58

Post by Kloot »

What you're describing isn't true belief-modelling but rather hillclimbing search over the outcome (win/lose) of games based on preset play-parameters. The performance of such a search is affected by the shape of the search-space (eg. if there's one a priori combination of drives that always leads to victory, the search process will converge on it given enough iterations) and by the nature of the fitness function used to evaluate each outcome (which should look at far more data than just whether the AI won or not). However, since RTS games are in general highly dynamic with no clear-cut winning paths (except if they're unbalanced), the search will essentially never find combinations that stand out and will just keep wandering between local maxima. It's an interesting idea for testing various playstyles among AIs, but not so much for creating challenging opponents.
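The local-maxima point can be illustrated with a toy sketch (the fitness landscape below is invented, not a real game evaluation): plain hillclimbing over a parameter stalls on the nearest peak and never crosses the valley to the global one.

```python
import random

def fitness(x):
    # Two peaks: a local maximum at x=2 (height 3) and the
    # global one at x=8 (height 5), separated by a valley.
    return max(3.0 - (x - 2.0) ** 2, 5.0 - 0.5 * (x - 8.0) ** 2)

def hillclimb(x, steps=200, step_size=0.3, seed=1):
    """Accept a random nearby candidate only if it improves fitness."""
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = x + rng.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

# Starting near the wrong peak, the search converges on it and stays:
# small steps can never descend into the valley, so x=8 is unreachable.
stuck = hillclimb(0.0)
```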
PicassoCT
Journeywar Developer & Mapper
Posts: 10454
Joined: 24 Jan 2006, 21:12

Post by PicassoCT »

What would really be cool, to make an AI more aware of possible weak spots, would be a constant "rain" of attack illusions (seemingly coming from enemy players) that only disappear from the AI's logic if real attacks happen. That way the AI would always investigate possible breaches in its defenses, because it thinks something is there.

Another cool thing would be if the AI had something like a "Fear for Existence" parameter that influences how much metal & energy are spent on attack or expansion.

And if it could remember "places where it burned its hands" - so that a ConVehic would not start a Mex again and again until it is destroyed. (C#AI)

But after all this wishing, let's say it out loud - the Spring AI devs do their best, and far more, so just wishing without thanking is evil! Thanks for all the hours of fun.. and for giving me somebody who is willing to test my maps against me, even if they are 100 MB in size :)
hughperkins
AI Developer
Posts: 836
Joined: 17 Oct 2006, 04:14

Post by hughperkins »

It's ok to find local maxima. An effective AI can switch between multiple opening gambits to provide uncertainty to the opposition. An AI that always does the same thing is a dead AI. An AI that switches randomly between three contrasting strategies stands a chance, even if each strategy on its own is quite naive.

Ideally, there could be a specific drive for this:

*boredom. Boredom drives the AI to not do the same thing over and over again. It pushes for multiple opening gambits. It prevents centration. It causes the AI to surrender when it can't do anything useful any more.

In practice, boredom is a somewhat nebulous concept and needs some refining. Maybe tie it in with learning??? Boredom is inversely proportional to the rate of learning?
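The "boredom inversely proportional to the rate of learning" idea could be refined into something like the following sketch. All names and constants here are made up for illustration; "new information" might be newly scouted map cells or first-seen enemy unit types:

```python
class BoredomDrive:
    """Boredom rises as the recent rate of learning decays."""

    def __init__(self, decay=0.9, eps=1e-3):
        self.decay = decay            # how fast old learning is forgotten
        self.eps = eps                # avoids division by zero
        self.learning_rate = 0.0      # exponential moving average

    def observe(self, new_information):
        """Feed in how much new information the last action produced."""
        self.learning_rate = (self.decay * self.learning_rate
                              + (1.0 - self.decay) * new_information)

    @property
    def boredom(self):
        # Inversely proportional to the current rate of learning.
        return 1.0 / (self.learning_rate + self.eps)

drive = BoredomDrive()
drive.observe(10.0)          # early game: lots of new territory scouted
early = drive.boredom
for _ in range(50):
    drive.observe(0.0)       # later: nothing new is being learned
late = drive.boredom         # boredom has grown; time to switch strategy
```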
AF
AI Developer
Posts: 20687
Joined: 14 Sep 2004, 11:32

Post by AF »

Boredom is a pointless drive for an AI because it encourages impatience and petulance. It detracts from cold efficiency, and it flies in the face of the fact that with enough time, patience and skill, anyone can turn around a battle and win against insurmountable odds.

Picasso, with those mexes, I think you'll find NTai did that since XE9RC12.

Hugh, I suggest you look further back through this forum. I wrote a thread, 'NTai X', discussing unit emotions and their applications and how they combine to create overall AI emotion and stances, as well as inheritance and personalities.
bamb
Posts: 350
Joined: 04 Apr 2006, 14:20

Post by bamb »

"The intended plan is a tree of desires".

That's imo one way of thinking about how the AI should behave: re-evaluating that tree every once in a while to see what it will do next. But it should have a plan with a structure. Not a rigid plan, but a tree of desires. Decisions are affected by desires.

It's a bitch to code probably... :(
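A minimal sketch of the "tree of desires" idea: internal nodes weight their subtrees by how strongly that branch is wanted, leaves score concrete actions from the current game state, and the AI periodically re-evaluates the tree to pick its next action. The structure and all numbers below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Desire:
    name: str
    weight: float = 1.0                 # how strongly this branch is wanted
    urgency: float = 0.0                # leaf score from current game state
    children: list = field(default_factory=list)

    def best_action(self):
        """Return (score, action) of the most desired concrete action."""
        if not self.children:
            return (self.weight * self.urgency, self.name)
        score, name = max(c.best_action() for c in self.children)
        return (self.weight * score, name)

# A tiny example tree, re-evaluated whenever urgencies change.
plan = Desire("win", children=[
    Desire("economy", weight=1.2, children=[
        Desire("build mex", urgency=0.7),
        Desire("build solar", urgency=0.4),
    ]),
    Desire("safety", weight=0.8, children=[
        Desire("build llt", urgency=0.9),
    ]),
])

score, action = plan.best_action()
```

Because only the urgencies at the leaves change between re-evaluations, the tree itself stays stable while the chosen action shifts with the game state, which is roughly the "not rigid, but structured" plan described above.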
Dragon45
Posts: 2883
Joined: 16 Aug 2004, 04:36

Post by Dragon45 »

As always I'm going to suggest a multi-agent alternative :P

Meaning that each unit type has its own "personality", I guess you could call it in this thread - assault units will attack if they have comrades nearby, construction units will build metal-producing structures if there's low metal, terraform if there's heavily deformed ground nearby, etc etc - and patrol if there's nothing else to do.


Hell, you could implement a genetic algorithm if you:
1) arbitrarily give a set of con units different fuzzy-logic weightings for each behavior (some prioritize building metal more, others guarding more, etc)
2) add them to their own build menus
3) when they build, pass on a slightly mutated version of their personality to their 'child'
4) keep going
5) WHEEE

For assault units, their personalities might be based around pack tendency, movement/positioning, retreat tendency, and choice of target. To pass on their traits, since they can't build, keep an index of veterancy and accomplished goals, and randomly pass on personalities to the next assault units being built, with the highest "effectivenesses" (based on the veterancy/goals index) having the highest chances of being passed on.

The benefit of this method is that you wouldn't have to start new games to breed effective behaviors; just keep the same game running for a week (multiple such AIs against each other), or until someone wins, and eventually you'd have some emergent AIs that should be pretty effective.
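The in-game breeding scheme for con units (steps 1-4 above) could be sketched like this. The class name, behavior list, and mutation parameters are all invented for illustration; a real version would hook into the Spring AI interface's unit-created callbacks:

```python
import random

rng = random.Random(42)

BEHAVIORS = ["build_metal", "build_energy", "guard", "terraform", "patrol"]

class ConUnit:
    def __init__(self, personality):
        self.personality = personality   # behavior -> priority weight

    def choose_behavior(self):
        # Fuzzy choice: higher-weighted behaviors are picked more often.
        names = list(self.personality)
        weights = [self.personality[n] for n in names]
        return rng.choices(names, weights=weights)[0]

    def build_child(self, sigma=0.05):
        # The 'child' inherits a slightly mutated copy of the parent's
        # personality, with a floor so no behavior's weight hits zero.
        mutated = {b: max(0.01, w + rng.gauss(0.0, sigma))
                   for b, w in self.personality.items()}
        return ConUnit(mutated)

# Ten generations of con units building con units within one game.
ancestor = ConUnit({b: 1.0 for b in BEHAVIORS})
descendant = ancestor
for _ in range(10):
    descendant = descendant.build_child()
```

Selection then happens implicitly: lineages whose weightings lead to dead builders stop reproducing, which is what makes the week-long single game work as a breeding run.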
AF
AI Developer
Posts: 20687
Joined: 14 Sep 2004, 11:32

Post by AF »

That, Dragon, is even closer to what was discussed in the NTai X thread than hughperkins' stuff. Only NTai X expanded more on how emotions interacted, how they all came together, how they represented players and the AI as a whole, how they applied to the map, and other factors such as influence between units, and even a decision-making process.
Dragon45
Posts: 2883
Joined: 16 Aug 2004, 04:36

Post by Dragon45 »

The problem with some multi-agent implementations is that they don't have enough faith in emergence. In other words, they try to force certain aspects of personality onto an AI.

The rules should be simple, and emergent properties will take care of themselves...
bamb
Posts: 350
Joined: 04 Apr 2006, 14:20

Post by bamb »

Well, for example, per-builder building orders with agents really suck imo; building has to be centrally planned.

If you're a fresh conbot straight outta the factory, you damn well want to have some advice on what to do and not start soloing.
AF
AI Developer
Posts: 20687
Joined: 14 Sep 2004, 11:32

Post by AF »

Soloing when guided by globalized rule sets can be very powerful. I should know - I was the first person to do it, I still use it today and get nice results, and in some cases it has its own emergent behaviour. A similar system is used in SAI, OTAI, the first and second KAI rewrites, and some other projects.