Thesis: machine learning AI for spring 1944?

Here is where ideas can be collected for the skirmish AI in development

Fil
Posts: 6
Joined: 13 Nov 2015, 11:52

Post by Fil » 13 Nov 2015, 13:21

Hi guys,

I'm new to these forums; I'm here looking for some advice from you AI programmer veterans for my thesis project :)

I'm currently at the end of a second-level university degree in industrial engineering, planning to write my thesis on an ML implementation in RTS games.
It's not strictly "my business", but working on a game AI is something I'd really like to do if I get the chance in the future.
Moreover, I know it's not the best technique to use in a videogame, but that's the setting of my thesis, so results will follow :D

I would like to implement some reinforcement learning in the Spring 1944 game.

The idea is: put the AI in a game whose rules it doesn't know, and see if it can learn some good "tactical behaviour" by trial and error.
To clarify, I'm skipping the economy part to focus only on tactics (e.g. creating a setting that pseudo-randomly spawns units at the start).


For each unit type we would like to obtain the tactical elements that create the most favourable combat situation.

A tactical situation for a single unit is composed of a mix of tactical elements, like current unit stats, enemy types and distances, distance to allied units, etc. Every time a unit has a positive or negative result (hitting or being hit), the current tactical situation is rewarded or punished in all of its components.

At the end we could compute statistics on which tactical elements have been most frequent in winning situations, and adopt them as the unit's "preferences".

To simplify: we might end up noticing that a light tank is often damaged or destroyed when the parameter "distance to infantry" is "very close", and this preference is then used in the future to improve its AI behaviour.

In this sense the main part would be optimizing the state space by creating a definition of state that:
- represents the tactical situation for the current unit "well enough".
- is "light" -> uses distance instead of coordinates (that's what we care about tactically), discretizes distance (no floating point), etc.
- is map-independent -> the obtained unit behaviour should be tied as little as possible to a particular map.
- is "modular", meaning we can extract the unit-type preference for each single component, and then mix them again to obtain a general behaviour model for that unit type.

A direct consequence of this "per unit" approach is that we are not creating any "upper level" strategy that coordinates multiple units; instead, I expect some coordination to emerge from individual unit behaviour.

Thanks a lot for your attention; please let me know if this project makes sense, what the main drawbacks are, and how difficult it would be.
In particular I don't know (having no experience with Spring) how difficult it would be to extract game data for the AI, for example unit elevation, enemy distance, knowing if I get shot and by whom, etc.
I have reasonable programming skills and 3-4 months for completion.

Thank you all guys :)
Last edited by Fil on 13 Nov 2015, 18:28, edited 3 times in total.

Nemo
Spring 1944 Developer
Posts: 1376
Joined: 30 Jan 2005, 19:44

Post by Nemo » 13 Nov 2015, 17:37

Hi Fil!

I'm not deeply acquainted with Spring AI programming, but here are some general thoughts (I'm one of the S44 devs).

I think the primary challenge will be just as you stated: the search space is enormous, full of local optima, and the primary training feedback mechanism/evolutionary pressure is highly dependent on the other player. Generally speaking this is probably ok, you're writing a thesis, it's supposed to be hard :)

In technical terms, Spring provides two basic ways of writing AI: one of them is the Native AI interface (where 'native' means C++/Java) and another is LuaAI.

The native interface must be compiled against Spring: it is (of course) fast, but access to game-specific internals is much more limited. LuaAIs are simply Lua scripts, and can interface with the game very easily. For example, it would be very difficult for a native AI to understand the S:44 mechanic of infantry suppression/pinning, but a LuaAI can easily get that information using the normal Lua APIs (see here: https://springrts.com/wiki/Lua_Scripting), which provide access to all of the information you mentioned.

As it happens, S44 actually already has an AI framework waiting for some more intelligence about combat -- one of the former lead developers of Spring wrote a simple framework for Lua AIs. This might be a good starting point so that you don't need to worry as much about the 'framework' bits, but you should evaluate it on your own terms. Here's a link to CRAIG's code, and here is the combat module (currently just 'build enough units to form a group, then order them towards an enemy HQ').

One potential difficulty is that implementing the AI directly in Spring Lua tools pretty much precludes online learning with any really intensive algorithms: you'll be sharing CPU time with the game Lua code and Spring's simulation. You might be able to get around this by opening a socket to an external service with a really fast implementation of ML algorithms (like http://torch.ch/) or by doing your processing offline entirely, and just writing the bits to save statistics for crunching/load generated behaviors and execute them.
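For the socket route, the external service could be as small as a loopback JSON exchange. Below is a minimal Python sketch, purely illustrative: the message format and the canned `engage_prob` reply are my assumptions, the real model would live behind `handler`, and the in-game side would speak the same protocol from Lua instead of Python.

```python
import json
import socket
import threading

def serve_once(host="127.0.0.1"):
    # stand-in for an external "ML service": accepts one connection,
    # reads a JSON feature vector, replies with a prediction
    srv = socket.socket()
    srv.bind((host, 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def handler():
        conn, _ = srv.accept()
        _features = json.loads(conn.recv(4096).decode())
        reply = {"engage_prob": 0.5}  # placeholder for a real model output
        conn.sendall(json.dumps(reply).encode())
        conn.close()
        srv.close()

    threading.Thread(target=handler, daemon=True).start()
    return port

def query(port, features):
    # what the AI side would do each time it wants a prediction
    c = socket.socket()
    c.connect(("127.0.0.1", port))
    c.sendall(json.dumps(features).encode())
    reply = json.loads(c.recv(4096).decode())
    c.close()
    return reply
```

The same request/reply shape works whether the heavy lifting happens online (Torch behind the socket) or offline (the service just logs features to disk for later crunching).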

Hope that helps! You are most welcome in IRC freenode #spring1944 or Spring lobby server #s44 (they're linked) if you want to explore this in a slightly more real-time conversation: I'm available most waking hours in GMT+1. Happy to keep answering here, as well :)

Post by Fil » 16 Nov 2015, 22:25

Hi Nemo!

Thanks a lot for your reply, you've been very kind! :)

I played some S44 games these days and it's lovely, so my compliments to the whole team.

Apart from being realistic and well modeled, it has some features that are definitely not often seen in other games, e.g. markers for enemy units that are heard but not in sight, drawing a line to define a path for your units to follow, or adjustable range and damage multipliers in the game settings.

Apart from playing Spring 1944, I started looking at some of the code behind it, but I'll probably need some time to digest all the info and get my feet wet, since I'm new to Spring and Lua scripting.

Anyway, it's a pleasure to be in contact with you and the team, and I think it's going to help a lot.

We'll hook up on the S44 channel one of these days, so we can talk over some ideas.

Thanks again :)
See you on #s44!

PauloMorfeo
Posts: 2004
Joined: 15 Dec 2004, 20:53

Post by PauloMorfeo » 04 Jan 2016, 00:56

Fil wrote:Hi guys,
... I'm here looking for some advice from you AI programmer veterans for my thesis project :) ... thesis on an ML implementation in RTS games.
Hi Fil, welcome to Spring.

I did a Computer Science final course project completely dedicated to machine learning for games and built a proof-of-concept skirmishing AI for Spring. Sorry I caught this thread so late. Are you still around? I'd like to have a chat with you.

Post by Fil » 04 Jan 2016, 02:17

Hi Paulo!

Thanks for your interest! : )

The project is still up and running... 4 days ago it had its baptism of fire, and did pretty well (considering it's not trained yet)... these days I'm doing some performance tuning, then more will come, so the project keeps moving!

For sure we can have a chat; every contribution is welcome, and you could certainly help me out!
There's even some intellectual work needed, if you like, for example improving the reward function (since I'm short on time, it's difficult for me to just sit there and "think", so ideas would be appreciated).

You can mostly find me on the #s44 channel... today I'll be out until late afternoon (some Ikea tourism with friends ahaha), but I'll write to you here (or by PM) or in #s44 when I have some free time... whatever you prefer! Also, what's your timezone? I'm from Italy, so GMT+1.

see ya! :)

Post by PauloMorfeo » 04 Jan 2016, 20:21

Fil wrote:...
You can mostly find me on the #s44 channel... today I'll be out until late afternoon (some Ikea tourism with friends ahaha), but I'll write to you here (or by PM) or in #s44 when I have some free time... whatever you prefer! Also, what's your timezone? I'm from Italy, so GMT+1.
...
If we have anything useful to discuss I propose we try to keep it in this thread so that it remains for others to see in the future (your post is not the first asking about evolutionary AI, so I reckon whatever we discuss here might be useful for someone else).

In the meantime, I'll try to reach you on Spring's "s44" IRC channel so I can have a chat with you, hoping you'll walk me through what you're doing, how you're doing it, etc. I am Portuguese, living in London, so your timezone minus one (GMT+0). I'll only be able to log on between 18:00 and 24:00 GMT (19:00 to 01:00 IT).

In Spring I run by the name "Paulada". Do you also run by the name "Fil" in Spring?

---
Evolutionary methods and AI overall are just absolutely... so interesting. My semester working with Genetic Algorithms and Neural Networks was the happiest period of my life that I can remember. Looking forward to learning what kind of interesting things you're involved with.

Post by Fil » 07 Jan 2016, 18:41

OK, first of all sorry for my late reply, but I was staying with some family friends and (I don't know why :?:) the Spring site was unreachable from my laptop, so I thought it was down. My apologies :roll:

I think you are absolutely right; it's good practice to store some knowledge/documentation in this thread, to keep it as a reference thread. Moreover, it's useful for me too to have some written documentation to consult when I need it, so I'll post some updates. Apart from this, I'm happy to see you are passionate about machine learning; I share the interest, and I really think this topic is mostly left to academic research, since it's "dangerous" to implement in a commercial product.
I assume you have already read my first post, so I'll skip the already-known facts.

THE REASON BEHIND THE PROJECT, BETTER EXPLAINED:

The main idea behind this work is to test a method to tackle high-complexity problems and get good results.

I start from the fact that strategic/tactical thinking is probably the most difficult objective for an AI, because it requires intelligence, which is the thing machines lack most, as opposed to other contexts (like FPS games) that are more "skill-based".

The focus on high complexity pleased me because I love historical/realism-focused games, and the more elements the gameplay adds, the more complex solving the problem gets.
In fact, a lot of research and AI competitions often try to solve relatively simple problems "very well", with a low number of units, some rock-paper-scissors reasoning, or simple terrain evaluation. In contrast, I aim to create a framework able to solve "less perfectly" problems with a high number of variables/elements, and which are consequently complex. That's the case, for example, for vehicles carrying more than one weapon type, each efficient against a particular type of enemy, or combined arms/terrain/visibility problems. For sure I will not be able to solve all of these problems, since my time is short, but I can create a proof of concept for some elements that can hopefully be extended to other problems.

THE IMPLEMENTATION:

The model is fully agent-based. This means it controls each unit individually, and I expect some coordination/good tactics to emerge from individual behaviours.

1) Stats recording

Currently, every time a unit kills another unit (or is killed by one), the event is recorded in a table, with additional details on the tactical elements we decide to record.

For now the only combat factor recorded/implemented is the distance from the enemy (the most important one, since it determines most "can attack / can be attacked" cases). Others may follow, like the relative angle between units or unit stats (e.g. is suppressed).

By the way, soon I'll substitute kill events with damage events, to be more precise when recording.

The so-called Combat Table (where values are stored) is structured this way:

"my unit type" -> "has killed / died by" -> "enemy unit type" -> "for this tactical element (e.g. distance)" -> "[element range]" -> X times.

Note: "killed" and "died by" are 2 different sections of the table, the first storing the kills our unit scored on enemy units, the second storing its deaths at the hands of that enemy type.
Note 2: "tactical elements" are table sections for the different factors we decide to record; currently only distance.

This way, every time 2 units face each other at distance D, we can extract from the Combat Table how many times we have been able to kill that enemy and how many times that enemy killed us (at that distance). So:

Unit X -> killed -> unit Y -> at distance -> 200m -> tot times
Unit X -> died by -> unit Y -> at distance -> 200m -> tot times

From these 2 values we can get a kill/death ratio, which summarizes the result of our past attempts to kill this enemy in this situation.
The KD ratio is:

R = K / (K + D) [0-1]

For example, K = 3 and D = 2 -> 3/(3+2) = 0.6, so a 60% success probability. This way we are also independent of the number of scores recorded (it works for 3 kills as well as for 4000).
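The Combat Table nesting and the ratio above can be sketched as follows. This is a Python illustration only (the project itself is in Lua); the flat tuple key and the neutral 0.5 value for never-seen situations are my own assumptions, not part of the original design.

```python
def record(table, my_type, outcome, enemy_type, element, bucket):
    # one counter per combination, mirroring the nesting:
    # "my unit type" -> "killed / died_by" -> "enemy unit type"
    # -> "tactical element" -> "[element range]" -> X times
    key = (my_type, outcome, enemy_type, element, bucket)
    table[key] = table.get(key, 0) + 1

def success_ratio(table, my_type, enemy_type, element, bucket):
    # R = K / (K + D), always in [0, 1]
    k = table.get((my_type, "killed", enemy_type, element, bucket), 0)
    d = table.get((my_type, "died_by", enemy_type, element, bucket), 0)
    if k + d == 0:
        return 0.5  # assumption: no recorded experience -> neutral 50%
    return k / (k + d)
```

Because the ratio normalizes by K + D, it reads the same way whether 5 or 4000 events have been recorded, exactly as described above.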

2) Reward function

These will be our inputs for taking decisions. Decisions are pretty simple; for now units choose between 2 actions: shoot (engage) or move.

Now... how do we interpret these data to decide whether to engage the enemy or move out?

I reasoned this way. You are an infantryman in the middle of a battle, and you know by experience you have a 60% chance of winning.
Do you attack or not? This question doesn't have a unique answer, since there's no clear way to decide.
Sometimes we should attack, sometimes not. Even we humans have trouble answering this.

Of course, we will probably engage more often if our chance is high, and retreat more often if our chances are low. For this reason what I do is draw a random number in [0,1] and compare it against our success chance: on average, probability will make me engage often when the situation is favourable, and avoid engaging when it is not. Moreover, it has a good "random/human" feel that makes units less robotic, and adds some variance to behaviours. Fight chaos with chaos.
The unit we attack is the one with the highest ratio (= the one I'm best at fighting).

Oh, and if we don't engage, we move out. Doing nothing is not contemplated.
Moving evaluates the success probabilities on nearby cells (note: the spatial representation is reduced to a grid, for both recording and rewarding efficiency), and the unit moves to the cell with the highest success expectation (plus a small bonus for distance to the objective).
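The engage-or-move roll and the grid evaluation above might look like this. A Python sketch for illustration: the function names and the deterministic `roll` parameter (for testing, in place of a live random draw) are my additions.

```python
import random

def decide(success_prob, roll=None):
    # compare a uniform [0,1) roll against the estimated success chance:
    # favourable ratios engage often, unfavourable ones mostly move out
    if roll is None:
        roll = random.random()
    return "engage" if roll < success_prob else "move"

def best_move_cell(cell_probs, objective_bonus):
    # pick the neighbouring grid cell with the highest success expectation,
    # plus a small bonus for cells closer to the objective
    return max(cell_probs, key=lambda c: cell_probs[c] + objective_bonus.get(c, 0.0))
```

The same roll against the same 60% ratio will sometimes engage and sometimes not, which is exactly the "random/human" variance described above.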

MANAGING MULTIPLICITY

The problem is: how do we manage the fact that we face multiple enemies? How do we account for the allies who help us fight?
There are some sub-problems, like "which allies are involved in our fight? what is the area we are fighting in, and where do we check for allies?"... the way I solved it is this:

As you might have noted, recording is always 1-to-1. This is because (also to simplify) I imagine combat as decomposed between the different enemies. As a human I reason about the fact that there's an officer to kill here and a tank to avoid there, and the "there's an officer + a tank" representation means almost nothing, since as a human I would also decompose it into 2 different analyses ("get close" and "stay away"). The machine is less intelligent than me, so a combined representation messes things up even more.

A) Multiple enemies problem:

Let's say we are unit X, and are facing enemies A and B.
I know that my ratios are:
X-A = 50%
X-B = 25%
How do I combine these values? My objective is to get a total success probability for this tactical situation, which we will use to decide whether to engage or move (with the random [0,1] dice roll).

I decided my success probability is the probability of exiting this fight alive, where exiting alive means being able to kill all enemies.
This means I must (kill A) AND (kill B) -> statistically, (kill A) * (kill B).
In this case I kill A 1/2 of the time and B 1/4 of the time, so I kill both of them 1/8 of the time. Pretty low, but 1 vs 2 is not a happy situation for our unit X :D
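The multiplication step above is straightforward to sketch (Python for illustration; the project itself is Lua):

```python
def combined_success(enemy_ratios):
    # exiting alive means killing every enemy, so the per-enemy
    # ratios multiply: P(kill A AND kill B) = P(kill A) * P(kill B)
    p = 1.0
    for r in enemy_ratios:
        p *= r
    return p
```

With the worked example, `combined_success([0.5, 0.25])` gives 1/8, matching the 1 vs 2 situation described.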

B) Allies problem:

Now we need to reinforce our tactical evaluation with the fact that there may be some allies helping us fight.
Since the probabilities are all 0.x, multiplying keeps lowering the overall probability.
We need to use addition, but adding the probabilities of every ally involved in the current fight to every enemy unit pumps up our reward too much.

I reasoned this way:
to win, unit X must kill enemy A AND kill enemy B;
if my ally Y is shooting at B, then to win, unit X must kill A AND (X kills B OR Y kills B).
In math terms this means (XA) * (XB + YB).
This way we raise the probability only for the precise enemy our ally is effectively shooting at. There's obviously a cap of 1 on the probabilities.

To be honest, the enemy that gets reinforced is not the one my ally is actually shooting at. This is because my ally might be moving, plus we could degenerate into "everyone attacking / no one attacking" behaviours, because my probability would change a lot based on what the others are doing (that might be nice too, but it also messes things up a bit, so not for now).
Instead we take the best attack option for my ally (= the unit he would engage if he decided to engage), so we know on average which enemy he will fire at (even if he's not actually firing yet), and reinforce accordingly.
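The ally reinforcement, term by term, might look like this. A Python sketch: the `ally_support` map (enemy -> summed ratios of allies predicted to engage that enemy) is my naming, not the original code's.

```python
def combined_success_with_allies(my_ratios, ally_support):
    # my_ratios: enemy -> my kill ratio against that enemy
    # ally_support: enemy -> ally help against that enemy
    # each enemy's term becomes min(1, XB + YB), then the terms multiply
    p = 1.0
    for enemy, r in my_ratios.items():
        p *= min(1.0, r + ally_support.get(enemy, 0.0))
    return p
```

With the worked example ((XA) * (XB + YB) = 0.5 * (0.25 + 0.5)), the fight that was 1/8 alone becomes 0.375 with an ally covering B.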

As you can see it's simple at heart, but big on computation and on the number of units to manage.

NEXT STEPS / TO DO:

- for sure there may be better ways to define our reward; this is a prototype
- train the AI
- create some test scenarios i expect my AI to beat* (and the default one cannot beat)
- add some combat elements to record and employ
- 2 days ago I discovered that a "for" cycle over a table does not run in a fixed sequential order (damn), and this generates spikes in CPU load. I sequentialized the main "for", but the code needs more review.
- adjust some code I wrote to remember failed movement positions until we find a good one to move to. This is to avoid units getting stuck when they try to move to impossible positions; the code is there but doesn't work and needs further review.
- write a lot of blah blah around it, subtracting precious time from coding and giving it to useless papers no one will ever care to read (joking :mrgreen:, I'll document it a bit and that's enough)
- oh, and make it public on GitHub, but it needs some cleaning first

* Example scenarios:
- 1 infantry unit against 4 enemy infantry: the default AI engages, mine must flee
- a tank in front of an AT gun: the default engages, mine must flank it (once angle is recorded)
- a scout (invisible unit) must reach an objective with a lot of enemies in between to dodge and avoid: the default scout goes and gets killed, mine avoids them

As a final note, by now my AI matches the default AI decently, competing almost on a par with it (considering it has very little training).
The hope is clearly to outperform it, mainly by focusing on the correct usage of heavy-power units (like tanks), and by losing fewer units in fights they can't win (e.g. when outnumbered). On its side, the default AI has the advantage that its units ALWAYS engage and fire (while mine sometimes move), and this makes these Rambo guys dangerous :)

I think that's all for now... if there's something else, I'll add it, but the main part is here. Cya! :)

PS: as a side note, these days I'll be studying for my last exam, so until 19 Jan I'll be less devoted to the project... fortunately, the big part is already up and running.

Post by PauloMorfeo » 09 Jan 2016, 18:30

Fil wrote:The focus on high complexity pleased me because I love historical/realism-focused games, and the more elements the gameplay adds, the more complex solving the problem gets.
In fact, a lot of research and AI competitions often try to solve relatively simple problems "very well", with a low number of units, some rock-paper-scissors reasoning, or simple terrain evaluation. In contrast, I aim to create a framework able to solve "less perfectly" problems with a high number of variables/elements, and which are consequently complex. That's the case, for example, for vehicles carrying more than one weapon type, each efficient against a particular type of enemy, or combined arms/terrain/visibility problems. For sure I will not be able to solve all of these problems, since my time is short, but I can create a proof of concept for some elements that can hopefully be extended to other problems.
I absolutely agree. I see lots of (frequently academic) "let's do an awesome AI for one of the greatest challenges: RTS games" projects that then end up spending all their effort optimizing to death mini-games of skirmishing 10v10 units in Starcrap 1. I like your approach. (I would just point out that your approach doesn't appear to qualify as "trial and error" as you said -- I wouldn't recommend stating that in an academic report.)

Fil wrote: THE IMPLEMENTATION:

The model is full-agent based. This means it controls every unit at a time, and i expect some coordination/good tactics to emerge from individual behaviours.

1) Stats recording
You mean you have a regular "skirmish AI" playing the game that is also recording those events (deaths, damage, etc.) it captures from the game, right?

Did you adapt an existing AI to record the events (by acting as a wrapper/gateway)? Or did you build your own AI to play (a massive undertaking by itself, even without adding any machine learning)? You're working with Lua, as you've told me on Spring's IRC.

Fil wrote: Currently, every time a unit kills another unit (or is killed by one), the event is recorded in a table, with additional details on the tactical elements we decide to record.

For now the only combat factor recorded/implemented is the distance from the enemy (the most important one, since it determines most "can attack / can be attacked" cases). Others may follow, like the relative angle between units or unit stats (e.g. is suppressed).

By the way, soon I'll substitute kill events with damage events, to be more precise when recording.
I now understand how you're planning on doing the "learning": trying to learn something from those tactical patterns.

Things you can track that I envision can be helpful in making a good prediction are:
- distance to target;
- Can I attack him with weapon A?
- Can I attack him with weapon B, ..?
- Can he attack me with weapon A? B? ..?
- How much HP do I have left?
- How much HP does he have left?
- How much Damage/s do I deal to him (in terms of game rules)?
- How much Damage/s does he deal to me (in terms of game rules)?
- How many secs do I need to destroy him (in terms of game rules)?
- How many secs does he need to destroy me (in terms of game rules)?
One of the problems with tactical engagements is that units often manoeuvre and shots end up missing, making these last 4 values not fully reliable (not entirely sure that's the case in Spring'44 :p). If so, you might want to instead/also track:
- How much Damage/s do I take while in his range; (how much HP did I lose to him / how long have I been within his range)
- How much Damage/s does he take while in my range;
^ Of course, some of this is metadata that you don't need to record on the spot but can calculate from other originally recorded data.
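One way to derive such a metadata value from raw recordings is sketched below. The chronological `(time, kind, amount)` event format is my own assumption for illustration; the real recorder would log whatever the game callbacks provide.

```python
def effective_dps(events):
    # events: chronological (t, kind, amount) tuples, where kind is
    # "enter" (entered enemy weapon range), "leave", or "damage" (HP lost).
    # returns HP lost per second spent inside the enemy's range
    in_range_since = None
    time_in_range = 0.0
    hp_lost = 0.0
    for t, kind, amount in events:
        if kind == "enter":
            in_range_since = t
        elif kind == "leave" and in_range_since is not None:
            time_in_range += t - in_range_since
            in_range_since = None
        elif kind == "damage":
            hp_lost += amount
    return hp_lost / time_in_range if time_in_range > 0 else 0.0
```

This captures "how much Damage/s do I take while in his range" even when shots miss, since the denominator is exposure time rather than shots fired.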

This depth of information might be a complete waste, depending on how you're planning on making any kind of predictive behaviour from the dataset. If you're doing some kind of simple linear regression (which is in itself not all that "simple"), having so many dimensions to the problem might be useless. If you're doing some kind of statistical analysis (as you appear to be doing), then trying to learn something from so many variations might be a nightmare.

Fil wrote: The so-called Combat Table (where values are stored) is structured this way:

"my unit type" -> "has killed / died by" -> "enemy unit type" -> "for this tactical element (e.g. distance)" -> "[element range]" -> X times.
I'm wondering, are you recording from mini-games that you've established and always start under the same conditions? Ex: 5v5 units, all starting with full health?

One of the issues with that data is that you don't know whether a unit was, e.g., nearly dead, which would skew any conclusions drawn from the data (it would appear unit X is more vulnerable to opponent Y than it actually is). Of course, if you record enough data, this problem won't ever disappear, but it will become increasingly negligible. If you're recording merely from such mini-games set up by you, you won't have that problem in the first place, so you'll be fine. The downside is that you won't be able to record real games, or it will be less valuable to do so, since that situation will occur many times, adding "noise" to the recorded data.

Fil wrote: Note: "killed" and "died by" are 2 different sections of the table, the first storing the kills our unit scored on enemy units, the second storing its deaths at the hands of that enemy type.
Note 2: "tactical elements" are table sections for the different factors we decide to record; currently only distance.

This way, every time 2 units face each other at distance D, we can extract from the Combat Table how many times we have been able to kill that enemy and how many times that enemy killed us (at that distance). So:

Unit X -> killed -> unit Y -> at distance -> 200m -> tot times
Unit X -> died by -> unit Y -> at distance -> 200m -> tot times

From these 2 values we can get a kill/death ratio, which summarizes the result of our past attempts to kill this enemy in this situation.
The KD ratio is:

R = K / (K + D) [0-1]

For example, K = 3 and D = 2 -> 3/(3+2) = 0.6, so a 60% success probability. This way we are also independent of the number of scores recorded (it works for 3 kills as well as for 4000).
That's a reasonably effective and dead simple approach to "learning" something from the data. I like it. It won't allow you to learn anything very deep but it has its own advantages: very simple, very low tech, very "fast" to calculate, and very easy to understand (*).
* often quite important for success, since when we don't understand the learning mechanism we are often unable to use the mechanism properly.

The basis of your project is learning to predict the outcomes of tactical engagements from past recorded events. If you analyse more than, say, 1-2 simple parameters, regular statistical analysis will start to not cut it any more. If you want to make "good" predictions from the data, you'll need to move to more classic pattern recognition / prediction systems:


# Linear regression
The most classical method to learn from something like your original data (distance + killed? + kill?) is linear regression, which can be reasonably simple (tricky to implement if you're implementing it by hand, and tricky to use). You could create a function that predicts the outcome of situations (e.g.: success = 0.3*distance + 0.7*distance^2 + ...) that could then be used in a performant way in-game to predict whether an engagement would be successful or not.

The problem with linear regression is that as the number of "features" (different columns in the dataset that we want to consider) starts to grow (e.g. more than 5), linear regression starts to become unwieldy. For your original data (distance + killed? + kill?) linear regression would suffice, but as you take more things into account (e.g. multiple enemies and friends), it would quickly grow out of hand.
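For the single-feature case, the closed-form fit is simple enough to sketch directly. Python, one feature only (distance); the clamp to [0, 1] is my addition so the output reads as a success probability, and real data would of course be noisier than this illustration.

```python
def fit_linear(xs, ys):
    # ordinary least squares for success = a + b * distance (one feature);
    # closed form: b = cov(x, y) / var(x), a = mean(y) - b * mean(x)
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = cov / var
    a = my - b * mx
    return a, b

def predict(a, b, distance):
    # clamp so the fitted value can be read as a success probability
    return max(0.0, min(1.0, a + b * distance))
```

Adding the quadratic term (or more features) means solving the full normal equations instead, which is exactly where the hand-rolled approach starts to become unwieldy.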

# Bayesian inference
I've got to be honest, I don't understand well enough how Bayesian inference works but, from what I know of it, it would suit this like a glove.

# Neural networks
As far as I can understand, Neural Networks would be the absolute best (*) choice for learning how to make outcome predictions in these situations. They would easily allow feeding in things like:
- Friendly unit Commando at (X; Y);
- Friendly unit Sherman at (Z; K);
- Enemy ...
- Enemy ...
- Enemy ...
And, from that, automagically, pump out a prediction. Neural Networks are also very resilient to noise; they cope with it reasonably well (as Bayesian inference does too).
* in terms of how reliable the prediction is.

The problem with Neural Networks is that they're not trivial to understand how to work with, and if you don't understand them, it is easy to hand them data that is not easy for a NN to learn, or to hit some other learning pitfalls. On the other hand, if you use some API/tool, it might be easy to run into learning pitfalls but still easy to get some very positive results.

# Weka
https://en.wikipedia.org/wiki/Weka_%28m ... earning%29
Weka is not a learning mechanism; it is a tool for using learning mechanisms. I've used it before and it is really neat, free, and simple to use.

Example usage could be:
- Do those recordings, tracking a proper set of data (ex: column A is victory|loss|draw).
- Play around with Weka
---- open the dataset;
---- choose which parameters to use (ex: want a predictor, not a classifier; column A is the results;);
---- Try Neural Net with 10x100 neurons;
---- Try Neural Net with 50x200 neurons;
---- Try Bayesian Inference;
---- ...
---- Choose the one that gave you the best results;
- Have Weka pump out the executable that makes predictions (receives as input the same "columns" as in the dataset and outputs a prediction);
- Have the AI invoke that executable every time it wants to make a prediction.
Erm, simple in concept -- I'm not sure how easy it would be for you to invoke the executable from inside your AI... Not sure whether Weka can output, for example, a Bayes algorithm / Neural Network as parameters that you can use in your AI, or whether it just outputs a native code executable.

^ This is something I could definitely help you with: both Weka, what/how info to record, etc. (except the part about invoking the executable from inside the code).


Even if you end up not using any of those, you might want to describe why not in your academic report, since evaluators are usually nitpicky (without any regard for how much time you had available or any sense of practicality) and might judge you negatively for not pursuing other options.
