Engine support for dynamic/generated music

Discuss the source code and development of Spring Engine in general from a technical point of view. Patches go here too.

hoijui
Former Engine Dev
Posts: 4344
Joined: 22 Sep 2007, 09:51

Engine support for dynamic/generated music

Post by hoijui »


As discussed here:
viewtopic.php?f=21&t=17094

I wrote an info & statistics out-stream/exporter for the engine, which is now merged into master.
It is disabled by default, so it costs no performance for the majority who will probably not use it.
The currently available spring settings, with their default values, are:

Code: Select all

OscStatsSenderEnabled=0
OscStatsSenderDestinationAddress=127.0.0.1
OscStatsSenderDestinationPort=6447
For the OSC part, the C++ library oscpack is used. It is public domain and works under Windows and POSIX. As we only have to send messages, only the part of the lib needed for sending is included (4 header files, 2 source files).
SCons and CMake are set up to build it as a static lib, and link it into spring(.exe) (as it is done with streflop). oscpack has no external dependencies.
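For the curious, the wire format oscpack produces is very simple: a NUL-terminated address string padded to a 4-byte boundary, a type tag string (also padded), then the arguments in big-endian byte order. Here is a minimal self-contained sketch of encoding a single-float message, purely for illustration (the engine itself uses oscpack for this, not hand-rolled code):

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Append a string plus NUL terminator, padded with NULs
// to a 4-byte boundary, as required by OSC 1.0.
static void appendPadded(std::vector<unsigned char>& buf, const std::string& s) {
	for (char c : s) {
		buf.push_back(static_cast<unsigned char>(c));
	}
	buf.push_back('\0');
	while ((buf.size() % 4) != 0) {
		buf.push_back('\0');
	}
}

// Encode a single OSC message carrying one float argument,
// e.g. address "/spring/info" with value 0.5f.
std::vector<unsigned char> encodeOscMessage(const std::string& address, float value) {
	std::vector<unsigned char> buf;
	appendPadded(buf, address); // address pattern
	appendPadded(buf, ",f");    // type tag string: one float follows
	std::uint32_t bits = 0;
	std::memcpy(&bits, &value, sizeof(bits));
	// OSC numeric arguments are big-endian on the wire
	buf.push_back((bits >> 24) & 0xFF);
	buf.push_back((bits >> 16) & 0xFF);
	buf.push_back((bits >>  8) & 0xFF);
	buf.push_back( bits        & 0xFF);
	return buf;
}
```

Sending such a buffer over a UDP socket to the configured destination address/port is all a receiver needs to decode the message.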

what is available:
  • engine internal stuff to export stats through OSC (in git master)
    rts/lib/oscpack/*
    rts/Game/OSCStatsSender.cpp/.h
  • OscStatsSendFaker: (very ugly code) a small C++ test app (Linux only) that can send the same messages Spring sends, but with randomly generated data
  • SpringOSCInspector: (very ugly code, not supplied :D) a small Java GUI that can be connected to the Spring OSC output to view the data as 2D line graphs (see attached screenshots)
what is needed:
  • a sample implementation of a dynamics calculation "transponder" (for example in Java, or in C with Lua); it calculates the dynamics of the game into values that can be used more directly to generate music, for example: amountOfAction, currentWinLoseChance
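To make the "transponder" idea concrete, here is a rough sketch of what such a dynamics calculation could look like. All names and formulas here (the smoothing factor, the kills-per-interval scale, deriving the win/lose chance from army strength) are invented for this example and are not part of the engine:

```cpp
#include <algorithm>

// Hypothetical dynamics analyzer: turns raw per-interval statistics
// into values a music generator can use more directly.
class DynamicsAnalyzer {
public:
	// Feed raw stats for one update interval (e.g. once per second).
	void update(int killsThisInterval, float ourStrength, float enemyStrength) {
		const float alpha = 0.1f; // smoothing factor (assumed)
		smoothedKills = (1.0f - alpha) * smoothedKills
				+ alpha * static_cast<float>(killsThisInterval);
		ourStr   = ourStrength;
		enemyStr = enemyStrength;
	}

	// Smoothed kill rate squashed into [0, 1];
	// 10+ kills per interval counts as "maximum action".
	float amountOfAction() const {
		return std::min(smoothedKills / 10.0f, 1.0f);
	}

	// 0.5 means an even game; above 0.5 means we are ahead.
	float currentWinLoseChance() const {
		const float total = ourStr + enemyStr;
		return (total > 0.0f) ? (ourStr / total) : 0.5f;
	}

private:
	float smoothedKills = 0.0f;
	float ourStr        = 0.0f;
	float enemyStr      = 0.0f;
};
```

Such derived values could then be sent onwards as OSC messages for the music generator to consume.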

Possibly of use: utilities that dump all incoming OSC messages to stdout:

Windows OSC dump utility (binary):
http://luvtechno.net/d/1980/02/open_sou ... rol_3.html

Linux OSC dump utility (source code):
http://megaui.net/fukuchi/works/oscsend/index.en.html
Attachments
SpringOSCInspector-screenshots.zip
(111.68 KiB) Downloaded 36 times
SpringOSCInspector.zip
(1.68 MiB) Downloaded 37 times
OscStatsSendFaker.zip
(2.91 KiB) Downloaded 34 times
Last edited by hoijui on 09 Feb 2009, 13:17, edited 1 time in total.
Hoi
Posts: 2917
Joined: 13 May 2008, 16:51

Re: Engine support for dynamic/generated music

Post by Hoi »

So, does it work? Do you have a sample to listen to?
hoijui
Former Engine Dev
Posts: 4344
Joined: 22 Sep 2007, 09:51

Re: Engine support for dynamic/generated music

Post by hoijui »

It works, and I do have a sample that works with two values -> it would need a running dynamics calculator to connect with Spring. But it creates such ugly sounds that I would rather not release it anyway, as it would be counter-propaganda for the whole idea.
It just plays random tones; the min and max frequency plus the tone change interval of these tones can be adjusted over OSC.
I used the ChucK sound programming language, which is simple, but has relatively unstable tools and is performance hungry.
http://chuck.cs.princeton.edu/
A more sophisticated and more wide-spread alternative is SuperCollider, which I was unable to get running.
http://www.audiosynth.com/

Instead of audio programming languages, you can use other flavours of sound generating programs that support OSC input. I cannot give more info here, as I do not know anything about that area.
Argh
Posts: 10920
Joined: 21 Feb 2005, 03:38

Re: Engine support for dynamic/generated music

Post by Argh »

Hey... this may be a stupid idea... but could you get it to run MIDI files? Especially considering what's going on with sound ATM, it's not a big stretch at all.

I know that the AI-theoretical side is your focus, and do what you want to, but I think it would have more legs as a practical final idea if it were... AI of some sort ++ MIDI, to generate music as humans understand it. MIDI is more flexible for those purposes than a static recording can be, and should serve well: short loops could be combined at random to integrate with an AI, yet provide intelligent transitions and something people might actually like to listen to.

Just a thought, I absolutely am OK with you taking this wherever you think is fun.
hoijui
Former Engine Dev
Posts: 4344
Joined: 22 Sep 2007, 09:51

Re: Engine support for dynamic/generated music

Post by hoijui »

A diagram of the whole idea:
[diagram image]

The orange systems are interchangeable; they generate the raw statistical data.
The green systems are interchangeable too, as in: they both are made to receive data from the orange boxes. The Dynamics Analyzer is free to send all the raw data further to the music generator, though it may not make sense to send the raw statistics out, as they should be summarized in the generated dynamics values.
The red box then actually generates music. Instead of a separate Dynamics Analyzer as shown in this diagram, it could be integrated into the Music Generator, so it would be a single piece of software.

I am responsible for the orange stuff, and most likely I will also write a Dynamics Analyzer with a sample algorithm, but I will not do the red part.
@Argh: What you request is a good idea (I think), but it is in the red part, so none of my business.
hoijui
Former Engine Dev
Posts: 4344
Joined: 22 Sep 2007, 09:51

Re: Engine support for dynamic/generated music

Post by hoijui »

A list of some tools that could possibly be used as the red box. The criteria are:
  • generates music
  • supports OSC input
CSound
Csound is a sound design, music synthesis and signal processing system, providing facilities for composition and performance over a wide range of platforms. It is not restricted to any style of music, having been used for many years in the creation of classical, pop, techno, ambient, experimental, and (of course) computer music, as well as music for film and television.
http://www.csounds.com
http://en.wikipedia.org/wiki/Csound

Nyquist
is a sound synthesis and composition language offering a Lisp syntax as well as an imperative language syntax and a powerful integrated development environment. Nyquist is an elegant and powerful system based on functional programming.
http://www.cs.cmu.edu/afs/cs.cmu.edu/pr ... tware.html

ChucK
ChucK is a new (and developing) audio programming language for real-time synthesis, composition, performance, and now, analysis - fully supported on MacOS X, Windows, and Linux. ChucK presents a new time-based, concurrent programming model that's highly precise and expressive (we call this strongly-timed), as well as dynamic control rates, and the ability to add and modify code on-the-fly. In addition, ChucK supports MIDI, OSC, HID device, and multi-channel audio. It's fun and easy to learn, and offers composers, researchers, and performers a powerful programming tool for building and experimenting with complex audio synthesis/analysis programs, and real-time interactive control.
http://chuck.cs.princeton.edu

SuperCollider
SuperCollider is an environment and programming language for real time audio synthesis and algorithmic composition. It provides an interpreted object-oriented language which functions as a network client to a state of the art, realtime sound synthesis server.
http://supercollider.sourceforge.net
Argh
Posts: 10920
Joined: 21 Feb 2005, 03:38

Re: Engine support for dynamic/generated music

Post by Argh »

BTW... when you see references to "tracks", "patterns", "loops" and "sequencers"... that's all MIDI stuff. MIDI is fairly ancient tech, and pretty much does what everybody needs it to do.

It's built into every major OS, etc., including default samples for the various instruments in a Standard MIDI device (127, IIRC, but it's been ages since I dealt with any of this).

IOW... when you talk about output to end-users, you're basically looking at either MIDI, tone-generators, or lengthy static samples.

The clear advantage, for the purposes of initial experimentation, is probably MIDI, because small loops can be combined to create "music" of a sort- not great music, by human standards, but it'll be a lot less horrible than a typical tone generator, and from an AI development perspective, it's a lot less complex.

Unlike static samples, MIDI is "just data" until passed to an interpreter, and can be sped up, slowed down, etc., by software... without doing anything destructive to the final sounds.

So... unlike static samples, where very frequently the BPM (beats-per-minute) won't mesh, even if the songs are in the same time signature and key and roughly the same style, unless you've gotten extremely talented technical musicians to assist you... MIDI can be warped to fit your needs. Take four bars of Axel F, merge with three bars from Hotel California... make their musical keys equivalent, and hey presto... "music".

It will still be terrible, though, unless you can find a MIDI artist who wrote enough pieces that are thematically similar and would mesh well enough that slices and dices might work. I'd suggest genre music- country-western or classical, even arthouse jazz- over modern pop, myself.

It might be a really good starting-place for high-end theoretical approaches. Seriously, if you're going for this because it looks like an interesting paper opportunity, or if you're a professor, etc. (not that I mean to pry, of course, it's none of my business who you are when you aren't here)... well, I think that this might be a winner in the end.

If a music-making AI actually writes the tracks at the start of a given game-state, then you'd have MIDI tracks that were unique to the game-state (or not, if a neural net is being used with a feedback system), played through an interpretive AI that attempted to accurately evaluate the true state of play and deliver the "right" music, in the human sense.

So... one AI, to write it... another to play it, and they're both separate doctoral theses. And heck, if it was tied into a Widget where users could give it feedback... maybe after a billion billion bad MIDI sequences, it might write something we'd actually want to listen to, other than for the sake of Science ;)
AF
AI Developer
Posts: 20687
Joined: 14 Sep 2004, 11:32

Re: Engine support for dynamic/generated music

Post by AF »

Black text on a luminous red background makes for horrible reading.
hoijui
Former Engine Dev
Posts: 4344
Joined: 22 Sep 2007, 09:51

Re: Engine support for dynamic/generated music

Post by hoijui »

Updated the first post.
Minor adjustments to the image; visual design only.

@Argh
I do not understand what you mean with one AI to write the music and one AI to play it. We could look at the data analyzer as an AI, and you could say the music generator is an AI plus a sound emitting machine (whether it be as a stream or files or directly to a speaker, whether as mp3, ogg or midi (this not directly to a speaker, of course)). I do not see how you split the AI part of the music generator into two separate AIs.
Other than that, though... I am with you! :D

General note:
A big pro for artists in this design is that the OSC messages leaving the green box are not Spring specific, so the same music generators could be used with other engines without adjustment, as long as the engine has a basic OSC output, which is codable in a day.
Argh
Posts: 10920
Joined: 21 Feb 2005, 03:38

Re: Engine support for dynamic/generated music

Post by Argh »

We could look as the data analyzer as an AI, and you could say the music generator is an AI plus a sound emitting machine (whether it be as stream or files or directly to a speaker, whether as mp3, ogg or midi(this not directly to speaker of course)).
That's essentially what I meant by "two AIs". I apologize for being unclear.

The way I see it, you'd have one to analyze the situation of play (locally) and respond to user feedback... and one to generate the music (globally or locally) based on user feedback from the analysis AI.

To make it a powerful interactive toy, users would just need a way to provide feedback- a button that says "I like this" or "I don't like this" would give the two AIs different feedback to act upon. The "conductor" AI, performing analysis, could adjust what songs or snippets were played when, based on its interpretation of user preferences, and the "composer" AI could attempt to find more-pleasing combinations of notes and speed based on what users seem to find more enjoyable.
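As a toy sketch of that feedback loop (everything here, the choice of parameter, the learning rate, the clamping range, is invented purely for illustration): a "composer" parameter such as tempo could be nudged toward what listeners liked and away from what they disliked:

```cpp
#include <algorithm>

// Toy "composer" preference: like/dislike feedback nudges the
// preferred tempo toward liked pieces and away from disliked ones.
class TempoPreference {
public:
	float preferred() const { return preferredBpm; }

	// feedback: +1.0 for "I like this", -1.0 for "I don't like this"
	void onFeedback(float playedBpm, float feedback) {
		const float rate = 0.2f; // learning rate (assumed)
		preferredBpm += rate * feedback * (playedBpm - preferredBpm);
		// keep the result in a sane BPM range
		preferredBpm = std::clamp(preferredBpm, 40.0f, 200.0f);
	}

private:
	float preferredBpm = 120.0f; // start with a neutral tempo
};
```

A real system would of course track many such parameters (key, instrumentation, loop selection), but the principle is the same.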

In theory, that means that eventually the two AIs will play music perfectly customized for the user.

In reality, I don't think that that's remotely practical, but if AIs exist that "understand" how to write music in a Western European classical-music structure, since that's a practically ubiquitous form of music around the world with fairly formal rules, then chances are that they might be able to make something vaguely like music, given enough feedback.

Even more interesting, if the musical AIs could exchange data about what people actually like, then it might become a fairly powerful test of the concept that an AI could develop something like "creativity" under certain global caveats, using simple building-blocks of rules, if it has enough people to listen and give it "critique" of its compositions.
hoijui
Former Engine Dev
Posts: 4344
Joined: 22 Sep 2007, 09:51

Re: Engine support for dynamic/generated music

Post by hoijui »

Interesting that you think the AI would approach "creativity" when it listens more to the user; I would say it gets less "creative" then, and more streamlined. Though, that is what we want.

The feedback thing would technically work like this:
e.g. through a widget with a button or slider, which sends OSC messages like this:

Code: Select all

/spring/music/currentLikingFactor   0.3
The OSCStatsSender already contains a public method that accepts arbitrary OSC messages, namely:

Code: Select all

	/**
	 * Generic method to send OSC messages to the configured receiver(s).
	 * @param  oscAdress  the message's title/address,
	 *                    eg: "/spring/stats/team/values"
	 * @param  fmt        describes the parameter types of the message to send,
	 *                    eg: "fifs" means {float, int, float, const char*}
	 *                    allowed types (OSC base types):
	 *                    'i' 32bit integer [int]
	 *                    'f' 32bit floating point number [float]
	 *                    's' string [const char*]
	 *                    'b' blob; byte array [const unsigned char*]
	 * @param  params     pointers to the parameters described in fmt, eg:
	 *                    {&someFloat, &someInt, &otherFloat, &cStr}
	 * @return whether the sending was successfully done.
	 */
	bool SendPropertiesInfo(const char* oscAdress, const char* fmt,
			void* params[]);
The name of the method is not well chosen; I will have to change that.

In the Lua layer, we would use this:

Code: Select all

#include "Game/OSCStatsSender.h"

...
   float likingFactor = 0.3f;
   void* params[1];
   params[0] = &likingFactor;
   bool ok = oscStatsSender->SendPropertiesInfo("/spring/music/currentLikingFactor", "f", params);
Of course, it would have to be generic in the Lua layer, and only this specific in the Lua widget.
Argh
Posts: 10920
Joined: 21 Feb 2005, 03:38

Re: Engine support for dynamic/generated music

Post by Argh »

Interesting that you think the AI would approach "creativity" when it listens more to the user; I would say it gets less "creative" then, and more streamlined. Though, that is what we want.
Not really. I mean... human musicians get feedback from listeners about whether they're making good music or not... why shouldn't an AI?