Just like it sounds... I want something with all of the basic features of CSimpleParticleSystem, but one that cycles through a series of bitmaps, controlled by the following variables:
startingTexture= "somename" ending with a number from 001 to 999 (the starting frame, usually "somename001")
numFrames= integer. Number of frames; gets the bitmaps named from startingTexture to startingTexture + N.
frameSpeed= integer. How many game frames each bitmap is displayed before being replaced. Defaults to 1.
willLoop= boolean. Can this frame sequence repeat? If 1, then it restarts at "startingTexture" when the sequence ends.
randomFrame= boolean. Can this animation pick a random frame between startingTexture and frame N? Good for preventing the texture from having that cheesy, "I've seen this a billion times" feel when combined with other random variables.
And that's it... the already-existing ParticleLife covers the total length before the animation is removed, and the other variables are already covered by SimpleParticleSystem. And yeah, I'd really prefer that there was a z-buffer-sorted variant as well, for certain things (think complex fire, water, and other stuff where z-buffer order actually matters).
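Sketched in code, the frame-selection behaviour those variables describe might look like this (the struct, function name, and 0-based indexing are my own assumptions for illustration, not actual CSimpleParticleSystem internals):

```cpp
#include <cassert>
#include <cstdlib>

// Hypothetical sketch: pick which bitmap a particle should show,
// given the variables proposed above. Frame indices are 0-based.
struct AnimParams {
    int numFrames;    // number of bitmaps in the sequence
    int frameSpeed;   // game frames each bitmap is shown (default 1)
    bool willLoop;    // restart at the first bitmap when done
    bool randomFrame; // ignore time, pick any frame in the set
};

int CurrentFrame(const AnimParams& p, int gameFramesAlive)
{
    if (p.randomFrame)
        return std::rand() % p.numFrames;           // any frame in the set

    const int idx = gameFramesAlive / p.frameSpeed; // advance every frameSpeed game frames
    if (p.willLoop)
        return idx % p.numFrames;                   // wrap around to the start
    return (idx < p.numFrames) ? idx : p.numFrames - 1; // hold on the last frame
}
```

For example, with numFrames=4, frameSpeed=2, and no looping, frame 0 shows during game frames 0-1, frame 1 during 2-3, and the animation holds on frame 3 from game frame 6 onward (until ParticleLife removes it).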
This would be a very big deal for FX in Spring. It, and particles that can spawn other particles, are the only major generic-FX features that Spring lacks compared to any modern game engine. Having this would make it possible to generate very sophisticated-looking FX with very little code, and while there is of course some overhead involved in swapping out bitmaps, I really don't think that's a huge cost, especially since most people using this will be optimizing their FX to use as few frames as possible.
Animated Bitmap Explosions
No, I strongly suspect that if I have this to work with, FX will get less torturous, because I can replace dozens of independent actors, each obeying complex rulesets, with one actor (or two or three at the most) interacting to create complex symphonies of color and light.
I mean... which is the more complex task for a computer to do... create an illusion of fire from three dozen particles for the fire, all obeying strict timing constraints, plus yet more particles for the smoke, etc., etc.... or one animated bitmap that is swapped every N frames? We see animated particles all the time in games to get around these issues of simulation complexity, because it's an excellent way to fake very complex behaviors.
Or, to put it bluntly... does the smoke from a Geothermal lag your PC? Because it's using the smoke code, which is essentially what I want, except with some more controls over certain things like alpha and color-over-time.
If you plan on having multiple images from the same animation set
rendered within any given frame, then it's better to use a texture atlas
(ex: a particle cloud where each particle is given a random time offset).
I'm not suggesting that you use the current atlas system, but rather
define the animation format such that it contains texture coordinates.
Then you can make your own atlas, preferably containing textures all
of the same size. This also enables animated translation and scaling
using only a single image.
example: frame1 <texture> <xn yn xp yp> <time>
texture: the texture to use for this frame
xn, yn, xp, yp: min and max texture coordinates (0.0 to 1.0)
time: time before switching to the next frame (in seconds, floating point)
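For illustration, a line in that format could be parsed along these lines (the struct and function names are hypothetical; this is a sketch of the proposed format, not an existing Spring parser):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Hypothetical parser for the proposed line format:
//   frameN <texture> <xn yn xp yp> <time>
struct AtlasFrame {
    std::string texture;  // texture (or atlas) filename
    float xn, yn, xp, yp; // min/max texture coordinates, 0.0 to 1.0
    float time;           // seconds before switching to the next frame
};

bool ParseFrameLine(const std::string& line, AtlasFrame& out)
{
    std::istringstream ss(line);
    std::string tag; // e.g. "frame1"; only its position in the file matters here
    return static_cast<bool>(ss >> tag >> out.texture
                                >> out.xn >> out.yn >> out.xp >> out.yp
                                >> out.time);
}
```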
AF:
The animated cursor code isn't really same situation (I should
know, I implemented the cursor format that allows for image
sharing between frames and variable length frames). The big
difference is that for any given cursor set, only one texture is
used per frame. That means fewer texture context switches,
which is good. I also took it a step farther and sorted the cursor
icons before rendering them (and removed duplicates), so you
get almost no texture context switches.
rattle:
Yup, that's basically it. You could also jump to different images
if the need arose. Shifting the texture coordinates around to
produce scaled frames might not be uncommon either.
P.S. Specifying x and y texture coordinate grid sizes would
make it a lot easier to read the files. Example:
frame atlas1.png 0.0 0.125 0.125 0.25 time=0.05
frame atlas1.png 0.125 0.5 0.25 0.625 time=0.04
frame atlas1.png 0.25 0.625 0.375 0.750 time=0.05
or
xscale 8
yscale 8
frame atlas1.png 0 1 1 2 time=0.05
frame atlas1.png 1 4 2 5 time=0.04
frame atlas1.png 2 5 3 6 time=0.05
You could also add a cell based format to make it even easier:
xscale 8
yscale 8
frame atlas1.png cell=0 1 time=0.05
frame atlas1.png cell=1 4 time=0.04
frame atlas1.png cell=2 5 time=0.05
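The cell-based form is just a coordinate conversion away from the explicit form; a small sketch of that conversion (the names are illustrative, taken from the example above):

```cpp
#include <cassert>

// Convert a grid cell (cx, cy) on an atlas divided into xscale columns
// and yscale rows into the <xn yn xp yp> texture coordinates of the
// explicit format above.
struct UVRect { float xn, yn, xp, yp; };

UVRect CellToUV(int cx, int cy, int xscale, int yscale)
{
    const float w = 1.0f / xscale; // width of one cell in texture space
    const float h = 1.0f / yscale; // height of one cell in texture space
    return { cx * w, cy * h, (cx + 1) * w, (cy + 1) * h };
}
```

So CellToUV(0, 1, 8, 8) yields {0.0, 0.125, 0.125, 0.25}, matching the first explicit-coordinate line above.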
P.P.S. A max texture LOD parameter might also be useful for
mipmapping control. This value would depend on the smaller
number of divisions along either dimension.
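One way such a max-LOD value might be computed: the smallest cell dimension in pixels bounds how many mip levels can be used before neighbouring cells start bleeding together (a sketch under the assumption of a power-of-two atlas, not engine code):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Hypothetical: highest mip level that still keeps each atlas cell
// from blending into its neighbours. The smallest cell dimension in
// pixels sets the limit.
int MaxTextureLOD(int texW, int texH, int xscale, int yscale)
{
    const int cellW = texW / xscale;            // cell width in pixels
    const int cellH = texH / yscale;            // cell height in pixels
    const int minDim = std::min(cellW, cellH);  // limiting dimension
    return static_cast<int>(std::log2(minDim)); // usable mip levels 0 .. maxLOD
}
```

For example, a 512x512 atlas at xscale=yscale=8 has 64-pixel cells, so mip levels beyond 6 would mix cells together.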
KDR_11k:
For shearing, you're probably right, but for rotation, it should make little
difference (you lose a little from the corners). Either way, the reason I'm
keen on texture coordinate manipulation for particle animations is that
the fastest way to pump out a lot of particles is to use point sprites
(automatic billboarding, thinner data stream). When using point sprites,
you can't manipulate the geometry of the quad except to scale it.
edit: you're also right about rotation if drawing rectangular quads, guess
I was still in my point sprite mindset when I wrote this
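Since point sprites fix the quad's geometry, rotation has to be faked in texture-coordinate space instead; a sketch of that idea (hypothetical helper, not Spring code; in a shader the same math would run per-fragment):

```cpp
#include <cassert>
#include <cmath>

// Rotate a texture coordinate (u, v) by 'angle' radians around the
// centre of its atlas cell, given the cell's UV rectangle. This is how
// rotation can be approximated when point sprites forbid rotating the
// quad itself.
void RotateUV(float& u, float& v, float angle,
              float xn, float yn, float xp, float yp)
{
    const float cx = 0.5f * (xn + xp); // cell centre in texture space
    const float cy = 0.5f * (yn + yp);
    const float du = u - cx;
    const float dv = v - cy;
    const float c = std::cos(angle);
    const float s = std::sin(angle);
    u = cx + c * du - s * dv; // standard 2D rotation about the centre
    v = cy + s * du + c * dv;
}
```

Note the cost mentioned above: rotated corners fall outside the cell rectangle, which is exactly the "you lose a little from the corners" effect.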
