Texture size?
I've been using 256x256 for the last 8 units I made, and I'm sick of it. The units are ugly and 256x256 just isn't enough. Period.
Is there any negative (besides mod size) to using a 512x512 for every unit?
Positives:
Better Mod Appearance
Better Looking Units
Modder's approval/happiness at seeing something that doesn't blow...
- Imperial Winter Developer
- Joined: 24 Aug 2004, 08:59
SWS decides on texture size based on the size and prevalence of our units.
If it is a larger unit, we will allow 512*512, but if it is small, we stick to 256*256 (or alternatively, make it 512*512 and fit two or three units onto it, so certain elements, e.g. the faces on infantry, can be made larger in the map). The commander will definitely have 1024*1024, and potentially the AT-AT too.
I'd say there is currently about a 50/50 split between 512 and 256 textures - if anything, leaning towards the 512s.
It is supported by all the cards I care about.
We're going to be able to do fricking LOD soon, and games can even include a performance Widget to allow users total control within their games. How much more scalability do we really need?
I, for one, fully intend to include a LOD widget that swaps S3Os and lowers texture sizes for users with seriously limited hardware; shrinking the textures is very easy. And that's a fairly major performance issue for people, so I think it's better to support it than pretend it doesn't matter.
But wasting time building support for pre-DirectX 8 cards? Gimme a break. I couldn't care less about a GeForce 2 user trying to play Spring, frankly. GeForce 3 and very nice GeForce 4 cards go for practically nothing now; the only excuse for not having that level of hardware is laziness, not money. If you can afford to have a computer and be online, you can afford a decent used graphics card that was very good in its day, even if it's crap by today's standards.
Now, getting to the issue.
Snipa, each doubling of a skin's dimensions (e.g. 256x256 to 512x512) quadruples the texel count, and therefore roughly quadruples both the GPU sampling cost and the texture RAM assigned. One of the most important things you need to do, when you're making the DDS files, is include all the mipmap levels. Mipmaps greatly improve performance on cards that support them (i.e., anything DirectX 8 compliant, or that had its drivers upgraded to make the cut, such as the GeForce 3 series).
Running out of texture memory is not a non-issue. I don't really know how gracefully Spring handles it, because I've always played using fairly modern hardware, but I suspect that Bad Things will happen.
However, most people are using graphics cards with > 64MB of texture memory these days, which is enough (barely) to handle the load of a fairly large mod. And, so far as I am aware, these textures don't get sent to the GPU until the S3O is actually loaded. This is actually a bad thing for performance (I've ranted about this before, I'm shutting up now), but it also means that if you're not expecting every unit in the mod to show up every single game, then you can even fudge a bit. Just keep in mind that:
1. Less is always better for performance.
2. Quality is relative. A mod with gazillions of tiny things can probably get away with small textures, and vice versa.
3. Using the shaders is another big GPU cost that most people don't pay much attention to. The reflective shader, in particular, requires tessellation, which effectively doubles or triples polycount during one of the rendering passes.
Last edited by Argh on 28 Jun 2007, 04:33, edited 1 time in total.
DDS has configurable mip levels - in theory, you could store just one mip level, for the greatest possible savings in size.
As for whether it uses it or not... hmm... ya know, I'm pretty sure it does. I saw a huge improvement in performance, when I went from TGA to DDS, back in the early days with NanoBlobs. But whether "loading DDS" and "taking full advantage of mipmaps" is the same thing is a question I'm not qualified to answer, frankly, so we'd better ask the devs.
Spring doesn't handle running out of texture memory, because with OpenGL there is no way to query whether a specific texture resides in texture memory or in system memory. (There is only glAreTexturesResident(), with which one can check whether a set of textures is resident in VRAM, but I don't think Spring uses it.)
So basically you'll just get big slowdowns if you run out of texture memory, because the drivers will need to swap textures between system and video RAM over the PCIe (or AGP, on older cards) bus multiple times per frame.
EDIT: AFAICS S3O should work fine with DDS, and I'm sure mipmapping is enabled for non-DDS textures. For DDS textures I noticed that Spring doesn't set GL_TEXTURE_MIN_FILTER so it then depends on the default setting for GL_COMPRESSED_RGBA_S3TC_DXT1_EXT textures. I have no clue whether that has mipmaps enabled or disabled by default.
I'd say, just make a unit with a way too big checkerboard-pattern texture and test for yourself. If mipmapping isn't on (for either type of texture) it is a bug.
Code:
/* what Spring ought to set for DDS textures: trilinear minification with mipmaps */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
EDIT 2: I think it is a bug; I see nothing in the spec about a change in the default mipmapping settings.