Framerate, after glGet changes
At least on my side, with my 7800GT the one glGet call removed (IIRC, there were three of them) seems to have had an enormous impact on framerate. I will do more scientific testing tonight, but I did a test last night with some very high-poly models, 4096 shadowmap, reflect / refract water, and was getting over 70FPS, with no drops, period. This wasn't the case before. I didn't crank everything up to max, I'll look at a series tonight.
Methinks that the glGet stuff might be a really big deal. More when I have done more rigorous testing, I just wanted to see if other people using the SVN builds could do some static tests vs. 0.75b2, to compare performance, especially with GeForce 7600s or higher chipsets...
I'm just using the auto-installers from the buildbot, which I've been installing about every other day for the last three weeks, so that I can test features as we get closer to release. Like I always do, I'm basically testing stuff and trying to cram the parts I like into what I'm working on as fast as possible 
And, mind ye, maybe it's just that I picked some lucky setup, or the stars aligned, or something- I have not taken it through a full battery of real tests, was just screwing around with some stuff I'm doing with the whole model-archive project. So, maybe I'm wrong. But I was rather surprised when I saw the FPS counter not moving, with over 100K triangles, reflect, glowmap, 4096 shadows, etc.
Low CPU loading, of course, but meh, that's a separate issue entirely... although this is all starting to make me wonder whether the whole issue of timing models needs to be revisited... if glGet calls turn out to actually be really significant, it could also be possible that the lockstep timing on the frames between frames sent (IIRC, isn't Spring just sending 5/second, out of 30 cycled through the CPU) is causing a lot of the "CPU lag"- it's a problem of pileups, rather than total loading, and might be alleviated if certain events occurred every 60th of a second, some occurred every 45th of a second, etc., etc., to distribute the load better. But that's just a theory, and probably a stupid one. First let's see if things really have changed much.

So, erm, you didn't really test removing a glGet call? (cause that's what I understood)
As far as I have seen no one has changed the glGet calls in the repository, so I guess then either the stars were aligned or GCC 4.2.1 just compiles faster code than GCC 4.2 (I don't recall any commits specifically targeting optimization)
kujeger wrote:I'm doing a recompile of svn rev. 4890 and 0.75b2 now, I'll do a couple of tests and see if there's much difference for me.
Ah cool!
Last edited by Tobi on 28 Nov 2007, 21:54, edited 1 time in total.
KK. Let me know what environment you test with, and make sure to just do static tests with low CPU loads, I'm just looking at raw rendering speed.
Kloot removed one glGet... lemme check SVN, I saw it, then I went and got that build... oh, nm, we're on the same page now... I swear I wasn't just imagining that, I'd been watching for it since the topic came up. Also, the funky place where instead of a glGet, it appears that Gabba / somebody manually did a matrix calc cpu-side then returned the results might be worth looking at again, since the general agreement in the thread I read back from 2005 was that that was much faster than doing it on the GPU, in the vast majority of cases... but it looks like it was done in one place, but not others, where it appears to me that the code's doing very similar stuff. Probably just me not understanding what it's doing, but it could be significant.
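For reference, the "do the matrix calc CPU-side" idea is roughly this: instead of reading the current matrix back from the driver with glGetDoublev(GL_MODELVIEW_MATRIX, ...), which can stall the pipeline, keep your own copy and multiply it yourself. A minimal column-major 4x4 multiply (standalone sketch, not Spring's actual code):

```cpp
#include <array>

// Column-major 4x4 matrix, same memory layout OpenGL uses.
using Mat4 = std::array<double, 16>;

// c = a * b, computed on the CPU so no glGetDoublev() readback is needed.
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 c{};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                c[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
    return c;
}

Mat4 identity() {
    Mat4 m{};
    m[0] = m[5] = m[10] = m[15] = 1.0;
    return m;
}
```

The pattern is to update your CPU-side copy with mul() everywhere the code would call glMultMatrix/glTranslate/etc., and then read your own copy whenever the current matrix is needed, never the driver's.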
Last edited by Argh on 28 Nov 2007, 21:58, edited 2 times in total.
I'm guessing he's referring to:
Tobi wrote:So, erm, you didn't really test removing a glGet call? (cause that's what I understood)
As far as I have seen no one has changed the glGet calls in the repository, so I guess then either the stars were aligned or GCC 4.2.1 just compiles faster code than GCC 4.2 (I don't recall any commits specifically targeting optimization)
Code:
4874 kloot
factor out some glGetDoublev() calls (may be faster on certain GPU architectures)
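A hedged guess at what "factoring out" such calls looks like in practice: fetch the expensive value at most once per frame and hand every call site the cached copy, instead of hitting glGetDoublev() each time. Sketch with a stand-in fetcher (names are hypothetical, not from the Spring tree):

```cpp
#include <functional>
#include <utility>

// Hypothetical cache: fetch an expensive value (e.g. a GL matrix readback)
// at most once per frame, no matter how many call sites need it.
template <typename T>
class PerFrameCache {
public:
    explicit PerFrameCache(std::function<T()> fetch) : fetch_(std::move(fetch)) {}

    const T& get(int frame) {
        if (frame != cachedFrame_) {  // stale: refresh exactly once this frame
            value_ = fetch_();
            cachedFrame_ = frame;
        }
        return value_;                // hot path: no fetch at all
    }

private:
    std::function<T()> fetch_;
    T value_{};
    int cachedFrame_ = -1;
};
```

Wrap the glGetDoublev(GL_MODELVIEW_MATRIX, ...) readback in the fetch lambda and ten call sites per frame collapse into one readback, which is presumably why this "may be faster on certain GPU architectures".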
I tried starting the Islands at War map with reflective+refractive water plus other high/medium settings and centered at the oil derrick or whatever it is in the middle of the map at a middle distance. It's not exactly a stress test of the GPU, but anyway:
It didn't show any difference at all between the versions, all settings were the exact same (including no lua widgets or anything activated).
gcc is 4.2.1, c2duo@3ghz, 2gb ram and a 8800gts card.
I can perhaps try to play a couple of replays later if there's any point.
Argh wrote:What card / chipset are you using?
kujeger wrote:gcc is 4.2.1, c2duo@3ghz, 2gb ram and a 8800gts card.
Tobi wrote:Maybe you could also try with threaded optimizations ON, instead of off. And then compare both versions.
Is that even possible in Linux? Unless of course you're referring to Argh.