No GL calls are made from Sim anymore - I have verified that.
That's awesome
Do you mean the rendering should be moved to separate classes, like UnitDrawer?
Well, my initial thought is that yes, it needs to be that way, to allow several major things to be solved, such as TBN support and new model formats via new cases.
The way I see it... there should be a flow something like this (just brainstorming, obviously):
Sync sim behavior (what is happening from now until the next frame?) -->
1. Render world, base texture and self-shadow. Keep world mesh for deferred operations later, keep shadowmap -->
2. Unsync interpolation behavior (what are we doing with this Unit / Feature / CEG / Projectile from now until the next frame?) -->
3. Case modelType (what model / animation format am I?) -->
4. Mesh operations (what am I doing with my meshes?) -->
5. Resulting mesh (where are all my vertices, in worldspace?) -->
6. If we need to do TBN per triangle, do it now -->
7. Render resulting mesh, using a shader defined by the Unit / Feature / CEG / Projectile, or the current code, which should be considered fallback / legacy from here on out.
We should be moving towards handling all of these things pretty much identically; they're all just meshes in the end, after all, and should be treated as such at this point in the rendering operations.
8. Once all visible meshes are available, perform any deferred operations (lighting, shadows, reflections, etc.).
9. Perform deferred operations on the world. Keeping the world-mesh result, we can do per-facet lighting, etc. with decent speed.
10. Render any Units currently being called for via the Lua gl.Unit callout (special effects, etc.).
11. Render any other Lua elements here.
12. Render Screen elements (mainly GUI stuff here), in strict layer order (DrawScreenEffects and DrawScreen would just become DrawScreen, but the layer would determine order of operation when it's time to draw).
At that point, everything's straightened out, ideally, I think. Spring sync would no longer control mesh operations very strictly; it would simply say, "X has happened", and it would be up to the unsynced side to interpret X and act accordingly.
That framework's already there; I'm just saying that I think it should be modularized and treated in a cleaner way. Past the point of mesh interpolation, what we have is nothing more than a collection of meshes in worldspace, and they should get treated that way.
Sync operations --> model type --> interpolation and mesh creation --> shader (or old fallback, for legacy support) --> draw
Lastly, with this in play, it would be almost trivial to let Lua load arbitrary meshes during Initialize() and store the resulting display lists for use later, which would greatly help on the production side (it's way easier to design a special effect if you can build the mesh and UV-map it in a traditional modeling application than to code it by hand).