I've been working on a different open-source RTS project; it's not announced yet (it will be on SourceForge soon) and not as far along as Spring. However, in the spirit of open-source sharing, I'm going to release the HPI file code so that Spring can take advantage of it in the meantime.
The code is all C++; it reads a TA (not Kingdoms) HPI file and dumps the contents to disk. It'll take all of five minutes to change it to dump a particular file to memory. Remove the single _mkdir() call and it'll be cross-platform. Oh, you'll also need a header defining the UINT32/UINT16/UINT8 types. Enjoy: http://sourceforge.net/project/showfile ... _id=325204
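For anyone who wants the gist before downloading: here's a minimal sketch of the core of an HPI reader. It follows the widely circulated HPI format documentation rather than my actual code, so the struct and function names here are illustrative. The header gives you a directory offset and a key; if the key is non-zero, every byte is XOR-scrambled against its absolute file position.

#include <cstdio>

typedef unsigned int UINT32; // normally from the typedef header mentioned above

#pragma pack(push, 1)
struct HPIHeader {
    char   marker[4];      // "HAPI"
    UINT32 saveMarker;     // non-zero for saved games
    UINT32 directorySize;  // size of the directory block
    UINT32 headerKey;      // 0 means the archive is not encrypted
    UINT32 start;          // file offset of the directory
};
#pragma pack(pop)

// The documented key derivation: key = ~((k * 4) | (k >> 6)).
static UINT32 DeriveKey(UINT32 k) { return ~((k * 4) | (k >> 6)); }

// Read a block and undo TA's XOR scheme, which mixes each byte
// with its position in the file.
static int ReadAndDecrypt(FILE* fp, UINT32 key, long fpos,
                          unsigned char* buf, int size)
{
    fseek(fp, fpos, SEEK_SET);
    int got = (int)fread(buf, 1, size, fp);
    for (int i = 0; key && i < got; ++i) {
        int tkey = (int)(fpos + i) ^ (int)key;
        buf[i] = (unsigned char)(tkey ^ ~buf[i]);
    }
    return got;
}

Usage is just: read the header, compute key = header.headerKey ? DeriveKey(header.headerKey) : 0, then ReadAndDecrypt the directory block at header.start and walk it.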
Now, just to inject my 2 cents into the talk about cross-platform floating-point simulation:
My general thinking along these lines is that, for graphics-related purposes, each client can do floating-point however it wants. When it comes to things like damage taken and building progress, these will have to be dictated either by the server or by the client running that unit. This will increase traffic over an input-only system, but as far as I can tell it is necessary, because:
It will be IMPOSSIBLE to keep the simulations in sync on different systems.
First of all, you have what some people have already mentioned: GCC will compile things differently from MSVC, and the x87 code will diverge. Somebody suggested using GCC to compile on Windows too, except that can also generate different code from a Linux GCC build because of different ABIs. It gets even worse when you want to support 64-bit builds or PowerPC: the 64-bit compilers won't generate x87 instructions, doing all floating-point math in SSE/SSE2, so you lose the 80-bit intermediaries and things will diverge. PowerPC will do something similar with AltiVec. Of course, if you really, really wanted, you could decide to support only x87 and write every piece of floating-point code in x87 assembler... :)
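If you want to see the 80-bit problem for yourself, here's a tiny demonstration program (my own, nothing to do with Spring's code): built so intermediates stay in 80-bit x87 registers it prints 1, while a build that rounds every step to 64-bit doubles (SSE2, or x87 with forced intermediate stores) prints 0.

#include <cstdio>

int main()
{
    volatile double a = 1e16;  // volatile stops the compiler from
    volatile double b = 1.0;   // folding the sums at compile time
    volatile double c = -1e16;

    // 1e16 + 1.0 is exact in an 80-bit x87 register, but rounds back
    // down to 1e16 as a 64-bit double, so the final result differs.
    double r = (a + b) + c;
    printf("%g\n", r);
    return 0;
}

Two machines whose damage calculations disagree by even that one unit will eventually desync, which is why the per-unit results have to come from a single authority.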
Anyway, I'd like to finish by congratulating the TA Spring developers for a great project, and for being the first to actually come out with a valid TA replacement (although I may have to start some flame wars if somebody doesn't give us the option for a TA-style mouse interface soon :P )
EDIT: Updated URL
On HPI files and cross-platform support
Hi!
Speaking for myself, this code interests me quite a lot, as it would solve part of our current porting problems. I'm a little surprised that nobody else has replied to this announcement, though!
The port is too far off right now to consider integrating your code, but I urge SY to consider this for integration in the mainstream CVS, to ease our future porting efforts.
Thanks for sharing this with us! Hopefully we'll be able to contribute more in the future, when your project is announced :)
The real task now is to figure out which compression system to use.
We need more extensive testing of compression programs to see which one would do what we want.
First, we would need the mappers to create a very generic map that is as big as they would ever need (this is important). Then, we can get an idea of how big the files might be.
After this is done, someone needs to do tests of compressing the files and seeing which compression system can get it down to the size we want (which we really don't know right now). We would need to compare them and then the SY's could make a reasonable decision on it.
We simply need more data on the various things that are available to us (and there are quite a few).
7-Zip, zziplib, zlib, gz, bz2, etc.
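To get that data, something like the following harness would do. This is a rough sketch using zlib, since that's the one API I can write from memory; the other libraries would need their own equivalents. Run it over a representative map file and you get ratio, compression time, and decompression time at each level:

#include <cstdio>
#include <ctime>
#include <vector>
#include <zlib.h>

int main(int argc, char** argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <mapfile>\n", argv[0]); return 1; }

    FILE* fp = fopen(argv[1], "rb");
    if (!fp) { perror(argv[1]); return 1; }
    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);
    fseek(fp, 0, SEEK_SET);
    std::vector<unsigned char> input(size);
    fread(&input[0], 1, size, fp);
    fclose(fp);

    for (int level = 1; level <= 9; ++level) {
        uLongf packedLen = compressBound(size);
        std::vector<unsigned char> packed(packedLen);

        clock_t t0 = clock();
        compress2(&packed[0], &packedLen, &input[0], size, level);
        clock_t t1 = clock();

        uLongf outLen = size;
        std::vector<unsigned char> out(size);
        uncompress(&out[0], &outLen, &packed[0], packedLen);
        clock_t t2 = clock();

        printf("level %d: %ld -> %lu bytes, pack %.2fs, unpack %.2fs\n",
               level, size, (unsigned long)packedLen,
               double(t1 - t0) / CLOCKS_PER_SEC,
               double(t2 - t1) / CLOCKS_PER_SEC);
    }
    return 0;
}

Ratio alone won't settle it; the unpack times matter just as much, since they translate directly into load times.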
Related to what ace07 said:
I think the following table in the 7-Zip help can give tips and help in understanding how to choose the best one, as it contains the memory needed, speeds, and so on (you need to scroll one page down)...
http://3web.dkm.cz/myie2cz/tas/7ziphelp.htm
Dwarden wrote: Related to what ace07 said: I think the following table in the 7-Zip help can give tips and help in understanding how to choose the best one... http://3web.dkm.cz/myie2cz/tas/7ziphelp.htm
What I was saying is to conduct primary research; as a result of that research, we can plan which compression system to use based on the biggest map that mappers want to make. We could kill two birds with one stone this way:
1) We could finalize the largest map size that mappers "want"
2) We could choose which compression system would be best for that map size
We would need to get input from the mappers first though.

Things are sometimes harder in Linux, but usually it's down to the developer's choice to develop for a certain platform. If a developer only thinks "Windowswindowswindows!!1" while developing something, porting the project will be extremely difficult, because the developer focused on only one platform. That is what we are trying to avoid.
We don't want any developers in this project saying "Windowswindowswindows!!1" while they develop. It just makes more work for us, because we have to fix up their code.
Again, my problem with 7-Zip is that it might not be worth the effort. Other compression schemes yield slightly worse results (the differences are often negligible), but they already have good, portable code built. The improvement over those other schemes may look impressive, but is shaving off those few bytes going to justify the work it might take to implement it?
That is what I would want to find out with research. It would be worlds easier to implement Zip, or something better than Zip like bz2 or gz.
Either way, I think it should be up to the SY's to decide ultimately. It is their project, and I don't want to go running around acting like I have a say in any of this.
Compression format choices
Realistically, the choice of compression format should boil down to decompression speed, not compression ratio. It already takes long enough to load levels in games these days; the last thing the user wants is to save a few hundred megs on their 80GB drive in exchange for longer level loads because everything has been uber-compressed with bzip2. Doom 3 uses plain old Zip files as its .pak files; the beauty of these is that they use Deflate compression (public domain, lots of free code (zlib), and you can decompress it faster than you can read it off the disk).
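As a concrete illustration, pulling one file out of a Zip pack takes only a few calls with minizip, the unzip.h contrib that ships with zlib. A sketch, with error handling trimmed and a function name of my own invention:

#include <vector>
#include <unzip.h>  // minizip, from zlib's contrib directory

// Fetch one file out of a Zip pack into memory.
bool LoadFromPak(const char* pakPath, const char* name,
                 std::vector<unsigned char>& out)
{
    unzFile zf = unzOpen(pakPath);
    if (!zf) return false;

    bool ok = false;
    if (unzLocateFile(zf, name, 2 /* case-insensitive */) == UNZ_OK &&
        unzOpenCurrentFile(zf) == UNZ_OK)
    {
        unz_file_info info;
        unzGetCurrentFileInfo(zf, &info, 0, 0, 0, 0, 0, 0);
        out.resize(info.uncompressed_size);
        // Inflates on the fly; only this one file gets decompressed.
        ok = unzReadCurrentFile(zf, &out[0], (unsigned)out.size())
             == (int)out.size();
        unzCloseCurrentFile(zf);
    }
    unzClose(zf);
    return ok;
}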
Now, distribution is another matter entirely; obviously, when downloading, we want the best compression possible. The best compression comes from creating a solid archive (i.e. all files are compressed together, but you can't access one without decompressing them all). That's not suitable for access from the game, since you need to get at files individually. Note that zipping a Zip file gives similar results (though not quite as good) to creating a solid archive.
Given this, I propose the best method would be to use basic Zip for the pack files, and then bzip2 those files for internet distribution. In fact, this is the method the RTS I'm working on will be using. And if you wait a week or so, the code that lets the programmer access all this in a simple, transparent manner should be ready (I hope), at which time I'll post it up with the HPI stuff.
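For the curious, "simple and transparent" means roughly the shape below. This is only an illustration of the idea with made-up names, not the actual interface I'll be releasing. The game asks one object for a path and never learns whether the bytes came from a loose file, a Zip pack, or an HPI archive:

#include <string>
#include <vector>

// One source of files: a directory on disk, a Zip pack, an HPI archive...
class FileSource {
public:
    virtual ~FileSource() {}
    // Fill 'out' and return true if this source contains 'path'.
    virtual bool Read(const std::string& path,
                      std::vector<unsigned char>& out) = 0;
};

class VirtualFileSystem {
public:
    // Later mounts take priority, so a loose file can override a packed one.
    void Mount(FileSource* src) { sources.insert(sources.begin(), src); }

    bool Read(const std::string& path, std::vector<unsigned char>& out) {
        for (size_t i = 0; i < sources.size(); ++i)
            if (sources[i]->Read(path, out))
                return true;
        return false;
    }

private:
    std::vector<FileSource*> sources;
};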
Re: On HPI files and cross-platform support
Relentless wrote: Enjoy: http://members.optusnet.com.au/~wesleyhill/hpifile.zip
I love you.
ace07 wrote: 1) We could finalize the largest map size that mappers "want" 2) We could choose which compression system would be best for that map size. We would need to get input from the mappers first though.
I dunno about the mappers, but we are having discussions about what size the developers want to support, and I would like to steer people towards 128 x 128 OTA-style. I think the largest map that the gameplay can reasonably support, without making a game take more than 16 hours, is smaller than that, but the code needs things to be a power of two, and 64 x 64 is not big enough. Currently we are limited to 16 x 16, and I know mappers want to up that a bunch.