Fast Gravitational field formula

krogothe
AI Developer
Posts: 1050
Joined: 14 Nov 2005, 17:07


Post by krogothe »

I expect sub to know this well, but probably other people here do too, so I might as well ask in the AI forum:
I want a formula/algorithm that calculates the gravitational force on asteroids quickly. I could compute the force on an asteroid by evaluating G*(M1*M2)/D^2 against every other asteroid and summing the force vectors, where M1 and M2 are the masses, G a constant and D the distance between the two, but with thousands of asteroids the CPU time grows quadratically with the count. Does anyone know of approximations that compute faster (maybe by clustering)?
thanks!
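
For concreteness, a minimal brute-force sketch of that direct summation (C++, 2D; the struct, constants, and softening term are illustrative assumptions, not anything from this thread):

```cpp
// Brute-force O(N^2) direct summation: evaluate every pair once and
// accumulate the force vectors. This is the baseline the replies below
// try to beat.
#include <cmath>
#include <vector>

struct Asteroid {
    double x, y;    // position
    double mass;
    double fx, fy;  // accumulated force
};

const double G = 6.674e-11;  // or any game-tuned constant

void computeForcesDirect(std::vector<Asteroid>& bodies) {
    for (auto& b : bodies) { b.fx = 0.0; b.fy = 0.0; }
    for (size_t i = 0; i < bodies.size(); ++i) {
        for (size_t j = i + 1; j < bodies.size(); ++j) {
            double dx = bodies[j].x - bodies[i].x;
            double dy = bodies[j].y - bodies[i].y;
            double d2 = dx * dx + dy * dy + 1e-9;  // softening avoids div-by-zero
            double d  = std::sqrt(d2);
            double f  = G * bodies[i].mass * bodies[j].mass / d2;
            // Force on i points toward j; Newton's third law gives j's share.
            bodies[i].fx += f * dx / d;  bodies[i].fy += f * dy / d;
            bodies[j].fx -= f * dx / d;  bodies[j].fy -= f * dy / d;
        }
    }
}
```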
jcnossen
Former Engine Dev
Posts: 2440
Joined: 05 Jun 2005, 19:13

Post by jcnossen »

I'd think clustering too... put all the asteroids in a grid, calculate the gravity normally for the nearby grid cells, and compute a total mass (and center of mass) for every grid cell, which you can use for the cells that are further away.

You could optimize it further by using an octree (for 3D) or a quadtree (for 2D) instead of a grid...
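
A rough sketch of that grid idea, under assumed parameters (cell count, world size, and the "adjacent cells are near" rule are all illustrative choices):

```cpp
// Grid clustering: bucket bodies into cells, store each cell's total mass
// and center of mass, then evaluate neighbouring cells exactly and treat
// far cells as single point masses.
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <vector>

struct Body { double x, y, mass, fx = 0, fy = 0; };
struct Cell { double mass = 0, cx = 0, cy = 0; std::vector<int> members; };

const double G = 1.0;        // game-tuned constant (assumption)
const int    GRID = 16;      // cells per axis (assumption)
const double WORLD = 1000.0; // world edge length (assumption)

void computeForcesGrid(std::vector<Body>& bodies) {
    std::vector<Cell> cells(GRID * GRID);
    // Bucket bodies; accumulate per-cell mass and mass-weighted position.
    for (int i = 0; i < (int)bodies.size(); ++i) {
        int gx = std::min(GRID - 1, (int)(bodies[i].x / WORLD * GRID));
        int gy = std::min(GRID - 1, (int)(bodies[i].y / WORLD * GRID));
        Cell& c = cells[gy * GRID + gx];
        c.mass += bodies[i].mass;
        c.cx   += bodies[i].mass * bodies[i].x;
        c.cy   += bodies[i].mass * bodies[i].y;
        c.members.push_back(i);
    }
    for (Cell& c : cells)
        if (c.mass > 0) { c.cx /= c.mass; c.cy /= c.mass; }

    auto addForce = [](Body& b, double px, double py, double m) {
        double dx = px - b.x, dy = py - b.y;
        double d2 = dx * dx + dy * dy + 1e-9;
        double d  = std::sqrt(d2);
        double f  = G * b.mass * m / d2;
        b.fx += f * dx / d;  b.fy += f * dy / d;
    };

    for (int i = 0; i < (int)bodies.size(); ++i) {
        Body& b = bodies[i];
        b.fx = b.fy = 0;
        int gx = std::min(GRID - 1, (int)(b.x / WORLD * GRID));
        int gy = std::min(GRID - 1, (int)(b.y / WORLD * GRID));
        for (int cy = 0; cy < GRID; ++cy) {
            for (int cx = 0; cx < GRID; ++cx) {
                const Cell& c = cells[cy * GRID + cx];
                if (c.mass == 0) continue;
                if (std::abs(cx - gx) <= 1 && std::abs(cy - gy) <= 1) {
                    // Near cell: evaluate each member exactly.
                    for (int j : c.members)
                        if (j != i)
                            addForce(b, bodies[j].x, bodies[j].y, bodies[j].mass);
                } else {
                    // Far cell: one point mass at its center of mass.
                    addForce(b, c.cx, c.cy, c.mass);
                }
            }
        }
    }
}
```

The tree version discussed below generalizes this by making the "far enough to approximate" decision adaptive instead of fixed.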
IMSabbel
Posts: 747
Joined: 30 Jul 2005, 13:29

Post by IMSabbel »

That's a pretty standard problem (search for "n-body").
The simple direct approach is pretty fast up to maybe 1000 or so particles, then the O(N^2/2) scaling really starts to drag it down.
You CAN speed it up quite a bit if you only use single precision and use SSE/3DNow! (there is an instruction for a fast inverse square root that's JUST made for stuff like this).
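
For example (a hedged sketch, not from the post): SSE's rsqrtps computes a ~12-bit approximation of 1/sqrt(x) for four floats at once, and one Newton-Raphson step refines it to near full single precision:

```cpp
// Fast inverse square root via SSE. _mm_rsqrt_ps is the hardware
// approximation; the Newton-Raphson step y' = 0.5*y*(3 - x*y*y)
// roughly doubles its precision. Requires an SSE-capable x86 CPU.
#include <xmmintrin.h>

static inline __m128 fastInvSqrt(__m128 x) {
    __m128 y     = _mm_rsqrt_ps(x);              // rough 1/sqrt(x), 4 lanes
    __m128 half  = _mm_set1_ps(0.5f);
    __m128 three = _mm_set1_ps(3.0f);
    __m128 t     = _mm_sub_ps(three, _mm_mul_ps(x, _mm_mul_ps(y, y)));
    return _mm_mul_ps(_mm_mul_ps(half, y), t);   // 0.5 * y * (3 - x*y*y)
}
```

In a force loop this yields 1/d for four pairs per call; squaring it gives the 1/d^2 term without ever calling sqrt or divide.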


Otherwise, there are two main approaches: particle-mesh (PM) and tree-based (Barnes-Hut). The basic principles are as follows:

With Barnes-Hut, you use an octree. Just build it recursively, then traverse it. Store the mass sum of all sub-particles in each node, and while traversing, compare the size of the subnode (which is totalsize/2^n, with n being the depth) to the distance from your particle. If the ratio of size to distance is lower than a threshold, just use the mass sum for the calculation and skip evaluating the individual items beneath it.
This will be slightly inaccurate (as you more or less average over distant particles), but not too much so, _and_ it usually scales as O(N log N), which can easily make it hundreds of times faster than the direct method at higher particle counts.
One problem here is code locality. The pointer chasing of tree traversal just EATS cache misses. My code became about 15 times faster when I flattened the tree into an array before traversal (which resulted in only forward jumps into precached data).
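
A hedged 2D sketch of that tree code (a quadtree for brevity; the 3D octree is the same idea with 8 children — all names and the theta value are illustrative):

```cpp
// Barnes-Hut: build a quadtree storing aggregate mass and center of mass
// per node, then approximate distant subtrees by a single point mass
// whenever size/distance < theta.
#include <cmath>
#include <memory>
#include <vector>

const double G = 1.0;  // game-tuned constant (assumption)

struct Body { double x, y, mass, fx = 0, fy = 0; };

// One square node covering [x0, x0+size) x [y0, y0+size).
struct Node {
    double x0, y0, size;
    double mass = 0, cx = 0, cy = 0;  // aggregate mass and center of mass
    const Body* single = nullptr;     // set while the node holds one body
    std::unique_ptr<Node> child[4];
    int count = 0;

    Node(double px, double py, double s) : x0(px), y0(py), size(s) {}

    void insert(const Body* b) {      // assumes distinct positions
        cx = (cx * mass + b->x * b->mass) / (mass + b->mass);
        cy = (cy * mass + b->y * b->mass) / (mass + b->mass);
        mass += b->mass;
        if (++count == 1) { single = b; return; }            // first body: leaf
        if (single) { pushDown(single); single = nullptr; }  // split the leaf
        pushDown(b);
    }

    void pushDown(const Body* b) {
        double h = size / 2;
        int q = (b->x >= x0 + h ? 1 : 0) | (b->y >= y0 + h ? 2 : 0);
        if (!child[q])
            child[q] = std::make_unique<Node>(x0 + (q & 1 ? h : 0),
                                              y0 + (q & 2 ? h : 0), h);
        child[q]->insert(b);
    }

    void addForce(Body& b, double theta) const {
        if (count == 0 || single == &b) return;  // empty node, or self
        double dx = cx - b.x, dy = cy - b.y;
        double d  = std::sqrt(dx * dx + dy * dy) + 1e-9;
        // Opening criterion: a single body, or far enough that size/d < theta.
        if (single || size / d < theta) {
            double f = G * b.mass * mass / (d * d);
            b.fx += f * dx / d;
            b.fy += f * dy / d;
            return;
        }
        for (const auto& c : child)
            if (c) c->addForce(b, theta);
    }
};

void computeForcesBH(std::vector<Body>& bodies, double worldSize,
                     double theta = 0.5) {
    Node root(0, 0, worldSize);
    for (auto& b : bodies) root.insert(&b);
    for (auto& b : bodies) { b.fx = b.fy = 0; root.addForce(b, theta); }
}
```

The flattening trick the post describes would replace the unique_ptr children with indices into one contiguous array, built once the tree is complete, so traversal only jumps forward through cached memory.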

The particle-mesh method works by adding the potentials of all bodies together and then evaluating the force on every body in the resulting field. Since it's not really possible to do this exactly (because of memory constraints), you create an adaptive mesh over the space and use standard methods for the evaluation. Optimal scaling would be O(N), but the constant factor can be quite high, plus it's algorithmically MUCH more complex than the tree-based one.
BUT this model is also quite good at capturing stuff like gases or other non-colliding (dark) matter.
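
A toy version of the mesh idea (hedged: it deposits mass with cloud-in-cell weights and evaluates the grid-to-grid field by direct summation; a real PM code would solve the Poisson equation on the mesh with an FFT, and the grid size here is an arbitrary choice):

```cpp
// Simplified particle-mesh: (1) spread each body's mass over its four
// nearest grid cells, (2) compute the acceleration field at every cell
// center, (3) interpolate the field back to the bodies.
#include <algorithm>
#include <cmath>
#include <vector>

const double G = 1.0;
const int    M = 32;          // grid cells per axis (assumption)
const double WORLD = 1000.0;  // world edge length (assumption)
const double H = WORLD / M;   // cell width

struct Body { double x, y, mass, ax = 0, ay = 0; };

void computeAccelMesh(std::vector<Body>& bodies) {
    std::vector<double> rho(M * M, 0.0);  // mass per cell
    // 1) Cloud-in-cell deposit.
    for (const Body& b : bodies) {
        double gx = b.x / H - 0.5, gy = b.y / H - 0.5;
        int ix = (int)std::floor(gx), iy = (int)std::floor(gy);
        double fx = gx - ix, fy = gy - iy;
        for (int dy = 0; dy <= 1; ++dy)
            for (int dx = 0; dx <= 1; ++dx) {
                int cx = std::min(std::max(ix + dx, 0), M - 1);
                int cy = std::min(std::max(iy + dy, 0), M - 1);
                rho[cy * M + cx] += (dx ? fx : 1 - fx) * (dy ? fy : 1 - fy) * b.mass;
            }
    }
    // 2) Acceleration field at cell centers (direct sum over cells;
    //    this is the step an FFT Poisson solver would replace).
    std::vector<double> ax(M * M, 0.0), ay(M * M, 0.0);
    for (int i = 0; i < M * M; ++i) {
        double xi = (i % M + 0.5) * H, yi = (i / M + 0.5) * H;
        for (int j = 0; j < M * M; ++j) {
            if (i == j || rho[j] == 0) continue;
            double dx = (j % M + 0.5) * H - xi, dy = (j / M + 0.5) * H - yi;
            double d2 = dx * dx + dy * dy, d = std::sqrt(d2);
            ax[i] += G * rho[j] * dx / (d2 * d);
            ay[i] += G * rho[j] * dy / (d2 * d);
        }
    }
    // 3) Gather: same bilinear weights, field back to bodies.
    for (Body& b : bodies) {
        double gx = b.x / H - 0.5, gy = b.y / H - 0.5;
        int ix = (int)std::floor(gx), iy = (int)std::floor(gy);
        double fx = gx - ix, fy = gy - iy;
        b.ax = b.ay = 0;
        for (int dy = 0; dy <= 1; ++dy)
            for (int dx = 0; dx <= 1; ++dx) {
                int cx = std::min(std::max(ix + dx, 0), M - 1);
                int cy = std::min(std::max(iy + dy, 0), M - 1);
                double w = (dx ? fx : 1 - fx) * (dy ? fy : 1 - fy);
                b.ax += w * ax[cy * M + cx];
                b.ay += w * ay[cy * M + cx];
            }
    }
}
```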

Also, for the simple method (and, with a bit more difficulty, for the others), quite a speedup can be achieved by making the integration step dynamic. An easy way (sketched below):
While evaluating the forces, store with each particle the distance to its nearest neighbour, divided by the particle's velocity. At the end, find the lowest ratio (i.e. the particle that could reach another one the soonest and so cause the most inaccuracy by moving), and set the timestep to a certain fraction of it (like 1% or so; 10% if you want it quick).
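
A minimal sketch of that global timestep rule, assuming the force pass has already recorded each particle's nearest-neighbour distance (the field name and default fraction are illustrative):

```cpp
// Global adaptive timestep: each body's nearest-neighbour distance divided
// by its speed estimates how soon it could reach another body; take a
// small fraction of the worst case as the shared timestep.
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Body {
    double x, y, vx, vy;
    double nearest = std::numeric_limits<double>::max();  // set in force pass
};

double pickTimestep(const std::vector<Body>& bodies, double fraction = 0.01) {
    double worst = std::numeric_limits<double>::max();
    for (const Body& b : bodies) {
        double speed = std::sqrt(b.vx * b.vx + b.vy * b.vy);
        if (speed > 0.0)
            worst = std::min(worst, b.nearest / speed);  // time to cross the gap
    }
    return worst * fraction;  // 1% per the post; 10% for a quick-and-dirty run
}
```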

If you have really many particles, you can also use per-particle integration steps. That's a bit more complicated to keep synchronized, especially with the tree/mesh codes (where recalculating the tree takes significant time), but it _can_ be done, and it _can_ speed things up dramatically (so a close binary doesn't slow down 50k stars in the halo).

But that's just a quick introduction to the subject. The tree code should be doable in a day if you can program (including debugging :), and for non-scientific stuff you can just turn down the accuracy parameter to make it faster and less accurate until it fits your needs ...

Or you could buy a GRAPE-6 :D
submarine
AI Developer
Posts: 834
Joined: 31 Jan 2005, 20:04

Post by submarine »

lol the famous n-body problem :)

Another thing (if they are not packed too close together) would be neglecting all asteroids that are too far away, since gravitational force decreases with 1/distance^2 - this is not really a good solution at all, but it's very fast to implement. It might be sufficient if you just want some "pseudo-physics" for a computer game.
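
A minimal sketch of that cutoff (the radius is an arbitrary choice; combining it with the grid idea from earlier in the thread would make the distant-pair rejection cheap instead of still scanning all pairs):

```cpp
// Pairwise summation with a hard cutoff: pairs farther apart than CUTOFF
// are skipped entirely. Cheap to add, but mass just outside the radius
// pops in and out of existence as bodies move.
#include <cmath>
#include <vector>

struct Body { double x, y, mass, fx = 0, fy = 0; };

const double G = 1.0;         // game-tuned constant (assumption)
const double CUTOFF = 200.0;  // interaction radius (assumption)

void computeForcesCutoff(std::vector<Body>& bodies) {
    const double cut2 = CUTOFF * CUTOFF;
    for (auto& b : bodies) { b.fx = 0.0; b.fy = 0.0; }
    for (size_t i = 0; i < bodies.size(); ++i) {
        for (size_t j = i + 1; j < bodies.size(); ++j) {
            double dx = bodies[j].x - bodies[i].x;
            double dy = bodies[j].y - bodies[i].y;
            double d2 = dx * dx + dy * dy;
            if (d2 > cut2) continue;  // neglect distant pairs
            d2 += 1e-9;               // softening
            double d = std::sqrt(d2);
            double f = G * bodies[i].mass * bodies[j].mass / d2;
            bodies[i].fx += f * dx / d;  bodies[i].fy += f * dy / d;
            bodies[j].fx -= f * dx / d;  bodies[j].fy -= f * dy / d;
        }
    }
}
```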
AF
AI Developer
Posts: 20687
Joined: 14 Sep 2004, 11:32

Post by AF »

It all falls into place ;D Krogothe, some of these threads remind me of my own ideas.