I have a world with hundreds of simple objects on screen at once; most have no more than 40-50 vertices. The vertex coordinates for these objects are very large (planetary scale), so I use the 'floating origin' method to draw my data: the actual coordinates are kept close to the origin, and a translation moves each object out to where it belongs.
Based on the user's position, these objects are dynamically created and destroyed; otherwise they're static. With my previous implementation I was drawing each object separately, and the result was a significant slowdown (I'm targeting mobile devices). Since the total number of triangles on screen is fairly trivial (even though there are many objects), I assume the slowdown comes from the sheer number of draw calls.
So I tried packing my objects into a single buffer to draw them all with one call. This works, but I still need the floating-origin transform to position the geometry: the final transform matrix is calculated on the CPU (to maintain precision) and then sent to my vertex shader. The problem is that applying the floating-origin fix is CPU intensive:
If I use a single floating origin for the vertex buffer:
- (good) only need to calculate one transform matrix
- (bad) I need to perform an offset subtraction on every vertex in the buffer. At the upper limit, scene complexity reaches 50k+ vertices, so this is expensive.
If I use multiple objects:
- (sorta good) I only need to compute as many matrices as there are objects (still really expensive, though, since there are a ton of objects)
- (bad) I need to pass the shader a large array of matrix uniforms
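For context, the per-vertex subtraction in the first option is basically this (a sketch, not my real code; assume a tightly packed xyz float buffer with the source positions kept in double so repeated rebases don't accumulate error):

```c
#include <stddef.h>

/* Single-origin fix: when the floating origin moves, every vertex in the
 * packed buffer gets rebased on the CPU before re-upload. */
void rebase_vertices(float *verts, const double *world, size_t count,
                     const double origin[3]) {
    for (size_t i = 0; i < count; i++) {
        /* Subtract in double, then narrow to float for the GPU. */
        verts[3 * i + 0] = (float)(world[3 * i + 0] - origin[0]);
        verts[3 * i + 1] = (float)(world[3 * i + 1] - origin[1]);
        verts[3 * i + 2] = (float)(world[3 * i + 2] - origin[2]);
    }
    /* ...then re-upload, e.g. with glBufferSubData. O(n) on every
     * origin change, which is the expensive part at 50k+ vertices. */
}
```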
I feel like I'm stuck. Are there other ways to maintain precision, or a better way to apply the floating origin method? The problem boils down to "how do I draw a ton of simple geometry with really large numerical coordinates in OpenGL ES 2?"... I'd appreciate any advice.
EDIT
Just some clarification on the offset: every vertex is always a valid float. The maximum distance a vertex can be from the origin is the maximum radius of the earth, plus or minus a few thousand meters (assume 1 unit = 1 meter). The distances are never so large that they can't be represented by floats at all.
The objects themselves are the things you'd find on a map of the earth's surface: buildings, roads, etc. So relative to the coordinate magnitudes being used, the objects are very small.
I'm not confident on the floating origin stuff, but here's my take...
When you calculate the MVP on the CPU with the vertex offset folded in, you're computing, in double precision, a transform that takes an offset vertex position (say it's (10.1, 10.2, 10.3)) all the way to OpenGL's clip space (which becomes +/- 1 in x, y, and z after the perspective divide). You then pass the shader a vertex that's close to the origin (so no huge loss of precision) along with a transform matrix that was calculated in double precision.
That's way better than sending the shader a vertex with much larger values, since you'd lose a bunch of precision right away.
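To illustrate that last point with hypothetical numbers: subtracting in double before narrowing to float is what saves you. Narrow first and the small relative value is already gone.

```c
/* Narrow to float first, subtract second: the sub-meter detail is
 * rounded off before the subtraction even happens. */
float offset_after_narrow(double world, double origin) {
    return (float)world - (float)origin;
}

/* Subtract in double first (the floating-origin way), then narrow:
 * the small relative value survives exactly. */
float offset_in_double(double world, double origin) {
    return (float)(world - origin);
}
```

With world = 6378137.125 and origin = 6378137.0 (points 12.5 cm apart near the earth's surface), the first returns 0.0f, because floats step by 0.5 at that magnitude, while the second returns exactly 0.125f.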