Friday, January 10, 2014

iOS OpenGL Wavefront objects

So a while ago I had a task to make a simple iOS app, which would contain a list of 3D objects and a fullscreen OpenGL view where the user could pinch-zoom, rotate and pan around the corresponding object.
As I'd never had any real experience with OpenGL, I figured the best solution would be to search for some working examples that could easily be adapted to my needs.

I found this article: (code here) and decided to use its code as a base. It's OpenGL ES 1, but hey - it's a working demo. Kind of. It works great with tiny Wavefront objects, but for objects larger than 500 KB I started to notice that it takes a while on the device (and even in the simulator) before the Wavefront object file is parsed and the object appears. In some cases, when the object file was 5-10 MB, parsing took a few minutes and the app would usually crash (either memory ran out, or the example code is not optimised enough to parse such big objects - possibly some memory leaks).

So what could I do about it? I decided to use Blender's Decimate feature (only a little, so the object would still look more or less like the original) to decrease the object size, and I also implemented an encoding/decoding feature, so the device would parse the Wavefront object file only once and afterwards load the already-parsed data, using these functions:
- (void)encodeWithCoder:(NSCoder *)encoder

- (id)initWithCoder:(NSCoder *)decoder
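
As a rough sketch (the class, property and key names here are my own, not taken verbatim from the project), caching a parsed mesh with NSCoding looks something like this:

    #import <Foundation/Foundation.h>

    // Sketch of caching a parsed mesh via NSCoding. Names are illustrative.
    @interface ParsedMesh : NSObject <NSCoding>
    @property (nonatomic, strong) NSData *vertices;   // packed GLfloat triples
    @property (nonatomic, strong) NSData *normals;
    @property (nonatomic, assign) NSUInteger faceCount;
    @end

    @implementation ParsedMesh
    - (void)encodeWithCoder:(NSCoder *)encoder {
        [encoder encodeObject:self.vertices forKey:@"vertices"];
        [encoder encodeObject:self.normals forKey:@"normals"];
        [encoder encodeInteger:self.faceCount forKey:@"faceCount"];
    }

    - (id)initWithCoder:(NSCoder *)decoder {
        if ((self = [super init])) {
            _vertices = [decoder decodeObjectForKey:@"vertices"];
            _normals = [decoder decodeObjectForKey:@"normals"];
            _faceCount = [decoder decodeIntegerForKey:@"faceCount"];
        }
        return self;
    }
    @end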

Object parsing was then done on a background thread. Because of that it took a bit longer than on the main thread, but at least the application could be used while parsing happened (for example, reading the accompanying article while the objects are prepared). So imagine: the first time you open the application, all objects are downloaded from the server, and then each object is parsed within a few seconds to five minutes (depending on its size). I could have left it like that - it was more or less working as intended, just with a delay before the objects become accessible to the user - but I decided to search for other solutions.
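The pattern itself is simple; a hedged sketch, where parseWavefrontFile: and meshReady: stand in for whatever parser and callback you use:

    // Sketch: parse once on a background queue, archive the result to disk,
    // then notify the UI on the main queue. parseWavefrontFile: and
    // meshReady: are hypothetical helper methods.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        ParsedMesh *mesh = [self parseWavefrontFile:objPath];
        [NSKeyedArchiver archiveRootObject:mesh toFile:cachePath];
        dispatch_async(dispatch_get_main_queue(), ^{
            [self meshReady:mesh];
        });
    });

On subsequent launches, [NSKeyedUnarchiver unarchiveObjectWithFile:cachePath] restores the mesh without touching the .obj file again.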

Then I found this one: VRToolKit. (More info here.) This example also uses Wavefront objects, BUT!! - parsing is done on a computer using a Perl script, and the application then contains already-parsed data arrays which can be loaded directly into OpenGL. Sounds perfect! And it is:
  • no more parsing on the device,
  • objects are loaded directly into memory - an almost seamless delay before seeing the object, even for really 'big' objects (tested with a 27 MB .obj file, 147 MB after conversion).
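
For anyone unfamiliar with VRToolKit's approach: the Perl script emits the parsed data as plain C arrays compiled into the app, along the lines of the snippet below (shortened to a single triangle, with names of my own choosing - the real generated arrays are huge):

    #import <OpenGLES/ES1/gl.h>

    // Illustrative shape of the generated data, not the script's exact output.
    static const unsigned int modelNumVerts = 3;
    static const GLfloat modelVerts[] = {
         0.0f,  0.5f, 0.0f,
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
    };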

Unfortunately, there were a few problems.

The VRToolKit solution only works if you have static objects in the application: the object array files need to be included in the bundle in order to load them into OpenGL. The other downside is that the converted file is about 5x larger than the original .obj file.
For the file-size problem I found a solution: mtl2opengl - a more optimised Perl script, which reduced the size from 147 MB to 49 MB (it removes unnecessary comments and truncates floats to 3 digits after the decimal point).
But I noticed a way to reduce the size even more: by removing unnecessary spaces between floats and removing newlines, I went from 49 MB to 41 MB. And it can be reduced even further - put the file in an archive and we go from 41 MB to 2 MB (then on the application side, simply unarchive it).
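The unarchiving side is cheap; assuming plain gzip as the archive format, a minimal zlib-based sketch would be:

    #import <Foundation/Foundation.h>
    #include <zlib.h>

    // Minimal sketch: inflate a gzip file back into NSData via zlib's
    // gzFile API (link against libz). Assumes the files were gzipped.
    static NSData *GunzipFile(NSString *path) {
        gzFile file = gzopen([path fileSystemRepresentation], "rb");
        if (!file) return nil;

        NSMutableData *data = [NSMutableData data];
        char buffer[64 * 1024];
        int bytesRead;
        while ((bytesRead = gzread(file, buffer, sizeof(buffer))) > 0) {
            [data appendBytes:buffer length:bytesRead];
        }
        gzclose(file);
        return data;
    }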

OK, but what about the main problem - we need dynamically downloadable objects, not static ones!?
The solution was simple: combine the VRToolKit approach with decoding. When experimenting with encoding/decoding earlier, I could convert arrays and variables to binary data and later decode them back to fill the necessary arrays and variables (thus skipping parsing the Wavefront object file again). But what if I could prepare the encoded data on a computer (where the Wavefront object files are parsed), store the binary data in files, and have the application download those files later? It turns out it is that simple. By modifying the mtl2opengl Perl script further, I was able to feed a Wavefront object file to the script and have it generate 4 files:
  • header,
  • normals,
  • textureCoords,
  • vertices.
The normals, textureCoords and vertices files obviously contain the binary data we use to fill the necessary OpenGL arrays, while the header file contains the number of faces and the object's center coordinates. The face count is used to allocate the OpenGL arrays before filling them with the binary data.
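Loading on the device then boils down to reading the binary files into memory and pointing OpenGL ES 1 at them, roughly like this (variable names are illustrative, and I'm assuming triangles with 3 floats per vertex and 2 per texture coordinate):

    // Sketch: feed the pre-parsed binary files straight to OpenGL ES 1.
    // faceCount comes from the header file; 1 face = 3 vertices.
    NSData *vertexData = [NSData dataWithContentsOfFile:verticesPath];
    NSData *normalData = [NSData dataWithContentsOfFile:normalsPath];
    NSData *texData    = [NSData dataWithContentsOfFile:textureCoordsPath];

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, vertexData.bytes);
    glNormalPointer(GL_FLOAT, 0, normalData.bytes);
    glTexCoordPointer(2, GL_FLOAT, 0, texData.bytes);

    glDrawArrays(GL_TRIANGLES, 0, faceCount * 3);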
In the end, objects still appear almost immediately and it is possible to load 'big' objects - problem solved!
Of course, objects cannot be infinitely big. We have limited memory and computing power on the device, so the bigger the object, the bigger the possibility of lag. But with this solution we don't have to put up with only tiny objects.
Why does the Perl script generate 3 separate binary files instead of one, you might ask? At the beginning it generated one big file. But I noticed that if the file is too large, archived, and then downloaded by the application, every unarchiving solution I tried would unarchive it incorrectly when the file was really big and too little memory was available. So, by splitting this big file into 3 parts, we raise the size-limit bar.

Because I used an OpenGL example to start with, and the VRToolKit solution to parse objects on a computer, I feel obliged to give back to the community, so I scraped together a demo project: DWO.

It contains about 8 example Wavefront objects (each with a header file and binary normals, textureCoords and vertices files), plus a fullscreen OpenGL view with pinch-zoom / panning / rotating / auto-rotate capabilities. I also included my modified version of the mtl2opengl Perl script (it takes a Wavefront object file and generates a folder with the 4 files mentioned above).
To use any new Wavefront object in this demo application, it probably needs to be scaled down in Blender first (at least all of the Wavefront object files I included needed scaling down to 0.01 or smaller). It is possible (or it was in the original mtl2opengl script) to scale down automatically before exporting, but for my project I decided to do it manually, so I could tune each object to exactly the size I needed. Feel free to implement this feature in the script if you need it!
Unlike the original and the other OpenGL example Xcode projects, my example doesn't use .mtl files at all. Instead I simply load a texture image by a corresponding prefix name. Writing this now, I think it's a pity that I didn't use .mtl files, because it limits me to a single texture file. But… it is enough for my project.
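The prefix-based texture loading is roughly this (using GLKTextureLoader for brevity; the actual demo may do it differently):

    #import <GLKit/GLKit.h>

    // Sketch: given a model prefix like @"banana", load "banana.png" as
    // the single texture. The prefix convention is mine; the loader call
    // is standard GLKit.
    NSString *path = [[NSBundle mainBundle] pathForResource:prefix ofType:@"png"];
    NSError *error = nil;
    GLKTextureInfo *texture = [GLKTextureLoader textureWithContentsOfFile:path
                                                                  options:nil
                                                                    error:&error];
    if (texture) {
        glBindTexture(GL_TEXTURE_2D, texture.name);
    }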

Anyway - I hope you can gain at least something from this solution, the modified Perl script, or the provided demo application.
