Enhancing Molecules using OpenGL ES 2.0

Molecules icon

The 2.0 version of Molecules brings with it a brand new rendering engine that utilizes OpenGL ES 2.0 to deliver realistic 3-D representations of molecular structures. This is a long way from the original OpenGL ES 1.1 renderer that I first wrote about here, so I want to describe in detail how this new version works. The source code for Molecules is available under the BSD license, so you are free to download the project from the main application page and follow along as I walk through the process.

Up until now, Molecules had used OpenGL ES 1.1 to display 3-D representations of molecules, which worked, but I was never happy with the results. For the last two and a half years I've had a research paper sitting on my desk that describes a technique for rendering molecules with stunning results, but I had no idea how to implement this on iOS, or even if it was possible on a mobile device. It turns out that not only is it possible, but with proper tuning a newer iOS device can deliver nearly the same rendering quality as a desktop, and do so at a pretty good framerate.

To do this, I used the newer OpenGL ES 2.0 API and its programmable shaders, instead of OpenGL ES 1.1 and its fixed-function pipeline. OpenGL ES 1.1 has been supported on all iOS devices since the first iPhone, but OpenGL ES 2.0 support was introduced with the iPhone 3GS and has been present in every iOS device since then (the iPhone 4, the third- and fourth-generation iPod touches, and both iPads).

What do I mean when I talk about improved image quality using OpenGL ES 2.0? I used these comparison images in my post announcing the new version, but I'll repeat them because I believe they clearly illustrate the huge difference this new approach makes. The old OpenGL ES 1.1 implementation is on the left, and the new OpenGL ES 2.0 one on the right:

  • Old renderer
  • New renderer

OpenGL ES 2.0 was the key to accomplishing this style of graphics, but it took me a long while to understand the new API. It allows you to write little programs, called shaders, that run on the GPU to perform custom effects. If you want to learn more about the API, you can check out the video for the class I taught about it on iTunes U, Jeff LaMarche's posted chapters from his unfinished book, or Philip Rideout's great iPhone 3D Programming book. I don't have the space in this article to bring you completely up to speed on OpenGL ES 2.0 and its shaders, so I'll assume a little familiarity with them as I describe things here.

As I mentioned earlier, this rendering technique is drawn entirely from the work of Marco Tarini, Paolo Cignoni, and Claudio Montani at the Università dell'Insubria and I.S.T.I. - C.N.R., as described in their paper "Ambient Occlusion and Edge Cueing to Enhance Real Time Molecular Visualization", published in 2006 in the IEEE Transactions on Visualization and Computer Graphics. I owe a tremendous debt to them for developing this process. They also created a companion application for Mac, Windows, and Linux called QuteMol, which embodies this rendering model in a GPL-licensed open source project. While QuteMol provides an excellent example, I chose to write my own implementation from scratch so that it would integrate well with iOS and the capabilities of mobile GPUs.

There are several key elements involved in rendering a frame for a molecule onscreen: procedurally generated impostors, a depth texture to manage impostor intersections, and a precalculated ambient occlusion texture for shadowing the surface of the molecule. I'll step through each of these stages, as well as some optimizations I put into place to help make this all render at a reasonable framerate.

The sphere is a lie

One of the challenges I encountered early on when building the original version of Molecules was how to express spheres and cylinders using 3-D geometry. To make truly smooth versions of these objects by normal means, you need to use an extremely large number of triangles. If you have a molecule with thousands of atoms, the number of triangles required to represent the structure could be staggering, and would challenge even the fastest of desktop GPUs. Not only that, but this geometry would have a very large memory footprint.

The solution proposed by Tarini, et al. was to fool the eye by drawing squares and rectangles that always face the user, then procedurally color each pixel within those simple 2-D faces as if they were windows looking in on 3-D spheres and cylinders. These so-called procedural impostors are conceptually similar to texture billboards, only instead of using prerendered images, these sphere and cylinder representations are drawn on the fly.

By doing this pixel-based raster drawing of the atoms and bonds, a minimal amount of geometry is used (only two triangles for each sphere or cylinder), yet these objects look perfectly sharp at any magnification.

Custom vertex and fragment shaders are used to do this drawing. For spheres, four identical vertices are sent to the GPU corresponding to the center of the sphere, as well as four coordinates that represent the corners of the sphere impostor square (-1, -1; 1, -1; -1, 1; 1, 1). The vertex shader then takes each of the four coordinates, transforms them according to the model view and orthographic matrices (to handle rotation and scaling of the model, as well as the rectangular nature of the OpenGL scene), and then displaces them relative to the viewer using the impostor space coordinates so that the square is always facing the user. This process is depicted below:

  • Sphere impostor process
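
To make the vertex stage concrete, here is a minimal sketch of what such a sphere impostor vertex shader can look like. The attribute, uniform, and varying names are illustrative rather than the exact ones in the Molecules source, and the sketch assumes the radius has already been scaled into the same space as the projected position:

```glsl
// Sphere impostor vertex shader (illustrative sketch, not the exact Molecules code).
attribute vec4 position;                      // sphere center, sent once per corner
attribute vec2 inputImpostorSpaceCoordinate;  // (-1,-1), (1,-1), (-1,1), (1,1)

uniform mat4 modelViewProjMatrix;             // model view and orthographic matrices combined
uniform float sphereRadius;                   // assumed pre-scaled to match the projection

varying vec2 impostorSpaceCoordinate;         // handed to the fragment shader for per-pixel shading

void main()
{
    impostorSpaceCoordinate = inputImpostorSpaceCoordinate;

    // Transform the sphere center, then push this corner outward in screen space
    // so that the resulting square always faces the viewer.
    vec4 transformedPosition = modelViewProjMatrix * position;
    transformedPosition.xy += inputImpostorSpaceCoordinate * sphereRadius;

    gl_Position = transformedPosition;
}
```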

Once the square has been generated, every fragment within that square (roughly, every pixel) needs to be colored as if it were from a lit sphere behind that point. For this, the normal of the sphere at that point is calculated as the vector (impostor space X, impostor space Y, normalized depth). The calculation and use of the depth component is discussed later. The dot product of the normal and the light direction is calculated and used to determine the strength of the illumination at that point for both ambient light and the specular highlight. The resulting color is written to the screen at that point, except for fragments that lie outside of the sphere, which are output as transparent.
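
A hedged sketch of the corresponding fragment shader is below; the base color, specular exponent, and variable names are placeholders, and the depth handling described in the following sections is omitted:

```glsl
// Sphere impostor fragment shader (illustrative sketch; depth handling omitted).
precision mediump float;

uniform vec3 lightDirection;   // normalized light direction, assumed fixed relative to the viewer

varying vec2 impostorSpaceCoordinate;

void main()
{
    float distanceSquared = dot(impostorSpaceCoordinate, impostorSpaceCoordinate);

    // Fragments outside the unit circle are not part of the sphere: output transparent.
    if (distanceSquared > 1.0)
    {
        gl_FragColor = vec4(0.0);
        return;
    }

    // Normal of the sphere at this point: (impostor x, impostor y, normalized depth).
    float normalizedDepth = sqrt(1.0 - distanceSquared);
    vec3 normal = vec3(impostorSpaceCoordinate, normalizedDepth);

    // The dot product of the normal and the light direction drives the illumination,
    // with a simple power term standing in for the specular highlight.
    float lightingIntensity = clamp(dot(lightDirection, normal), 0.0, 1.0);
    vec3 finalColor = vec3(0.4, 0.4, 1.0) * lightingIntensity;   // placeholder atom color
    finalColor += vec3(pow(lightingIntensity, 60.0));            // placeholder specular term

    gl_FragColor = vec4(finalColor, 1.0);
}
```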

Cylinders are a little more complicated, but the same general process applies. Four vertices are fed to the GPU (two for the starting coordinate and two for the ending coordinate), along with four impostor space coordinates and four direction vectors that point from the beginning to the end of the cylinder's center axis. The beginning and ending points are transformed for each vertex; then, using the transformed directions, the vertices at each end are displaced perpendicular to the axis of the cylinder as viewed by the user. Additionally, the vertices at one end of the cylinder are displaced along the axis to account for the curve of the cylinder at that end. This is shown below:

  • Cylinder impostor process
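
The following is a very loose sketch of that displacement, written against assumed attribute names; it glosses over details such as degenerate screen-space directions and exactly which corners receive the axial displacement for the end cap:

```glsl
// Cylinder impostor vertex shader (loose illustrative sketch).
attribute vec4 position;                      // one end of the bond
attribute vec3 direction;                     // vector from the start of the bond to its end
attribute vec2 inputImpostorSpaceCoordinate;  // x: which side of the axis, y: nonzero only at the capped end

uniform mat4 modelViewProjMatrix;
uniform float cylinderRadius;                 // assumed pre-scaled to match the projection

varying vec2 impostorSpaceCoordinate;

void main()
{
    impostorSpaceCoordinate = inputImpostorSpaceCoordinate;

    // Transform this end and a point further along the bond to find the
    // on-screen direction of the cylinder's axis.
    vec4 transformedPosition = modelViewProjMatrix * position;
    vec4 transformedOtherPoint = modelViewProjMatrix * (position + vec4(direction, 0.0));
    vec2 screenAxis = normalize(transformedOtherPoint.xy - transformedPosition.xy);
    vec2 perpendicularAxis = vec2(-screenAxis.y, screenAxis.x);

    // Displace sideways to give the rectangle its width, and along the axis at
    // the capped end to leave room for the curve of the cylinder there.
    transformedPosition.xy += perpendicularAxis * inputImpostorSpaceCoordinate.x * cylinderRadius;
    transformedPosition.xy += screenAxis * inputImpostorSpaceCoordinate.y * cylinderRadius;

    gl_Position = transformedPosition;
}
```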

As with the spheres, the normal at each fragment on the cylinder is calculated and used to determine illumination, but the calculations here aren't as simple as those for the spheres. Many values are calculated in the vertex shader for points on the center axis, then adjusted in the fragment shader as a function of the distance from the axis.

Problem: no gl_FragDepth in OpenGL ES

One of the significant challenges with using 2-D impostors to represent 3-D objects lies in how to deal with overlapping objects. In a spacefilling model, you have spheres that intersect one another, and in a ball-and-stick visualization spheres and cylinders intersect. You can't rely on the GPU's standard depth testing to handle these intersections correctly, because the spheres are drawn on flat squares that never actually cross and the cylinders use rectangles that would run right to the center of an intersecting sphere.

How, then, do you draw the curved boundaries of these objects and hide the appropriate areas on them? In the original implementation by Tarini, et al., they used the capability in OpenGL to write out a custom depth value for each fragment in their fragment shader to the variable gl_FragDepth. The GPU could then figure out which fragments of which object were in front of others. Unfortunately, this variable is missing in OpenGL ES, probably due to the specific optimizations used in mobile GPUs.

To work around this, I implemented my own custom depth buffer using a frame buffer object that was bound to a texture the size of the screen. For each frame, I first do a rendering pass where the only value that is output is a color value corresponding to the depth at that point. In order to handle multiple overlapping objects that might write to the same fragment, I enable color blending and use the GL_MIN_EXT blending equation. This means that the color components used for that fragment (R, G, and B) are the minimum of all the components that objects have tried to write to that fragment (in my coordinate system, a depth of 0.0 is near the viewer, and 1.0 is far away). In order to increase the precision of depth values written to this texture, I encode depth to color in such a way that as depth values increase, red fills up first, then green, and finally blue. This gives me 768 depth levels, which works reasonably well.
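
A minimal sketch of this encoding (my wording, not necessarily the exact Molecules code) is shown below; note that each channel is non-decreasing as the depth increases, which is what makes a per-channel GL_MIN_EXT blend equivalent to keeping the nearest depth:

```glsl
// Encode a depth value in [0.0, 1.0] into RGB, filling red first, then green,
// then blue. With 8 bits per channel this yields roughly 3 * 256 = 768 usable levels.
vec3 encodeDepthToColor(float depth)
{
    vec3 shiftedDepth = vec3(depth * 3.0) - vec3(0.0, 1.0, 2.0);
    return clamp(shiftedDepth, 0.0, 1.0);
}

// Recover the depth from an encoded color: each third of the range contributes
// linearly, so the sum of the channels divided by three reverses the encoding.
float decodeDepthFromColor(vec3 encodedColor)
{
    return (encodedColor.r + encodedColor.g + encodedColor.b) / 3.0;
}
```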

The following demonstrates a rendered model and its generated depth map:

  • Rendered model
  • Depth texture

Once generated for a frame, this depth map is then passed into the color rendering pass, where each procedural sphere or cylinder impostor calculates the depth of each fragment it's trying to write out. If the depth of that fragment is greater than the value at that point in the depth texture, that fragment is made transparent so that it doesn't appear onscreen. That way, only the parts of objects that are nearest the user (the only ones that would be seen by the eye) are displayed.
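
Sketched as a fragment shader excerpt (with placeholder names, and a placeholder value where the impostor's own depth would be computed), the test looks something like this:

```glsl
// Color-pass depth check (illustrative sketch).
precision mediump float;

uniform sampler2D depthTexture;        // depth map generated in the first pass

varying vec2 depthLookupCoordinate;    // this fragment's position in screen space

void main()
{
    // Placeholder: in the real shader, this is the depth computed for the point
    // on the sphere or cylinder that this fragment represents.
    float currentDepth = 0.5;

    vec3 encodedDepth = texture2D(depthTexture, depthLookupCoordinate).rgb;
    float nearestDepth = (encodedDepth.r + encodedDepth.g + encodedDepth.b) / 3.0;

    // If something nearer has already claimed this pixel, hide this fragment.
    if (currentDepth > nearestDepth)
    {
        gl_FragColor = vec4(0.0);
        return;
    }

    // ... otherwise shade the fragment as described earlier ...
    gl_FragColor = vec4(1.0);   // placeholder for the shaded color
}
```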

Unfortunately, given the limited precision of this color-based encoding, errors sometimes seep into these depth checks, causing ugly rendering artifacts. I've done my best to work around the cases I've seen, but there may be a way to make this a little cleaner.

Bringing out details using ambient occlusion lighting

Using raytraced impostors for the atoms and bonds adds a level of sharpness to the renderings that was missing under OpenGL ES 1.1, but Tarini, et al. suggest a technique for bringing out even more detail in molecular structures using ambient occlusion lighting. Ambient occlusion lighting is a rendering process where the intensity of the light at a point on a model is adjusted based on how ambient light hitting that point would be reduced by other nearby objects blocking that light. It produces rendered scenes that look more realistic to us than standard shading, because it's closer to the way illumination works in the world around us. For more on this technique, see Chapter 17 by Matt Pharr and Simon Green in the GPU Gems book (available for free to read online).

How can ambient occlusion lighting help us get a better feel for the 3-D structure of a molecule? The following illustration provides a good example, where the image on the left is a molecule rendered using all of the processes described thus far, and the one on the right has ambient occlusion lighting added in:

  • Without ambient occlusion
  • With ambient occlusion

As you can see, the use of ambient occlusion lighting not only makes the molecule look more real, it also exposes structural features hidden in a normally lit model. The folds and pockets within the surface of the molecule are clearly visible now, giving you a much better idea of the true shape of this molecule.

The first step in enabling ambient occlusion lighting for a molecule is to determine how much light hits each point on the surface. To do this, I rotate the molecule, generate the depth texture for that rotation, and use another shader to determine which points on the surface of the molecule would be lit in that orientation. This shader writes out a lit or unlit value to a texture that maps this illumination to the surface of each sphere and cylinder. An additive blending mode is used to increase the brightness of a portion of the texture every time that part of the surface is exposed.
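
As a sketch of what one accumulation pass might look like (the names and the exact visibility test here are my assumptions): the render target is the ambient occlusion texture itself, additive blending is enabled on the host side with glBlendFunc(GL_ONE, GL_ONE), and the fragment shader writes a small brightness increment wherever the corresponding surface point is visible for the current orientation:

```glsl
// Ambient occlusion accumulation pass (illustrative sketch).
precision mediump float;

uniform sampler2D depthTexture;       // depth map rendered for the current orientation
uniform float intensityPerSample;     // e.g. 1.0 / (number of sampling orientations)

varying vec2 depthLookupCoordinate;   // where this surface point falls in the depth map
varying float surfacePointDepth;      // depth of this surface point in the same orientation

void main()
{
    vec3 encodedDepth = texture2D(depthTexture, depthLookupCoordinate).rgb;
    float nearestDepth = (encodedDepth.r + encodedDepth.g + encodedDepth.b) / 3.0;

    // The surface point receives light from this direction only if nothing sits
    // in front of it; a small tolerance absorbs encoding imprecision.
    float litValue = (surfacePointDepth <= nearestDepth + 0.01) ? intensityPerSample : 0.0;

    // Additive blending sums these contributions across all sampling orientations.
    gl_FragColor = vec4(vec3(litValue), 1.0);
}
```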

The resulting texture storing these ambient occlusion intensities looks something like this:

  • Sample sphere AO texture

This particular image is from just the portion of the texture encoding the sphere surfaces for a ball-and-stick visualization mode. You can see the dark areas that correspond to where the cylinders from bonds intersect the spheres, and consequently block all light from hitting those points on the spheres.

A mapping function is used to associate the values from this ambient occlusion texture with the fragments generated in the sphere and cylinder impostors, since there isn't really a physical surface to those virtual objects.

The ambient occlusion mapping occurs just once when the molecule is first loaded. QuteMol uses 128 sampling points for their ambient occlusion calculations, but I just use 44 in the interest of speed on the mobile devices. The resulting ambient occlusion texture is then used for each subsequent rendered frame.

The fragment shader that does the color calculation for the spheres and cylinders loads the appropriate ambient occlusion value from this texture and scales the ambient and specular lighting intensities based on it, leading to a realistic representation of the lighting of a molecule.

Optimizations

The most significant downside to this gorgeous new rendering engine is that it's noticeably slower than the plain flat shaded, simple geometry approach of the original version of Molecules. It's a tradeoff I'm more than willing to make, but optimizations are always welcome. I've only just started with trying to improve the performance of this new approach, but I'll describe a few of the tweaks I've made so far.

The first thing I always do when confronted with a performance problem is to profile the application as best I can. Apple gives you a great set of tools for doing this within Instruments, so that's the first place to look. In particular, the new OpenGL ES Analyzer gives you a great view into your OpenGL ES rendering pipeline.

In my case, it made it apparent that the bottleneck is clearly the fragment shaders for my various rendering steps. This is also obvious when you zoom in on a model and the rendering slows down due to the greater number of pixels that need to be drawn.

Another tool in my shader tuning arsenal is the PVRUniSCo Editor put out by Imagination Technologies as part of their free PowerVR SDK. The PowerVR SGX series of GPUs are what power the current iterations of the iOS devices, and the PVRUniSCo Editor gives you the ability to load in your shaders and get a line-by-line readout of the number of cycles on the GPU that will be required to execute that part of your shader code. A more accurate whole-shader estimate is provided for the best- and worst-case number of cycles that would be used when running your shader. This pointed out several inefficient areas in my shaders (vector arithmetic being done in the wrong order, etc.), and highlighted the most expensive calculations I was performing.

As a result, one of the first optimizations I undertook was to precalculate many of the lighting values for my sphere impostors. This was inspired by Aras Pranckevičius' article on iOS shader tuning, where he found that storing commonly calculated values in a texture and looking them up was far faster than recalculating those same values over and over again. I was able to boost my sphere rendering performance by about 20% in this manner.
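
The idea, roughly, is to trade per-fragment arithmetic for a texture fetch. A hedged sketch with hypothetical names follows; it assumes the light stays fixed relative to the viewer, so the lighting for a point on a unit sphere can be computed once on the CPU and baked into a small lookup texture indexed by the impostor-space coordinate (the out-of-circle and depth tests described earlier are omitted here):

```glsl
// Precalculated sphere lighting via a lookup texture (illustrative sketch).
precision mediump float;

uniform sampler2D sphereLightingTexture;   // hypothetical texture holding prebaked diffuse + specular values
uniform vec3 atomColor;

varying vec2 impostorSpaceCoordinate;

void main()
{
    // Map the impostor-space coordinate from [-1, 1] into [0, 1] texture space.
    vec2 lookupCoordinate = impostorSpaceCoordinate * 0.5 + 0.5;

    // One texture fetch replaces the normal reconstruction, dot product, and pow() call.
    vec3 precalculatedLighting = texture2D(sphereLightingTexture, lookupCoordinate).rgb;

    gl_FragColor = vec4(atomColor * precalculatedLighting, 1.0);
}
```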

Another significant improvement came from using branching in my shaders to avoid performing expensive calculations in cases where a fragment would be ignored. I was under the impression that all forms of branching in shaders were terrible, but by applying these early-bailout tests to skip the later calculations, I was able to improve rendering performance by 40%.
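
The structure, sketched in GLSL with placeholder values, is simply to order the cheap rejection tests before the expensive math:

```glsl
// Early-bailout structure (illustrative sketch): cheap rejection tests run
// before the expensive lighting and depth math.
precision mediump float;

varying vec2 impostorSpaceCoordinate;

void main()
{
    // Reject fragments outside the sphere's silhouette before doing anything else.
    if (dot(impostorSpaceCoordinate, impostorSpaceCoordinate) > 1.0)
    {
        gl_FragColor = vec4(0.0);
        return;
    }

    // ... the depth comparison against the depth texture runs next, bailing out
    //     in the same way if the fragment is hidden ...

    // ... only the fragments that survive pay for the full normal, lighting,
    //     and specular calculations ...
    gl_FragColor = vec4(1.0);   // placeholder for the shaded color
}
```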

One last interesting optimization was the use of Grand Central Dispatch to manage my rendering operations. I needed a solid, performant way to ensure that all rendering operations using the same OpenGL ES context would never execute at the same time on multiple threads. In the previous version of Molecules, I had done this by making every OpenGL call run on the main thread. This was not a good solution, and it became even worse with the longer frame rendering times of this new engine.

Instead, I created a GCD dispatch queue that would only execute one item at a time, but would run these items on a background thread, and wrapped every operation that touched the OpenGL ES context in a block to be performed on this dispatch queue. By moving these operations from the main thread, performance increased by 10-20% on most devices. On the iPad 2, however, performance jumped by nearly 50%. I'm guessing that fully utilizing that second processor core on the iPad 2 really helped here.
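
A minimal sketch of that pattern, with an assumed queue label and an assumed EAGLContext instance variable named context, looks like this in Objective-C:

```objc
// Serial GCD queue guarding all OpenGL ES context access (illustrative sketch).
dispatch_queue_t openGLESContextQueue = dispatch_queue_create("com.example.openGLESContextQueue", NULL);

// Rendering a frame: every operation that touches the context is wrapped in a
// block and dispatched to this one serial queue, off the main thread.
dispatch_async(openGLESContextQueue, ^{
    [EAGLContext setCurrentContext:context];

    // ... run the depth pass and the color pass here ...

    [context presentRenderbuffer:GL_RENDERBUFFER];
});
```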

As I said, I'm only just getting started with the performance tuning of this new renderer. Because it is a lot more complex than the old fixed-function renderer, there are many more ways to optimize it.

Conclusion

Again, I highly recommend reading the paper by Tarini, et al. to see the original description of this rendering technique, as well as some of the math that I may have skipped over here. Hopefully, by reading this overview alongside their math and the published Molecules code, you will be able to see how this application operates.

I'm incredibly happy that I was finally able to make this rendering model work. It took a few months to pull together once I understood the basic concepts, and I think it was definitely worth the effort.

Comments

Molecules is a splendid program for every structure-loving scientist, especially since the PubChem database has been included. A simple sticks-only model would be useful for both large and small molecules; keep improving.

Very interesting! This makes me want to play more with OpenGL ES 2.0.

Great job with the update - with the previous version, I was avoiding the spacefilling models, but now I LOVE THEM! I'm teaching my high school students a section on catalysis, and being able to show them enzyme structures makes a HUGE difference. And it's free! Brad Larson, you're my hero! Now how about some scientific notation for Pi Cubed?

Glad you like it.

My Pi Cubed work got a little delayed as I poured time into this, but now I can focus on that once again. Scientific notation is at the top of my list for things to add there.

The original iPhone can only run iPhone OS 3.1.3!
The new version requires iOS 4, so we can't use it anymore!
Can you bring back compatibility?

No I can't. I needed to use Grand Central Dispatch to manage the rendering operations on a background thread for maximum performance, so the minimum supported version is now iOS 4.0. I also plan on using other iOS 4.0 capabilities to enhance performance going forward.

Dropping iPhone OS 3.x support was going to happen at some point, and I maintained backwards compatibility with 3.x for a year. I'm sorry that you can't run the new version on the oldest iOS devices, but you can still keep using the previous version if you don't upgrade the application in iTunes on the desktop or on another of your newer devices.

When I downloaded the update this morning, the difference made me immediately jump over here to read about it.
This really looks great! The links to QuteMol and your explanation of the optimizations you did to make it all come together were great, and really inspire me to start using OpenGL ES 2.0 instead of 1.1.
Your course on iTunes was wonderful to watch and I have recommended it to any experienced programmer who even mentions getting started in iOS work.
Now to download the source and spend some time understanding the details (got to download the QuteMol code as well to see the differences).

Thank you

This app is great. Although the quality of the new rendering cannot be denied, the zooming and turning performance has suffered quite a lot. I use this app for teaching in combination with either an overhead camera or DisplayOut on a jailbroken iPad (original version). By the way, I can only get landscape output using the latter method (I haven't checked the latest version yet, though).

As a few of the reviews on iTunes now suggest, some users would really welcome the opportunity to render using either the old (speed) or new (quality) method via a software switch. Although I updated without hesitation, I shall likely downgrade using iTunes until such a change can be implemented. For me, the super smooth rendering of the more simplistic former version wins. I assume the iPad 2 does a much better job with the new version?

Finally, I just wanted to thank you again for this great application and to let you know that I would be more than happy to use a paid version of this software or click on a donate link if you make one available.

To be honest, I've been surprised by the complaints about the new rendering model. For years, people wrote me and left scathing reviews on iTunes about how the low-quality rendering for the ball-and-stick and spacefilling models made the application a fun toy, but nothing more than that. The lack of depth information in the external structure of a rendered molecule meant that you couldn't see structural details that were critical in a protein. Additionally, small molecules looked terrible when represented by a few icosahedrons and boxes.

Therefore, I set out to address these concerns by dramatically improving the rendering quality so that more information would be expressed onscreen. This took months of work, and flew in the face of people who told me this wasn't possible on a mobile GPU. I knew that the first pass of this would not be as fast as the original simplistic rendering, but I would consider it to be usable if it maintained an interactive framerate.

For modeling purposes, I set the bar for such a framerate at 10 frames per second. Given the increased depth information onscreen for a static model, fluid animation is less important as long as you are able to visualize the structure of the model cleanly at each frame. On the iPad 1, the worst performing of all devices, I hit this framerate on spacefilling models that take up less than 3/4 of the screen (my rendering is fill-rate limited now, so more pixels to render means more work for the GPU). Ball-and-stick models of small molecules (which seem to be far more popular than proteins, from the use cases I'm seeing in the field) render at 30 FPS on the same device. The iPad 2, with its much faster GPU, only drops below 30 FPS for the largest of structures (and I've improved rendering performance for that device in the 2.01 update I just submitted). This was acceptable to me.

I guess my biggest problem was having the old version on the App Store to begin with. People keep pointing back at that in regards to its rendering performance, questioning why they can't have that version back because it was faster. Had this been the first version of the application that I'd ever released, none of these complaints would exist. The reason the old renderer ran so fast was that it did very little. It was built for the first iOS devices, with their relatively limited processing power, and highly tuned over the span of three years. However, it was not as useful for actually showing someone a molecular structure, so its speed doesn't really matter.

What you're seeing right now is the slowest this will ever get. I've implemented some optimizations (as described above), but I've identified a number of bottlenecks within my rendering pipeline that could lead to dramatic speedups if removed. I have ideas on how to fix these, and I'll be seeking out help from other engineers to improve upon what I've started. As the source code is downloadable for this application, I welcome any fixes or contributions others have, although I realize that I'll probably receive no meaningful help in that fashion (in three years and after tens of thousands of source code downloads, I've only received four small source code patches for the application).

Also, I think it goes without saying that a jailbreak hack for mirroring the display is not something I optimize or test for. It's no surprise that mirroring the display puts even more stress on the iPad 1's GPU, because it doesn't have dedicated hardware to do what you're trying to do. I may complete my experimental code for rendering on an external display at some point to support this capability on the iPad 1 and iPhone 4, but I'm a lot less motivated to put the time into this now that the iPad 2 has hardware support for mirroring. The iPad 2 handles this brilliantly, with no noticeable slowdown in my testing when driving both the touchscreen and an external display.

For some perspective, QuteMol, the desktop rendering package that first illustrated this technique, ran just about as fast on my old white MacBook as Molecules does on an iPad 1. If I seem a little irritated in my response here, it's because I work on this application because I enjoy doing so, which is why I give everything about it away for free. When I spend months working on something in response to years of complaints, and then I'm greeted with even more complaints on its release, it really hurts and it makes me question why I'm doing this in the first place. Thankfully, enough people have responded well to the new version to offset the spoiled children (some with Ph.D.s) leaving one star reviews on iTunes.

Thanks for your detailed response to my (in hindsight) unreasonable comments - you have given many reasons why I (and others) should be very satisfied with this new version. Please continue your great work!

Using some significant optimizations, I have now been able to get the new rendering engine running almost as smoothly as the old one did on the iPad 1. Version 2.02, which is now live on the App Store, should hopefully address the complaints that people have had about the application's rendering speed.

As promised, 2.0 was the slowest the application will ever be.

I have to say your app is great! Absolutely amazing. Brilliant!
I stumbled upon it while searching for chem apps for my iPhone 4 on the App Store. And I love it!
Wonderful job you did here!
There are a few things that I would personally like to see implemented, like a few other visualization engines and atom labeling. I know I'll be pushing your buttons with the following suggestion, but I would really love it if it were possible to view dynamics output files (PDB, XYZ, or any other format). Also, the possibility to view xyz and mol2 files would be a blessing.

Again, I already love your app and those are only suggestions above, and not to be taken as a criticism of any kind.

Cheers!

No, I appreciate feature requests, because they help me know what to work on next. If no one asks for anything, how do I know they want it?

New file formats are certainly something I'll be adding, because those are pretty trivial once I have the specification for them. For example, I was able to add the PubChem SDF format in a few hours. Other file formats weren't that high a priority for me, given that most people will use the search interface to find new structures, but if enough people would use a format I'd be glad to add it.

I just was letting off steam above in response to people who were stating that the application was useless without some particular feature they wanted, or were leaving one-star reviews on iTunes because the application only rendered at 10 FPS on their device under the new engine. It was more the tone of the complaints on the App Store that bothered me, not even necessarily the content. How many of the people leaving one star reviews would have the courage to say those things to my face?

Brad Larson wrote:
No, I appreciate feature requests, because they help me know what to work on next. If no one asks for anything, how do I know they want it?

This is true. I completely agree with you on this point.

Brad Larson wrote:
New file formats are certainly something I'll be adding, because those are pretty trivial once I have the specification for them. For example, I was able to add the PubChem SDF format in a few hours. Other file formats weren't that high a priority for me, given that most people will use the search interface to find new structures, but if enough people would use a format I'd be glad to add it.

This would be wonderful, since I would be able to directly pull calculation output files from the server using zaTelNet with no conversion of the files in between. This would really help me get rid of my laptop and use my iPhone to run the calculations on a server.

Brad Larson wrote:
I just was letting off steam above

I know, but I just wanted to make it clear that there is nothing for me to criticize.

Brad Larson wrote:
I just was letting off steam above in response to people who were stating that the application was useless without some particular feature they wanted, or were leaving one-star reviews on iTunes because the application only rendered at 10 FPS on their device under the new engine.

I was using the app with the old engine (then switched to the new one through the update), and I have to say the new one is slower while rendering, but I would switch to the new engine in a heartbeat if it weren't done automatically. The benefits of the new engine are way greater than the speed of the old one. So, yeah. The new engine is way cooler than the old one.

Brad Larson wrote:
How many of the people leaving one star reviews would have the courage to say those things to my face?

Rare few would have the guts, but the internet is a miracle in that sense. :)

Cheers!

Excellent work, the molecules in this new version look stunning. I've been working on a small iOS app in my spare time, based on OpenGL ES 1.1, and seeing your results I feel more inclined to consider 2.0 again (which scared me to start with). If you were starting Molecules now, would you even bother with OpenGL ES 1.1 and iOS < 4 at all?

If I were to start again today, I'd probably ignore the OpenGL ES 1.1 devices and go entirely 2.0. Also, as you can see, I've dropped all 3.x support in this and my other applications. Only about four people have complained about the lack of 3.x support, out of the hundreds of thousands who have upgraded, so I feel that it is time to move on to 4.0 and finally use technologies like GCD.

OpenGL ES 2.0 is far more interesting to me now, because of the flexibility it gives you. Now that I'm familiar with it, I honestly find it easier to deal with than OpenGL ES 1.1, because I don't need to remember what all of the built-in functions do and how to enable / disable them. If I want a particular effect, I just write the shader code to do exactly what I want. With my class and the various tutorials that seem to be popping up every day, it's easier than ever to get information on OpenGL ES 2.0.

If you go by statistics published by people like Marco Arment, he sees ~95% of his install base having devices capable of OpenGL ES 2.0. For an iPad application, you're guaranteed to be running on a 2.0-capable device.

Thanks for the confirmation - those figures by Marco are pretty conclusive. I'd better get going with shaders then!

You should be able to get much better than 768 levels of depth for pretty much the same cost using the technique in Aras's other article http://aras-p.info/blog/2008/06/20/encoding-floats-to-rgba-again/

This was how I initially attempted to encode color, but it presents a problem. For my depth buffer, I will have certain fragments that will be covered by multiple objects. Each of these objects will render a color to that fragment, and the order in which they render is not guaranteed. Therefore, an object closer to the viewer could render before one further away, and the far object would overwrite the depth color from the near one.

I use a minimum value blending mode for the depth buffer, which only stores the minimum value for each color component (red, green, blue). I then use this sort of "bucket-based" color encoding, where one color is filled up first, then the next, then the next. This guarantees that when I get the minimum (closer to the viewer) for each individual color component, the total value will always be the correct minimum.

For one of the higher-precision encoding methods, simply taking the minimum for each color component breaks down in certain cases (think of a per-digit comparison of 110 vs. 085). I couldn't think of an easy way to support multiple writes to the same fragment, yet preserve the higher precision of the other encodings.

Do you intend to open source the OpenGL 2.0 code as well?

All code that I used for this application is available on the main Molecules page. This page has the latest version of the application, which includes the OpenGL ES 2.0 rendering engine as used by the version on the App Store.

I am by no means a programmer, but am intrigued by your use of OpenGL ES 2.0. Do you think it could be applied to displaying 3-D solid models of tooling?

OpenGL ES 2.0 can be used to display pretty much anything, with the right code. It's a very flexible rendering API.

If you're asking whether the code used for this application could be used to render arbitrary 3-D models, no it can't. As I describe above, I use some tricks to fool the eye when rendering spheres and cylinders so that they appear to be perfectly smooth shapes. These are the only classes of objects that this can draw. More generic model loading functionality will require a completely different approach.

Great work. I will pass this along to a few bio-chem types I know.

I am particularly intrigued by the performance on my 4S and the iPad 2. How are the molecule spatial relationships and sticks defined? Is it possible to create an object, coordinate, and relationship matrix to feed OpenGL?

I would love this for representing a 3D relationship between conceptual objects in systems engineering.

Any possibility of associating each sphere or object with information that can be summoned by touch within the view?

Hello,

The simplified versions of the sphere shaders were of great use to me; I learned a lot from them. Do you have non-optimized shaders for the bonds (cylinders) as well?

Cheers
