I’ve been meaning to write this post for a few months now but never seemed to find the time. It’s related to my Packager for iPhone: Render Performance article and regards rendering differences between content written for AIR for Android and the Packager for iPhone.

In short, the article highlighted the performance problems of playing timeline animations on the iPhone and instead suggested using ActionScript to create bitmap animations by flicking between a collection of BitmapData objects, with each BitmapData object representing a frame of the animation.
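To make that concrete, here's a minimal sketch of the technique, assuming the frames have already been captured into a Vector of BitmapData objects. The class and variable names (AnimPlayer, frames) are illustrative, not taken from my test source files.

```actionscript
package {
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.Sprite;
    import flash.events.Event;

    public class AnimPlayer extends Sprite {
        private var frames:Vector.<BitmapData>;
        private var bitmap:Bitmap;
        private var frameIndex:int = 0;

        public function AnimPlayer(frames:Vector.<BitmapData>) {
            this.frames = frames;
            bitmap = new Bitmap(frames[0]);
            addChild(bitmap);
            addEventListener(Event.ENTER_FRAME, onEnterFrame);
        }

        private function onEnterFrame(e:Event):void {
            // Advance the animation by swapping the Bitmap's bitmapData
            // reference — no timeline and no re-rasterising of vectors.
            frameIndex = (frameIndex + 1) % frames.length;
            bitmap.bitmapData = frames[frameIndex];
        }
    }
}
```

The important point is that each frame is just a pointer swap to an existing pixel buffer, which is what GPU render mode can composite cheaply.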

With GPU acceleration enabled, using ActionScript to run these bitmap animations improved performance by a factor of at least six compared to the traditional timeline approach.

Even constructing timeline animations that held bitmaps on each frame (rather than vectors) failed to match the performance that could be achieved from using the ActionScript outlined in my examples.

This is shown in the video below, where 23 animating objects are randomly placed on the screen. Each animation is 4 frames long, with each frame represented by a 274 x 366 bitmap. The first app launched in the video uses the traditional movie clip approach, with the animation constructed on the timeline. The second app uses ActionScript to produce the same effect. Both apps use GPU rendering mode. The difference in frame rate between the two apps is quite noticeable.

So does the same hold true for AIR for Android?

Well, to be honest, I had initially assumed it would and hadn't actually bothered checking. But when I eventually did get round to it I was pleasantly surprised. It turned out that when using GPU acceleration, bitmap animations on the timeline run just as fast as their ActionScript counterparts. When running both tests using CPU render mode, the ActionScript code was faster, but only by about 50%.
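For anyone wanting to reproduce the comparison: the render mode is selected in the AIR application descriptor rather than in code. A typical excerpt looks like this (the surrounding descriptor is omitted):

```xml
<initialWindow>
    <!-- Set to "gpu" or "cpu" to switch between the two test modes -->
    <renderMode>gpu</renderMode>
</initialWindow>
```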

The video below shows the same demos from the iPhone example, but running on Android. This time the traditional movie clip approach performs just as well as the ActionScript-based animation.

This news is good and bad.

It’s good because for AIR for Android projects I can start using movie clips and the timeline for creating animations again, which is a huge time saver and gives me all the benefits I expect from using Flash. On the other hand, it’s bad because the rendering differences between my iPhone and Android apps mean that if I want to target both platforms I’ll need to continue to use an ActionScript-based solution, which adds to the development time.

So why do apps written using the Packager for iPhone seem to struggle so much? Perhaps this Adobe MAX session by Adobe AIR engineer David Knight and platform evangelist Renaun Erickson sheds some light. It seems that Adobe’s implementation of GPU rendering mode for iOS works slightly differently to their Android implementation.

Rendering typically comprises two parts: rasterizing and scene composition. When GPU rendering is used on iOS devices, the rasterizing always takes place on the CPU, with only the composition taking place on the GPU. For Android, both rasterizing and composition take place on the GPU. In other words, for movie clip animations, each frame the playhead moves to must first be rendered into a pixel buffer in RAM before being copied to the GPU. This is expensive and, I'm guessing, is the reason Flash performance is impaired on the iPhone.

So is this something that Adobe is likely to change in the future or is it a limitation of the iPhone’s hardware? Personally I don’t know the answers to that but it would be great to know.

Right now, this subtle rendering difference seems to be the reason why Flash developers are experiencing so much pain developing apps for the iPhone.

For anyone who’s interested I ran the two tests across several devices and using both CPU and GPU rendering modes. The results are shown below and include the frame rate that the test managed and the number of pixels that were rendered per second.

It was hardly the most formal of tests, but each app attempts to display 23 animating objects and measures the number of frames that were successfully rendered over a 5 second period. From that the frame rate is calculated. Each display object is 274 x 366 pixels in size, meaning that 2,306,532 pixels need to be rendered per frame in order to draw all 23 display objects.
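The measurement loop can be sketched as follows. This isn't the exact code from my test FLAs — the class and constant names are illustrative — but it shows how the numbers in the table were derived: count frames over 5 seconds, then multiply the resulting frame rate by the 274 x 366 x 23 = 2,306,532 pixels drawn per frame.

```actionscript
package {
    import flash.display.Sprite;
    import flash.events.Event;
    import flash.utils.getTimer;

    public class FrameCounter extends Sprite {
        private static const TEST_MS:int = 5000;
        private static const PIXELS_PER_FRAME:int = 274 * 366 * 23; // 2,306,532

        private var startTime:int;
        private var frameCount:int = 0;

        public function FrameCounter() {
            startTime = getTimer();
            addEventListener(Event.ENTER_FRAME, onEnterFrame);
        }

        private function onEnterFrame(e:Event):void {
            frameCount++;
            if (getTimer() - startTime >= TEST_MS) {
                removeEventListener(Event.ENTER_FRAME, onEnterFrame);
                var fps:Number = frameCount / (TEST_MS / 1000);
                trace("fps:", fps, "pixels/sec:", fps * PIXELS_PER_FRAME);
            }
        }
    }
}
```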

Results

Device               Animation Type   Render Mode   Frames Per Second   Pixels Per Second
1st gen iPod touch   ActionScript     GPU           12                  27,678,384
1st gen iPod touch   ActionScript     CPU           5                   11,532,660
1st gen iPod touch   Timeline         GPU           2                   4,613,064
1st gen iPod touch   Timeline         CPU           2                   4,613,064
2nd gen iPod touch   ActionScript     GPU           15                  34,597,980
2nd gen iPod touch   ActionScript     CPU           7                   16,145,724
2nd gen iPod touch   Timeline         GPU           2                   4,613,064
2nd gen iPod touch   Timeline         CPU           2                   4,613,064
iPhone 4             ActionScript     GPU           56                  129,165,792
iPhone 4             ActionScript     CPU           17                  39,211,044
iPhone 4             Timeline         GPU           5                   11,532,660
iPhone 4             Timeline         CPU           5                   11,532,660
Google Nexus One     ActionScript     GPU           23                  53,050,236
Google Nexus One     ActionScript     CPU           12                  27,678,384
Google Nexus One     Timeline         GPU           23                  53,050,236
Google Nexus One     Timeline         CPU           8                   18,452,256
Samsung Galaxy S     ActionScript     GPU           29                  66,889,428
Samsung Galaxy S     ActionScript     CPU           12                  27,678,384
Samsung Galaxy S     Timeline         GPU           29                  66,889,428
Samsung Galaxy S     Timeline         CPU           7                   16,145,724

If any of my calculations look disastrously wrong then let me know.

You can find the source files for the tests here.

Feel free to test on other devices and send me your results.

Oh, and for faster devices you may want to increase the FPS setting within the FLAs. At the moment the Android FLAs are capped at 30fps while the iPhone FLAs are capped at 15fps. I had to increase the frame rate setting to 50fps when testing on the iPhone 4 after noticing the original test results were reporting 14/15fps.