Cocoa

Back in 2010, I gave a talk about using OpenGL shaders to accelerate image and video processing on mobile devices. The response to that talk was strong enough that two years later I started work on the open source framework GPUImage, with the goal of making this kind of processing more accessible to developers. To broaden the reach of this framework, today I'm introducing GPUImage 2, rewritten entirely in Swift, with support for Mac, iOS, and now Linux. This isn't just a port; it's a complete rewrite of the framework.
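
As a rough sketch of what that accessibility looks like in practice, a GPUImage 2 pipeline chains sources, filters, and outputs with the `-->` operator. The class and preset names below follow the framework's published examples, but treat them as approximate; they may shift between framework versions.

```swift
import UIKit
import AVFoundation
import GPUImage

// A minimal live-camera pipeline: camera --> filter --> on-screen view.
let renderView = RenderView(frame: CGRect(x: 0, y: 0, width: 480, height: 640))

do {
    let camera = try Camera(sessionPreset: AVCaptureSessionPreset640x480)
    let filter = SaturationAdjustment()
    camera --> filter --> renderView
    camera.startCapture()
} catch {
    fatalError("Could not initialize rendering pipeline: \(error)")
}
```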

Read on for how GPUImage 2 differs from the previous iteration, and why I rebuilt it from scratch.

At the beginning of this year, we started a complete rewrite of our robotics software from Objective-C to Swift, for reasons described here. That rewrite concluded two months ago, passed testing a month ago, and has been deployed to all of our customers. With that behind us, I wanted to do a final analysis of the rewrite and what we learned from it.

Read on for the results from this rewrite in Swift.

This week, Apple enhanced Swift with a number of additions and changes, bumping it to version 2. Perhaps the biggest change was the introduction of an official error-handling model based on try/catch semantics. I was initially skeptical of this approach compared to other techniques (primarily returning a Result value), but after applying it to real-world code, I've found that it works extremely well.
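
For readers who haven't tried it yet, here is a minimal sketch of that model (the function and error names are hypothetical, and this uses the Swift 2-era ErrorType protocol, which was later renamed Error): a function declares that it throws, call sites are marked with try, and errors propagate up until a do/catch block handles them.

```swift
// Hypothetical error type and throwing function, shown only to illustrate
// the try/catch flow introduced in Swift 2.
enum CalibrationError: ErrorType {
    case MissingFile(name: String)
    case ValueOutOfRange(Double)
}

func loadCalibration(fromFile name: String) throws -> Double {
    guard name == "calibration.dat" else {
        throw CalibrationError.MissingFile(name: name)
    }
    return 1.25
}

do {
    let scale = try loadCalibration(fromFile: "calibration.dat")
    print("Loaded scale factor: \(scale)")
} catch CalibrationError.MissingFile(let name) {
    print("No calibration file named \(name)")
} catch {
    print("Unexpected error: \(error)")
}
```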

Read on for more detail about the new Swift error handling model.

At SonoPlot, we just undertook a full rewrite of our robotic control software from Objective-C to Swift. While it might at first appear crazy to rework a large, stable project in a brand-new language, we did so after carefully examining the sources of bugs in our Objective-C application and determining that Swift would prevent a large percentage of them. We've only just started, but we've already learned enough that I thought there would be value in sharing it.
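
As one hedged illustration of the kind of bug class involved (the types here are hypothetical, not our actual robotics code): in Objective-C, messaging a nil object silently does nothing, so a missing device can slip through unnoticed, while Swift's optionals force the absent case to be handled at the point it occurs.

```swift
// Hypothetical stage controller used only to show optionals at work.
struct StageController {
    func move(toX x: Double, y: Double) {
        print("Moving stage to (\(x), \(y))")
    }
}

func connectedStage() -> StageController? {
    // Returns nil if no hardware is attached.
    return nil
}

if let stage = connectedStage() {
    stage.move(toX: 10.0, y: 5.0)
} else {
    print("No stage connected; report the problem instead of failing silently")
}
```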

Read on for more about what led us to rewrite this application, and what we observed when doing so.

I always find it more effective to learn new programming concepts by building projects with them, so I decided to take that approach with Apple's new Swift language. I also wanted to see how well it would interact with my open source GPUImage framework. As a result, I made GPUImage fully Swift-compatible and built a couple of Swift sample applications, which are now committed to the GitHub repository. I wanted to write down some of the things I learned while building them.
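
As a small taste of what that interoperability looks like, filtering a still image from Swift takes only a few lines. The bridged method names below are how early Swift imported GPUImage's Objective-C API and may vary slightly by Swift version; the image name is a placeholder.

```swift
import UIKit
import GPUImage

// Apply a GPU-accelerated sepia filter to a still image from Swift,
// calling straight into the Objective-C GPUImage classes.
let sourceImage = UIImage(named: "sample.jpg")!  // placeholder asset name
let sepiaFilter = GPUImageSepiaFilter()
sepiaFilter.intensity = 0.8
let filteredImage = sepiaFilter.imageByFilteringImage(sourceImage)
```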

Read on for more about my explorations of Swift using GPUImage.

I recently pushed a significant set of changes to the GPUImage repository, and I wanted to explain them in a little more detail, especially since they change one part of the manual photo filtering process. These changes should dramatically reduce the framework's overall memory usage, help prevent memory-related crashes, and fix a number of subtle bugs that have plagued the framework since I started it.
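
Roughly, manual still-image filtering after these changes looks like the sketch below, where the filter has to be told to hold onto its next framebuffer before processing so that the cached buffer isn't reclaimed before capture. Method names reflect my reading of the updated API and may not be exact.

```swift
import UIKit
import GPUImage

// Manual photo filtering with the framebuffer cache in place:
// mark the filter for capture, process, then read the result back.
let stillImageSource = GPUImagePicture(image: UIImage(named: "sample.jpg")!)  // placeholder asset
let filter = GPUImageGaussianBlurFilter()

stillImageSource.addTarget(filter)
filter.useNextFrameForImageCapture()   // required before capturing output
stillImageSource.processImage()
let processedImage = filter.imageFromCurrentFramebuffer()
```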

Read on for more about these changes.

With the launch of iOS 7 and its use of blurs throughout the interface, there has been a lot of interest in fast ways to blur content. GPUImage has had a reasonably performant but somewhat limited Gaussian blur implementation for a while, but I've not been happy with it. I've now completely rewritten this Gaussian blur, and it supports arbitrary blur radii while still being tuned for iOS GPUs. At higher blur radii, though, it's still slower than Core Image, and I don't quite understand why.
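
As background for what's being optimized here: a Gaussian blur on the GPU is typically done as two separable passes (horizontal, then vertical) driven by a set of normalized one-dimensional weights. Below is a small Swift sketch of that weight calculation; the shader generation and the iOS-specific tuning are what the post itself covers.

```swift
import Foundation

// Compute normalized 1-D Gaussian weights for a separable blur.
// The returned array holds the center weight first, then one weight
// per pixel offset out to the given radius.
func gaussianWeights(sigma: Double, radius: Int) -> [Double] {
    var weights = (0...radius).map { offset -> Double in
        exp(-Double(offset * offset) / (2.0 * sigma * sigma))
    }
    // Normalize so the full kernel (center plus mirrored offsets) sums to 1.
    let sum = weights[0] + 2.0 * weights.dropFirst().reduce(0) { $0 + $1 }
    weights = weights.map { $0 / sum }
    return weights
}

let weights = gaussianWeights(sigma: 2.0, radius: 4)
print(weights)
```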

Read on for more about the techniques I've used for GPU optimization of a Gaussian blur, as well as some interesting iOS benchmarks.

I'd like to introduce a new open source framework that I've written, called GPUImage. GPUImage is a BSD-licensed iOS library (with source code available on GitHub) that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies. Compared to Core Image (part of iOS 5.0), GPUImage lets you write your own custom filters, supports deployment to iOS 4.0, and has a slightly simpler interface. However, it currently lacks some of Core Image's more advanced features, such as face detection.
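
To give a flavor of the custom-filter point, a filter can be built from a GLSL fragment shader string that follows GPUImage's textureCoordinate / inputImageTexture conventions. This is shown in Swift for consistency with the rest of this page (GPUImage itself is Objective-C), and the exact initializer spelling should be treated as approximate.

```swift
import GPUImage

// A simple color-inversion fragment shader, wrapped in a GPUImage filter.
let invertShader =
    "varying highp vec2 textureCoordinate;\n" +
    "uniform sampler2D inputImageTexture;\n" +
    "void main()\n" +
    "{\n" +
    "    lowp vec4 color = texture2D(inputImageTexture, textureCoordinate);\n" +
    "    gl_FragColor = vec4(1.0 - color.rgb, color.a);\n" +
    "}"

let customFilter = GPUImageFilter(fragmentShaderFromString: invertShader)
```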

Read on for more about the GPUImage framework.

Last week, Apple unveiled two new education-related products: iBooks textbooks and iTunes U courses. While both interest me, I was particularly fascinated by the new iTunes U courses and how they bundle information together. A few days ago, I converted my existing Advanced iPhone Development iTunes U class into a full course, which you can subscribe to for free. I wanted to write about what I learned in the process.

Read on for more about my experience with the iTunes U Course Manager.

Last quarter, Dr. David Fisher taught an introductory course on iOS development at my alma mater, the Rose-Hulman Institute of Technology. He's made that course publicly available, including videos of the sessions, assignments, and tests. If you want, you can also grab the videos from his podcast on the subject.

I highly recommend checking out this course: it contains a huge amount of content, and Dr. Fisher does a great job of presenting the material.
