The Pros and Cons of RxSwift…

I recently wrapped up a major app project for Google’s Cloud Next ’18 conference. We made extensive use of RxSwift throughout the project, centered around a couple of key objectives:

  1. Model-layer parity with Android – Design nearly-identical APIs for the view model and persistence layers on iOS and Android to help avoid the “iOS does one thing, but Android does another” class of bugs. Internally the implementations are quite different, but having their public interfaces documented in a Wiki allowed each platform to feel out the requirements for a particular object, then document it for the other team. This saved massive amounts of time over the course of the project and resulted in very few platform discrepancy bugs.

  2. React to schedule changes in real time – Google Cloud Next is a BIG conference; second only to Google I/O in terms of attendance. With limited seating at the various sessions, it is very important that attendees know exactly which sessions have available seats as well as receiving important updates about their reserved sessions ASAP. To accomplish this, we used RxSwift to transform observations of the real-time session details coming out of our Firebase Cloud Firestore back end into data streams that could be easily bound to views in the user’s schedule.

This was my first production experience with Rx and, as such, I experienced a considerable ramp-up curve over the course of the first month. If you are working on your first Rx project, expect to lose about 50% of your productivity for the first month and 20% of it the second month as you come to grips with the different style of data flow that Rx enables as well as learning which tool in the Rx toolbox is appropriate for each different data scenario. The up-side is that techniques learned for Rx on one platform are broadly applicable to others (which was a big reason why we chose to work with it on this project).

Once I became more familiar with Rx, I started being able to model data transformations in my head and implement them with a bare minimum of fuss. This was gratifying, but it always felt like there were some sharp edges around the boundaries between Rx code and more traditional UIKit code governing things like user interaction. I’ve created a quick list of the major points to be aware of when considering RxSwift for your iOS project:

Pros:

  • Able to describe a common interface for model layer APIs between iOS and Android. This was the biggest win for us on this project, saving many dozens of hours of QA bug fixing time.
  • Avoid nested-closure hell that typifies complex asynchronous data transformations in Foundation/UIKit.

Cons:

  • Steep learning curve makes ramping new developers onto the project difficult (and toward the end of the project, completely impractical). This is the #1 reason you should consider avoiding RxSwift: when it’s crunch time, you won’t be able to add developers to the project unless they’re already Rx veterans.
  • Debugging Rx data transformations is horrible. When Rx is working as intended, it’s borderline magical. When it has a problem, the debugging process is considerably more difficult. Any breakpoint you hit within a data stream will present a 40+ entry backtrace stack with dozens of inscrutable internal Rx methods separating and obscuring the code you actually wrote.
  • Rx metastasizes throughout your code base. The entry and exit points where Rx interacts with UIKit are awkward and difficult to parse. We often found ourselves saying “oh, well if this service’s method returned an Observable instead of a variable, we could do this particular transformation more easily…” and so Rx spreads to another class in your app.

In the end, we can’t say we dramatically cut development time by using RxSwift; it simply replaced one class of problems (maintaining cross-platform consistency) with another (figuring out how best to use RxSwift). We will be launching into the next phase of the project soon, updating the app for Google Cloud Next ’19. I’m sure I will have more to talk about once that effort has completed next year.

Getting Swift 4 KVO working…

Here are 2 common pitfalls to avoid when you’re trying to use Swift 4 Key-Value Observation for the first time:

Keep that Observation object!

Calling YourObject.observe(_:options:changeHandler:) returns an NSKeyValueObservation object. The observation will only continue for as long as the observation object exists! If you fail to store the observation object in a persistent variable or array, it will be deallocated immediately and no observations will occur.
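
Here’s a minimal sketch of what that looks like (the class names are just placeholders):

```swift
import Foundation

class Downloader: NSObject {
    // Swift 4 KVO requires NSObject subclasses and @objc dynamic properties.
    @objc dynamic var progress: Double = 0
}

class ProgressReporter {
    private let downloader = Downloader()

    // Keep the observation in a stored property; if it deallocates,
    // the change handler stops firing.
    private var progressObservation: NSKeyValueObservation?

    init() {
        progressObservation = downloader.observe(\.progress) { _, _ in
            print("Progress changed")
        }
    }
}
```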

Always Specify Options!

The 2nd parameter of the observe() method has a default value which is unhelpfully just called default in the quick documentation. It is the equivalent of providing an empty option set, which means your change handler closure will be invoked when the value changes, but you will not get any information about the new or old value! Even if this is the behavior you want, it is better to explicitly specify an empty option set so that someone else looking at your code immediately knows not to expect a value in the change handler.
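
Building on the sketch above, passing the options explicitly makes the intent obvious and gives you the values you need:

```swift
// Request the old and new values explicitly. With the default (empty)
// option set, change.oldValue and change.newValue are both nil.
progressObservation = downloader.observe(\.progress,
                                         options: [.old, .new]) { _, change in
    print("Went from \(change.oldValue ?? 0) to \(change.newValue ?? 0)")
}
```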

Handling Drag & Drop Raw Photos

There is no shortage of tutorials on the topic of Drag & Drop, but I wanted to get into a particular special case which creators of apps that support dropped images should be aware of. If your user is a photographer who shoots with an SLR and imports their images in camera raw format, those images will not be accepted for drop unless you handle them specially.

Specifically, Raw images conform to the kUTTypeRawImage UTI, which is not included in UIImage’s readableTypeIdentifiersForItemProvider. This is because UIImage cannot import raw images directly, requiring a detour through Core Image. Luckily, this is pretty easy to accomplish, as Core Image has a simple way to handle Raw images.

Note: My project is using a UICollectionView as the drop target, so I’m working with the methods in the UICollectionViewDropDelegate protocol. Working with custom views should be broadly similar, but I haven’t tried it out yet.

Step 1: Getting the Raw Image

When a Raw image is dropped on an app, the image data is not sent as with UIImage-compatible images accessed via UIDropSession’s loadObjects(ofClass:completion:) convenience method. Instead, a URL is provided to your app. However, if you attempt to read or copy the file at the URL, it will always fail as being unavailable. You may be tempted to try to use NSItemProvider’s loadItem(forTypeIdentifier:options:completionHandler:) method, but it has awful ergonomics in Swift (you can’t implicitly coerce a protocol into a conforming type) and what’s more, even if you do force it to give you the URLs for the Raw images, they will all be unavailable and useless, possibly due to sandbox restrictions.

The correct way to do this is to iterate over the list of UIDragItems, filtering by those which have an NSItemProvider that responds affirmatively to hasItemConformingToTypeIdentifier(_:) for kUTTypeRawImage. You can then iterate over the filtered NSItemProviders and call loadFileRepresentation(forTypeIdentifier:completionHandler:) on each one:
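
Here’s a rough sketch of that filtering step, called from your UICollectionViewDropDelegate’s collectionView(_:performDropWith:) implementation (the handleRawFile closure is a stand-in for the copy-and-convert work described below):

```swift
import UIKit
import MobileCoreServices

func loadRawImages(from coordinator: UICollectionViewDropCoordinator,
                   handleRawFile: @escaping (URL) -> Void) {
    let rawUTI = kUTTypeRawImage as String

    // Keep only the drag items whose provider can vend a raw image file.
    let rawProviders = coordinator.items
        .map { $0.dragItem.itemProvider }
        .filter { $0.hasItemConformingToTypeIdentifier(rawUTI) }

    for provider in rawProviders {
        provider.loadFileRepresentation(forTypeIdentifier: rawUTI) { url, _ in
            // This URL is only valid inside the closure (see the warning below),
            // so copy the file somewhere local before doing anything else with it.
            guard let url = url else { return }
            handleRawFile(url)
        }
    }
}
```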

This method makes an accessible copy of the Raw image and returns the URL of the copy to the completion block. Why use this method and not, say, loadDataRepresentation(forTypeIdentifier:completionHandler:)? The answer is memory: Raw images tend to be very large (dozens of MB each) and if your user has dropped a bunch of them on your app, attempting to hold all of them in memory could cause the system to kill your app for eating up too much memory. Using the URL instead of the contents of the file consumes basically no memory until a specific image needs to be loaded for processing. In tests, I was able to drop 10+ Raw images onto my app for processing and never see the memory go above about 70MB, dropping back down to 20-30MB when processing completed.

Warning: The URLs provided by loadFileRepresentation(forTypeIdentifier:completionHandler:) only seem to be valid for the scope of the completion closure. You shouldn’t try to hold on to them and load them later, because doing so will fail. Instead, copy the file into your app’s sandbox (such as the Caches directory) and use the URL of the local copy to access the Raw image later.

Step 2: Converting the Raw Image

Now we have a local URL for the Raw images, but we still can’t do much with them since UIImage is unable to initialize with Raw image data. Thankfully, Apple has a simple and robust conversion mechanism based on Core Image. Instantiating a CIFilter using the init(imageURL:options:) method creates an instance of CIRawFilter, which can process all of the various Raw image formats that Apple officially supports. The filter’s outputImage property (a CIImage) can then be sent to a CIContext for rendering to a CGImage or used directly to apply filter effects and image adjustments:
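
Here’s a minimal sketch of that conversion, assuming localURL is the sandbox copy made in Step 1 (error handling and context reuse are left out for brevity):

```swift
import UIKit
import CoreImage

func makeImage(fromRawFileAt localURL: URL) -> UIImage? {
    // CIFilter(imageURL:options:) hands back a RAW-processing filter for the file.
    // (On some SDK versions this initializer is failable; unwrap if the compiler asks.)
    let rawFilter = CIFilter(imageURL: localURL, options: nil)
    guard let ciImage = rawFilter.outputImage else { return nil }

    // The first render is slow while the context warms up; reuse the
    // context in real code so subsequent conversions stay fast.
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```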

In practice, the first run of this conversion is pretty slow as the CIContext sets itself up, taking up to several seconds. Subsequent uses are very fast, requiring only a fraction of a second. At the end of this process, you have a UIImage that can be used just like a regular dropped image.

ARKit + RGB Sampling

I’ve been working on an ARKit app to paint with “pixels” floating in space. When the ARSession invokes its delegate method for ARFrame capture, I want to capture the colors the camera sees at the detected feature points. I then create a simple 3D box at that point with the sampled color and can then pan around the pixel. This is pretty neat when you scan someone’s face and then have them leave.

Anyway, that whole “sample the color of the camera’s captured image at an arbitrary 2D coordinate” turned out to be a dramatically more difficult problem than I had anticipated. Obstacles include:

  • Image is in CVPixelBuffer format.
  • The pixel buffer is in YCbCr planar format (the camera’s raw format), not RGB.
  • Converting individual samples from YCbCr to RGB is non-trivial and involves doing matrix multiplication (see the sketch after this list).
  • There are several different conversion matrices out there for handling different color spaces, just in case you wanted to convert an image captured off a VHS tape, I guess?
  • Apple’s Accelerate framework can do this conversion on the entire image very quickly, but the setup is quite complex and consists of invoking a chain of complex C functions. Once properly configured, it is spectacularly fast, converting an entire camera image in roughly 1/2 of a millisecond.
  • The Accelerate framework has not received much love since Apple’s switch to the unified documentation style last year: hundreds of functions appear nowhere in the documentation. The only way to figure out that they exist and how to use them is to browse the Accelerate header files, which are robustly commented.
  • Swift’s type safety is a big pain in the butt when you’re dealing with unsafe data structures like image buffers.
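
For reference, here’s the per-sample version of that math in Swift, assuming full-range BT.601 coefficients (the gist uses Accelerate to do the equivalent conversion for the entire image at once, and the exact matrix it uses may differ):

```swift
import Foundation

// Convert one YCbCr sample to RGB using approximate BT.601 full-range math.
func rgb(fromY y: UInt8, cb: UInt8, cr: UInt8) -> (r: UInt8, g: UInt8, b: UInt8) {
    let yF  = Float(y)
    let cbF = Float(cb) - 128
    let crF = Float(cr) - 128

    let r = yF + 1.402 * crF
    let g = yF - 0.344 * cbF - 0.714 * crF
    let b = yF + 1.772 * cbF

    // Clamp to the displayable 0...255 range before narrowing back to UInt8.
    let clamp = { (v: Float) -> UInt8 in UInt8(max(0, min(255, v))) }
    return (clamp(r), clamp(g), clamp(b))
}
```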

Setting up ARKit to display the “pixels” took about 2 hours (my first ARKit experiment and my first exposure to SceneKit). Getting the color samples to color the pixels took about 2 days. I don’t feel like this learning process is anything that is particularly valuable for your average ARKit developer to master, so I’ve tidied it up and released it as a gist.

Check it out: CapturedImageSampler.swift

Usage: when your app receives a new ARFrame via the ARSession’s delegate callback, instantiate a new CapturedImageSampler with it. You are then free to query it for the color of a particular coordinate. I’m using scalar coordinates so that the sampling is scale-independent. If you want to find the color under a user’s tap, for instance, simply convert the x and y coordinates to scalars by dividing them by the screen width and screen height, respectively. When you’re done sampling (which must occur before the next frame arrives), simply discard the CapturedImageSampler by letting it go out of scope. Do not retain the sampler, use it asynchronously or pass it between threads. It should not live longer than the ARFrame that created it.
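
In sketch form, usage inside the ARSessionDelegate callback looks something like this (the sampler’s method names and the tap-tracking helpers here are illustrative; see the gist for the real API):

```swift
import ARKit
import UIKit

// `lastTapLocation` and `addPixelNode(with:)` are placeholders for your own code.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let sampler = try? CapturedImageSampler(frame: frame) else { return }

    // Scalar coordinates: divide a screen point by the screen's dimensions.
    let x = lastTapLocation.x / UIScreen.main.bounds.width
    let y = lastTapLocation.y / UIScreen.main.bounds.height

    if let color = sampler.getColor(atX: x, y: y) {
        // Use the color immediately; never retain the sampler past this frame.
        addPixelNode(with: color)
    }
}
```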

A word of warning: this object is not at all thread-safe due to the private use of a shared static buffer. I chose this implementation for maximum performance, since a new buffer does not need to be allocated for every frame received from ARSession. However, if you get into a situation where 2 instances of CapturedImageSampler are simultaneously attempting to access the shared buffer you will have a very bad day. If you need to have a thread-safe version of this, I suggest you make the rawRGBBuffer property non-static and add a “release” method that frees up the buffer’s memory when you’re done with it. Failure to manage this process correctly will result in a catastrophic memory leak that will get your app terminated within a couple of seconds.

Quick note on CoreML performance…

I just got done doing some benchmarking using the Oxford102 model to identify types of flowers on an iPhone 7 Plus from work. The Oxford102 is a moderately large model, weighing in at around 229MB. As soon as the lone view in the app is instantiated, I’m loading an instance of the model into memory, which seems to allocate about 50MB.

The very first time the model is queried after a cold app launch, there is a high degree of latency. Across several runs I saw an average of around 900ms for the synchronous call to the model to return. However, on subsequent uses the performance increases dramatically, with an average response time of around 35ms. That’s good enough to provide near-real-time analysis of video, when you factor in the overhead of scaling the source image to the appropriate input size for the model (in this case, 227×227). Even if you were only updating the results every 3-4 frames, it would still feel nearly instantaneous to the user.

From a practical standpoint, it would probably be a good idea to exercise the model once in the background before using it in a user-noticeable way. This will prevent the slow “first run” from being noticed.
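
A sketch of one way to do that, using Vision so the warm-up doesn’t depend on the generated class’s input names (Oxford102 here stands in for whatever class name Xcode generated for your model):

```swift
import UIKit
import CoreML
import Vision

func warmUpFlowerModel() {
    DispatchQueue.global(qos: .utility).async {
        guard let visionModel = try? VNCoreMLModel(for: Oxford102().model) else { return }
        let request = VNCoreMLRequest(model: visionModel)

        // Run one throwaway classification on a blank 227×227 image.
        let blank = UIGraphicsImageRenderer(size: CGSize(width: 227, height: 227))
            .image { _ in }
        guard let cgImage = blank.cgImage else { return }
        try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    }
}
```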

A note on the Swift 4 Package Manager Syntax

I ran into some issues setting up a new Swift 4 project from Swift Package Manager. Specifically, my main.swift file couldn’t import the dependencies I specified in my Package.swift file. It turns out you have to declare your dependencies in the root dependencies: section, then refer to them by module name in the targets() portion of the package.

Omitting the declaration in your target means the module won’t be available to your app and your import statements will generate compiler errors for nonexistent modules.
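
For reference, here’s the shape of a Swift 4 manifest that gets it right (“MyTool” and the SwiftProtobuf dependency are placeholders for your own package and modules):

```swift
// swift-tools-version:4.0
import PackageDescription

let package = Package(
    name: "MyTool",
    dependencies: [
        // 1. Declare where the package comes from...
        .package(url: "https://github.com/apple/swift-protobuf.git", from: "1.0.0"),
    ],
    targets: [
        // 2. ...then list its module by name here, or `import SwiftProtobuf`
        //    in main.swift will fail with "no such module".
        .target(name: "MyTool", dependencies: ["SwiftProtobuf"]),
    ]
)
```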

Protobuf Server

Quick Links:
iOS App · Vapor Server · Perfect Server

One of the primary challenges in learning to work with Protocol Buffers is finding an API to communicate with. Adoption is currently not widespread and I had trouble finding public APIs willing to communicate via protobufs. So, I decided to create my own API using server-side Swift, thus fulfilling the requirements (tenuously) for calling myself a full-stack developer. I looked at two of the most popular Swift web server frameworks currently available: Vapor and Perfect.

The Contenders

Both offer easy setup via assistant tools (Vapor Toolbox and Perfect Assistant, respectively). However, the default projects’ setups are philosophically quite different. Vapor’s default setup is a fully-fledged web server with example HTML files, CSS, communication over multiple protocols, etc. Perfect’s default setup is extremely spartan, relying on the developer to add features as needed. Going head-to-head on documentation, I’d give the slight edge to Vapor, but both clearly explain how to handle requests and responses. Vapor has the reputation for having a larger and more approachable support community if you have questions, but I didn’t engage with either community so I cannot verify this.

Adding Protobufs to either default project is as simple as adding a dependency for it to the Package.swift file and running swift build:

.Package(url: "https://github.com/apple/swift-protobuf.git", Version(0,9,29))

Note: At the time of writing, the Swift Protobuf team considers 0.9.29 to be their release candidate, and may soon move the project to a full 1.0 release.

Once that is done, running swift build in the terminal from the root directory of the project will download the Swift Protobuf library and integrate it with your project. At this point, you’re ready to include the model files created from the .proto definitions. If you are unfamiliar with how to compile .proto files into Swift, I recommend this article as a primer. Once the models are in your Sources/ directory, you can use them in your request handling code to serialize and deserialize binary messages.

Working with Protobufs

Making a simple API server actually involves gutting the default implementation of both the Vapor and Perfect default projects, which are both set up to serve HTML responses. If you want to send Protobuf data to the server from your client app, you will need to use a POST route, as a GET request cannot carry a binary body. If you are simply going to request data from the server, then GET is appropriate. If you’re receiving data, simply access the body data of the POST request and feed it into the init(serializedData:) initializer of your Protobuf model object to get a copy you can manipulate as you see fit.

To send a Protobuf response to the client app, just follow these general steps:

  1. Create a new instance of the Protobuf model object.
  2. Assign values to the properties.
  3. Call try object.serializedData() to get the Data representation of the object.
  4. Assign the data to the body of the response.
  5. Set the content-type header to application/octet-stream. (This is optional, but is a good practice.)
  6. Send the response with a 200 OK response code.
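
Here’s a framework-agnostic sketch of steps 1–3, assuming a generated message type called SessionInfo (a stand-in for whatever your .proto defines). Steps 4–6 are just a matter of handing the resulting Data to Vapor’s or Perfect’s response object:

```swift
import Foundation
import SwiftProtobuf

func makeResponseBody() throws -> Data {
    var session = SessionInfo()          // 1. new model instance
    session.id = 42                      // 2. assign values to the properties
    session.title = "Keynote"
    return try session.serializedData()  // 3. binary Data for the response body
}

// On the receiving side, the same type rebuilds itself from the request body:
// let session = try SessionInfo(serializedData: requestBody)
```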

The iOS app linked above shows the basics of using Protobufs with URLSession to parse the object(s) being returned by the server.

Protobufs 💕 BLE

Bluetooth Low Energy + Protocol Buffers

a.k.a. BLEProtobufs

Proto-whats?

Protocol buffers (protobufs) are the hot new data transport scheme created and open-sourced by Google. At its core, it is a way to describe well-structured model objects that can be compiled into native code for a wide variety of programming languages. The primary implementation provided by Google supports Objective-C, but not Swift. However, thanks to the protobuf compiler’s plug-in architecture, Apple has been able to release a Swift plug-in that enables the protocol declarations to be compiled into Swift 3 code.

The companion to this is a framework (distributed along with the plug-in) that handles the transformation of the model objects to and from JSON or a compressed binary format. It is this latter capability that we are interested in for the purposes of communicating with Bluetooth Low Energy (BLE) devices.

The primary selling point of protobufs is their ability to describe the data contract between devices running different programming languages, such as an iOS app and a .Net API server. There are dozens of excellent blog posts scattered about the web on protobufs, so that is all I will say about them here.

Here is the protobuf declaration for the message I will be sending between devices via BLE:
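
(The field names below are illustrative; the message simply pairs a float timestamp with three integer rotation angles, as described later in this post.)

```protobuf
syntax = "proto3";

message DeviceOrientation {
  float timestamp = 1;
  int32 pitch = 2;
  int32 roll = 3;
  int32 yaw = 4;
}
```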

A Quick BLE Primer

There are two primary actors in a BLE network: peripherals and centrals. Peripherals are devices which exist to provide data; they advertise their presence for all nearby devices to see. When connected to, they deliver periodic data updates (usually on the order of 1-2 times per second or less). The second type of device, the “central”, can connect to multiple peripherals in order to read data from and write data to them.

A peripheral’s data is arranged into semantically-related groups called “services”. Within each service exists one or more data points, known as characteristics. Centrals can subscribe to the peripheral’s characteristics and will be notified when the value changes. The BLE standard favors brevity and low power consumption, so the default data payload of a characteristic is only 20 bytes (not kilobytes).

Data from a characteristic is received as just that, a plain Data object containing the bytes of the value. Thus, it is often incumbent upon the iOS developer to parse this data into native types like Int, Float, String, etc. This process is complex and error-prone, as working with individual bytes is not a common use case for Swift.

Enter Protobufs

As I mentioned above, protocol buffers can encode themselves in a compressed binary format. This makes them ideal for data transport over BLE, where space is at a premium. In the example project I link to below, I am transmitting a timestamp in the form of an NSTimeInterval (double) cast to a float and three Int32 values representing the spatial orientation of the host device. I converted the rotational units from floating-point radians to integer-based degrees because integers compress much better than floating-point numbers in protobufs. After I set the properties of the model object, I request its Data representation, which I save as the value of the characteristic. The data payload ranges from 5 to 12 bytes, based largely on the magnitude of the orientation angles (larger magnitude angles compress less). This is well below the 20-byte goal size.

In action:
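
On the sending side, that boils down to something like this sketch (assuming the DeviceOrientation message from the .proto sketch above and a CBMutableCharacteristic you’ve already added to your service):

```swift
import CoreBluetooth
import SwiftProtobuf

func broadcast(pitch: Int32, roll: Int32, yaw: Int32,
               using peripheralManager: CBPeripheralManager,
               characteristic: CBMutableCharacteristic) {
    var message = DeviceOrientation()
    message.timestamp = Float(Date().timeIntervalSince1970)
    message.pitch = pitch
    message.roll = roll
    message.yaw = yaw

    guard let payload = try? message.serializedData() else { return }
    // The binary payload is typically 5–12 bytes, comfortably under
    // the 20-byte characteristic limit.
    _ = peripheralManager.updateValue(payload,
                                      for: characteristic,
                                      onSubscribedCentrals: nil)
}
```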

On the central (receiving) end, the app is notified via a delegate callback whenever the subscribed characteristic’s value changes. I take the Data value from the characteristic argument and pass it to the initializer of the protobuf-backed model object. Voila! Instant model object with populated properties that I can do with what I please.

In action:
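
And on the receiving side, the delegate callback is only a few lines (OrientationReceiver is a stand-in for whatever object you register as the peripheral’s delegate):

```swift
import CoreBluetooth
import SwiftProtobuf

final class OrientationReceiver: NSObject, CBPeripheralDelegate {
    // Called whenever the subscribed characteristic's value changes.
    func peripheral(_ peripheral: CBPeripheral,
                    didUpdateValueFor characteristic: CBCharacteristic,
                    error: Error?) {
        guard error == nil,
              let data = characteristic.value,
              let message = try? DeviceOrientation(serializedData: data) else { return }

        print("pitch: \(message.pitch), roll: \(message.roll), yaw: \(message.yaw)")
    }
}
```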

I have a pair of example projects available. The sending app is designed to be run on an iOS device and the receiving app is a simple OS X command line app built using Swift Package Manager (because frameworks + Swift CLI apps = hell). I’ve written the core of both apps using only Foundation and CoreBluetooth, so the sending and receiving roles should be easy to swap between different platforms.

Peripheral (sender) app

Central (receiver) app

Beyond View Controllers

In a nutshell: Remove from ViewControllers all tasks which are not view-related.

Quick Links:
Architecture Diagram PDF
Example Project

Problems with ViewControllers in MVC

The View Controller is typically the highest level of organization in the iOS standard MVC app. This tends to make them accumulate a wide variety of functionality that causes them to grow in both size and complexity over the course of a project’s development. Here are the basic issues I have with the role of view controllers in the “standard” iOS MVC pattern:

  • Handle too many tasks:
    • View hierarchy management
    • API Interaction
    • Data persistence
    • Intra-Controller data flow
  • Need to have knowledge of other ViewControllers to pass state along.
  • Difficult to test business logic tied to the view structure.

Guiding Principles of Coordinated MVC

Tasks, not Screens

The architecture adds a level of organization above the View Controller called the Coordinator layer. The Coordinator objects break the user flow of your app into discrete tasks that can be performed in an arbitrary order. Example tasks for a simple shopping app might be: Login, Create Account, Browse Content, Checkout, and Help.

Each Coordinator manages the user flow through a single task. It is important to note that there is not a unique relationship between Coordinators and the screens they manage; multiple Coordinators can call upon the same screen as part of their flow. We want a Coordinator to completely define a task from beginning to completion, only changing to a different Coordinator when the task is complete or the user takes action to switch tasks in mid-flow.

Rationale: When View Controllers must be aware of their role within a larger task, they tend to become specialized for that role and tightly coupled to it. Then, when the same view controller is needed elsewhere in the app, the developer is faced with the task of either putting branching logic all over the class to handle the different use cases or duplicating the class and making minor changes to it for each use case.

When combined with Model Isolation and Mindful State Mutation, having the control flow of the app determined at a higher level than the view controller solves this scenario, allowing the view controller to be repurposed more easily.

Model Isolation

View Controllers must define all of their data requirements in the form of a DataSource protocol. Every view controller will have a var dataSource: DataSource? property that will be its sole source of external information. Essentially, this is the same as a View Model in the MVVM pattern.

Rationale: When View Controllers start reaching out directly to the Model or service-layer objects (API clients, persistence stacks, etc.) they begin to couple the model tightly to their views, making testing increasingly difficult.

Mindful State Mutation

View Controllers shall define all of their external state mutations in the form of a Delegate protocol. Every view controller will have a var delegate: Delegate? property that will be the only object that the View Controller reaches out to in order to mutate external state. That is to say, the View Controller can take whatever actions are necessary to ensure proper view consistency, but when there is a need to change to a new screen or take some other action that takes place “outside” itself, it invokes a method on its delegate.

Rationale: In the traditional MVC architecture, View Controllers become tightly coupled to each other, either by instantiating their successor view controller and pushing it onto a Nav Controller, or by invoking a storyboard segue and then passing model and state information along in prepareForSegue(). This coupling makes it much more difficult to test that the user flow of your app is working as expected, particularly in situations with a lot of branching logic.
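
Here’s a minimal sketch of what these two protocols look like in practice, for a hypothetical product-list screen (all names are illustrative):

```swift
import UIKit

protocol ProductListDataSource: AnyObject {
    var productNames: [String] { get }
}

protocol ProductListDelegate: AnyObject {
    func productList(_ controller: ProductListViewController,
                     didSelectProductNamed name: String)
}

final class ProductListViewController: UIViewController {
    // The ViewModel supplied by a TaskCoordinator; the only source of external data.
    weak var dataSource: ProductListDataSource?
    // Usually the TaskCoordinator; the only route for external state changes.
    weak var delegate: ProductListDelegate?

    func userTappedRow(at index: Int) {
        guard let name = dataSource?.productNames[index] else { return }
        // External state change: hand it off rather than pushing a new screen ourselves.
        delegate?.productList(self, didSelectProductNamed: name)
    }
}
```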

The Architecture in Depth


Download PDF Version

Task

A global enum that contains a case for every possible user flow within the app. Each task should have its own TaskCoordinator.

App Coordinator

The ultimate source of truth about what state the app should be in. It manages the transitions between the TaskCoordinator objects. It decides which Task should be started on app launch (useful when deciding whether to present a login screen, or take the user straight to content). The AppCoordinator decides what to do when a Task completes (in the form of a delegate callback from the currently active TaskCoordinator).

The AppCoordinator holds a reference to the root view controller of the app and uses it to parent the various TaskCoordinator view controllers. If no root view controller is specified, the AppCoordinator assumes it is being tested and does not attempt to perform view parenting.

The AppCoordinator creates and retains the service layer objects, using dependency injection to pass them to the TaskCoordinators, which then inject them into the ViewModels.

Task Coordinator

Manages the user flow for a single Task through an arbitrary number of screens. It has no knowledge of any other TaskCoordinator and interacts with the AppCoordinator via a simple protocol that includes methods for completing its Task or notifying the AppCoordinator that a different Task should be switched to.

TaskCoordinators create and manage the ViewModel objects, assigning them as appropriate to the dataSource of the various View Controllers that they manage.

Service Layer

Objects in the service layer encapsulate business logic that should be persisted and shared between objects. Some examples might be a UserAuthenticationService that tracks the global auth state for the current user or an APIClient that encapsulates the process of requesting data from a server.

Service layer objects should never be accessed directly by View Controllers! Only ViewModel and Coordinator objects are permitted to access services. If a View Controller needs information from a service, it should declare the requirement in its DataSource protocol and allow the ViewModel to fetch it.

Avoid giving in to the siren call of making your service layer objects singletons. Doing so will make testing your Coordinator and ViewModel objects more difficult, because you will not be able to substitute mock services that return a well-defined result.

If you want to do data/API response mocking—say because the API your app relies on won’t be finished for another couple of weeks—these objects are where it should occur. You can build finished business logic into your ViewModel and Coordinator objects that doesn’t need to change at all once you stop mocking data and connect to a live API.

View Model

ViewModel objects are created and owned by TaskCoordinators. They should receive references to the service layer objects they require in their constructors (dependency injection). A single ViewModel may act as the DataSource for multiple View Controllers, if sharing state between those controllers is advantageous.

ViewModels should only send data down to the View Controller, and should not be the recipient of user actions. The TaskCoordinator that owns the ViewModel and is acting as the View Controller’s delegate will mutate the ViewModel with state changes resulting from user actions.

Putting it into Practice

I have created a simple “Weather App” example project that shows the architecture in action:

Example Project

Here’s how to follow the flow:

  1. In the AppDelegate you can see the AppCoordinator being instantiated and handed the root view controller.
  2. In the AppCoordinator‘s init method, observe how it checks to see if the user has “logged in”.
    • If the user is not logged in, the user is directed to the Login task to complete logging in.
    • If the user is logged in, then they are taken directly to the Forecast task.
  3. When tasks have completed their objective, they call their delegate’s taskCoordinator(finished:) method. This triggers the AppCoordinator to determine what the next task is. In a fully-fledged app, there could be a considerable amount of state inspection as part of this process.

Quick Rules for Conformance

  1. No view controller should access information except from its dataSource (View Model).
  2. No view controller should attempt to mutate state outside of itself except through its delegate (usually a TaskCoordinator).
  3. No view controller should have knowledge of any other view controller save those which it directly parents (embed segue or custom containment).
  4. View Controllers should never access the Service layer directly; always mediate access through the delegate and dataSource.
  5. A view controller may be used by any number of TaskCoordinator objects, so long as they are able to fulfill its data and delegation needs.

Thanks

A big thank you to Soroush Khanlou and Chris Dzombak and their fantastic Fatal Error podcast for giving me inspiration to create this.

JTSSwiftTweener: Animating arbitrary numeric properties

UIView.animate() and CoreAnimation provide an excellent framework for animating changes to visual properties of views in iOS. However, what if you want to animate a non-visual numeric property? Some examples might be:

  • Animate the properties of a CIFilter.
  • Animate a number changing within a label.
  • Animate ANYTHING which isn’t a UIView or CALayer property.

There are some hacky solutions you can do such as making a custom CALayer subclass which uses a delegate callback to report the setting of some property. However, this is cumbersome to set up and maintain, so I created my own tweening library to fill in the gap.

How it works

Tweener is a fairly simple class. It has static methods for creating tweens as well as pausing and resuming animation. At its core is a CADisplayLink which provides “ticks” that drive the animation. The core measures the elapsed time since the last tick and advances each of its child animations by that amount. This approach allows animation to finish in constant time, even when the frame rate is fluctuating.

When the Tweener.tween(...) method is called, a new instance of Tweener is created and returned. Simultaneously, it is added to the internal array of managed instances so that it can receive ticks. If the CADisplayLink is paused, it is unpaused.

With each tick, the individual Tweener instances are told how much time has elapsed. They, in turn, calculate how far along through their duration they are and update their progress closures appropriately. If a Tweener instance determines that elapsed time has equaled or exceeded its duration, it calls its completion closure (if it has one) and flags itself as complete. At the end of every tick, the Tweener class scans its instances and removes the completed ones. If the number of active instances is reduced to zero, then the CADisplayLink is paused.
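
The tick loop boils down to something like this simplified sketch (not the actual implementation; the real class adds easing, pause/resume, and the static management methods, but the control flow is the same idea):

```swift
import UIKit

final class TickDriver: NSObject {
    private var displayLink: CADisplayLink?
    // Each closure advances one animation by the elapsed time and returns true when done.
    private var animations: [(CFTimeInterval) -> Bool] = []
    private var lastTimestamp: CFTimeInterval = 0

    func add(_ advance: @escaping (CFTimeInterval) -> Bool) {
        animations.append(advance)
        if displayLink == nil {
            let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
            link.add(to: .main, forMode: .common)
            displayLink = link
            lastTimestamp = CACurrentMediaTime()
        }
    }

    @objc private func tick(_ link: CADisplayLink) {
        let elapsed = link.timestamp - lastTimestamp
        lastTimestamp = link.timestamp

        // Advance every animation; drop the ones that report completion.
        animations = animations.filter { !$0(elapsed) }

        // No active animations left: stop ticking until another is added.
        if animations.isEmpty {
            displayLink?.invalidate()
            displayLink = nil
        }
    }
}
```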

There is only one class file to include in your project, available here.

I also have a very simple example project for you to look at.

What’s next?

The two primary areas for improvement are:

  1. Performance – It seems to work pretty well, but I’ve not done extensive testing on the tick methods to ensure maximum efficiency.
  2. Additional Easing Functions – I only have two Easing families at the moment. There are dozens of variations documented online (see here), and adding a few more to the class would improve its flexibility.