Since time immemorial, iOS developers have been perplexed by a singular question:

“How do you resize an image?”

It’s a question of beguiling clarity, spurred on by a mutual mistrust of developer and platform. Myriad code samples litter Stack Overflow, each claiming to be the One True Solution™ — all others, mere pretenders.

In this week’s article, we’ll look at five distinct techniques for resizing images on iOS (and macOS, making the appropriate UIImage → NSImage conversions). But rather than prescribe a single approach for every situation, we’ll weigh ergonomics against performance benchmarks to better understand when to use one approach over another.

You can try out each of these image resizing techniques for yourself by downloading, building, and running this sample code project.

When and Why to Scale Images

Before we get too far ahead of ourselves, let’s establish why you’d need to resize images in the first place. After all, UIImageView automatically scales and crops images according to the behavior specified by its contentMode property. And in the vast majority of cases, .scaleAspectFit, .scaleAspectFill, or .scaleToFill provides exactly the behavior you need.

    imageView.contentMode = .scaleAspectFit
    imageView.image = image

So when does it make sense to resize an image?

When it’s significantly larger than the image view that’s displaying it.

Consider this stunning image of the Earth, from NASA’s Visible Earth image catalog:

At its full resolution, this image measures 12,000 px square and weighs in at a whopping 20 MB. You might not think much of a few megabytes given today’s hardware, but that’s just its compressed size. To display it, a UIImageView needs to first decode that JPEG into a bitmap. If you were to set this full-sized image on an image view as-is, your app’s memory usage would balloon to hundreds of megabytes, with no appreciable benefit to the user (a screen can only display so many pixels, after all).
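To put a rough number on that, a back-of-the-envelope sketch (estimatedBitmapSize is a hypothetical helper, not a framework API; actual memory use varies with the pixel format and any row padding the decoder adds):

```swift
import Foundation

// Hypothetical helper (not a framework API): estimate the memory
// footprint of a decoded bitmap at 4 bytes per pixel
// (RGBA, 8 bits per channel).
func estimatedBitmapSize(width: Int, height: Int) -> Int {
    return width * height * 4
}

// The 12,000 × 12,000 px Earth image decodes to roughly 576 MB,
// regardless of its ~20 MB compressed size on disk.
let bytes = estimatedBitmapSize(width: 12_000, height: 12_000)
print(bytes / 1_000_000) // prints 576
```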

By simply resizing that image to the size of the image view before setting its image property, you can use an order of magnitude less RAM:

Memory Usage (MB)
Without Downsampling: 220.2
With Downsampling: 23.7

This technique is known as downsampling, and it can significantly improve the performance of your app in these kinds of situations. If you’re interested in more information about downsampling and other image and graphics best practices, please refer to this excellent session from WWDC 2018.

Now, few apps would ever try to load an image this large… but it’s not too far off from some of the assets I’ve gotten back from designers. (Seriously, a 3 MB PNG for a color gradient?) So with that in mind, let’s take a look at the various ways that you can go about resizing and downsampling images.

This should go without saying, but all of the examples that load images from a URL do so for local files. Remember, it’s never a good idea to do networking synchronously on the main thread of your app.

Image Resizing Techniques

There are a number of different approaches to resizing an image, each with different capabilities and performance characteristics. And the examples we’re looking at in this article span frameworks both low- and high-level, from Core Graphics, vImage, and Image I/O to Core Image and UIKit:

For consistency, each of the following techniques share a common interface:

    func resizedImage(at url: URL, for size: CGSize) -> UIImage? { … }

    imageView.image = resizedImage(at: url, for: size)

Here, size is a measure of point size, rather than pixel size. To calculate the equivalent pixel size for your resized image, scale the size of your image view’s frame by the scale of your main UIScreen:

    let scaleFactor = UIScreen.main.scale
    let scale = CGAffineTransform(scaleX: scaleFactor, y: scaleFactor)
    let size = imageView.bounds.size.applying(scale)

If you’re loading a large image asynchronously, use a transition to have the image fade in when it’s set on the image view. For example:

    class ViewController: UIViewController {
        @IBOutlet var imageView: UIImageView!

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)

            let url = Bundle.main.url(forResource: "Blue Marble West",
                                      withExtension: "tiff")!

            DispatchQueue.global(qos: .userInitiated).async {
                let image = resizedImage(at: url, for: self.imageView.bounds.size)

                DispatchQueue.main.sync {
                    UIView.transition(with: self.imageView,
                                      duration: 1.0,
                                      options: [.curveEaseOut, .transitionCrossDissolve],
                                      animations: {
                                          self.imageView.image = image
                                      })
                }
            }
        }
    }

Technique #1: Drawing to a UIGraphicsImageRenderer

The highest-level APIs for image resizing are found in the UIKit framework. Given a UIImage, you can draw into a UIGraphicsImageRenderer context to render a scaled-down version of that image:

    import UIKit

    // Technique #1
    func resizedImage(at url: URL, for size: CGSize) -> UIImage? {
        guard let image = UIImage(contentsOfFile: url.path) else {
            return nil
        }

        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { (context) in
            image.draw(in: CGRect(origin: .zero, size: size))
        }
    }

UIGraphicsImageRenderer is a relatively new API, introduced in iOS 10 to replace the older UIGraphicsBeginImageContextWithOptions / UIGraphicsEndImageContext APIs. You construct a UIGraphicsImageRenderer by specifying a point size. The image method takes a closure argument and returns a bitmap that results from executing the passed closure. In this case, the result is the original image scaled down to draw within the specified bounds.

It’s often useful to scale the original size to fit within a frame without changing the original aspect ratio. AVMakeRect(aspectRatio:insideRect:) is a handy function found in the AVFoundation framework that takes care of that calculation for you:

    import func AVFoundation.AVMakeRect

    let rect = AVMakeRect(aspectRatio: image.size, insideRect: imageView.bounds)
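If you’re curious about the math behind that function, here’s a minimal sketch of the same aspect-fit calculation (scaledSize(for:toFit:) is a hypothetical helper of my own, not a framework API):

```swift
import Foundation

// Hypothetical helper (not a framework API): scale `size` by the
// smaller of the two width/height ratios so it fits inside
// `boundingSize` without changing its aspect ratio — the same
// calculation AVMakeRect(aspectRatio:insideRect:) performs for
// the resulting rect's size.
func scaledSize(for size: CGSize, toFit boundingSize: CGSize) -> CGSize {
    let ratio = min(boundingSize.width / size.width,
                    boundingSize.height / size.height)
    return CGSize(width: size.width * ratio, height: size.height * ratio)
}

// Fitting a square 12,000 × 12,000 image into a 400 × 300 view
// yields a 300 × 300 size.
let fitted = scaledSize(for: CGSize(width: 12_000, height: 12_000),
                        toFit: CGSize(width: 400, height: 300))
```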

Technique #2: Drawing to a Core Graphics Context

Core Graphics / Quartz 2D offers a lower-level set of APIs that allow for more advanced configuration.

Given a CGImage, a temporary bitmap context is used to render the scaled image, using the draw(_:in:) method:

    import UIKit
    import CoreGraphics

    // Technique #2
    func resizedImage(at url: URL, for size: CGSize) -> UIImage? {
        guard let imageSource = CGImageSourceCreateWithURL(url as NSURL, nil),
            let image = CGImageSourceCreateImageAtIndex(imageSource, 0, nil)
        else {
            return nil
        }

        let context = CGContext(data: nil,
                                width: Int(size.width),
                                height: Int(size.height),
                                bitsPerComponent: image.bitsPerComponent,
                                bytesPerRow: image.bytesPerRow,
                                space: image.colorSpace ?? CGColorSpace(name: CGColorSpace.sRGB)!,
                                bitmapInfo: image.bitmapInfo.rawValue)
        context?.interpolationQuality = .high
        context?.draw(image, in: CGRect(origin: .zero, size: size))

        guard let scaledImage = context?.makeImage() else {
            return nil
        }

        return UIImage(cgImage: scaledImage)
    }

This CGContext initializer takes several arguments to construct a context, including the desired dimensions and the amount of memory for each channel within a given color space. In this example, these parameters are fetched from the CGImage object. Next, setting the interpolationQuality property to .high instructs the context to interpolate pixels at a 👌 level of fidelity. The draw(_:in:) method draws the image at a given size and position, allowing for the image to be cropped on a particular edge or to fit a set of image features, such as faces. Finally, the makeImage() method captures the information from the context and renders it to a CGImage value (which is then used to construct a UIImage object).

Technique #3: Creating a Thumbnail with Image I/O

Image I/O is a powerful (albeit lesser-known) framework for working with images. Independent of Core Graphics, it can read and write between many different formats, access photo metadata, and perform common image processing operations. The framework offers the fastest image encoders and decoders on the platform, with advanced caching mechanisms — and even the ability to load images incrementally.

CGImageSourceCreateThumbnailAtIndex offers a concise API with different options than those found in equivalent Core Graphics calls:

    import ImageIO

    // Technique #3
    func resizedImage(at url: URL, for size: CGSize) -> UIImage? {
        let options: [CFString: Any] = [
            kCGImageSourceCreateThumbnailFromImageIfAbsent: true,
            kCGImageSourceCreateThumbnailWithTransform: true,
            kCGImageSourceShouldCacheImmediately: true,
            kCGImageSourceThumbnailMaxPixelSize: max(size.width, size.height)
        ]

        guard let imageSource = CGImageSourceCreateWithURL(url as NSURL, nil),
            let image = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options as CFDictionary)
        else {
            return nil
        }

        return UIImage(cgImage: image)
    }

Given a CGImageSource and a set of options, the CGImageSourceCreateThumbnailAtIndex(_:_:_:) function creates a thumbnail of an image. Resizing is accomplished by the kCGImageSourceThumbnailMaxPixelSize option, which specifies the maximum dimension used to scale the image at its original aspect ratio. By setting either the kCGImageSourceCreateThumbnailFromImageIfAbsent or kCGImageSourceCreateThumbnailFromImageAlways option, Image I/O automatically caches the scaled result for subsequent calls.

Technique #4: Lanczos Resampling with Core Image

Core Image provides built-in Lanczos resampling functionality by way of the eponymous CILanczosScaleTransform filter. Although arguably a higher-level API than UIKit, the pervasive use of key-value coding in Core Image makes it unwieldy.

That said, at least the pattern is consistent.

The process of creating a transform filter, configuring it, and rendering an output image is no different from any other Core Image workflow:

    import UIKit
    import CoreImage

    let sharedContext = CIContext(options: [.useSoftwareRenderer: false])

    // Technique #4
    func resizedImage(at url: URL, scale: CGFloat, aspectRatio: CGFloat) -> UIImage? {
        guard let image = CIImage(contentsOf: url) else {
            return nil
        }

        let filter = CIFilter(name: "CILanczosScaleTransform")
        filter?.setValue(image, forKey: kCIInputImageKey)
        filter?.setValue(scale, forKey: kCIInputScaleKey)
        filter?.setValue(aspectRatio, forKey: kCIInputAspectRatioKey)

        guard let outputCIImage = filter?.outputImage,
            let outputCGImage = sharedContext.createCGImage(outputCIImage,
                                                            from: outputCIImage.extent)
        else {
            return nil
        }

        return UIImage(cgImage: outputCGImage)
    }

The Core Image filter named CILanczosScaleTransform accepts an inputImage, an inputScale, and an inputAspectRatio parameter, each of which is pretty self-explanatory.

More interestingly, a CIContext is used here to create a UIImage (by way of a CGImageRef intermediary representation), since UIImage(ciImage:) doesn’t often work as expected. Creating a CIContext is an expensive operation, so a cached context is used for repeated resizing.

A CIContext can be created to use either the GPU or the CPU (much slower) for rendering. Specify the .useSoftwareRenderer option in the initializer to choose which one to use. (Hint: Use the faster one, maybe?)

Technique #5: Image Scaling with vImage

Last up, it’s the venerable Accelerate framework — or more specifically, the vImage image-processing sub-framework.

vImage comes with a bevy of different functions for scaling an image buffer. These lower-level APIs promise high performance with low power consumption, but at the cost of managing the buffers yourself (not to mention, significantly more code to write):

    import UIKit
    import Accelerate.vImage

    // Technique #5
    func resizedImage(at url: URL, for size: CGSize) -> UIImage? {
        // Decode the source image
        guard let imageSource = CGImageSourceCreateWithURL(url as NSURL, nil),
            let image = CGImageSourceCreateImageAtIndex(imageSource, 0, nil),
            let properties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil) as? [CFString: Any],
            let imageWidth = properties[kCGImagePropertyPixelWidth] as? vImagePixelCount,
            let imageHeight = properties[kCGImagePropertyPixelHeight] as? vImagePixelCount
        else {
            return nil
        }

        // Define the image format
        var format = vImage_CGImageFormat(bitsPerComponent: 8,
                                          bitsPerPixel: 32,
                                          colorSpace: nil,
                                          bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.first.rawValue),
                                          version: 0,
                                          decode: nil,
                                          renderingIntent: .defaultIntent)

        var error: vImage_Error

        // Create and initialize the source buffer
        var sourceBuffer = vImage_Buffer()
        defer { sourceBuffer.data.deallocate() }
        error = vImageBuffer_InitWithCGImage(&sourceBuffer,
                                             &format,
                                             nil,
                                             image,
                                             vImage_Flags(kvImageNoFlags))
        guard error == kvImageNoError else { return nil }

        // Create and initialize the destination buffer
        var destinationBuffer = vImage_Buffer()
        error = vImageBuffer_Init(&destinationBuffer,
                                  vImagePixelCount(size.height),
                                  vImagePixelCount(size.width),
                                  format.bitsPerPixel,
                                  vImage_Flags(kvImageNoFlags))
        guard error == kvImageNoError else { return nil }

        // Scale the image
        error = vImageScale_ARGB8888(&sourceBuffer,
                                     &destinationBuffer,
                                     nil,
                                     vImage_Flags(kvImageHighQualityResampling))
        guard error == kvImageNoError else { return nil }

        // Create a CGImage from the destination buffer
        guard let resizedImage =
            vImageCreateCGImageFromBuffer(&destinationBuffer,
                                          &format,
                                          nil,
                                          nil,
                                          vImage_Flags(kvImageNoAllocate),
                                          &error)?.takeRetainedValue(),
            error == kvImageNoError
        else {
            return nil
        }

        return UIImage(cgImage: resizedImage)
    }

The Accelerate APIs used here clearly operate at a much lower level than any of the other resizing methods discussed so far. But get past the unfriendly-looking type and function names, and you’ll find that this approach is rather straightforward.

First, create a source buffer from your input image.

Then, create a destination buffer to hold the scaled image.

Next, scale the image data from the source buffer to the destination buffer.

Finally, create an image from the resulting image data in the destination buffer.

Performance Benchmarks

So how do these various approaches stack up to one another?

Here are the results of some performance benchmarks performed on an iPhone 7 running iOS 12.2, using this sample code project.

The following numbers show the average runtime across multiple iterations for loading, scaling, and displaying that jumbo-sized picture of the earth from before:

Time (seconds)
Technique #1: UIKit — 0.1420
Technique #2: Core Graphics 1 — 0.1722
Technique #3: Image I/O — 0.1616
Technique #4: Core Image 2 — 2.4983
Technique #5: vImage — 2.3126

1 Results were consistent across different values of CGInterpolationQuality, with negligible differences in performance benchmarks.

2 Setting kCIContextUseSoftwareRenderer to true in the options passed on CIContext creation yielded results an order of magnitude slower than baseline.

Conclusions