After spending over 2,000 hours working with image transformations and releasing four successful apps, we’ve decided to share our experience with the community.

In this article you’ll learn how to move, scale and rotate images on Android, enabling customers to create super fancy selfies (like Snapchat’s).

Task

So the task is pretty simple: add the ability to move, scale and rotate stickers on Android.

Even though it sounds easy, there are a couple of challenges. First, Android devices come in countless screen sizes, and we’d better support them all (or as many as we can). Moreover, you may need to let users save and later edit their selfies. And if they open their creations on another device, the screen size might change, the loaded images might be of a different quality, and so on.

As you might have guessed, the task is getting more complicated now.

The solution needs to work on different screen sizes and be independent of the image quality.

So let the fun begin…

Model

Let’s define our model the following way:

Since you need to make it work on different screen sizes, sticking to absolute coordinates is a bad idea.

The position should be relative to the parent canvas — where the image will be drawn.

Relative coordinates of the model

The “scale” should also be relative. An image has scale 1.0 when its larger side matches the smaller side of the canvas, in other words, when the whole image just fits within the canvas:

Initial scale of the images
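Putting these ideas together, the model might be sketched like this (the class and field names are our own illustration, not from any particular library): a relative position, a relative scale and a rotation angle.

```java
// A sketch of a sticker model that stores only relative,
// device-independent values (names are illustrative).
public class Layer {
    // Position of the sticker's center relative to the canvas size:
    // 0.0 is the left/top edge, 1.0 the right/bottom edge.
    private float x = 0.5f;
    private float y = 0.5f;

    // Relative scale: 1.0 means the image's larger side
    // matches the smaller side of the canvas.
    private float scale = 1.0f;

    // Rotation in degrees around the sticker's center.
    private float rotationDegrees = 0f;

    public float getX() { return x; }
    public float getY() { return y; }
    public float getScale() { return scale; }
    public float getRotationDegrees() { return rotationDegrees; }

    public void postTranslate(float dx, float dy) { x += dx; y += dy; }
    public void postScale(float ds) { scale += ds; }
    public void postRotate(float degrees) { rotationDegrees += degrees; }
}
```

Nothing in this class depends on pixels, which is exactly what makes it portable across screens.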

Holy Scale

As you might have guessed by now, not all images fit the canvas like in the image above. The image needs to be pre-scaled to fit in. We called this parameter “holyScale”. It can be calculated the following way:
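Following the definition above, one way to compute it is as the ratio of the smaller canvas side to the larger image side (a sketch; the class and method names are ours):

```java
public class HolyScale {
    // holyScale pre-scales the raw bitmap so that, at model scale 1.0,
    // its larger side exactly matches the smaller side of the canvas.
    public static float calculate(int canvasWidth, int canvasHeight,
                                  int imageWidth, int imageHeight) {
        return 1.0f * Math.min(canvasWidth, canvasHeight)
                    / Math.max(imageWidth, imageHeight);
    }
}
```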

It is basically a dynamic variable that depends on the size of the sticker and the canvas.

What’s good about this parameter is that you can resize the canvas or load images of a different quality, and without changing the model you’ll still get the same relative sticker position and size. You only need to recalculate the holy scale of the image.

Also, you don’t need to save the value, just calculate it when you need it.

The use case for this is pretty clear: for the preview we can show the user lower-quality images, and when saving to disk, higher-quality ones.

Transformation

Ok, now that you have the model and the holy scale, it’s time to do some magic with the image.

When working with image transformations it’s usually best to use matrices. They can have a steep learning curve, yet once you get familiar with them, it does pay off.

“All problems in computer graphics can be solved with a matrix inversion”

Jim Blinn

In 2D, users can scale the image, rotate it and translate it. Each of these transformations can be represented by its own transformation matrix: S, R, T. To get the combined transformation matrix L, we apply them in the following way:

L = S * R * T

You can find explanations of why it works this way on the web. Note that applying the transformations in a different order will have some interesting and funny consequences, just try it.

One more thing: you also need to apply the holy scale described above. Let the transformation matrix for the holy scale be S`. The resulting matrix will look like:

L = S * R * T * S`

So let’s build the transformation matrix from our model. Android has a handy Matrix class; using it, we don’t need to perform all the math ourselves (which can be tricky sometimes).
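Since android.graphics.Matrix only exists on the platform, the sketch below uses java.awt.geom.AffineTransform, which composes transforms the same way, to illustrate building L (treat the names and the exact composition order as an illustration, not the definitive production code; here S and S` are baked into one scale step, applied first to the point, followed by the rotation and the translation):

```java
import java.awt.geom.AffineTransform;

// Sketch: composing the sticker transform, with AffineTransform
// standing in for android.graphics.Matrix (not available off-device).
public class StickerMatrix {
    // Builds a matrix so that a point is scaled first
    // (model scale times holyScale), then rotated, then translated
    // to its absolute position on the canvas.
    public static AffineTransform build(float absoluteX, float absoluteY,
                                        float rotationDegrees,
                                        float scale, float holyScale) {
        AffineTransform m = new AffineTransform();
        m.translate(absoluteX, absoluteY);              // T (applied last)
        m.rotate(Math.toRadians(rotationDegrees));      // R
        m.scale(scale * holyScale, scale * holyScale);  // S * S` (applied first)
        return m;
    }
}
```

On Android the analogous composition is done with Matrix’s pre*/post* methods (preScale, preRotate, postTranslate); the key point is that the order of composition matters.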

Gestures

Now that you know how to draw stickers, let’s translate the user input into the model.

Translating all the touches on the screen into the right gestures can be tricky and can take a lot of time to debug. Android provides some basic gesture-tracking functionality, like GestureDetectorCompat. However, as is usual with Android, it’s by far not enough, especially in our case.

Luckily, there are a couple of open source solutions. We’re going to use one of them, Android Gesture Detectors by @Almeros.

With an external gesture library updating the model becomes as easy as adding/subtracting the delta:
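In code, each gesture callback simply nudges the corresponding model field, with pixel deltas divided by the canvas size so the model stays relative. A sketch (the listener wiring and method names are illustrative, not the library’s exact API; we treat scale as multiplicative here):

```java
// Sketch of updating the relative model from gesture deltas.
// The delta values would come from the gesture-detector callbacks.
public class GestureHandler {
    public float x = 0.5f, y = 0.5f;   // relative position
    public float scale = 1.0f;         // relative scale
    public float rotation = 0f;        // degrees

    private final int canvasWidth, canvasHeight;

    public GestureHandler(int canvasWidth, int canvasHeight) {
        this.canvasWidth = canvasWidth;
        this.canvasHeight = canvasHeight;
    }

    // Move delta arrives in pixels: convert it to relative units.
    public void onMove(float dxPx, float dyPx) {
        x += dxPx / canvasWidth;
        y += dyPx / canvasHeight;
    }

    // Scale factor from the detector, where 1.0 means "no change".
    public void onScale(float scaleFactor) {
        scale *= scaleFactor;
    }

    // Rotation delta in degrees since the last event.
    public void onRotate(float degreesDelta) {
        rotation += degreesDelta;
    }
}
```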

Some more math

If I stopped here, this tutorial would look like most coding tutorials:

Most coding tutorials look like this

There are actually a couple more details. For example, we need to organize it all into classes, add code to actually draw the images on the canvas, etc.

One of the interesting parts is detecting whether a touch landed inside the sticker’s rectangle.

The standard Android Rect cannot be rotated, so you have to write your own code to find the coordinates of the sticker rectangle’s vertices. You can do that by saving the initial vertices of the image and then mapping them to new points using the transformation matrix created before:
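A sketch of that, again using java.awt.geom.AffineTransform as a stand-in for Android’s Matrix (whose mapPoints(dst, src) does the same job):

```java
import java.awt.geom.AffineTransform;

// Sketch: mapping the sticker's original corner coordinates through
// the transformation matrix to get the on-canvas vertices.
public class StickerBounds {
    // Flat arrays of [x0, y0, x1, y1, ...], allocated once in advance.
    public final float[] srcPoints;
    public final float[] destPoints = new float[8];

    public StickerBounds(float imageWidth, float imageHeight) {
        // The four corners of the untransformed image.
        srcPoints = new float[] {
                0, 0,
                imageWidth, 0,
                imageWidth, imageHeight,
                0, imageHeight
        };
    }

    // Android equivalent: matrix.mapPoints(destPoints, srcPoints).
    public void mapPoints(AffineTransform matrix) {
        matrix.transform(srcPoints, 0, destPoints, 0, 4);
    }
}
```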

NOTE! Don’t forget that you shouldn’t allocate new objects or arrays in draw()-like methods, so you’d better create destPoints and srcPoints in advance.

The easiest way to check if the point is inside the rectangle is by using the vector cross product:
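A sketch of the check: walk the vertices in winding order and compute the cross product of each edge with the vector from the edge’s start to the point; if all the cross products share a sign, the point is inside (this works for any convex polygon):

```java
// Point-in-convex-polygon test via the sign of 2D cross products.
public class HitTest {
    // vertices: flat array [x0, y0, x1, y1, ...] in winding order,
    // e.g. the destPoints produced by mapping the sticker's corners.
    public static boolean contains(float[] vertices, float px, float py) {
        int n = vertices.length / 2;
        boolean hasPositive = false, hasNegative = false;
        for (int i = 0; i < n; i++) {
            float ax = vertices[2 * i];
            float ay = vertices[2 * i + 1];
            float bx = vertices[2 * ((i + 1) % n)];
            float by = vertices[2 * ((i + 1) % n) + 1];
            // Cross product of edge (a -> b) with vector (a -> point).
            float cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax);
            if (cross > 0) hasPositive = true;
            if (cross < 0) hasNegative = true;
        }
        // The point is on the same side of every edge => inside.
        return !(hasPositive && hasNegative);
    }
}
```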

Check the full explanation on StackOverflow.

Final Result

Here’s the video of what we got in the end.

Video of the result

Check out the source code on GitHub, and download the app on Google Play.

Feel free to use it for your own purposes. Let me know if you have any challenges with it.

And what is your experience with image transformation or gestures on Android? Feel free to share / ask questions in the comments, I would be glad to help.