This is the fourth installment of a tutorial series about controlling the movement of a character. This time we focus on the camera, creating an orbiting point of view from which we control the sphere.

This tutorial is made with Unity 2019.2.18f1. It also uses the ProBuilder package.

But relying on the normal time delta makes the camera subject to the game's time scale, so it would also slow down during slow-motion effects and even freeze in place if the game were paused. To prevent this, make it depend on Time.unscaledDeltaTime instead.

To apply the expected centering behavior we have to interpolate between the target and current focus points, using `(1-c)^t` as the interpolator, with the help of the Mathf.Pow method. We only need to do this if the distance is large enough—say above 0.01—and the centering factor is positive. To both center and enforce the focus radius we use the minimum of both interpolators for the final interpolation.
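Putting the pieces together, the focus update could look roughly like the sketch below. It assumes the focusPoint, focusRadius, and focusCentering members described elsewhere in this section; treat it as an illustration rather than the definitive implementation.

```csharp
void UpdateFocusPoint () {
	Vector3 targetPoint = focus.position;
	if (focusRadius > 0f) {
		float distance = Vector3.Distance(targetPoint, focusPoint);
		// Start with no pull, then apply the centering interpolator
		// (1 - c)^t only when the focus is noticeably off-center.
		float t = 1f;
		if (distance > 0.01f && focusCentering > 0f) {
			t = Mathf.Pow(1f - focusCentering, Time.unscaledDeltaTime);
		}
		// Enforce the focus radius as well; the smaller interpolator wins.
		if (distance > focusRadius) {
			t = Mathf.Min(t, focusRadius / distance);
		}
		focusPoint = Vector3.Lerp(targetPoint, focusPoint, t);
	}
	else {
		focusPoint = targetPoint;
	}
}
```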

Add a configuration option for the focus centering factor, which has to be a value in the 0–1 range, with 0.75 as a good default.
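Declared as a serialized field, that could look like this; focusCentering is the name assumed by the sketches in this section.

```csharp
[SerializeField, Range(0f, 1f)]
float focusCentering = 0.75f;
```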

Yes, because of the product rule for exponents: `x^a x^b=x^(a+b)`. For example, suppose we start with distance `d` and have one frame with a delta time of one second. Then the new distance is `dc^1=dc`. Now suppose we instead have two frames with delta times of 0.6 and 0.4 seconds, ending up at the same time but in two steps. Then the new distance is again `dc^0.6c^0.4=dc^(0.6+0.4)=dc^1=dc`.

Halving a starting distance each second can be done by multiplying it with ½ raised to the elapsed time: `d_(n+1) = d_n(1/2)^(t_n)`. We don't need to exactly halve the distance each second; we can use an arbitrary centering factor between zero and one: `d_(n+1)=d_n c^(t_n)`.

For example, the focus starts at some distance from the center. We pull it back so that after a second that distance has been halved. We keep doing this, halving the distance every second. The distance will never be reduced to zero this way, but we can stop when it has gotten small enough that it is unnoticeable.

Using a focus radius makes the camera respond only to larger motion of the focus, but when the focus stops so does the camera. It's also possible to keep the camera moving until the focus is back in the center of its view. To make this motion appear more subtle and organic we can pull back slower as the focus approaches the center.

If the focus radius is positive, check whether the distance between the target and current focus points is greater than the radius. If so, pull the focus toward the target until the distance matches the radius. This can be done by interpolating from target point to current point, using the radius divided by current distance as the interpolator.
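In isolation, that step might look something like this; focusPoint and focusRadius are the names used by the other sketches in this section.

```csharp
Vector3 targetPoint = focus.position;
if (focusRadius > 0f) {
	float distance = Vector3.Distance(targetPoint, focusPoint);
	if (distance > focusRadius) {
		// Interpolating from target to current by radius / distance
		// leaves the focus exactly one radius away from the target.
		focusPoint = Vector3.Lerp(
			targetPoint, focusPoint, focusRadius / distance
		);
	}
}
```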

A relaxed focus requires us to keep track of the current focus point, as it might no longer exactly match the position of the focus. Initialize it to the focus object's position in Awake and move updating it to a separate UpdateFocusPoint method.
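A sketch of that restructuring, assuming the current point is stored in a focusPoint field:

```csharp
Vector3 focusPoint;

void Awake () {
	focusPoint = focus.position;
}

void UpdateFocusPoint () {
	// For now simply snap to the target; the relaxation logic goes here.
	focusPoint = focus.position;
}
```

LateUpdate then works with focusPoint instead of querying focus.position directly.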

Always keeping the sphere in exact focus might feel too rigid. Even the smallest motion of the sphere will be copied by the camera, which affects the entire view. We can relax this constraint by making the camera only move when its focus point differs too much from the ideal focus. We'll make this configurable by adding a focus radius, set to one unit by default.
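The corresponding field could be declared like this, mirroring the style of the other configuration options:

```csharp
[SerializeField, Min(0f)]
float focusRadius = 1f;
```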

An irregular frame rate will always cause some jitter, especially when there are significant frame rate dips. The editor is prone to this. A build will most likely be much smoother.

The simplest and most robust way to fix this is by setting the sphere's Rigidbody to interpolate its position. That gets rid of the jittery motion of both the sphere and the camera. This is typically only needed for objects that are focused on by the camera.
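This is normally done via the Rigidbody's Interpolate setting in the inspector; if you'd rather do it from code, a hypothetical setup method on the sphere's own component could set the same property:

```csharp
void Awake () {
	// Equivalent to selecting Interpolate in the Rigidbody inspector.
	GetComponent<Rigidbody>().interpolation = RigidbodyInterpolation.Interpolate;
}
```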

The camera now always stays at the same distance and orientation relative to the focus, but because PhysX adjusts the sphere's position at a fixed time step, our camera effectively moves in those same steps. When that doesn't match the frame rate the result is jittery camera motion.

Every update we have to adjust the camera's position so it stays at the desired distance. We'll do this in LateUpdate, in case anything moves the focus in Update. The camera's position is found by moving it away from the focus position, opposite to the direction it is looking, by an amount equal to the configured distance. We use the position property of the focus instead of localPosition so we can correctly focus on child objects inside a hierarchy.
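A minimal sketch of that update, assuming the focus and distance fields introduced in this section:

```csharp
void LateUpdate () {
	Vector3 focusPoint = focus.position;
	Vector3 lookDirection = transform.forward;
	// Step back from the focus along the camera's view direction.
	transform.localPosition = focusPoint - lookDirection * distance;
}
```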

To keep the camera focused on the sphere we need to tell it what to focus on. This could really be anything, so add a configurable Transform field for the focus. Also add an option for the orbit distance, set to five units by default.
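As serialized fields these could look like the following; the exact slider range for the distance is an arbitrary choice here.

```csharp
[SerializeField]
Transform focus = default;

[SerializeField, Range(1f, 20f)]
float distance = 5f;
```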

Cinemachine provides a ready-made free-look camera that can orbit our sphere, so we could just use that. However, by creating a simple orbit camera ourselves we'll better understand what goes into creating one and what its limitations are. Also, the Cinemachine option requires a lot of tuning to get right and might still not behave as you prefer. Our simple approach is much easier to understand and tweak.

Add this component to the main camera of a scene that contains a single sphere. I made a new scene for this with a large flat plane, positioning the camera so it looks down at a 45° angle with the sphere at the center of its view, at a distance of roughly five units.

We'll create a simple orbiting camera to follow our sphere in third-person mode. Define an OrbitCamera component type for it, giving it the RequireComponent attribute to enforce that it gets attached to a game object that also has a regular Camera component.
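The initial component could be as small as this sketch:

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class OrbitCamera : MonoBehaviour {}
```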

The third person exists outside the game world, representing the player. A second person exists inside the game. It could be anyone or anything that is not the player's avatar. It's rare, but some games use this viewpoint as a gimmick; for example, it's one of the psychic powers in Psychonauts.

A fixed point of view only works when the sphere is constrained to an area that is completely visible. But usually characters in games can roam about large areas. The typical way to make this possible is by either using a first-person view or having the camera follow the player's avatar in third-person view mode. Other approaches exist as well, like switching between multiple cameras depending on the avatar's position.

Orbiting the Sphere

The next step is to make it possible to adjust the camera's orientation so it can describe an orbit around the focus point. We'll make it possible to both manually control the orbit and have the camera automatically rotate to follow its focus.

Orbit Angles

The orientation of the camera can be described with two orbit angles. The X angle defines its vertical orientation, with 0° looking straight to the horizon and 90° looking straight down. The Y angle defines the horizontal orientation, with 0° looking along the world Z axis. Keep track of those angles in a Vector2 field, set to 45° and 0° by default.

```csharp
Vector2 orbitAngles = new Vector2(45f, 0f);
```

In LateUpdate we'll now have to construct a quaternion defining the camera's look rotation via the Quaternion.Euler method, passing it the orbit angles. It requires a Vector3, to which our vector implicitly gets converted, with the Z rotation set to zero. The look direction can then be found by replacing transform.forward with the quaternion multiplied with the forward vector. And instead of only setting the camera's position we'll now invoke transform.SetPositionAndRotation with the look position and rotation in one go.

```csharp
void LateUpdate () {
	UpdateFocusPoint();
	Quaternion lookRotation = Quaternion.Euler(orbitAngles);
	Vector3 lookDirection = lookRotation * Vector3.forward;
	Vector3 lookPosition = focusPoint - lookDirection * distance;
	transform.SetPositionAndRotation(lookPosition, lookRotation);
}
```

Controlling the Orbit

To manually control the orbit, add a rotation speed configuration option, expressed in degrees per second. 90° per second is a reasonable default.

```csharp
[SerializeField, Range(1f, 360f)]
float rotationSpeed = 90f;
```

Rotation speed.

Add a ManualRotation method that retrieves an input vector. I defined Vertical Camera and Horizontal Camera input axes for this, bound to the third and fourth axes, the ijkl and qe keys, and the mouse with sensitivity increased to 0.5. It is a good idea to make sensitivity configurable in your game and to allow flipping of axis directions, but we won't bother with that in this tutorial. If there's an input exceeding some small epsilon value like 0.001 then add the input to the orbit angles, scaled by the rotation speed and time delta. Again, we make this independent of the in-game time.

```csharp
void ManualRotation () {
	Vector2 input = new Vector2(
		Input.GetAxis("Vertical Camera"),
		Input.GetAxis("Horizontal Camera")
	);
	const float e = 0.001f;
	if (input.x < -e || input.x > e || input.y < -e || input.y > e) {
		orbitAngles += rotationSpeed * Time.unscaledDeltaTime * input;
	}
}
```

Invoke this method after UpdateFocusPoint in LateUpdate.

```csharp
void LateUpdate () {
	UpdateFocusPoint();
	ManualRotation();
	…
}
```

Manual rotation; focus radius zero.

Note that the sphere is still controlled in world space, regardless of the camera's orientation. So if you horizontally rotate the camera 180° then the sphere's controls will appear flipped. This makes it possible to easily keep the same heading no matter the camera view, but can be disorienting. If you have trouble with this you can have both the game and scene window open at the same time and rely on the fixed perspective of the latter. We'll make the sphere controls relative to the camera view later.

Constraining the Angles

While it's fine for the camera to describe full horizontal orbits, vertical rotation will turn the world upside down once it goes beyond 90° in either direction. Even before that point it becomes hard to see where you're going when looking mostly up or down. So let's add configuration options to constrain the min and max vertical angle, with the extremes limited to at most 89° in either direction. Let's use −30° and 60° as the defaults.

```csharp
[SerializeField, Range(-89f, 89f)]
float minVerticalAngle = -30f, maxVerticalAngle = 60f;
```

Min and max vertical angle.

The max should never drop below the min, so enforce that in an OnValidate method. As this only sanitizes configuration via the inspector, we don't need to invoke it in builds.

```csharp
void OnValidate () {
	if (maxVerticalAngle < minVerticalAngle) {
		maxVerticalAngle = minVerticalAngle;
	}
}
```

Add a ConstrainAngles method that clamps the vertical orbit angle to the configured range. The horizontal orbit has no limits, but ensure that the angle stays inside the 0–360 range.

```csharp
void ConstrainAngles () {
	orbitAngles.x =
		Mathf.Clamp(orbitAngles.x, minVerticalAngle, maxVerticalAngle);

	if (orbitAngles.y < 0f) {
		orbitAngles.y += 360f;
	}
	else if (orbitAngles.y >= 360f) {
		orbitAngles.y -= 360f;
	}
}
```

Shouldn't we loop until we're in the 0–360 range? If the orbit angle were arbitrary then indeed it would be correct to keep adding or subtracting 360° until it falls inside the range. However, we only incrementally adjust the angles by small amounts, so this shouldn't be necessary.

We only need to constrain angles when they changed. So make ManualRotation return whether it made a change and invoke ConstrainAngles based on that in LateUpdate. We also only need to recalculate the rotation if there was a change, otherwise we can retrieve the existing one.

```csharp
bool ManualRotation () {
	…
	if (input.x < -e || input.x > e || input.y < -e || input.y > e) {
		orbitAngles += rotationSpeed * Time.unscaledDeltaTime * input;
		return true;
	}
	return false;
}

…

void LateUpdate () {
	UpdateFocusPoint();
	Quaternion lookRotation;
	if (ManualRotation()) {
		ConstrainAngles();
		lookRotation = Quaternion.Euler(orbitAngles);
	}
	else {
		lookRotation = transform.localRotation;
	}
	//Quaternion lookRotation = Quaternion.Euler(orbitAngles);
	…
}
```

We must also make sure that the initial rotation matches the orbit angles in Awake.

```csharp
void Awake () {
	focusPoint = focus.position;
	transform.localRotation = Quaternion.Euler(orbitAngles);
}
```

Automatic Alignment

A common feature of orbit cameras is that they align themselves to stay behind the player's avatar. We'll do this by automatically adjusting the horizontal orbit angle. But it is important that the player can override this automatic behavior at all times and that the automatic rotation doesn't immediately kick back in. So we'll add a configurable align delay, set to five seconds by default. This delay doesn't have an upper bound. If you don't want automatic alignment at all then you can simply set a very high delay.

```csharp
[SerializeField, Min(0f)]
float alignDelay = 5f;
```

Align delay.

Keep track of the last time that a manual rotation happened. Once again we rely on the unscaled time here, not the in-game time.

```csharp
float lastManualRotationTime;

…

bool ManualRotation () {
	…
	if (input.x < -e || input.x > e || input.y < -e || input.y > e) {
		orbitAngles += rotationSpeed * Time.unscaledDeltaTime * input;
		lastManualRotationTime = Time.unscaledTime;
		return true;
	}
	return false;
}
```

Then add an AutomaticRotation method that also returns whether it changed the orbit. It aborts if the current time minus the last manual rotation time is less than the align delay.

```csharp
bool AutomaticRotation () {
	if (Time.unscaledTime - lastManualRotationTime < alignDelay) {
		return false;
	}
	return true;
}
```

In LateUpdate we now constrain the angles and calculate the rotation when either manual or automatic rotation happened, tried in that order.

```csharp
if (ManualRotation() || AutomaticRotation()) {
	ConstrainAngles();
	lookRotation = Quaternion.Euler(orbitAngles);
}
```

Focus Heading

The criteria used to align cameras vary. In our case, we'll base it solely on the focus point's movement since the previous frame. The idea is that it makes most sense to look in the direction that the focus was last heading. To make this possible we'll need to know both the current and previous focus point, so have UpdateFocusPoint set fields for both.

```csharp
Vector3 focusPoint, previousFocusPoint;

…

void UpdateFocusPoint () {
	previousFocusPoint = focusPoint;
	…
}
```

Then have AutomaticRotation calculate the movement vector for the current frame. As we're only rotating horizontally we only need the 2D movement in the XZ plane. If the square magnitude of this movement vector is less than a small threshold like 0.0001 then there wasn't much movement and we won't bother rotating.

```csharp
bool AutomaticRotation () {
	if (Time.unscaledTime - lastManualRotationTime < alignDelay) {
		return false;
	}

	Vector2 movement = new Vector2(
		focusPoint.x - previousFocusPoint.x,
		focusPoint.z - previousFocusPoint.z
	);
	float movementDeltaSqr = movement.sqrMagnitude;
	if (movementDeltaSqr < 0.0001f) {
		return false;
	}

	return true;
}
```

Otherwise we have to figure out the horizontal angle matching the current direction. Create a static GetAngle method to convert a 2D direction to an angle for that. The Y component of the direction is the cosine of the angle we need, so put it through Mathf.Acos and then convert from radians to degrees.

```csharp
static float GetAngle (Vector2 direction) {
	float angle = Mathf.Acos(direction.y) * Mathf.Rad2Deg;
	return angle;
}
```

But that angle could represent either a clockwise or a counterclockwise rotation. We can look at the X component of the direction to know which it is. If X is negative then it's counterclockwise and we have to subtract the angle from 360°.

```csharp
	return direction.x < 0f ? 360f - angle : angle;
```

Back in AutomaticRotation we can use GetAngle to get the heading angle, passing it the normalized movement vector. As we already have its squared magnitude it's more efficient to do the normalization ourselves. The result becomes the new horizontal orbit angle.

```csharp
	if (movementDeltaSqr < 0.0001f) {
		return false;
	}

	float headingAngle = GetAngle(movement / Mathf.Sqrt(movementDeltaSqr));
	orbitAngles.y = headingAngle;
	return true;
```

Immediate alignment.