Let’s talk about PID control! Chances are you’ve interacted with something that uses a form of this control law, even if you weren’t aware of it, so it is worth learning a bit more about what this control law is actually doing and how it helps us. However, when we’re learning something new in control theory it’s easy to get bogged down in the detailed mathematics of the problem. So, in this video, we’re going to skip most of the math and instead focus on building a good intuitive foundation, so I hope you stick around for it. I’m Brian, and welcome to a MATLAB Tech Talk.

We start with a plant. This is what we call the system that we want to control, or the system whose behavior we want to affect. The input into the plant is the actuating signal and the output is the controlled variable.

Different industries refer to these signals by various names, so you might hear them called something else, like plant input and plant output. Regardless of the names, the basic idea of a control system is to figure out how to generate the appropriate actuating signal, the input, so that our system will produce the desired controlled variable, the output. That is basically the job of the control engineer: produce the right input into the system to get the output you want.

And just like before, the output you want also goes by various names. Here, I call it the command or the commanded variable, but you might also hear it called the set point, the reference, or the desired value. In feedback control, the output of the system is fed back, hence the name, and compared to the command to see how far off the system is from where we want it to be. This difference between the two is the error term. If the output were exactly what we commanded it to be, then the error would go to zero. That is what we want: zero error. So, the question is, how do we take this error term and convert it into suitable actuator commands so that, over time, the error is driven to zero? The answer is with a controller.

Let’s illustrate this with an example. Imagine you are standing on the goal line of a soccer field and you want to walk to the half field line, 50 meters away. In this case, you are the plant, the actuating signal is the speed and direction that you walk, your current location on the field is the output variable or 0 meters to start, and then 50 meters is the command. Therefore, at the beginning, your position error is 50 meters, or 50 minus 0. You still have a ways to go. Your brain, being the controller, tells your legs how fast to walk and one way your brain can do this is to use the error at the present moment to decide your walking speed.

Here, I’ve set our controller to the value 0.1. This means that we’ll take the error in our system and multiply it by 0.1 to get our walking speed. So if our error is 50 meters, like it is at the beginning, then we’ll start off walking at 5 m/s.

With a proportional controller like this, we start reducing the error quickly since we are far away and then gradually slow down as we get closer and closer to our goal.

In this way, we would eventually and asymptotically reach the half field line, at which time the error would be zero, and our proportional controller would multiply that by 0.1, generating a walking speed of zero and stopping us right where we want to be. If we wanted to adjust the amount of time it takes to get there, we could increase or decrease the multiplier term.

Regardless of the gain value chosen, this type of controller will eventually cause us to stop right at the half field line. So this proportional controller seems great; I mean, just multiplying the present error by a number seems to work. Well, let’s try it again on a different system and see if it performs the same way.
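This proportional control law is simple enough to sketch in a few lines. Here is a minimal simulation of the walking example; the function name, the time step, and the step count are assumptions for illustration, not values from the example itself.

```python
# Minimal sketch of the walking example under proportional control.
# The 0.1 s time step and step count are illustrative assumptions.
def simulate_p_walk(kp=0.1, command=50.0, dt=0.1, steps=2000):
    position = 0.0                   # start on the goal line
    for _ in range(steps):
        error = command - position   # distance left to the half field line
        speed = kp * error           # proportional control law
        position += speed * dt       # walking moves us toward the goal
    return position
```

With kp = 0.1 and an initial error of 50 meters, the first iteration walks at 5 m/s, and the position closes in on 50 meters asymptotically, just as described above.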

In this second example, you want to design an altitude controller for a quadcopter drone. The drone has four propellers, and when they all spin up together they produce a force that lifts the drone into the air. This is similar to our walking example in that we’re trying to get to a specific location, but this time we want to get our system to hover at an altitude of 50 meters.

Our plant is the drone, and the output of this system is altitude. The input to the system is not walking speed, but propeller speed. All in all, it seems like a pretty similar problem. So how well does a proportional controller work in this case?

When the drone is on the ground, there is an error of 50 meters which would generate a large propeller speed causing the drone to lift off and start to rise, reducing the error. So far, so good. But let’s imagine the drone was able to rise to 50 meters, what would happen now? The error would drop to 0 since the command and output are both 50 meters, this would shut the propellers off, the lift would stop, and the drone would fall back to Earth. When that happens, the propeller speed would start to increase again as the error grew.

There is a certain propeller speed where the lifting force is exactly equal to the weight of the drone and at that speed, the drone will hover. So where would our proportional controller hover the drone? Well, that depends on the controller gain. Let’s assume the propellers need to spin at 100 rpm in order for the drone to hover. If our proportional gain was 2, then the drone would hover right at the ground level, since an error of 50 times 2 is 100.

However, if we increased the gain to 5, the drone would rise at first but then stop at 30 meters, since the error at that point would be 20 meters and 20 times 5 is 100 rpm. A gain of 10 would produce an error of 10 meters, and a gain of 100 would produce an error of 1 meter. And no matter how high we increase the gain, the error won’t go away; it’ll just get smaller and smaller with this system and a proportional controller.
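That steady-state error follows directly from the hover condition: at equilibrium the proportional output must equal the 100 rpm hover speed, so the gain times the residual error must equal 100. A quick sketch of that arithmetic (the names here are just for illustration):

```python
HOVER_RPM = 100.0   # propeller speed needed to hover (from the example)
COMMAND = 50.0      # commanded altitude in meters

def steady_state_altitude(kp):
    """Altitude where a proportional-only controller settles."""
    error = HOVER_RPM / kp    # residual error: kp * error must equal 100 rpm
    return COMMAND - error
```

A gain of 2 hovers at ground level, a gain of 5 at 30 meters, and a gain of 100 at 49 meters, one meter short of the goal.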

So we can see that a simple proportional controller doesn’t work in every situation. It works for our walking example, but for our drone it created a constant error, also called steady-state error. So how can we tweak our controller to get rid of this steady-state error? We can do that by letting our controller use past information; specifically, by adding an integrator path to our controller alongside the proportional path.

An integrator sums up the input signal over time, keeping a running total; it has a memory of what has happened before, basically keeping track of the past. If the drone reaches steady state below the desired altitude, the error term is non-zero, and when a non-zero value is integrated, the output will increase. As long as there is error in our system, the integral output will continue to change. This increasing value from the integrator path will increase the speed of the propellers, and the drone will continue to rise.

These two paths, the proportional and the integral, work with each other to drive the error down to zero. With this controller, when the drone is hovering at the desired altitude of 50 meters, the proportional path is doing nothing, since the error is zero, but the integral path has been summing and subtracting values until it came to rest at 100 rpm (remember, that’s what we said was needed for the drone to hover), and that output from our integrator will not change, since the input to the integrator at this point is zero. Alright, this proportional-integral controller is it, this is what we want: something that understands the present and has memory of the past, right?
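We can watch both paths at work in a short simulation. The drone model below is an assumption made for illustration: climb rate is taken to be proportional to propeller speed above the 100 rpm hover speed, and the gains and time step are untuned placeholders, not values from the example.

```python
KP, KI = 2.0, 0.5     # proportional and integral gains (assumed, untuned)
C = 0.05              # toy plant: m/s of climb per rpm above hover (assumed)
HOVER_RPM, COMMAND = 100.0, 50.0

def simulate_pi(dt=0.01, t_end=200.0):
    altitude, integral = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        error = COMMAND - altitude
        integral += error * dt                   # integral path: memory of the past
        rpm = KP * error + KI * integral         # PI control law
        altitude += C * (rpm - HOVER_RPM) * dt   # toy drone dynamics
    return altitude, KI * integral               # final altitude, integral path output
```

Run long enough, the altitude settles at 50 meters while the integral path comes to rest at 100 rpm, exactly the hover speed, with the proportional path contributing nothing.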

Well, looking at the past and present error will get us to our goal in this situation, however, the path the drone takes to get there might not be ideal. Imagine this situation.



Right before we get to 50 meters, there is an interesting situation that can occur. The proportional path is contributing basically nothing, since the error is so small, but the integral path can be outputting more than 100 rpm, since it has been summing up positive error the whole way up, and so the drone is still rising.



Therefore, the drone will have to go higher than our goal and create a negative error in order for the integral path to unwind that excess propeller speed. This overshooting of the goal might not be desirable. Luckily, there is a simple way around this problem: adding a path to our controller that can predict the future and respond to how fast we’re closing in on our goal. And we do that with a derivative.



A derivative produces a measure of the rate of change of the error, that is, how fast the error is growing or shrinking. For example, if our drone is rising quickly and fast approaching our goal, the error is quickly decreasing. That decreasing error has a negative rate of change, which produces a negative value through our derivative path. That negative value is added to the controller’s output, lowering the propeller speed. Basically, our controller uses the changing error to determine that we are closing in on our goal too fast, and it preemptively slows the propellers, preventing the drone from overshooting.



And just like that, we’ve created a PID controller: proportional, integral, derivative. This is a versatile controller that uses the present error, the past error, and a prediction of the future error to calculate the appropriate actuator commands. These three branches each contribute some amount to the overall output of the controller, and as the designer, you get to decide how to weight each contribution. You do this by adjusting the gain term in each branch. This is called tuning the controller.
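Putting the three branches together, a textbook discrete-time PID update fits in a few lines; the class and gain names here are illustrative, and the gain values are whatever you choose when tuning.

```python
class PID:
    """Minimal discrete-time PID controller sketch."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # the past: accumulated error
        self.prev_error = None    # needed for the rate-of-change estimate

    def update(self, command, measurement):
        error = command - measurement                     # the present
        self.integral += error * self.dt                  # the past
        derivative = 0.0 if self.prev_error is None else \
            (error - self.prev_error) / self.dt           # the predicted future
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

Zeroing ki and kd recovers the proportional controller from the walking example; tuning means choosing kp, ki, and kd to weight the three contributions against each other.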



If your controller contains all three branches, it’s called a PID controller. If the gain of one or more branches is set to zero, taking it out of the equation, then we typically refer to that controller by the letters of the remaining paths, for example a P or PI controller. A proportional-only controller was the type we used in our walking example.

PID is just one form of feedback controller, but it is pretty easy to understand and implement. It is the simplest controller you can have that uses the past, present, and future error, and these are the primary features needed to satisfy most control problems, not all, but a lot of them. That is why PID is the most prevalent form of feedback control across a wide range of real physical applications.



There is a lot more to learn on this topic, and if you stick with this series, we’re going to explore PID controllers in more detail. Now that we have a general understanding of what PID control is, we will move on to how we implement one, how we tune it to do what we want, and we’ll watch it in action on some real hardware.



So if you don’t want to miss these future Tech Talk videos, don’t forget to subscribe to this channel. Also, if you want to check out my channel, Control System Lectures, I cover more control theory topics there as well. Thanks for watching, and I’ll see you next time.

