22. Design via Optimal Control Techniques
Preview
The previous chapter gave an introduction to MIMO control system
synthesis by showing how SISO methods could sometimes be used in MIMO
problems. However, some MIMO problems require a fundamentally MIMO
approach. This is the topic of the current chapter. We will emphasize
methods based on optimal control theory. There are three reasons for
this choice:
1. It is relatively easy to understand.
2. It has been used in a myriad of applications (indeed, the authors have used these methods in approximately 20 industrial applications).
3. It is a valuable precursor to other advanced methods, e.g. model predictive control, which is explained in the next chapter.
The analysis presented in this chapter builds on the results in
Chapter 18 where state space design methods were briefly described in
the SISO context. We recall from that chapter that the two key elements
were
- state estimation by an observer
- state estimate feedback
We will mirror these elements here for the MIMO case.
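To make these two elements concrete before the detailed development, the following Python sketch designs both gains for a small MIMO example by pole assignment. The plant matrices and pole locations are invented purely for illustration, and SciPy's place_poles is simply one convenient numerical tool for carrying out the assignment.

```python
import numpy as np
from scipy.signal import place_poles

# A hypothetical 2-input, 2-output, 3-state plant (values chosen only for illustration):
#   dx/dt = A x + B u,   y = C x
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-2.0, -3.0, -1.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

# Element 1: state estimate feedback u = -K x_hat.
# Under controllability, the eigenvalues of A - B K can be assigned arbitrarily.
K = place_poles(A, B, [-2.0, -3.0, -4.0]).gain_matrix

# Element 2: state estimation by an observer
#   dx_hat/dt = A x_hat + B u + J (y - C x_hat).
# The observer gain J is obtained from the dual problem: assign the
# eigenvalues of A - J C by working with (A^T, C^T) and transposing.
J = place_poles(A.T, C.T, [-8.0, -9.0, -10.0]).gain_matrix.T

print("state feedback poles:", np.linalg.eigvals(A - B @ K))
print("observer poles:      ", np.linalg.eigvals(A - J @ C))
```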
Summary
- Full multivariable control incorporates the interaction dynamics
rigorously and explicitly.
- The fundamental SISO synthesis result that, under mild conditions,
the nominal closed loop poles can be assigned arbitrarily carries
over to the MIMO case.
- Equivalence of state feedback and frequency domain pole placement
by solving the (multivariable) Diophantine Equation carries over as
well.
- Due to the complexities of multivariable systems, criterion-based
  synthesis (briefly alluded to in the SISO case) gains additional
  motivation; it is also a powerful way to pre-compensate a system
  which is subsequently trimmed with a MIMO Q-parametrization.
- A popular family of criteria comprises functionals involving quadratic
  forms of the control error and the control effort.
- For a general nonlinear formulation, the optimal solution is
characterized by a two-point boundary value problem.
- In the linear case (the so-called linear quadratic regulator, LQR),
the general problem reduces to the solution of the continuous time
dynamic Riccati equation which can be feasibly solved, leading to
time-variable state feedback.
- After initial conditions decay, the optimal time-varying solution
  converges to a constant state feedback gain, the so-called steady state
  LQR solution (a computational sketch follows the table below).
- It is frequently sufficient to neglect the initial transient of
the strict LQR and only implement the steady state LQR.
- The steady state LQR is equivalent to either
- a model matching approach, where a desired complementary
sensitivity is specified and a controller is computed that
matches it as closely as possible according to some selected
measure.
- pole placement, where a closed loop polynomial is specified
and a controller is computed to achieve it.
- Thus, LQR, model-matching and pole-placement are mathematically
equivalent, although they do offer different tuning parameters.
| Equivalent synthesis techniques | Tuning parameters |
| --- | --- |
| LQR | Relative penalties on control error versus control effort |
| Model matching | Closed loop complementary sensitivity reference model and weighted penalty on the difference to the control loop |
| Pole placement | Closed loop polynomial |
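As a minimal computational sketch of the first row of this table (not a worked example from the book; the plant and weighting matrices below are invented for illustration), the steady state LQR gain is obtained in Python by solving the algebraic Riccati equation with SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical MIMO plant dx/dt = A x + B u (values chosen only for illustration).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-2.0, -3.0, -1.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Quadratic criterion J = integral of (x' Psi x + u' Phi u) dt:
# Psi penalises control error, Phi penalises control effort.
Psi = np.diag([10.0, 10.0, 1.0])   # tuning knob: weight on the states
Phi = np.eye(2)                    # tuning knob: weight on the inputs

# Steady state LQR: solve A' P + P A - P B Phi^{-1} B' P + Psi = 0
# and form the constant state feedback u = -K x with K = Phi^{-1} B' P.
P = solve_continuous_are(A, B, Psi, Phi)
K = np.linalg.solve(Phi, B.T @ P)

print("steady state LQR gain K:\n", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```

Scaling Psi relative to Phi trades control error against control effort; by the equivalences summarized above, the same gain can equally be interpreted as the result of a model matching or a pole placement design with an appropriately chosen reference model or closed loop polynomial.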
- These techniques can be extended to discrete time systems.
- There is a very close connection to the ‘dual’ problem of
filtering, i.e., the problem of inferring a state from a related
(but not exactly invertible) set of measurements.
- Optimal filter design based on quadratic criteria leads again to a
Riccati equation.
- The filters can be synthesized and interpreted equivalently in a
  linear quadratic, model matching or pole-placement framework.
- Arguably the most famous optimal filter formulation, the Kalman
  filter, can also be given a stochastic interpretation, depending on taste.
- The LQR does not automatically include integral action; thus,
rejection of constant or other polynomial disturbances must be
enforced via the internal model principle.
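To make this last point concrete, here is a standard construction (sketched in Python; it is not a specific recipe from this book, and the plant matrices are the same illustrative ones used earlier): the integral of the output error is appended to the state and the Riccati equation is solved for the augmented system, which builds an internal model of a constant disturbance into the feedback law.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical plant dx/dt = A x + B u, y = C x (illustrative values only).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-2.0, -3.0, -1.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
n, m, p = A.shape[0], B.shape[1], C.shape[0]

# Internal model principle for constant disturbances: augment the state with
# the integrated output error z, where dz/dt = y (regulation about y = 0):
#   d/dt [x; z] = [[A, 0], [C, 0]] [x; z] + [[B]; [0]] u
A_aug = np.block([[A, np.zeros((n, p))],
                  [C, np.zeros((p, p))]])
B_aug = np.vstack([B, np.zeros((p, m))])

# Quadratic weights on the augmented state (plant states plus integrators) and input.
Psi = np.diag([1.0, 1.0, 0.1, 5.0, 5.0])   # heavier weight on the integrator states
Phi = np.eye(m)

# Steady state LQR for the augmented system yields plant-state and integral gains.
P = solve_continuous_are(A_aug, B_aug, Psi, Phi)
K_aug = np.linalg.solve(Phi, B_aug.T @ P)
Kx, Kz = K_aug[:, :n], K_aug[:, n:]   # control law: u = -Kx x - Kz z

print("plant-state feedback gain Kx:\n", Kx)
print("integral gain Kz:\n", Kz)
```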