13. Sensitivity analysis


13.1. Introduction

Given an optimization problem it is often useful to obtain information about how the optimal objective value changes when the problem parameters are perturbed. For instance, assume that a bound represents the capacity of a machine. It might be possible to expand this capacity at a certain cost, and it is then worthwhile to know the value of the additional capacity. This is precisely the type of question sensitivity analysis deals with.

Analyzing how the optimal objective value changes when the problem data is changed is called sensitivity analysis.

13.2. Restrictions

Currently, sensitivity analysis is only available for continuous linear optimization problems. Moreover, MOSEK can only deal with perturbations in bounds or objective coefficients.

13.3. References

The book [22] discusses the classical sensitivity analysis in Chapter 10 whereas the book [10, Chapter 19] presents a modern introduction to sensitivity analysis. Finally, it is recommended to read the short paper [2] to avoid some of the pitfalls associated with sensitivity analysis.

13.4. Sensitivity analysis for linear problems

13.4.1. The optimal objective value function

Assume we are given the problem

\begin{math}
\begin{array}{rclccccl}
z(l^{c},u^{c},l^{x},u^{x},c) & = & \mbox{minimize} & & & c^{T}x & & \\
 & & \mbox{subject to} & l^{c} & \leq & Ax & \leq & u^{c},\\
 & & & l^{x} & \leq & x & \leq & u^{x},
\end{array}
\end{math} (13.4.1)

and we want to know how the optimal objective value changes as $l^{c}_{i}$ is perturbed. To answer this question, define the perturbed problem for $l^{c}_{i}$ as follows:

\begin{math}
\begin{array}{rclccccl}
f_{l^{c}_{i}}(\beta) & = & \mbox{minimize} & & & c^{T}x & & \\
 & & \mbox{subject to} & l^{c}+\beta e_{i} & \leq & Ax & \leq & u^{c},\\
 & & & l^{x} & \leq & x & \leq & u^{x},
\end{array}
\end{math} (13.4.2)

where $e_{i}$ is the $i$th column of the identity matrix. The function

\begin{math}f_{l^{c}_{i}}(\beta)\end{math} (13.4.3)

shows the optimal objective value as a function of $\beta$. Note that a change in $\beta$ corresponds to a perturbation in $l^{c}_{i}$, and hence (13.4.3) shows the optimal objective value as a function of $l^{c}_{i}$.

It is possible to prove that the function (13.4.3) is piecewise linear and convex, i.e. it may look like the illustrations in Figure 13.1.

Figure 13.1: The optimal value function $f_{l^{c}_{i}}(\beta)$. Left: $\beta = 0$ is in the interior of a linearity interval. Right: $\beta = 0$ is a breakpoint.

Clearly, if the function $f_{l^{c}_{i}}(\beta)$ does not change much when $\beta$ is changed, then we can conclude that the optimal objective value is insensitive to changes in $l^{c}_{i}$. Therefore, we are interested in how $f_{l^{c}_{i}}(\beta)$ changes for small changes in $\beta$. Now define

\begin{math}f'_{l^{c}_{i}}(0)\end{math} (13.4.4)

to be the so-called shadow price related to $l^{c}_{i}$. The shadow price specifies how the objective value changes for small changes in $\beta$ around zero. Moreover, we are interested in the so-called linearity interval

\begin{math}\beta \in [\beta_{1},\beta_{2}]\end{math} (13.4.5)

for which

\begin{math}f'_{l^{c}_{i}}(\beta) = f'_{l^{c}_{i}}(0).\end{math} (13.4.6)

To summarize, the sensitivity analysis provides a shadow price and the linearity interval on which the shadow price is constant.

The reader may have noticed that we are sloppy in the definition of the shadow price. The reason is that the shadow price is not defined in the right-hand example in Figure 13.1, because the function $f_{l^{c}_{i}}(\beta)$ is not differentiable at $\beta = 0$. In that case we can instead define a left and a right shadow price and a left and a right linearity interval.
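As a small worked illustration (a toy problem constructed here for exposition, not one produced by MOSEK), consider minimizing $x$ subject to $1+\beta \leq x$ and $0 \leq x \leq 2$, where $\beta$ perturbs the constraint lower bound. The optimal solution is $x^{*} = \max(0,1+\beta)$, so the optimal value function is

\begin{math}f(\beta) = \max(0,1+\beta),\quad \beta \leq 1,\end{math}

which is piecewise linear and convex. At $\beta = 0$ the shadow price is $f'(0) = 1$ with linearity interval $[-1,1]$, whereas $\beta = -1$ is a breakpoint with left shadow price 0 and right shadow price 1.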

In the above discussion we only considered changes in $l^{c}_{i}$. The other optimal objective value functions are defined analogously:

\begin{math}
\begin{array}{rcll}
f_{u^{c}_{i}}(\beta) & = & z(l^{c},u^{c}+\beta e_{i},l^{x},u^{x},c), & i=1,\ldots,m,\\
f_{l^{x}_{j}}(\beta) & = & z(l^{c},u^{c},l^{x}+\beta e_{j},u^{x},c), & j=1,\ldots,n,\\
f_{u^{x}_{j}}(\beta) & = & z(l^{c},u^{c},l^{x},u^{x}+\beta e_{j},c), & j=1,\ldots,n,\\
f_{c_{j}}(\beta) & = & z(l^{c},u^{c},l^{x},u^{x},c+\beta e_{j}), & j=1,\ldots,n.
\end{array}
\end{math} (13.4.7)

Given these definitions it should be clear how linearity intervals and shadow prices are defined for the parameters $u^{c}_{i}$ etc.

13.4.1.1. Equality constraints

In MOSEK a constraint can be specified as either an equality constraint or a ranged constraint. Suppose constraint $i$ is an equality constraint. We then define the optimal value function for constraint $i$ by

\begin{math}f_{e^{c}_{i}}(\beta) = z(l^{c}+\beta e_{i},u^{c}+\beta e_{i},l^{x},u^{x},c).\end{math} (13.4.8)

Thus, for an equality constraint the upper and the lower bound (which are equal) are perturbed simultaneously. From the point of view of MOSEK's sensitivity analysis, a ranged constraint with $l^{c}_{i} = u^{c}_{i}$ therefore differs from an equality constraint.

13.4.2. The basis type sensitivity analysis

The classical sensitivity analysis discussed in most textbooks on linear optimization, e.g. [22, Chapter 10], is based on an optimal basic solution or, equivalently, on an optimal basis. This method may produce misleading results [10, Chapter 19] but is computationally cheap. Therefore, and for historical reasons, this method is available in MOSEK.

We will now briefly discuss the basis type sensitivity analysis. An optimal basic solution provides a partition of the variables into basic and non-basic ones; the basis type sensitivity analysis computes the linearity interval $[\beta_{1},\beta_{2}]$ over which this basis remains optimal for the perturbed problem, together with the shadow price associated with the interval. However, an optimal basic solution may not be unique, and the result therefore depends on which optimal basic solution is employed in the sensitivity analysis. This implies that the computed interval is only a subset of the largest interval on which the shadow price is constant. Furthermore, the optimal objective value function might have a breakpoint at $\beta = 0$. In that case the basis type sensitivity method will only provide a subset of either the left or the right linearity interval.

In summary, the basis type sensitivity analysis is computationally cheap, but it does not provide complete information. Hence, the results of the basis type sensitivity analysis should be used with care.

13.4.3. The optimal partition type sensitivity analysis

Another method, which computes the complete linearity interval, is the optimal partition type sensitivity analysis. Its main drawback is that it is computationally expensive. This type of sensitivity analysis is currently provided as an experimental feature in MOSEK.

Given optimal primal and dual solutions to (13.4.1), i.e. $x^{*}$ and $((s_{l}^{c})^{*},(s_{u}^{c})^{*},(s_{l}^{x})^{*},(s_{u}^{x})^{*})$, the optimal objective value is given by

\begin{math}z^{*} := c^{T}x^{*}.\end{math} (13.4.9)

The left and right shadow prices $\sigma_{1}$ and $\sigma_{2}$ for $l^{c}_{i}$ are given by the pair of optimization problems

\begin{math}
\begin{array}{rclccl}
\sigma_{1} & = & \mbox{minimize} & e_{i}^{T}s_{l}^{c} & & \\
 & & \mbox{subject to} & A^{T}(s_{l}^{c}-s_{u}^{c})+s_{l}^{x}-s_{u}^{x} & = & c,\\
 & & & (l^{c})^{T}s_{l}^{c}-(u^{c})^{T}s_{u}^{c}+(l^{x})^{T}s_{l}^{x}-(u^{x})^{T}s_{u}^{x} & = & z^{*},\\
 & & & s_{l}^{c},s_{u}^{c},s_{l}^{x},s_{u}^{x} \geq 0 & &
\end{array}
\end{math} (13.4.10)

and

\begin{math}
\begin{array}{rclccl}
\sigma_{2} & = & \mbox{maximize} & e_{i}^{T}s_{l}^{c} & & \\
 & & \mbox{subject to} & A^{T}(s_{l}^{c}-s_{u}^{c})+s_{l}^{x}-s_{u}^{x} & = & c,\\
 & & & (l^{c})^{T}s_{l}^{c}-(u^{c})^{T}s_{u}^{c}+(l^{x})^{T}s_{l}^{x}-(u^{x})^{T}s_{u}^{x} & = & z^{*},\\
 & & & s_{l}^{c},s_{u}^{c},s_{l}^{x},s_{u}^{x} \geq 0. & &
\end{array}
\end{math} (13.4.11)

The above two optimization problems make it easy to interpret the shadow price. Indeed, if $((s_{l}^{c})^{*},(s_{u}^{c})^{*},(s_{l}^{x})^{*},(s_{u}^{x})^{*})$ is an arbitrary optimal dual solution, then it must hold that

\begin{math}(s_{l}^{c})_{i}^{*} \in [\sigma_{1},\sigma_{2}].\end{math} (13.4.12)

Next, the linearity interval $[\beta_{1},\beta_{2}]$ for $l^{c}_{i}$ is computed by solving the two optimization problems

\begin{math}
\begin{array}{lclccccl}
\beta_{1} & = & \mbox{minimize} & & & \beta & & \\
 & & \mbox{subject to} & l^{c}+\beta e_{i} & \leq & Ax & \leq & u^{c},\\
 & & & & & c^{T}x-\sigma_{1}\beta & = & z^{*},\\
 & & & l^{x} & \leq & x & \leq & u^{x},
\end{array}
\end{math} (13.4.13)

and

\begin{math}
\begin{array}{lclccccl}
\beta_{2} & = & \mbox{maximize} & & & \beta & & \\
 & & \mbox{subject to} & l^{c}+\beta e_{i} & \leq & Ax & \leq & u^{c},\\
 & & & & & c^{T}x-\sigma_{2}\beta & = & z^{*},\\
 & & & l^{x} & \leq & x & \leq & u^{x}.
\end{array}
\end{math} (13.4.14)

The linearity intervals and shadow prices for $u^{c}_{i}$, $l^{x}_{j}$ and $u^{x}_{j}$ are computed in a similar way to those for $l^{c}_{i}$.

The left and right shadow prices for $c_{j}$, denoted $\sigma_{1}$ and $\sigma_{2}$ respectively, are given by the pair of optimization problems

\begin{math}
\begin{array}{lclccccl}
\sigma_{1} & = & \mbox{minimize} & & & e_{j}^{T}x & & \\
 & & \mbox{subject to} & l^{c} & \leq & Ax & \leq & u^{c},\\
 & & & & & c^{T}x & = & z^{*},\\
 & & & l^{x} & \leq & x & \leq & u^{x}
\end{array}
\end{math} (13.4.15)

and

\begin{math}
\begin{array}{lclccccl}
\sigma_{2} & = & \mbox{maximize} & & & e_{j}^{T}x & & \\
 & & \mbox{subject to} & l^{c} & \leq & Ax & \leq & u^{c},\\
 & & & & & c^{T}x & = & z^{*},\\
 & & & l^{x} & \leq & x & \leq & u^{x}.
\end{array}
\end{math} (13.4.16)

Once again the above two optimization problems make it easy to interpret the shadow prices. Indeed, if $x^{*}$ is an arbitrary primal optimal solution, then it must hold that

\begin{math}x_{j}^{*} \in [\sigma_{1},\sigma_{2}].\end{math} (13.4.17)

The linearity interval $[\beta_{1},\beta_{2}]$ for $c_{j}$ is computed as follows:

\begin{math}
\begin{array}{rclccl}
\beta_{1} & = & \mbox{minimize} & \beta & & \\
 & & \mbox{subject to} & A^{T}(s_{l}^{c}-s_{u}^{c})+s_{l}^{x}-s_{u}^{x} & = & c+\beta e_{j},\\
 & & & (l^{c})^{T}s_{l}^{c}-(u^{c})^{T}s_{u}^{c}+(l^{x})^{T}s_{l}^{x}-(u^{x})^{T}s_{u}^{x}-\sigma_{1}\beta & \leq & z^{*},\\
 & & & s_{l}^{c},s_{u}^{c},s_{l}^{x},s_{u}^{x} \geq 0 & &
\end{array}
\end{math} (13.4.18)

and

\begin{math}
\begin{array}{rclccl}
\beta_{2} & = & \mbox{maximize} & \beta & & \\
 & & \mbox{subject to} & A^{T}(s_{l}^{c}-s_{u}^{c})+s_{l}^{x}-s_{u}^{x} & = & c+\beta e_{j},\\
 & & & (l^{c})^{T}s_{l}^{c}-(u^{c})^{T}s_{u}^{c}+(l^{x})^{T}s_{l}^{x}-(u^{x})^{T}s_{u}^{x}-\sigma_{2}\beta & \leq & z^{*},\\
 & & & s_{l}^{c},s_{u}^{c},s_{l}^{x},s_{u}^{x} \geq 0. & &
\end{array}
\end{math} (13.4.19)

13.4.4. An example

As an example we will use the following transportation problem: minimize the cost of transporting goods between a number of production plants and stores. Each plant supplies a number of goods and each store has a given demand that must be met. Supply, demand and the cost of transportation per unit are shown in Figure 13.2.

Figure 13.2: Supply, demand and cost of transportation.

If we denote the number of goods transported from location $i$ to location $j$ by $x_{ij}$, the problem can be formulated as the linear optimization problem:

minimize

\begin{math}1x_{11}+2x_{12}+5x_{23}+2x_{24}+1x_{31}+2x_{33}+1x_{34}\end{math} (13.4.20)

subject to

\begin{math}
\begin{array}{ccccccccccccccl}
x_{11} & + & x_{12} & & & & & & & & & & & \leq & 400,\\
 & & & & x_{23} & + & x_{24} & & & & & & & \leq & 1200,\\
 & & & & & & & & x_{31} & + & x_{33} & + & x_{34} & \leq & 1000,\\
x_{11} & & & & & & & + & x_{31} & & & & & = & 800,\\
 & & x_{12} & & & & & & & & & & & = & 100,\\
 & & & & x_{23} & & & & & + & x_{33} & & & = & 500,\\
 & & & & & & x_{24} & & & & & + & x_{34} & = & 500,\\
x_{11}, & & x_{12}, & & x_{23}, & & x_{24}, & & x_{31}, & & x_{33}, & & x_{34} & \geq & 0.
\end{array}
\end{math} (13.4.21)

The basis type and the optimal partition type sensitivity results for the transportation problem are shown in Tables 13.1 and 13.2, respectively.

Basis type

Con.      $\beta_{1}$   $\beta_{2}$   $\sigma_{1}$   $\sigma_{2}$
1         -300.00       0.00          3.00           3.00
2         -700.00       $\infty$      0.00           0.00
3         -500.00       0.00          3.00           3.00
4         -0.00         500.00        4.00           4.00
5         -0.00         300.00        5.00           5.00
6         -0.00         700.00        5.00           5.00
7         -500.00       700.00        2.00           2.00

Var.      $\beta_{1}$   $\beta_{2}$   $\sigma_{1}$   $\sigma_{2}$
$x_{11}$  $-\infty$     300.00        0.00           0.00
$x_{12}$  $-\infty$     100.00        0.00           0.00
$x_{23}$  $-\infty$     0.00          0.00           0.00
$x_{24}$  $-\infty$     500.00        0.00           0.00
$x_{31}$  $-\infty$     500.00        0.00           0.00
$x_{33}$  $-\infty$     500.00        0.00           0.00
$x_{34}$  -0.00         500.00        2.00           2.00

Optimal partition type

Con.      $\beta_{1}$   $\beta_{2}$   $\sigma_{1}$   $\sigma_{2}$
1         -300.00       500.00        3.00           1.00
2         -700.00       $\infty$      -0.00          -0.00
3         -500.00       500.00        3.00           1.00
4         -500.00       500.00        2.00           4.00
5         -100.00       300.00        3.00           5.00
6         -500.00       700.00        3.00           5.00
7         -500.00       700.00        2.00           2.00

Var.      $\beta_{1}$   $\beta_{2}$   $\sigma_{1}$   $\sigma_{2}$
$x_{11}$  $-\infty$     300.00        0.00           0.00
$x_{12}$  $-\infty$     100.00        0.00           0.00
$x_{23}$  $-\infty$     500.00        0.00           2.00
$x_{24}$  $-\infty$     500.00        0.00           0.00
$x_{31}$  $-\infty$     500.00        0.00           0.00
$x_{33}$  $-\infty$     500.00        0.00           0.00
$x_{34}$  $-\infty$     500.00        0.00           2.00

Table 13.1: Ranges and shadow prices related to bounds on constraints and variables. Top: Results for the basis type sensitivity analysis. Bottom: Results for the optimal partition type sensitivity analysis.

Basis type

Var.     $\beta_{1}$   $\beta_{2}$   $\sigma_{1}$   $\sigma_{2}$
$c_{1}$  $-\infty$     3.00          300.00         300.00
$c_{2}$  $-\infty$     $\infty$      100.00         100.00
$c_{3}$  -2.00         $\infty$      0.00           0.00
$c_{4}$  $-\infty$     2.00          500.00         500.00
$c_{5}$  -3.00         $\infty$      500.00         500.00
$c_{6}$  $-\infty$     2.00          500.00         500.00
$c_{7}$  -2.00         $\infty$      0.00           0.00

Optimal partition type

Var.     $\beta_{1}$   $\beta_{2}$   $\sigma_{1}$   $\sigma_{2}$
$c_{1}$  $-\infty$     3.00          300.00         300.00
$c_{2}$  $-\infty$     $\infty$      100.00         100.00
$c_{3}$  -2.00         $\infty$      0.00           0.00
$c_{4}$  $-\infty$     2.00          500.00         500.00
$c_{5}$  -3.00         $\infty$      500.00         500.00
$c_{6}$  $-\infty$     2.00          500.00         500.00
$c_{7}$  -2.00         $\infty$      0.00           0.00

Table 13.2: Ranges and shadow prices related to the objective coefficients. Top: Results for the basis type sensitivity analysis. Bottom: Results for the optimal partition type sensitivity analysis.

Looking at the results from the optimal partition type sensitivity analysis, we see that for constraint number 1 we have $\sigma_{1} = 3$ and $\sigma_{2} = 1$. Therefore, we have a left linearity interval of $[-300,0]$ and a right interval of $[0,500]$. The corresponding left and right shadow prices are 3 and 1, respectively. This implies that if the upper bound on constraint 1 is increased by

\begin{math}\beta \in [0,\beta_{2}] = [0,500]\end{math} (13.4.22)

then the optimal objective value will decrease by the value

\begin{math}\sigma_{2}\beta = 1\beta.\end{math} (13.4.23)

Correspondingly, if the upper bound on constraint 1 is decreased by

\begin{math}\beta \in [0,300]\end{math} (13.4.24)

then the optimal objective value will increase by the value

\begin{math}\sigma_{1}\beta = 3\beta.\end{math} (13.4.25)
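This interpretation can be checked numerically by re-solving the problem with a perturbed bound. The following sketch assumes the file transport.lp from Section 13.5.3 is available, that the constraint upper bounds are stored in the buc field of the toolbox problem structure, and that the interior-point solution is returned in res.sol.itr:

% check_shadow_price.m
% Re-solve the transportation problem with the upper bound of
% constraint 1 increased by beta and compare objective values.
[r,res]  = mosekopt('read(transport.lp) echo(0)');
prob     = res.prob;
[r,res0] = mosekopt('minimize echo(0)',prob);
z0       = prob.c'*res0.sol.itr.xx;      % original optimal objective
beta     = 100;                          % perturbation within [0,500]
prob.buc(1) = prob.buc(1) + beta;        % increase upper bound of con. 1
[r,res1] = mosekopt('minimize echo(0)',prob);
z1       = prob.c'*res1.sol.itr.xx;      % perturbed optimal objective
% With sigma_2 = 1 the objective should decrease by sigma_2*beta = 100.
fprintf('objective change: %.1f (expected %.1f)\n',z1-z0,-beta);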

13.5. Sensitivity analysis in the MATLAB toolbox

The following describes the sensitivity analysis functionality of the MATLAB toolbox.

13.5.1. On bounds

The indexes of the bounds/variables to be analyzed for sensitivity are specified in the following subfields of the MATLAB structure prob:

.prisen.cons.subu

Indexes of constraints whose upper bounds are analyzed for sensitivity.

.prisen.cons.subl

Indexes of constraints whose lower bounds are analyzed for sensitivity.

.prisen.vars.subu

Indexes of variables whose upper bounds are analyzed for sensitivity.

.prisen.vars.subl

Indexes of variables whose lower bounds are analyzed for sensitivity.

.duasen.sub

Indexes of variables whose objective coefficients are analyzed for sensitivity.

For an equality constraint, the index can be specified in either subu or subl. After calling

[r,res] = mosekopt('minimize',prob)

the results are returned in the subfields prisen and duasen of res.

13.5.1.1. prisen

The field prisen is structured as follows:

.cons

MATLAB structure with subfields:

.lr_bl

Left value $\beta_{1}$ in the linearity interval for a lower bound.

.rr_bl

Right value $\beta_{2}$ in the linearity interval for a lower bound.

.ls_bl

Left shadow price $\sigma_{1}$ for a lower bound.

.rs_bl

Right shadow price $\sigma_{2}$ for a lower bound.

.lr_bu

Left value $\beta_{1}$ in the linearity interval for an upper bound.

.rr_bu

Right value $\beta_{2}$ in the linearity interval for an upper bound.

.ls_bu

Left shadow price $\sigma_{1}$ for an upper bound.

.rs_bu

Right shadow price $\sigma_{2}$ for an upper bound.

.var

MATLAB structure with subfields:

.lr_bl

Left value $\beta_{1}$ in the linearity interval for a lower bound on a variable.

.rr_bl

Right value $\beta_{2}$ in the linearity interval for a lower bound on a variable.

.ls_bl

Left shadow price $\sigma_{1}$ for a lower bound on a variable.

.rs_bl

Right shadow price $\sigma_{2}$ for a lower bound on a variable.

.lr_bu

Left value $\beta_{1}$ in the linearity interval for an upper bound on a variable.

.rr_bu

Right value $\beta_{2}$ in the linearity interval for an upper bound on a variable.

.ls_bu

Left shadow price $\sigma_{1}$ for an upper bound on a variable.

.rs_bu

Right shadow price $\sigma_{2}$ for an upper bound on a variable.

13.5.1.2. duasen

The field duasen is structured as follows:

.lr_c

Left value $\beta_{1}$ of the linearity interval for an objective coefficient.

.rr_c

Right value $\beta_{2}$ of the linearity interval for an objective coefficient.

.ls_c

Left shadow price $\sigma_{1}$ for an objective coefficient.

.rs_c

Right shadow price $\sigma_{2}$ for an objective coefficient.
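As a minimal sketch of the round trip through these fields (assuming the file transport.lp from the example in Section 13.5.3 is available), the results could be retrieved as follows:

% Analyze the upper bound of constraint 1 and the objective
% coefficient of variable 1 only.
[r,res] = mosekopt('read(transport.lp) echo(0)');
prob = res.prob;
prob.prisen.cons.subl = [];
prob.prisen.cons.subu = [1];
prob.prisen.vars.subl = [];
prob.prisen.vars.subu = [];
prob.duasen.sub       = [1];
[r,res] = mosekopt('minimize echo(0)',prob);
% Linearity interval and shadow prices for the bound ...
fprintf('bound: [%g,%g], prices: %g, %g\n', ...
        res.prisen.cons.lr_bu(1),res.prisen.cons.rr_bu(1), ...
        res.prisen.cons.ls_bu(1),res.prisen.cons.rs_bu(1));
% ... and for the objective coefficient.
fprintf('coef.: [%g,%g], prices: %g, %g\n', ...
        res.duasen.lr_c(1),res.duasen.rr_c(1), ...
        res.duasen.ls_c(1),res.duasen.rs_c(1));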

13.5.2. Selecting analysis type

The type (basis or optimal partition) of analysis to be performed can be selected by setting the parameter

MSK_IPAR_SENSITIVITY_TYPE

to one of the values:

MSK_SENSITIVITY_TYPE_BASIS = 0
MSK_SENSITIVITY_TYPE_OPTIMAL_PARTITION = 1 

as seen in the following example.

13.5.3. An example

Consider the problem defined in (13.4.21). Suppose we wish to perform sensitivity analysis on all bounds and coefficients. The following example demonstrates this as well as how to switch between the basis type and the optimal partition type sensitivity analysis.
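The script reads the problem data from the file transport.lp, whose contents are not shown in this chapter. As an alternative, the problem (13.4.21) could be built directly in MATLAB; the following sketch assumes the standard toolbox problem fields (c, a, blc, buc, blx, bux):

% transport_prob.m
% Build the transportation problem (13.4.21) directly in MATLAB
% as an alternative to reading it from transport.lp.
clear prob;
%           x11 x12 x23 x24 x31 x33 x34
prob.c   = [ 1   2   5   2   1   2   1]';
prob.a   = sparse([1 1 0 0 0 0 0;    % supply plant 1
                   0 0 1 1 0 0 0;    % supply plant 2
                   0 0 0 0 1 1 1;    % supply plant 3
                   1 0 0 0 1 0 0;    % demand store 1
                   0 1 0 0 0 0 0;    % demand store 2
                   0 0 1 0 0 1 0;    % demand store 3
                   0 0 0 1 0 0 1]);  % demand store 4
prob.blc = [-inf -inf -inf 800 100 500 500]';
prob.buc = [ 400 1200 1000 800 100 500 500]';
prob.blx = zeros(7,1);               % x >= 0
prob.bux = inf*ones(7,1);            % no upper bounds on x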

% sensitivity.m

% Obtain all symbolic constants
% defined by MOSEK.
[r,res]  = mosekopt('symbcon');
sc       = res.symbcon;
[r,res] = mosekopt('read(transport.lp) echo(0)');
prob = res.prob;
% analyse upper bounds of constraints 1:7
prob.prisen.cons.subl = [];
prob.prisen.cons.subu = [1:7];
% analyse lower bounds of variables 1:7
prob.prisen.vars.subl = [1:7];
prob.prisen.vars.subu = [];
% analyse objective coefficients 1:7
prob.duasen.sub = [1:7];
% Select basis sensitivity analysis and optimize.
param.MSK_IPAR_SENSITIVITY_TYPE=sc.MSK_SENSITIVITY_TYPE_BASIS;
[r,res] = mosekopt('minimize debug(100) echo(0)',prob,param); 
results(1) = res;
% Select optimal partition sensitivity analysis and optimize.  
param.MSK_IPAR_SENSITIVITY_TYPE=sc.MSK_SENSITIVITY_TYPE_OPTIMAL_PARTITION;
[r,res] = mosekopt('minimize debug(100) echo(0)',prob,param); 
results(2) = res;
% Print results
for m = [1:2]
  if m == 1
    fprintf('\nBasis sensitivity results:\n')
  else
    fprintf('\nOptimal partition sensitivity results:\n')
  end
  fprintf('\nSensitivity for bounds on constraints:\n')
  for i = 1:length(prob.prisen.cons.subl)
    fprintf (...
    'con = %d, beta_1 = %.1f, beta_2 = %.1f, delta_1 = %.1f,delta_2 = %.1f\n', ...
    prob.prisen.cons.subl(i),results(m).prisen.cons.lr_bl(i), ...
    results(m).prisen.cons.rr_bl(i),...
    results(m).prisen.cons.ls_bl(i),...
    results(m).prisen.cons.rs_bl(i));
  end
  
  for i = 1:length(prob.prisen.cons.subu)
    fprintf (...
    'con = %d, beta_1 = %.1f, beta_2 = %.1f, delta_1 = %.1f,delta_2 = %.1f\n', ...
    prob.prisen.cons.subu(i),results(m).prisen.cons.lr_bu(i), ...
    results(m).prisen.cons.rr_bu(i),...
    results(m).prisen.cons.ls_bu(i),...
    results(m).prisen.cons.rs_bu(i));
  end
  fprintf('Sensitivity for bounds on variables:\n')
  for i = 1:length(prob.prisen.vars.subl)
    fprintf (...
    'var = %d, beta_1 = %.1f, beta_2 = %.1f, delta_1 = %.1f,delta_2 = %.1f\n', ...
    prob.prisen.vars.subl(i),results(m).prisen.vars.lr_bl(i), ...
    results(m).prisen.vars.rr_bl(i),...
    results(m).prisen.vars.ls_bl(i),...
    results(m).prisen.vars.rs_bl(i));
  end
  
  for i = 1:length(prob.prisen.vars.subu)
    fprintf (...
    'var = %d, beta_1 = %.1f, beta_2 = %.1f, delta_1 = %.1f,delta_2 = %.1f\n', ...
    prob.prisen.vars.subu(i),results(m).prisen.vars.lr_bu(i), ...
    results(m).prisen.vars.rr_bu(i),...
    results(m).prisen.vars.ls_bu(i),...
    results(m).prisen.vars.rs_bu(i));
  end
  
  fprintf('Sensitivity for coefficients in objective:\n')
  for i = 1:length(prob.duasen.sub)
    fprintf (...
    'var = %d, beta_1 = %.1f, beta_2 = %.1f, delta_1 = %.1f,delta_2 = %.1f\n', ...
    prob.duasen.sub(i),results(m).duasen.lr_c(i), ...
    results(m).duasen.rr_c(i),...
    results(m).duasen.ls_c(i),...
    results(m).duasen.rs_c(i));
  end
end

The output from running the example sensitivity.m is shown below.

Basis sensitivity results:

Sensitivity for bounds on constraints:
con = 1, beta_1 = -300.0, beta_2 = 0.0, delta_1 = 3.0,delta_2 = 3.0
con = 2, beta_1 = -700.0, beta_2 = Inf, delta_1 = 0.0,delta_2 = 0.0
con = 3, beta_1 = -500.0, beta_2 = 0.0, delta_1 = 3.0,delta_2 = 3.0
con = 4, beta_1 = -0.0, beta_2 = 500.0, delta_1 = 4.0,delta_2 = 4.0
con = 5, beta_1 = -0.0, beta_2 = 300.0, delta_1 = 5.0,delta_2 = 5.0
con = 6, beta_1 = -0.0, beta_2 = 700.0, delta_1 = 5.0,delta_2 = 5.0
con = 7, beta_1 = -500.0, beta_2 = 700.0, delta_1 = 2.0,delta_2 = 2.0
Sensitivity for bounds on variables:
var = 1, beta_1 = Inf, beta_2 = 300.0, delta_1 = 0.0,delta_2 = 0.0
var = 2, beta_1 = Inf, beta_2 = 100.0, delta_1 = 0.0,delta_2 = 0.0
var = 3, beta_1 = Inf, beta_2 = 0.0, delta_1 = 0.0,delta_2 = 0.0
var = 4, beta_1 = Inf, beta_2 = 500.0, delta_1 = 0.0,delta_2 = 0.0
var = 5, beta_1 = Inf, beta_2 = 500.0, delta_1 = 0.0,delta_2 = 0.0
var = 6, beta_1 = Inf, beta_2 = 500.0, delta_1 = 0.0,delta_2 = 0.0
var = 7, beta_1 = -0.0, beta_2 = 500.0, delta_1 = 2.0,delta_2 = 2.0
Sensitivity for coefficients in objective:
var = 1, beta_1 = Inf, beta_2 = 3.0, delta_1 = 300.0,delta_2 = 300.0
var = 2, beta_1 = Inf, beta_2 = Inf, delta_1 = 100.0,delta_2 = 100.0
var = 3, beta_1 = -2.0, beta_2 = Inf, delta_1 = 0.0,delta_2 = 0.0
var = 4, beta_1 = Inf, beta_2 = 2.0, delta_1 = 500.0,delta_2 = 500.0
var = 5, beta_1 = -3.0, beta_2 = Inf, delta_1 = 500.0,delta_2 = 500.0
var = 6, beta_1 = Inf, beta_2 = 2.0, delta_1 = 500.0,delta_2 = 500.0
var = 7, beta_1 = -2.0, beta_2 = Inf, delta_1 = 0.0,delta_2 = 0.0

Optimal partition sensitivity results:

Sensitivity for bounds on constraints:
con = 1, beta_1 = -300.0, beta_2 = 500.0, delta_1 = 3.0,delta_2 = 1.0
con = 2, beta_1 = -700.0, beta_2 = Inf, delta_1 = -0.0,delta_2 = -0.0
con = 3, beta_1 = -500.0, beta_2 = 500.0, delta_1 = 3.0,delta_2 = 1.0
con = 4, beta_1 = -500.0, beta_2 = 500.0, delta_1 = 2.0,delta_2 = 4.0
con = 5, beta_1 = -100.0, beta_2 = 300.0, delta_1 = 3.0,delta_2 = 5.0
con = 6, beta_1 = -500.0, beta_2 = 700.0, delta_1 = 3.0,delta_2 = 5.0
con = 7, beta_1 = -500.0, beta_2 = 700.0, delta_1 = 2.0,delta_2 = 2.0
Sensitivity for bounds on variables:
var = 1, beta_1 = Inf, beta_2 = 300.0, delta_1 = 0.0,delta_2 = 0.0
var = 2, beta_1 = Inf, beta_2 = 100.0, delta_1 = 0.0,delta_2 = 0.0
var = 3, beta_1 = Inf, beta_2 = 500.0, delta_1 = 0.0,delta_2 = 2.0
var = 4, beta_1 = Inf, beta_2 = 500.0, delta_1 = 0.0,delta_2 = 0.0
var = 5, beta_1 = Inf, beta_2 = 500.0, delta_1 = 0.0,delta_2 = 0.0
var = 6, beta_1 = Inf, beta_2 = 500.0, delta_1 = 0.0,delta_2 = 0.0
var = 7, beta_1 = Inf, beta_2 = 500.0, delta_1 = 0.0,delta_2 = 2.0
Sensitivity for coefficients in objective:
var = 1, beta_1 = Inf, beta_2 = 3.0, delta_1 = 300.0,delta_2 = 300.0
var = 2, beta_1 = Inf, beta_2 = Inf, delta_1 = 100.0,delta_2 = 100.0
var = 3, beta_1 = -2.0, beta_2 = Inf, delta_1 = 0.0,delta_2 = 0.0
var = 4, beta_1 = Inf, beta_2 = 2.0, delta_1 = 500.0,delta_2 = 500.0
var = 5, beta_1 = -3.0, beta_2 = Inf, delta_1 = 500.0,delta_2 = 500.0
var = 6, beta_1 = Inf, beta_2 = 2.0, delta_1 = 500.0,delta_2 = 500.0
var = 7, beta_1 = -2.0, beta_2 = Inf, delta_1 = 0.0,delta_2 = 0.0