1. The Application Program Interface

1.1 Compatible programming languages

The MOSEK API is developed in the C programming language, and in general it is easy to call the API from C programs. However, on most platforms it is also possible to call the MOSEK API from Fortran and from other programming languages such as Java.

1.2 API functionality

The MOSEK API is built around the concept of an optimization task. An optimization task is a data structure which contains the data defining an optimization problem together with information related to its solution.

The purpose of the functions provided by the MOSEK API is to allow the user, in a safe and clean way, to create, modify, and optimize such a task and to retrieve the results.

1.3 The optimization problem

1.3.1 Linear optimization

The simplest optimization problem that can be solved using the MOSEK API has the following form. Minimize or maximize the objective function

\begin{displaymath}
\sum_{j=0}^{n-1} c_j x_j + c^f
\end{displaymath} (1.1)

subject to the functional constraints
\begin{displaymath}
l_k^c \leq \sum_{j=0}^{n-1} a_{kj} x_j \leq u_k^c,  k=0,\ldots,m-1,
\end{displaymath} (1.2)

and the bounds
\begin{displaymath}
l_j^x \leq x_j \leq u_j^x,  j=0,\ldots,n-1.
\end{displaymath} (1.3)

Note the unconventional notation that the first index is $0$ and not $1$; hence, $x_0$ and not $x_1$ is the first variable. The reason for this is that MOSEK is programmed in C, which uses zero as the index origin.

Subsequently, we will present each parameter of the optimization problem in some detail.

$c$:
The linear terms $c_j x_j$ of the objective are stored in the vector $c$ as follows

\begin{displaymath}
c = \left [ \begin{array}{c} c_0 \\ \vdots \\ c_{n-1} \end{array} \right ].
\end{displaymath}

$A$:
The matrix $A$ is given by

\begin{displaymath}
A = \left [ \begin{array}{ccc}
a_{00} & \cdots & a_{0(n-1)} \\
\vdots & & \vdots \\
a_{(m-1)0} & \cdots & a_{(m-1)(n-1)}
\end{array} \right ].
\end{displaymath}

$l^c$:
Specifies the lower bounds on the constraints.

$u^c$:
Specifies the upper bounds on the constraints.

$l^x$:
Specifies the lower bounds on the variables.

$u^x$:
Specifies the upper bounds on the variables.
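
For example, a small instance of this form is to minimize $x_0 + 2 x_1$ subject to $1 \leq x_0 + x_1 \leq 4$ and $x_0, x_1 \geq 0$; in the notation above this constructed example corresponds to

\begin{displaymath}
n = 2,\quad m = 1,\quad
c = \left [ \begin{array}{c} 1 \\ 2 \end{array} \right ],\quad
c^f = 0,\quad
A = \left [ \begin{array}{cc} 1 & 1 \end{array} \right ],\quad
l_0^c = 1,\quad u_0^c = 4,\quad
l^x = \left [ \begin{array}{c} 0 \\ 0 \end{array} \right ],\quad
u^x = \left [ \begin{array}{c} \infty \\ \infty \end{array} \right ].
\end{displaymath}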

1.3.2 Conic optimization

A conic optimization problem is a generalization of linear optimization where a constraint of the form

\begin{displaymath}
x \in \mathcal{C}
\end{displaymath}

is included in the linear optimization problem. $\mathcal{C}$ must be a convex cone which should satisfy the following requirements. Let

\begin{displaymath}
x^t \in \mathbb{R}^{n^t},  t=1,\ldots,k
\end{displaymath}

be vectors comprised of parts of the decision variables $x$ such that each decision variable is a member of exactly one vector $x^t$. For example it could be the case that

\begin{displaymath}
x^1 = \left [ \begin{array}{c} x_1 \\ x_4 \\ x_7 \end{array} \right ]
\mbox{~~and~~}
x^2 = \left [ \begin{array}{c} x_6 \\ x_5 \\ x_{3} \\ x_2 \end{array} \right ].
\end{displaymath}

Next define

\begin{displaymath}
\mathcal{C} := \left \{ x \in \mathbb{R}^n:  x^t \in \mathcal{C}_t,  t=1,2,\ldots,k \right \}
\end{displaymath}

where $\mathcal{C}_t$ must have one of the following forms.

The $\mathbb{R}$ set is never specified explicitly, because if a variable is not a member of any other cone, then it is a member of this cone.
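
For example, one of the cone types supported by MOSEK is the quadratic cone

\begin{displaymath}
\mathcal{C}_t = \left \{ x \in \mathbb{R}^{n^t}:  x_1 \geq \sqrt{\sum_{j=2}^{n^t} x_j^2} \right \},
\end{displaymath}

which couples the first element of $x^t$ to the norm of the remaining elements.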

1.3.3 Nonlinear optimization

In addition to linear and conic optimization problems, MOSEK is also capable of solving general nonlinear convex optimization problems. A nonlinear optimization problem is to minimize or maximize an objective function of the form

\begin{displaymath}
f(x) + \frac{1}{2} \sum_{i=0}^{n-1} \sum_{j= 0}^{n-1} q_{ij}^o x_i x_j + \sum_{j= 0}^{n-1} c_j x_j + c^f
\end{displaymath} (1.4)

subject to the functional constraints
\begin{displaymath}
l_k^c \leq g_k(x) + \frac{1}{2} \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} q_{ij}^k x_i x_j + \sum_{j=0}^{n-1} a_{kj} x_j \leq u_k^c,  k=0,\ldots,m-1,
\end{displaymath} (1.5)

and the bounds
\begin{displaymath}
l_j^x \leq x_j \leq u_j^x,  j=0,\ldots,n-1.
\end{displaymath} (1.6)

Observe that this problem is a generalization of the linear optimization problem. This implies that the linear parameters $c$, $A$ and so forth have the same meaning as in the case of linear optimization. Therefore, we subsequently present only the nonlinear functions and their parameters.

$f$:
A general function which must be twice differentiable.

$Q^o$:
The quadratic terms $q_{ij}^o x_i x_j$ in the objective are stored in the matrix $Q^o$ as follows

\begin{displaymath}
Q^o = \left [ \begin{array}{ccc}
q_{00}^o & \cdots & q_{0(n-1)}^o \\
\vdots & & \vdots \\
q_{(n-1)0}^o & \cdots & q_{(n-1)(n-1)}^o
\end{array} \right ].
\end{displaymath}

In MOSEK it is assumed that $Q^o$ is symmetric i.e.

\begin{displaymath}
q_{ij}^o = q_{ji}^o
\end{displaymath}

and therefore in general it is only necessary to specify the lower triangular part of $Q^o$.
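
For example, the quadratic objective terms $x_0^2 + x_0 x_1 + 2 x_1^2$ correspond, because of the factor $\frac{1}{2}$ in (1.4), to

\begin{displaymath}
Q^o = \left [ \begin{array}{cc} 2 & 1 \\ 1 & 4 \end{array} \right ],
\end{displaymath}

so that it suffices to specify the lower triangular elements $q_{00}^o = 2$, $q_{10}^o = 1$ and $q_{11}^o = 4$.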

$c$:
The linear terms $c_j x_j$ of the objective are stored in the vector $c$ as follows

\begin{displaymath}
c = \left [ \begin{array}{c} c_0 \\ \vdots \\ c_{n-1} \end{array} \right ].
\end{displaymath}

$g_k(x)$:
A general nonlinear function which must be twice differentiable.

$Q^k$:
The quadratic terms $q_{ij}^k x_i x_j$ in the $k$th constraint are stored in the matrix $Q^k$ as follows

\begin{displaymath}
Q^k = \left [ \begin{array}{ccc}
q_{00}^k & \cdots & q_{0(n-1)}^k \\
\vdots & & \vdots \\
q_{(n-1)0}^k & \cdots & q_{(n-1)(n-1)}^k
\end{array} \right ].
\end{displaymath}

MOSEK assumes that $Q^k$ is symmetric i.e.

\begin{displaymath}
q_{ij}^k = q_{ji}^k
\end{displaymath}

and therefore in general it is only necessary to specify the lower triangular part of $Q^k$.

1.3.3.1 Assumptions about a nonlinear optimization problem

MOSEK makes two assumptions about the optimization problem.

The first assumption is that all functions are at least twice differentiable. This can be stated more precisely as $f(x)$ and $g(x)$ must be at least twice differentiable for all $x$ such that

\begin{displaymath}
l^x < x < u^x.
\end{displaymath}

The second assumption is that

\begin{displaymath}
f(x) + \frac{1}{2}x^T Q^o x
\end{displaymath} (1.7)

must be a convex function if the objective is minimized. Otherwise if the objective is maximized it must be a concave function. Moreover,
\begin{displaymath}
g_k(x) + \frac{1}{2}x^T Q^k x
\end{displaymath} (1.8)

must be a convex function if

\begin{displaymath}
u_k^c < \infty
\end{displaymath}

and a concave function if

\begin{displaymath}
l_k^c > -\infty.
\end{displaymath}

Note this implies nonlinear equalities are not allowed.
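
For example, a constraint of the form

\begin{displaymath}
e^{x_0} + x_1^2 \leq u_k^c,  u_k^c < \infty,  l_k^c = -\infty,
\end{displaymath}

is allowed because the left-hand side is convex, whereas the corresponding nonlinear equality (obtained by setting $l_k^c = u_k^c$) is not.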

If these two assumptions are not satisfied, then it cannot be guaranteed that MOSEK produces correct results or works at all.

1.3.4 Integer optimization

Some optimization problems contain the additional restriction that

\begin{displaymath}
x_j \in \mathbb{Z}  \mbox{ for all } j \in \mathcal{J},
\end{displaymath}

where

\begin{displaymath}
\mathcal{J} \subseteq \{0,\ldots,n-1\}
\end{displaymath}

is the index set of the integer constrained variables.

Hence, all or a subset of the variables must be integer valued at the optimum. MOSEK can solve this type of problem if the problem does not contain any nonlinear functions.

1.4 Naming convention and data structures

In the definition of the MOSEK API a consistent naming convention is used. For example, whenever numcon appears as an argument in a function definition, it means the number of constraints.

In Table [*] the C variables used to specify the problem parameters are presented.

Table: Naming convention used in MOSEK.
C name    C type      Dimension         Related problem parameter
numcon    int         1                 $m$
numvar    int         1                 $n$
numcone   int         1                 $t$
numqonz   int         1                 $q_{ij}^o$
qosubi    int *       numqonz           $q_{ij}^o$
qosubj    int *       numqonz           $q_{ij}^o$
qoval     double *    numqonz           $q_{ij}^o$
c         double *    numvar            $c_j$
cfix      double      1                 $c^f$
numqcnz   int         1                 $q_{ij}^k$
qcsubk    int *       numqcnz           $q_{ij}^k$
qcsubi    int *       numqcnz           $q_{ij}^k$
qcsubj    int *       numqcnz           $q_{ij}^k$
qcval     double *    numqcnz           $q_{ij}^k$
aptrb     int *       numvar            $a_{kj}$
aptre     int *       numvar            $a_{kj}$
asub      int *       aptre[numvar-1]   $a_{kj}$
aval      double *    aptre[numvar-1]   $a_{kj}$
bkc       int *       numcon            $l_k^c$ and $u_k^c$
blc       double *    numcon            $l_k^c$
buc       double *    numcon            $u_k^c$
bkx       int *       numvar            $l_j^x$ and $u_j^x$
blx       double *    numvar            $l_j^x$
bux       double *    numvar            $u_j^x$


The relation between the C variables and the problem parameters is as follows:
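
For illustration, the sketch below shows how a small constructed problem with two constraints and three variables could be laid out in these C variables. The matrix $A$ is stored column-wise: the non-zeros of column $j$ occupy positions aptrb[j],...,aptre[j]-1 of asub (row indexes) and aval (values), while the lower triangular non-zeros of $Q^o$ are given in triplet form. The bound key constants MSK_BK_..., the type MSKboundkeye and the constant MSK_INFINITY are taken from mosek.h; this is a sketch, not a complete program.

#include "mosek.h"

/* A constructed example: 2 constraints, 3 variables and
     A = [ 1 1 0
           0 1 1 ],
   stored column by column. */

int    numcon   = 2, numvar = 3;

double c[]      = { 1.0, 2.0, 0.0 };           /* c_j                      */
double cfix     = 0.0;                         /* c^f                      */

int    aptrb[]  = { 0, 1, 3 };                 /* first non-zero of col. j */
int    aptre[]  = { 1, 3, 4 };                 /* one past the last one    */
int    asub[]   = { 0,  0, 1,  1 };            /* row index of non-zero    */
double aval[]   = { 1.0, 1.0, 1.0, 1.0 };      /* value of non-zero        */

/* Lower triangular non-zeros of Q^o in triplet form:
   qoval[k] is the value of element (qosubi[k],qosubj[k]). */
int    numqonz  = 3;
int    qosubi[] = { 0, 1, 1 };
int    qosubj[] = { 0, 0, 1 };
double qoval[]  = { 2.0, 1.0, 4.0 };

/* Constraint bounds: 1 <= (row 0) <= 4 and (row 1) <= 10. */
MSKboundkeye bkc[] = { MSK_BK_RA, MSK_BK_UP };
double       blc[] = { 1.0, -MSK_INFINITY };
double       buc[] = { 4.0, 10.0 };

/* Variable bounds: all variables are non-negative. */
MSKboundkeye bkx[] = { MSK_BK_LO, MSK_BK_LO, MSK_BK_LO };
double       blx[] = { 0.0, 0.0, 0.0 };
double       bux[] = { +MSK_INFINITY, +MSK_INFINITY, +MSK_INFINITY };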

1.5 The optimization task

The central MOSEK data structure is the optimization task. The optimization task is essentially a representation of an optimization problem plus additional information related to the optimization, e.g. whether the objective should be minimized or maximized.

In general the user of the MOSEK API cannot change the optimization task directly but must use the functions available in the MOSEK API. MOSEK allocates the space it needs for its internal data structures. Hence, MOSEK and the user application never share space. This implies that the user application can freely deallocate space used while inputting data to MOSEK, as shown in the following code fragment:

r = MSK_inputdata(task,
                  numcon,numvar,
                  numcon,numvar,
                  c,0.0,
                  ptrb,
                  ptre,
                  subj,
                  val,
                  bkc,
                  blc,
                  buc,
                  bkx,
                  blx,
                  bux);

free(c);     /* MOSEK has copied the data into the task, */
free(ptrb);  /* so the application buffers can be freed. */
/* ... */

Initially, when an optimization task is created, the user must supply estimates of the number of constraints and variables. In general it is recommended to supply values close to the expected values. However, if constraints or variables are later added, the optimization task will expand its dimensions as needed.

1.6 Conventions used in the API

1.6.1 Prefix

All definitions in the MOSEK API are prefixed with MSK. Hence, if the user's application avoids names and definitions starting with MSK, then name clashes should not occur.

1.6.2 Response value

All functions in the MOSEK API return a so-called response value. The possible response values and their interpretation can be seen in Section [*].
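
For example, a small helper of the following kind can be used to test the response value of every call. This is a sketch: the helper name check_response is not part of the API, while the type MSKrescodee and the constant MSK_RES_OK are taken from mosek.h.

#include <stdio.h>
#include "mosek.h"

/* Returns non-zero if the response value r indicates a problem. */
static int check_response(MSKrescodee r, const char *where)
{
  if ( r != MSK_RES_OK )
  {
    printf("The call to %s returned response code %d.\n", where, (int) r);
    return 1;
  }
  return 0;
}

A typical use is check_response(MSK_inputdata(...), "MSK_inputdata").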


1.7 Type definitions

Type definition    Description
MSKcallbackfunc    Definition of the progress call-back function.
MSKctrlcfunc       A ctrl-c call-back function.
MSKexitfunc        A user defined exit function which is called in case of fatal errors.
MSKfreefunc        A user defined free function.
MSKmallocfunc      A user defined malloc function.
MSKnlgetspfunc     Definition of the structural nonlinear function call-back.
MSKnlgetvafunc     Definition of the numerical nonlinear function call-back.
MSKstreamfunc      A function of this type can be linked to any stream.

* MSKcallbackfunc
Description:

Definition of the progress call-back function. The progress call-back function is a user defined function which will be called by MOSEK occasionally during the optimization process. In particular, the call-back function is called at the beginning of each iteration in the interior-point optimizer. For the simplex optimizers, MSK_IPAR_SIM_LOG_FREQ controls how frequently the call-back is called.

Typically, the user defined call-back function displays information about the solution process. The call-back function can also be used to terminate the optimization process: if the progress call-back function returns a nonzero value, then the optimization process is aborted. A minimal implementation is sketched at the end of this entry.

It is important that the user defined call-back function does not modify the optimization task, since this would lead to undefined and incorrect results. The only MOSEK functions that can be called safely from within the user defined call-back function are MSK_getdouinf and MSK_getintinf, which access the task information database. The items in the task information database are updated during the optimization process.

Syntax:
MSKintt MSKcallbackfunc
       (MSKtask_t task,
        void * usrptr,
        MSKcallbackcodee caller);
Arguments:
task~
An optimization task.
usrptr~~
A pointer to a user defined data structure or a null pointer.
caller~
This is an integer which will denote where the function was called from. See Section [*] for the possible values of this argument.
Return:

If the return is nonzero, then MOSEK terminates whatever it is doing and returns control to the calling application.
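
For illustration, a minimal progress call-back matching the definition above could look as follows; it is a sketch which only prints the caller code and never requests termination.

#include <stdio.h>
#include "mosek.h"

/* Sketch of a user defined progress call-back. */
static MSKintt myprogress(MSKtask_t        task,
                          void            *usrptr,
                          MSKcallbackcodee caller)
{
  printf("Progress call-back invoked, caller = %d\n",(int) caller);

  return 0; /* A non-zero value would abort the optimization. */
}

The call-back is typically attached to a task with MSK_putcallbackfunc(task,myprogress,NULL); see the function reference for the exact calling sequence.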

* MSKctrlcfunc
Description:

Definition of a user defined ctrl-c function. If the function returns a nonzero value, then MOSEK assumes ctrl-c has been pressed.

Syntax:
MSKintt MSKctrlcfunc (void * usrptr);
Arguments:
usrptr~~
A pointer to a user defined data structure or a null pointer.
Return:

If the return is nonzero, then MOSEK terminates whatever it is doing and returns control to the calling application.
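
As a sketch, such a function can be combined with a standard signal handler; the handler only sets a flag, and MOSEK polls the flag through the call-back. How the call-back is registered with MOSEK is described in the function reference.

#include <signal.h>
#include "mosek.h"

static volatile sig_atomic_t ctrlc_pressed = 0;

/* Standard signal handler; installed with signal(SIGINT,sighandler). */
static void sighandler(int sig)
{
  ctrlc_pressed = 1;
}

/* Sketch of a user defined ctrl-c call-back. */
static MSKintt myctrlc(void *usrptr)
{
  return (MSKintt) ctrlc_pressed; /* Non-zero means ctrl-c was pressed. */
}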

* MSKexitfunc
Description:

A user defined exit function which is called in case of fatal errors.

Syntax:
void MSKexitfunc
       (void * usrptr,
        MSKCONST MSKchart * file,
        MSKintt line,
        MSKCONST MSKchart * msg);
Arguments:
usrptr~~
A pointer to a user defined data structure or a null pointer.
file~
The name of the file where the fatal error occurred.
line~
The line number in file where the fatal error occurred.
msg~
A message about the error.

* MSKfreefunc
Description:

A user defined free function.

Syntax:
void MSKfreefunc
       (MSKvoid_t usrptr,
        MSKCONST void * buffer);
Arguments:
usrptr~
A pointer to a user defined data structure or a null pointer.
buffer~
A pointer to the buffer which should be freed.

* MSKmallocfunc
Description:

A user defined malloc function.

Syntax:
void * MSKmallocfunc
       (MSKvoid_t usrptr,
        size_t size);
Arguments:
usrptr~
A pointer to a user defined data structure or a null pointer.
size~
The number of chars to allocate.
Return:

A pointer to the allocated memory.
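
As an illustration, the pair of sketches below matches the two definitions above; they simply forward to the standard library and use the user pointer as a counter of outstanding allocations.

#include <stdlib.h>
#include "mosek.h"

/* Sketch of a user defined malloc function. */
static void * mymalloc(MSKvoid_t usrptr, size_t size)
{
  long *count = (long *) usrptr;   /* Optional allocation counter. */
  if ( count )
    ++count[0];
  return malloc(size);
}

/* Sketch of a user defined free function. */
static void myfree(MSKvoid_t usrptr, MSKCONST void *buffer)
{
  long *count = (long *) usrptr;
  if ( count && buffer )
    --count[0];
  free((void *) buffer);           /* Drop MSKCONST before freeing. */
}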

* MSKnlgetspfunc
Description:

Type definition of the call-back function which is used to provide structural information about the nonlinear functions $f$ and $g$ in the optimization problem.

It is the user's responsibility to provide a function satisfying this definition. The function is passed to MOSEK using the API function MSK_putnlfunc. A minimal sketch is given at the end of this entry.

Syntax:
MSKintt MSKnlgetspfunc
       (void * nlhandle,
        MSKintt * numgrdobjnz,
        MSKidxt * grdobjsub,
        MSKidxt i,
        MSKintt * convali,
        MSKintt * grdconinz,
        MSKidxt * grdconisub,
        MSKintt yo,
        MSKintt numycnz,
        MSKCONST MSKidxt * ycsub,
        MSKlintt maxnumhesnz,
        MSKlintt * numhesnz,
        MSKidxt * hessubi,
        MSKidxt * hessubj);
Arguments:
nlhandle~~
A pointer to a user defined data structure. The pointer is passed to MOSEK when the function MSK_putnlfunc is called.
numgrdobjnz~~
If required, then numgrdobjnz should be assigned the number of non-zero elements in the gradient of $f$.
grdobjsub~~
If required, then it should contain the position of the non-zero elements in the gradient of $f$. The elements are stored in

\begin{displaymath}
\mathtt{grdobjsub}[ 0,\ldots,\mathtt{numgrdobjnz}-1]
\end{displaymath}

i~
Index of a constraint.
convali~~
If a non-null pointer, then

\begin{displaymath}
\mathtt{convali[ 0]} = \left \{ \begin{array}{cl}
0, & g_{i}(x) = 0  \forall x, \\
1, & \mbox{otherwise}.
\end{array} \right .
\end{displaymath}

grdconinz~~
If required, then grdconinz should be assigned the number of non-zero elements in $\nabla g_i(x)$.
grdconisub~~
If a non-null pointer, then

\begin{displaymath}
\mathtt{grdconisub}[ 0,..,\mathtt{grdconinz[ 0]}-1]
\end{displaymath}

should be identical to the positions of the non-zeros in $\nabla g_i(x)$.
yo~
If non-zero, then $f$ should be included when the gradient and the Hessian of the Lagrangian are computed.
numycnz~
Number of constraint functions which are included in the definition of the Lagrangian. See ([*]).
ycsub~
Index of constraint functions which are included in the definition of the Lagrangian. See ([*]).
maxnumhesnz~
Length of the arguments hessubi and hessubj.
numhesnz~~
If required, then numhesnz should be assigned the number of non-zero elements in the lower triangular part of the Hessian of the Lagrangian:
\begin{displaymath}
L := \mathtt{yo} f(x) - \sum_{k= 0}^{\mathtt{numycnz}-1} g_{\mathtt{ycsub[k]}}(x)
\end{displaymath} (1.18)

hessubi~~
If a non-null pointer, then hessubi and hessubj are used to convey the position of the non-zeros in the Hessian of the Lagrangian $L$ (see ([*])) as follows
\begin{displaymath}
\nabla^2 L_{\mathtt{hessubi}[k],\mathtt{hessubj}[k]} (x) \not = 0.0
\end{displaymath} (1.19)

for $k=0,\ldots,\mathtt{numhesnz}-1$. All other positions in $L$ are assumed to be zero. Note it is sufficient to return the lower or the upper triangular part of the Hessian.
hessubj~~
See the argument hessubi.
Return:

If the return is nonzero, then MOSEK assumes that an error occurred during the structure computation.
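
As a sketch, suppose the only nonlinearity in the problem is the objective term $f(x) = e^{x_0}$ and all constraints are linear; a structure call-back matching the definition above could then be written as follows. This is a constructed example; an actual implementation must reflect the user's own functions and be registered with MSK_putnlfunc.

#include "mosek.h"

/* Sketch of a structure call-back for f(x) = exp(x_0) and purely
   linear constraints, i.e. the nonlinear part g_i(x) is zero. */
static MSKintt mynlgetsp(void             *nlhandle,
                         MSKintt          *numgrdobjnz,
                         MSKidxt          *grdobjsub,
                         MSKidxt           i,
                         MSKintt          *convali,
                         MSKintt          *grdconinz,
                         MSKidxt          *grdconisub,
                         MSKintt           yo,
                         MSKintt           numycnz,
                         MSKCONST MSKidxt *ycsub,
                         MSKlintt          maxnumhesnz,
                         MSKlintt         *numhesnz,
                         MSKidxt          *hessubi,
                         MSKidxt          *hessubj)
{
  /* The gradient of f has a single non-zero, in position 0. */
  if ( numgrdobjnz )
    numgrdobjnz[0] = 1;
  if ( grdobjsub )
    grdobjsub[0] = 0;

  /* The nonlinear part of every constraint is identically zero. */
  if ( convali )
    convali[0] = 0;
  if ( grdconinz )
    grdconinz[0] = 0;

  /* The Hessian of the Lagrangian has one non-zero, at (0,0), and
     only when the objective term is included (yo is non-zero). */
  if ( numhesnz )
    numhesnz[0] = yo ? 1 : 0;
  if ( yo && maxnumhesnz>=1 && hessubi && hessubj )
  {
    hessubi[0] = 0;
    hessubj[0] = 0;
  }

  return 0; /* Zero means no error occurred. */
}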

* MSKnlgetvafunc
Description:

Type definition of the call-back function which is used to provide structural as well as numerical information about the nonlinear functions $f$ and $g$ in the optimization problem.

For later use we need the definition of the Lagrangian $L$ which is given by

\begin{displaymath}
L := \mathtt{yo} * f(\mathtt{xx}) - \sum_{k= 0}^{\mathtt{numi}-1} \mathtt{yc}_{\mathtt{subi[k]}} g_{\mathtt{subi[k]}} (\mathtt{xx}).
\end{displaymath} (1.20)

Syntax:
MSKintt MSKnlgetvafunc
       (void * nlhandle,
        MSKCONST MSKrealt * xx,
        MSKrealt yo,
        MSKCONST MSKrealt * yc,
        MSKrealt * objval,
        MSKintt * numgrdobjnz,
        MSKidxt * grdobjsub,
        MSKrealt * grdobjval,
        MSKintt numi,
        MSKCONST MSKidxt * subi,
        MSKrealt * conval,
        MSKCONST MSKlidxt * grdconptrb,
        MSKCONST MSKlidxt * grdconptre,
        MSKidxt * grdconsub,
        MSKrealt * grdconval,
        MSKrealt * grdlag,
        MSKlintt maxnumhesnz,
        MSKlintt * numhesnz,
        MSKidxt * hessubi,
        MSKidxt * hessubj,
        MSKrealt * hesval);
Arguments:
nlhandle~~
A pointer to a user defined data structure. The pointer is passed to MOSEK when the function MSK_putnlfunc is called.
xx~
The solution at which the nonlinear function must be evaluated.
yo~
Multiplier on the objective function $f$.
yc~
Multipliers for the constraint functions $g_i$.
objval~~
If required, then objval should be assigned $f(x)$ evaluated at $\mathtt{xx}$.
numgrdobjnz~~
If required, then numgrdobjnz should be assigned the number of non-zero elements in the gradient of $f$.
grdobjsub~~
If a non-null pointer, then it should contain the position of the non-zero elements in the gradient of $f$. The elements are stored in

\begin{displaymath}
\mathtt{grdobjsub}[ 0,\ldots,\mathtt{numgrdobjnz}-1].
\end{displaymath}

grdobjval~~
If required, then it should contain the numerical value of the gradient of $f$ evaluated at $\mathtt{xx}$. The following data structure

\begin{displaymath}
\mathtt{grdobjval[k]} = \frac{\partial f}{\partial x_{{\tt grdobjsub[k]}}} (\mathtt{xx})
\end{displaymath}

for $k= 0,\ldots,\mathtt{numgrdobjnz}-1$ is used.
numi~
Number of elements in subi.
subi~
$\mathtt{subi}[ 0,...,\mathtt{numi}-1]$ contains the indexes of the constraints that have to be evaluated.
conval~~
$g(\mathtt{xx})$ for the required constraint functions, i.e.

\begin{displaymath}
\mathtt{conval[k]} = g_{\mathtt{subi[k]}}(\mathtt{xx})
\end{displaymath}

for $k= 0,\ldots,\mathtt{numi}-1.$
grdconptrb~
If required, then it is used to specify the gradients of the constraints. See the argument grdconval for details.
grdconptre~
If required, then it is used to specify the gradients of the constraints. See the argument grdconval for details.
grdconsub~~
If required, then it is used to specify the position of the non-zeros in gradients of the constraints. See the argument grdconval for details.
grdconval~~
grdconptrb, grdconptre, and grdconsub are used to specify the gradients of the constraint functions. grdconptrb and grdconptre are specified by the calling function.

Observe both grdconsub and grdconval should be updated when required.

The gradient data are stored as follows

\begin{displaymath}
\begin{array}{l}
\mathtt{grdconval[k]} = \frac{\partial g_{\mathtt{subi}[i]}}{\partial x_{\mathtt{grdconsub}[k]}} (\mathtt{xx}), \\
\quad k = \mathtt{grdconptrb}[i],\ldots,\mathtt{grdconptre}[i]-1, \\
\quad i= 0,\ldots,\mathtt{numi}-1.
\end{array} \end{displaymath}

grdlag~~
If required, then grdlag should be identical to the gradient of the Lagrangian function, i.e.

\begin{displaymath}
\mathtt{grdlag} = \nabla L.
\end{displaymath}

maxnumhesnz~
Maximum number of non-zeros in the Hessian of the Lagrangian. I.e. maxnumhesnz is the length of the arrays hessubi, hessubj, and hesval.
numhesnz~~
If required, then numhesnz should be assigned the number of non-zero elements in the Hessian of the Lagrangian $L$, see ([*]).
hessubi~~
See the argument hesval.
hessubj~~
See the argument hesval.
hesval~~
hessubi, hessubj, and hesval are used to store the Hessian of the Lagrangian function $L$ defined by ([*]).

The following data structure

\begin{displaymath}
\mathtt{hesval}[k] = \nabla^2 L_{\min(\mathtt{hessubi}[k],\mathtt{hessubj}[k]),\max(\mathtt{hessubi}[k],\mathtt{hessubj}[k])}
\end{displaymath}

for $k= 0,\ldots,\mathtt{numhesnz[ 0]}-1$ is used. Note that if an element is specified multiple times, then the values are added together. Hence, only the lower (or the upper) triangular part of the Hessian should be returned.
Return:

If the return is nonzero, then MOSEK assumes an error happened during the function evaluation.

* MSKstreamfunc
Description:

A function of this type can be linked to any of the MOSEK streams. This implies that if a message is sent to the stream to which the function is linked, then the function is called by MOSEK and the argument str will be identical to the message. Hence, the user can decide what should happen to the message.

Syntax:
void MSKstreamfunc
       (void * handle,
        MSKCONST MSKstring_t str);
Arguments:
handle~~
A pointer to a user defined data structure (or a null pointer).
str~
A string containing a message to a stream.
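
For example, a stream function that simply echoes all messages to standard output can be written as follows (a sketch):

#include <stdio.h>
#include "mosek.h"

/* Sketch of a stream call-back which prints every message it receives. */
static void mystream(void *handle, MSKCONST MSKstring_t str)
{
  printf("%s",str);
}

It can then be linked to, for instance, the log stream of a task using MSK_linkfunctotaskstream(task,MSK_STREAM_LOG,NULL,mystream).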


1.8 Function definitions

As already mentioned, the MOSEK API consists of a number of data structures and functions which operate on the data structures.

In Section [*] all functions available in the MOSEK API are listed in alphabetical order together with a brief description. This section is followed by a detailed description of the MOSEK functions.

Another way to learn about the parameters and functions included in the MOSEK API is to study the file mosek.h supplied with all versions of the MOSEK API. This file contains the definitions of all the data structures and functions provided by the MOSEK API.

1.8.1 An example description

For each function available in the MOSEK API, a summary of the following form is provided.

* somefunction
Description:

A description of the purpose of the function presented.

Syntax:

int somefunction(int arg1,
                 char *arg2,
                 char *arg3)
The syntax for the function call is presented.

Arguments:

arg1
A short presentation of the purpose of arg1.

Comments:

Some comments about the function may be presented.

Return:

Presents the possible return values.