Matlab MEX

Matlab is a great programming language and environment because of its ease of use, great visualization, and rapid prototyping abilities.

Matlab has a feature called MEX (Matlab Executable).   See:  Matlab MEX Introduction

MEX files allow Matlab scripts to call user-supplied functions written in C/C++ and Fortran.
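Every MEX file exports a single entry point, mexFunction, which receives the Matlab inputs and outputs as mxArray pointers. As a hedged illustration (not one of the programs posted below), here is a minimal gateway routine that doubles each element of an input matrix. It requires Matlab's mex.h header and can only be built inside Matlab with the mex command:

```cpp
// times_two.cpp -- minimal MEX gateway sketch (illustrative example).
// Build inside Matlab with:  >>mex times_two.cpp
// Usage in Matlab:           >>y = times_two(x)
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[],
                 int nrhs, const mxArray *prhs[])
{
    if (nrhs != 1)
        mexErrMsgTxt("One input argument is required.");

    // Dimensions of the input matrix
    mwSize m = mxGetM(prhs[0]);
    mwSize n = mxGetN(prhs[0]);

    // Create the output matrix and get data pointers
    plhs[0] = mxCreateDoubleMatrix(m, n, mxREAL);
    double *x = mxGetPr(prhs[0]);
    double *y = mxGetPr(plhs[0]);

    // Double each element
    for (mwSize i = 0; i < m * n; i++)
        y[i] = 2.0 * x[i];
}
```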

* * *

The first step is to install a supported compiler.


Matlab R2011a Compilers

Matlab R2012a Compilers

* * *

Matlab running on Windows 7 can use:

Microsoft Platform SDK

Microsoft Visual C++ 2010

Note that the following update may be required for Windows 7 64-bit systems:

Microsoft Visual C++ 2010 Service Pack 1 Compiler Update for the Windows SDK 7.1

* * *

Next, go to the Matlab Command Window.


>>mex -setup

The C/C++ source code is compiled with Matlab as:

>>mex filename.cpp

* * *

A good MEX tutorial is given at:  Shawn Lankton Online

A useful example is given at:  University of Cambridge

* * *

Here is a sample Matlab script and the C/C++ function which it calls:



Here are the two commands for running the program set in Matlab:

>>mex rainflow_mex_ac.cpp


The program set performs rainflow cycle counting on a time history per  ASTM E 1049-85 (2005).   Rainflow counting is used for fatigue analysis.
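As a hedged illustration of the counting rules (a simplified, self-contained version, not the posted rainflow_mex_ac.cpp), the time history is first reduced to its turning points, and ranges are then extracted from a stack of peaks and valleys per the ASTM E 1049 procedure:

```cpp
// Simplified rainflow cycle counting sketch (ASTM E 1049 style rules).
#include <cmath>
#include <vector>

struct Cycle { double range; double count; };  // count = 1.0 full, 0.5 half

// Reduce a signal to its turning points (peaks and valleys)
std::vector<double> turningPoints(const std::vector<double>& x)
{
    std::vector<double> tp;
    for (size_t i = 0; i < x.size(); ++i) {
        if (i == 0 || i == x.size() - 1 ||
            (x[i] - x[i - 1]) * (x[i + 1] - x[i]) < 0.0)
            tp.push_back(x[i]);
    }
    return tp;
}

// Rainflow counting: compare range X of the newest pair against the
// previous range Y; if X >= Y, count Y and remove its two points.
std::vector<Cycle> rainflow(const std::vector<double>& signal)
{
    std::vector<double> stack;
    std::vector<Cycle> cycles;
    for (double p : turningPoints(signal)) {
        stack.push_back(p);
        while (stack.size() >= 3) {
            size_t n = stack.size();
            double X = std::fabs(stack[n - 1] - stack[n - 2]);
            double Y = std::fabs(stack[n - 2] - stack[n - 3]);
            if (X < Y) break;
            if (stack.size() == 3) {
                cycles.push_back({Y, 0.5});   // Y contains the start point
                stack.erase(stack.begin());
            } else {
                cycles.push_back({Y, 1.0});   // full cycle
                stack.erase(stack.end() - 3, stack.end() - 1);
            }
        }
    }
    // Count each remaining range as a half cycle
    for (size_t i = 0; i + 1 < stack.size(); ++i)
        cycles.push_back({std::fabs(stack[i + 1] - stack[i]), 0.5});
    return cycles;
}
```

Running this on the example history from the standard (-2, 1, -3, 5, -1, 3, -4, 4, -2) reproduces the tabulated counts: one full cycle of range 4 plus half cycles of ranges 3, 4, 8, 9, 8, and 6.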

* * *

Here is a set of scripts for a fatigue damage spectrum for base excitation via rainflow cycle counting:

Here are the commands for running the program set in Matlab:

>>mex rainflow_fds_mex.cpp

Base acceleration case:


Applied force case:


* * *

See also:  Python Rainflow Page

* * *

– Tom Irvine



Message Passing Interface (MPI) is a portable library of subprograms which can be used to facilitate parallel computing.

The MPI subprograms can be called from C and Fortran programs.

Parallel Computing

Parallel computing enables large-scale numerical problems to be solved in a timely manner.  It can be performed on a multi-core PC, or on several networked PCs in a cluster or grid.

The key is to separate large problems into smaller ones.  The calculations are then carried out simultaneously.

The MPI subprograms regulate the communication and synchronization between the various CPUs and memory locations.
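As a hedged sketch of what a greetings-style program typically contains (assuming the standard MPI C API; this is an illustration, not the actual greetings.c), each process learns its own rank, and the non-zero ranks send a message to rank 0, which prints them:

```cpp
// greetings-style MPI sketch.  Requires an MPI installation.
// Build:  mpicc -o greetings greetings.cpp
// Run:    mpirun -n 4 ./greetings
#include <mpi.h>
#include <cstdio>
#include <cstring>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // this process's ID
    MPI_Comm_size(MPI_COMM_WORLD, &size);   // total number of processes

    if (rank != 0) {
        char msg[100];
        std::snprintf(msg, sizeof(msg), "Greetings from process %d!", rank);
        MPI_Send(msg, (int)std::strlen(msg) + 1, MPI_CHAR,
                 0, 0, MPI_COMM_WORLD);
    } else {
        // Rank 0 collects and prints the messages in rank order
        for (int src = 1; src < size; src++) {
            char msg[100];
            MPI_Recv(msg, 100, MPI_CHAR, src, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("%s\n", msg);
        }
    }
    MPI_Finalize();
    return 0;
}
```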


MPI can be downloaded from:

The best installation method is to build the source code using the directions in:

This can be done under Cygwin or Linux.

Sample Program

Then go to:

Download the greetings.c program to the same folder which contains mpicc.

Also find and copy libmpich.a into this same folder.

Compile the program via:

mpicc -o greetings greetings.c libmpich.a

Then run the program via:

./mpirun -n 4 ./greetings

Grant permission to run under firewalls if so prompted by pop-up windows.

The program can also be run as:

./mpiexec -n 4 ./greetings

In the above example, four processes were used.

More later . . .

* * *

Another source…     Argonne National Labs MPICH2

Tom Irvine

Matrix Inversion in LAPACK

Here is a Fortran program which performs matrix inversion using the LU decomposition method:  INVERSE_MATRIX.F

It is compiled via:


It compiles & runs under both Ubuntu & Cygwin.

See also:

* * *

The INVERSE_MATRIX.F program uses the subroutines: DGETRF & DGETRI

DGETRF computes an LU factorization of a general M-by-N matrix A using partial pivoting with row interchanges.

DGETRI computes the inverse of a matrix using the LU factorization computed by DGETRF.

This method inverts U and then computes inv(A) by solving the system inv(A)*L = inv(U) for inv(A).
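To illustrate the same approach in self-contained form (without calling LAPACK), here is a sketch that factors A with partial pivoting, as DGETRF does, and then obtains each column of inv(A) by solving A*x = e_i with forward and back substitution:

```cpp
// Matrix inversion via LU decomposition with partial pivoting.
// A self-contained sketch of the method; DGETRF/DGETRI are the
// production-quality LAPACK equivalents.
#include <cmath>
#include <utility>
#include <vector>

typedef std::vector<std::vector<double>> Matrix;

Matrix invert(Matrix a)
{
    const int n = (int)a.size();
    std::vector<int> piv(n);

    // LU factorization with row interchanges (as in DGETRF)
    for (int k = 0; k < n; k++) {
        int p = k;
        for (int i = k + 1; i < n; i++)
            if (std::fabs(a[i][k]) > std::fabs(a[p][k])) p = i;
        std::swap(a[k], a[p]);
        piv[k] = p;
        for (int i = k + 1; i < n; i++) {
            a[i][k] /= a[k][k];
            for (int j = k + 1; j < n; j++)
                a[i][j] -= a[i][k] * a[k][j];
        }
    }

    // Solve A*x = e_col for each column of the inverse
    Matrix inv(n, std::vector<double>(n, 0.0));
    for (int col = 0; col < n; col++) {
        std::vector<double> b(n, 0.0);
        b[col] = 1.0;
        for (int k = 0; k < n; k++)                 // apply the permutation P
            std::swap(b[k], b[piv[k]]);
        for (int i = 1; i < n; i++)                 // forward: L*y = P*b
            for (int j = 0; j < i; j++) b[i] -= a[i][j] * b[j];
        for (int i = n - 1; i >= 0; i--) {          // back: U*x = y
            for (int j = i + 1; j < n; j++) b[i] -= a[i][j] * b[j];
            b[i] /= a[i][i];
        }
        for (int i = 0; i < n; i++) inv[i][col] = b[i];
    }
    return inv;
}
```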

* * *

Here is a similar C++ program:  matrix_inverse.cpp 

It is compiled as:

gcc -o matrix_inverse matrix_inverse.cpp -llapack -lstdc++

* * *

See also:  Python Matrix Inversion

* * *

Tom Irvine

C/C++ Tips

Passing 1D array as function argument


// prototype function declaration
void calc(double *x);

double a[1000];

int main()
{
    calc(a);     // call calc; the array name is passed as a pointer
    return 0;
}

void calc(double *x)
{
    // do some calculations on x[0] through x[999]
}


Dynamic Memory Allocation for 1D Array

#include <iostream>
using namespace std;

int* a = NULL;     // Pointer to int, initialize to nothing.
int n;             // Size needed for array
cin >> n;          // Read in the size
a = new int[n];    // Allocate n ints and save ptr in a.
for (int i = 0; i < n; i++) a[i] = 0;   // Initialize all elements to zero.
. . .              // Use a as a normal array
delete [] a;       // When done, free memory pointed to by a.
a = NULL;          // Clear a to prevent using invalid memory reference.

Alternate declaration method:

#define DYNAMIC_VECTOR(Q,nrows)\
double* Q = NULL;\
Q = new double[nrows];


Dynamic Memory Allocation for 2D Arrays

#define ZERO(Q,nrows,ncols) \
for(long i=0; i<nrows; i++) \
{ for(long j=0; j<ncols; j++) { Q[i][j] = 0.; } }

#define DYNAMIC_MATRIX(Q,nrows,ncols) \
double **Q = new double* [nrows]; \
for(long row=0; row<nrows; row++) { Q[row] = new double[ncols]; }

#define DELETE_MATRIX(Q,nrows) \
for(long i=0; i<nrows; i++) { delete [] Q[i]; } \
delete [] Q;

* * *

Initialize an array.

Example: 100 elements, each equal to -1.

#include <algorithm>

int array[100];
std::fill_n(array, 100, -1);

* * *

Tom Irvine

Mixed C++ Fortran Programming

This is another matrix multiplication project using the BLAS function dgemm. It is compiled and run on an Ubuntu system.

The set consists of two programs:  (right mouse click & save target or link as)


The set calculates: C = A*B

The C++ program matrix_mult_cf reads in two matrices, A & B.

The input matrices may each have an arbitrary number of rows and columns up to a certain cell limit. Furthermore, the number of A columns must be equal to the number of B rows.

The matrix_mult_cf program then calls a subroutine from fort_matmul_main to do the matrix multiplication.

The fort_matmul_main subroutine uses dgemm to perform the actual multiplication.

An important feature of this program set is that the C++ and Fortran codes pass the matrices back and forth using 1D arrays. This approach requires some extra steps, but it avoids the complexity of passing 2D arrays, especially if any of the matrices is non-square.
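To illustrate the idea (independent of the actual program set), Fortran stores matrices column by column, so the C++ side can flatten each matrix into a 1D array with element (i,j) at index i + j*nrows before passing it across the language boundary. A minimal sketch:

```cpp
// Flattening a C++ matrix into the column-major 1D layout that Fortran
// (and BLAS routines such as dgemm) expect.
#include <cstddef>
#include <vector>

// Element (i,j) of an nrows-by-ncols matrix goes to index i + j*nrows.
std::vector<double> toColumnMajor(const std::vector<std::vector<double>>& m)
{
    const std::size_t nrows = m.size();
    const std::size_t ncols = m[0].size();
    std::vector<double> flat(nrows * ncols);
    for (std::size_t j = 0; j < ncols; j++)
        for (std::size_t i = 0; i < nrows; i++)
            flat[i + j * nrows] = m[i][j];
    return flat;
}

// C = A*B performed directly on the column-major 1D arrays,
// mirroring what the Fortran side sees.  A is m-by-k, B is k-by-n.
std::vector<double> matmulColMajor(const std::vector<double>& a,
                                   const std::vector<double>& b,
                                   std::size_t m, std::size_t k,
                                   std::size_t n)
{
    std::vector<double> c(m * n, 0.0);
    for (std::size_t j = 0; j < n; j++)
        for (std::size_t p = 0; p < k; p++)
            for (std::size_t i = 0; i < m; i++)
                c[i + j * m] += a[i + p * m] * b[p + j * k];
    return c;
}
```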

The set is compiled using:

gfortran matrix_mult_cf.cpp fort_matmul_main.f -o mmult_cf -lstdc++ -lblas

The resulting executable is run as:

./mmult_cf
* * *

Tom Irvine

MinGW Windows Installation

MinGW stands for Minimalist GNU for Windows.

MinGW is a native software port of the GNU Compiler Collection (GCC) and GNU Binutils for use in the development of native Microsoft Windows applications.

MinGW can be downloaded with Code::Blocks.

Or it can be downloaded via SourceForge.

* * *

Next add:     C:\MinGW\bin

to the PATH environment variable by opening the System control panel, going to the Advanced tab, and clicking the Environment Variables button.

* * *

C/C++ programs can then be compiled in command line as:

c:\(your folder)>g++ -o filename filename.cpp

This will generate an executable file called:  filename.exe

* * *

Fortran programs can be compiled as:

c:\(your folder)>gfortran -o filename filename.f

This will generate an executable file called:  filename.exe

* * *

MinGW also provides a Unix-like shell.

* * *

Tom Irvine

C/C++ Read 2D Array

The following program reads a 2D array of numbers.  The array may have an arbitrary or unknown number of rows and columns up to a certain cell limit. The program counts the number of columns and rows as it reads the data.

The program source code is: matrix_read.cpp     (right mouse click & save target or link)
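A hedged sketch of one way to count rows and columns while reading (not necessarily how matrix_read.cpp itself is written): read one line at a time, then parse each line with a string stream so the column count falls out naturally:

```cpp
// Read a whitespace-delimited 2D array with an unknown number of
// rows and columns, counting both as the data is read.
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

std::vector<std::vector<double>> readMatrix(std::istream& in)
{
    std::vector<std::vector<double>> m;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        std::vector<double> row;
        double v;
        while (ss >> v) row.push_back(v);   // one pass counts the columns
        if (!row.empty()) m.push_back(row); // each kept line is a row
    }
    return m;
}
```

The row count is then m.size() and the column count is m[0].size().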

Please let me know if you have any suggestions for improvements.

* * *

Tom Irvine

Matrix Multiplication in BLAS & CBLAS

Here is a program that uses dgemm to multiply two matrices:  matrix_mult.cpp

The dgemm function is implemented using the GSL BLAS interface.

It can be compiled under Cygwin as:

gcc -o matrix_mult_alt matrix_mult_alt.cpp -lgsl -lstdc++

And under Ubuntu as:

gcc -o matrix_mult_alt matrix_mult_alt.cpp -lgsl -lgslcblas -lstdc++

* * *

For benchmark comparison, here is a matrix multiplication program which uses pure pointer access without BLAS.    matrix_11.cpp
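For reference, the pure-pointer approach amounts to a triple loop over the matrix entries. A minimal sketch (an illustration, not the posted matrix_11.cpp itself):

```cpp
// C = A*B using raw pointer access, without BLAS.  A is m-by-k,
// B is k-by-n, C is m-by-n, all stored row-major in contiguous arrays.
#include <cstddef>

void matmul(const double* A, const double* B, double* C,
            std::size_t m, std::size_t k, std::size_t n)
{
    for (std::size_t i = 0; i < m; i++) {
        const double* ai = A + i * k;        // start of row i of A
        for (std::size_t j = 0; j < n; j++) {
            double sum = 0.0;
            for (std::size_t p = 0; p < k; p++)
                sum += ai[p] * B[p * n + j]; // dot product of row i, col j
            C[i * n + j] = sum;
        }
    }
}
```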

* * *

A Fortran program which uses the intrinsic function MATMUL is given at:  MATRIX_MULT.F

A Fortran program which uses DGEMM is:  MATRIX_MULT_D.F

The program was compiled using:

gfortran -o MATRIX_MULT_D MATRIX_MULT_D.F -lblas -llapack

As an aside, the following compile options were used to check for errors such as out-of-bounds array access during debugging:

gfortran -g -o MATRIX_MULT_D MATRIX_MULT_D.F -lblas -llapack -Wall -fbounds-check -fmax-errors=1 -Werror

* * *

Here is a program  matrix_mult_cblas.cpp  which uses cblas_dgemm.

It is compiled in Cygwin via:

gcc -o matrix_mult_cblas matrix_mult_cblas.cpp c:/cygwin/lib/libblas.a c:/cygwin/lib/libcygwin.a c:/cygwin/home/tirvine/CBLAS/lib/cblas_LINUX.a -lstdc++

The paths should be modified according to your library installation.

Note that CBLAS can be installed in Cygwin using the instructions at:

* * *

Tom Irvine

Lapack in Ubuntu

Blas and Lapack may be installed on an Ubuntu system using the instructions at:

Generalized Eigenvalue Problem

* * *

C++ programs that use Lapack can be compiled in Terminal or Bash shell mode via:

g++ file_in -o file_out -lgfortran -llapack -lm


file_in is the source code

file_out is the output executable file

In addition, linker option -lblas may be needed by the source code.

Verify that the output file exists:

ls -lt

Then run the program:


* * *

The source code can also be built in the Code::Blocks IDE.

The following lines must be added to the linker options:

-lgfortran
-llapack
-lm

-lblas     (if needed by source code)
1. Suggest running Ubuntu with Gnome shell rather than Unity in order to avoid Code::Blocks missing menu bar problem.

2. Failure to include the linker options will result in errors such as:

ilaenv.f   undefined reference to ‘_gfortran_compare_string’

* * *
Tom Irvine

Generalized Eigenvalue Problem

The goal of this project is to solve a “very large” generalized eigenvalue problem in a “reasonable” amount of time using library functions. The mass and stiffness matrices may be either dense or sparse. Optimum routines are desired for each case.

Here is a description of the libraries and packages:

gcc – GNU Compiler Collection for C/C++

BLAS – Basic Linear Algebra Subprograms

CBLAS – C interface to the BLAS

LAPACK – Linear Algebra PACKage. LAPACK is a software library for numerical linear algebra including the generalized eigenvalue problem. It uses BLAS.

PETSc – Portable, Extensible Toolkit for Scientific Computation. PETSc is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. It includes a large suite of parallel linear and nonlinear equation solvers that are easily used in application codes written in C, C++, Fortran and now Python.

SLEPc – a software library for the parallel computation of eigenvalues and eigenvectors of large, sparse matrices. It can be seen as a module of PETSc that provides solvers for different types of eigenproblems, including linear (standard and generalized) and quadratic, as well as the SVD.

Note that PETSc/SLEPc requires BLAS/LAPACK.

*  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *

Installation Steps:

I have a PC with Ubuntu 11.10 running with the gnome-classic shell.

I have installed the following:

Code::Blocks with the gcc compiler from the Ubuntu Software Center

*  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *


Step 1: download

$ wget

Step 2: extraction

$ tar zxf blas.tgz    this will create a directory BLAS

Step 3: compilation

$ cd BLAS

$ make all

If everything was correct in the previous step, your library is in the BLAS directory, called blas_LINUX.a

*  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *


Installation instructions:

BLAS must be installed first.



tar zxf cblas.tgz

Go to the CBLAS folder.

Open the make configuration file in a text editor, and modify this line:

BLLIB = (specify path)/BLAS/blas_LINUX.a

make all

*  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *


Step 1: download

$ wget

Step 2: extraction

$ tar zxf lapack.tgz        this will create a directory lapack.

Step 3: compilation

$ cd lapack-3.3.1

The configuration can be done in the file:

modify:  BLASLIB      = /directory/where/to/find/BLAS/blas$(PLAT).a

The LAPACK library uses the BLAS library, so you need to tell it where to find BLAS. The result is a library, lapack_LINUX.a, which can be copied to a place of your choice.

Save file as:

$ sudo apt-get install cmake      (if not installed already)

$ cmake

$ make

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *


The following step may also be needed:
$ sudo apt-get install libblas-dev liblapack-dev

*  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *

The C++ source code for the generalized eigenvalue program is given at:
gen_eigen.cpp    (right mouse click & save target or link)

The header files are:   dsygv.h     dsyev.h     cblas.h

The source code uses BLAS, CBLAS & LAPACK.
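For background, a symmetric-definite solver such as LAPACK's DSYGV reduces K*phi = lambda*M*phi to a standard symmetric problem using the Cholesky factorization M = L*L^T. Here is a self-contained 2-DOF sketch of that reduction (an illustration of the method, not the posted gen_eigen.cpp):

```cpp
// Solve the 2-DOF generalized eigenvalue problem K*phi = lambda*M*phi
// for symmetric K and symmetric positive definite M, by reducing it to
// a standard symmetric problem A*y = lambda*y with A = inv(L)*K*inv(L)^T,
// where M = L*L^T.  This mirrors the reduction used by LAPACK's DSYGV.
#include <cmath>

void genEigen2x2(const double M[2][2], const double K[2][2],
                 double lambda[2])
{
    // Cholesky factor of M (lower triangular)
    double L11 = std::sqrt(M[0][0]);
    double L21 = M[1][0] / L11;
    double L22 = std::sqrt(M[1][1] - L21 * L21);

    // inv(L) for a 2x2 lower triangular matrix
    double iL11 = 1.0 / L11;
    double iL21 = -L21 / (L11 * L22);
    double iL22 = 1.0 / L22;

    // A = inv(L) * K * inv(L)^T  (symmetric 2x2, entries a, b, c)
    double a = iL11 * K[0][0] * iL11;
    double b = iL11 * (K[0][0] * iL21 + K[0][1] * iL22);
    double c = iL21 * (K[0][0] * iL21 + K[0][1] * iL22)
             + iL22 * (K[1][0] * iL21 + K[1][1] * iL22);

    // Closed-form eigenvalues of the symmetric 2x2 matrix A;
    // these equal the generalized eigenvalues of (K, M).
    double mean = 0.5 * (a + c);
    double disc = std::sqrt(0.25 * (a - c) * (a - c) + b * b);
    lambda[0] = mean - disc;
    lambda[1] = mean + disc;
}
```

For example, M = diag(2, 1) with K = [[6, -2], [-2, 4]] yields eigenvalues 2 and 5, which can be checked by hand from det(K - lambda*M) = 0.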

The following linker options are required:


* * * * * * * * *

Here is another C++ version: gen_eig.cpp

The program can be compiled via:

gcc -o gen_eig gen_eig.cpp -lblas -llapack -lstdc++

*  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *

Here is a pure Fortran program for the generalized eigenvalue problem:  geigen.f

The program can be compiled via:

$ gfortran -o geigen geigen.f -lblas -llapack

* * *

Matlab scripts are given at:

Matlab Linear Algebra Page

* * *

A future generalized eigenvalue problem code will use PETSc & SLEPc.  These packages can be downloaded via:

sudo apt-get install petsc-dev

sudo apt-get install slepc3.1-dev

* * *

Python is well-suited for the generalized eigenvalue problem.  It has functions derived from LAPACK.  It also has functions for sparse systems using ARPACK.  Sample scripts are posted at:

Vibrationdata Python  Generalized Eigenvalue

* * *

Tom Irvine