Peter Benner, Rafael Mayo, Enrique S. Quintana-Ortí and Gregorio Quintana-Ortí

SLICOT Working Note 2002-1: January 2002.

This paper describes enhanced services for remote model reduction of large-scale, dense linear time-invariant systems. Specifically, we describe a Web service and a Mail service for model reduction on a cluster of Intel Pentium-II architectures using absolute error methods. Experimental results show the appeal and accessibility provided by these services.
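
As a point of reference for the absolute error methods mentioned in this abstract, the following is a minimal sketch of balanced truncation (a standard absolute-error model reduction technique) using the python-control package, which may in turn require the slycot wrapper around SLICOT. The random system, the chosen reduced order, and the use of python-control instead of the paper's remote Web/Mail services are assumptions made only for this illustration.

```python
# Illustrative absolute-error model reduction via balanced truncation.
import numpy as np
import control

np.random.seed(0)
n = 6
A = np.random.randn(n, n)
# Shift the spectrum so the illustrative system is stable.
A = A - (np.max(np.real(np.linalg.eigvals(A))) + 1.0) * np.eye(n)
B = np.random.randn(n, 1)
C = np.random.randn(1, n)
sys_full = control.ss(A, B, C, 0)

# Balanced truncation keeps the states with the largest Hankel singular
# values; the absolute H_inf error is bounded by twice the sum of the
# discarded singular values.
sys_red = control.balred(sys_full, orders=3, method='truncate')
print(sys_red.A.shape)  # reduced state dimension (expected 3)
```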

Petko Petkov, Da Wei Gu and Mihail Konstantinov

SLICOT Working Note 2001-7: December 2001.

In this expository paper we show the application of some of the SLICOT routines to the robust control analysis and design of a disk drive servo system. An uncertainty model of the system plant is first derived, containing eleven uncertain parameters: four resonance frequencies, four damping coefficients, and three rigid body model parameters. Three controllers for the uncertain system are designed using, respectively, H_inf mixed sensitivity design, the H_inf loop shaping design procedure (LSDP), and the mu-synthesis method. With these controllers the closed-loop system achieves robust stability, and with the H_inf and mu-controllers it practically achieves robust performance. A detailed comparison of the frequency domain and time domain characteristics of the closed-loop system with the three controllers is conducted. Further, model reduction routines have been applied to find a reasonably low-order controller based on the mu-synthesis design. This reduced-order controller maintains the robust stability and robust performance of the closed-loop system. Simulations of the nonlinear sampled-data servo system with the low-order controller are included as well; they confirm the practical applicability of the controller obtained.
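
For readers unfamiliar with the mixed sensitivity formulation used for the first controller, here is a hedged sketch of an H_inf mixed sensitivity design in Python with the python-control package (which calls SLICOT routines through the slycot wrapper). The plant and the weighting functions below are illustrative placeholders, not the disk drive model or the weights used in the paper.

```python
# A sketch of H_inf mixed sensitivity design with illustrative data.
import control

# Illustrative nominal plant: a lightly damped resonance.
g = control.tf([1.0], [1.0, 0.2, 1.0])

# Illustrative weights: w1 shapes the sensitivity S (performance at low
# frequency), w3 shapes the complementary sensitivity T (roll-off at high
# frequency); no control-effort weight w2 is used in this sketch.
w1 = control.tf([0.5, 1.0], [1.0, 1e-3])
w3 = control.tf([1.0, 1.0], [0.1, 10.0])

# mixsyn stacks [w1*S; w3*T] and solves the resulting H_inf problem.
k, cl, info = control.mixsyn(g, w1=w1, w2=None, w3=w3)
print(info)  # achieved gamma and solver diagnostics
```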

Chris Denruyter

SLICOT Working Note 2001-6: October 2001.

In this report, we compare two Sylvester equation solvers: the MATLAB function lyap and the SLICOT function slsylv. An algorithm for model reduction based on the solution of a Sylvester equation is presented. In this context, timing results show the superiority of the SLICOT-based M-file slsylv.
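
For illustration, the continuous-time Sylvester equation A X + X B = C that both solvers target can be set up and solved as follows; SciPy's solve_sylvester is used purely as a stand-in, since neither MATLAB's lyap nor the slsylv M-file is reproduced in this note, and the random data and problem sizes are assumptions for the example.

```python
# Illustrative Sylvester equation A*X + X*B = C solved with SciPy.
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n, m = 200, 150
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
C = rng.standard_normal((n, m))

X = solve_sylvester(A, B, C)               # Bartels-Stewart via Schur forms
print(np.linalg.norm(A @ X + X @ B - C))   # residual check
```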

Isak Jonsson and Bo Kågström

SLICOT Working Note 2001-5: September 2001.

We continue our study of high-performance algorithms for solving triangular matrix equations. These equations appear naturally in condition estimation problems for matrix equations and in various eigenspace computations, and as reduced systems in standard algorithms. Building on our successful recursive approach to one-sided matrix equations (Part I), we now present recursive blocked algorithms for two-sided matrix equations, which include matrix product terms such as AXB^T. Examples are the discrete-time standard and generalized Sylvester and Lyapunov equations. The means for high performance are recursive variable blocking, which has the potential of matching the memory hierarchies of today's high-performance computing systems, and level 3 computations, which are mainly performed as GEMM operations. Different implementation issues are discussed, focusing on similarities and differences between one-sided and two-sided matrix equations. We present uniprocessor and SMP parallel performance results for our recursive blocked algorithms and for routines in the state-of-the-art SLICOT library. The performance improvements of our recursive algorithms are remarkable, including 10-fold speedups or more compared to standard algorithms.
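
To make the two-sided recursion concrete, the sketch below solves a simplified discrete-time Sylvester equation A X B^T + X = C with upper triangular A and B: the larger dimension is split, the trailing subproblem is solved first, the right-hand side is updated with GEMM-like products, and small leaf problems fall back to a dense Kronecker formulation. The equation form, the leaf threshold, and the leaf solver are assumptions made for this sketch, not the SLICOT implementation.

```python
# Recursive blocked sketch for A @ X @ B.T + X = C, A and B upper triangular.
import numpy as np

def two_sided_leaf(A, B, C):
    # Small problems: (B kron A + I) vec(X) = vec(C), column-major vec,
    # using the identity vec(A X B^T) = (B kron A) vec(X).
    m, n = C.shape
    K = np.kron(B, A) + np.eye(m * n)
    x = np.linalg.solve(K, C.flatten(order='F'))
    return x.reshape((m, n), order='F')

def two_sided_recursive(A, B, C, leaf=32):
    # Split the larger dimension, solve the trailing subproblem first, then
    # update the right-hand side with GEMM-like products and recurse.
    m, n = C.shape
    if max(m, n) <= leaf:
        return two_sided_leaf(A, B, C)
    if m >= n:
        k = m // 2
        A11, A12, A22 = A[:k, :k], A[:k, k:], A[k:, k:]
        X2 = two_sided_recursive(A22, B, C[k:, :], leaf)
        X1 = two_sided_recursive(A11, B, C[:k, :] - A12 @ (X2 @ B.T), leaf)
        return np.vstack([X1, X2])
    else:
        k = n // 2
        B11, B12, B22 = B[:k, :k], B[:k, k:], B[k:, k:]
        X2 = two_sided_recursive(A, B22, C[:, k:], leaf)
        X1 = two_sided_recursive(A, B11, C[:, :k] - (A @ X2) @ B12.T, leaf)
        return np.hstack([X1, X2])

# Quick residual check on random triangular data.
rng = np.random.default_rng(1)
m, n = 100, 80
A = np.triu(rng.standard_normal((m, m)))
B = np.triu(rng.standard_normal((n, n)))
C = rng.standard_normal((m, n))
X = two_sided_recursive(A, B, C)
print(np.linalg.norm(A @ X @ B.T + X - C))
```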

Isak Jonsson and Bo Kågström

SLICOT Working Note 2001-4: available since April 2001 and revised in August 2001.

Triangular matrix equations appear naturally in estimating the condition numbers of matrix equations and in different eigenspace computations, including block-diagonalization of matrices and matrix pairs and the computation of matrix functions. Solving a triangular matrix equation is also a major step in the classical Bartels-Stewart method. We present recursive blocked algorithms for solving one-sided triangular matrix equations, including the continuous-time Sylvester and Lyapunov equations and a generalized coupled Sylvester equation. The main parts of the computations are performed as level 3 general matrix multiply and add (GEMM) operations. Recursion leads to an automatic variable blocking that has the potential of matching the memory hierarchies of today's HPC systems. Different implementation issues are discussed, including when to terminate the recursion, the design of optimized superscalar kernels for efficiently solving leaf-node triangular matrix equations, and how parallelism is utilized in our implementations. Uniprocessor and SMP parallel performance results of our recursive blocked algorithms and of the corresponding routines in the state-of-the-art libraries LAPACK and SLICOT are presented. The performance improvements of our recursive algorithms are remarkable, including 10-fold speedups compared to standard algorithms.
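
A minimal sketch of the one-sided recursion, for the triangular continuous-time Sylvester equation A X + X B = C with strictly upper triangular A and B, is given below; SciPy's Bartels-Stewart solver stands in for the leaf problems, and the leaf threshold is an illustrative assumption. The 2x2 bumps of real quasi-triangular (Schur) forms, which the optimized superscalar kernels handle, are not treated in this sketch.

```python
# Recursive blocked sketch for A @ X + X @ B = C, A and B upper triangular.
import numpy as np
from scipy.linalg import solve_sylvester

def one_sided_recursive(A, B, C, leaf=32):
    m, n = C.shape
    if max(m, n) <= leaf:
        return solve_sylvester(A, B, C)  # leaf problem
    if m >= n:
        k = m // 2
        A11, A12, A22 = A[:k, :k], A[:k, k:], A[k:, k:]
        # Bottom block row first: A22 X2 + X2 B = C2.
        X2 = one_sided_recursive(A22, B, C[k:, :], leaf)
        # Then: A11 X1 + X1 B = C1 - A12 X2 (GEMM update).
        X1 = one_sided_recursive(A11, B, C[:k, :] - A12 @ X2, leaf)
        return np.vstack([X1, X2])
    else:
        k = n // 2
        B11, B12, B22 = B[:k, :k], B[:k, k:], B[k:, k:]
        # Left block column first: A X1 + X1 B11 = C1.
        X1 = one_sided_recursive(A, B11, C[:, :k], leaf)
        # Then: A X2 + X2 B22 = C2 - X1 B12 (GEMM update).
        X2 = one_sided_recursive(A, B22, C[:, k:] - X1 @ B12, leaf)
        return np.hstack([X1, X2])

# Quick residual check on random triangular data.
rng = np.random.default_rng(2)
m, n = 120, 90
A = np.triu(rng.standard_normal((m, m)))
B = np.triu(rng.standard_normal((n, n)))
C = rng.standard_normal((m, n))
X = one_sided_recursive(A, B, C)
print(np.linalg.norm(A @ X + X @ B - C))
```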