High-Performance Computer Algebra

Jeremy Johnson (jjohnson@cs.drexel.edu) and
Werner Krandick (krandick@cs.drexel.edu)

Aim and Scope
Improved algorithms, better implementations, and faster computers have enabled many previously time-consuming computer algebra computations to be performed routinely and have extended the range of what is practically possible to compute. However, there remain many computations that still require excessive computing time, and there are many cases where the performance achieved by an implementation could be dramatically improved.

Tuning algorithms to perform well on modern computer architectures can be a difficult and time-consuming process. Simply reducing the number of arithmetic operations is insufficient; it is also necessary to take into account the memory hierarchy, pipelining, and instruction-level parallelism. Compilers attempt to produce code that exploits these features; however, even for highly structured numeric computations such as matrix multiplication, compilers are far from achieving optimal performance.

The scientific computing community has devoted considerable effort to producing high-performance libraries of key kernel computations such as matrix multiplication, linear system solving, and the FFT, recently using automated techniques (see the Special Issue of the Proceedings of the IEEE on Program Generation, Optimization, and Platform Adaptation, Vol. 93, No. 2, Feb. 2005). Similar work has been initiated in a few cases by the computer algebra community, largely on fundamental arithmetic operations, with some work on matrix and polynomial computations; however, the effort is still at a primitive state compared to that of the numeric community. There are many challenges in achieving high performance in implementations of computer algebra algorithms, due to their irregular structure and higher-level data types. It is not enough to rely on optimizing compilers and on optimized implementations of basic arithmetic operations.

This session will focus on techniques for obtaining high-performance implementations of computer algebra algorithms. Potential topics include:
  1. Practical implementation of asymptotically fast algorithms
  2. Practical performance analysis of computer algebra algorithms
  3. Implementation and optimization techniques for computer algebra algorithms
  4. The application and extension of optimizing compiler techniques to the implementation of computer algebra algorithms
  5. Adapting the implementation of computer algebra algorithms to improve performance by better utilizing the underlying hardware
  6. Techniques for automating the optimization and platform adaptation of computer algebra algorithms

Call for Submissions:
This session is a last-minute addition to the ACA program; however,
submissions for presentations will still be considered. Please contact
either of the session organizers.

List of Talks
  1. Overview of High-Performance Computer Algebra
    by Jeremy Johnson
    (Drexel University, Philadelphia, PA, U.S.A.)

  2. Efficient Polynomial Computations from a Programmer's Perspective
    by Xin Li
    (ORCCA, University of Western Ontario, London, Canada)

  3. Efficient Implementation of Multivariate Polynomial Arithmetic Based on Fast Fourier Transform
    by Xin Li, Marc Moreno Maza, Eric Schost
    (ORCCA, University of Western Ontario, London, Canada)

  4. Using High-Performance Taylor Shift by 1 in Real Root Isolation
    by Anatole Ruslanov, Jeremy Johnson and Werner Krandick
    (Drexel University, Philadelphia, PA, U.S.A.)

  5. Towards High-Performance High-Level Arithmetic Code
    by Werner Krandick
    (Drexel University, Philadelphia, PA, U.S.A.)