General Session
- The Art of Symbolic Computation
by Erich Kaltofen ( North Carolina State University, USA )
Abstract (pdf)
Symbolic Computation in the past 40 years has brought us remarkable theory:
Berlekamp-Zassenhaus; Groebner; Risch; Gosper, Karr and WZ; cylindrical
algebraic decomposition; sparse polynomial interpolation; LLL; Wiedemann
and block Lanczos; matrix Pade; straight-line and black box polynomial
calculus; baby-steps/giant-steps and black-box linear algebra and polynomial
factorization; symbolic/numeric GCD, factorization and sparse interpolation;
Tellegen's principle; sparse resultants; Giesbrecht/Mulders-Storjohann
diophantine linear solvers; Sasaki/van Hoeij power sums and Bostan et al.
logarithmic derivatives; fast bit complexity for linear algebra over the
integers; essentially optimal polynomial matrix inversion;
skew, Ore and differential polynomial factorization; Barvinok short rational
functions and supersparse polynomial factorization; and many more.
The discipline has led to remarkable software like Mathematica and Maple,
which supply implementations of these algorithms to the masses. As it turned
out, a killer application of computer algebra is high energy physics, where
a special purpose computer algebra system, SCHOONSCHIP, helped in work worthy
of a Nobel Prize in physics in 1999.
In my talk I will attempt to describe what the discipline of Symbolic Computation
is and what problems it tackles. In particular, I will discuss the use
of heuristics (numerical, randomized, and algorithmic) that seem necessary
to solve some of today's problems in geometric modeling and equation solving.
Thus, we seem to have come full circle (the discipline may have started
in the 1960s at the MIT AI Lab), but with a twist that I shall explain.
- Counting Young Group Double Cosets with Computer Algebra
by Bill Pletsch ( Albuquerque Technical Vocational Institute, Albuquerque, New Mexico, USA )
Abstract (pdf)
Until the advent of computer algebra, the theory of double cosets had been
restricted to a few elegant but computationally impossible theorems: impossible
in the sense that the calculations can in principle be done, but would
take ten thousand years. Today, using Computer Algebra, much can be calculated
quickly. Using Macsyma and Maple in the special case of Young Group double
cosets, we will see just how valuable Computer Algebra can be.
We will focus on the use of CA in three areas of interest: the initial
calculations, the generation of novel double coset symbols, and the
generalization of the method of successive subtractions.
Without CA the new insights into the theory of double cosets would never
have been discovered. There was no particular reason to believe that the
long, arduous calculations required to compute each double coset number
would yield any fruit. Thanks to the power of MACSYMA these calculations
were simple to conduct. The result was a breakthrough that immediately
revealed a multitude of patterns.
In follow-up research a novel system of double coset symbols was developed.
The novel system uses a special canonical form that can be expressed in
Maple as a subroutine. Using this subroutine and other Maple subroutines,
lists of these symbols can be quickly generated.
In the initial calculations, successive subtractions of the data eventually
yielded a constant: thus uncovering a data fitting polynomial (in this
case a quartic). The initial calculations used successive subtractions
to point at the pattern, but the pattern itself is not a proof. However,
the method of successive subtractions also points to the next step. Using
the lists of DC-symbols and Maple once again the method of successive subtractions
is generalized by successive subtractions of sets. The patterns revealed
resulted in a series of proofs.
Time permitting, sketches of the proofs will be given, with special emphasis
on the roles of Macsyma and Maple. It is easy to see that in all these cases,
CA was a critical aid in speeding the requisite insight for the next leap in
theory.
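The method of successive subtractions is the classical finite-difference test: repeatedly subtracting adjacent terms of a sequence yields a constant after d steps exactly when the data are fit by a polynomial of degree d. A minimal Python sketch (using a synthetic quartic for illustration, not the actual double coset counts):

```python
def successive_differences(values):
    """Repeatedly subtract adjacent terms until the sequence is constant.

    The number of differencing steps equals the degree of a polynomial
    that fits the data exactly."""
    level = 0
    seq = list(values)
    while len(set(seq)) > 1:
        seq = [b - a for a, b in zip(seq, seq[1:])]
        level += 1
    return level

# Data following a quartic (here n^4 + 3n + 1, a made-up example):
data = [n**4 + 3*n + 1 for n in range(8)]
print(successive_differences(data))  # -> 4: the data are fit by a quartic
```

The same idea generalizes, as the abstract notes, to successive subtractions of sets rather than numbers.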
- Rational approximants of multivariate formal power series
by Christiane Hespel, Cyrille Martig ( IRISA-INSA, France )
Abstract (pdf)
Please see pdf.
- A Unified Formula for Arbitrary Order Symbolic Derivatives and Integrals
of The Power-Exponential Class
by Mhenni M. Benghorbal ( Simon Fraser University, Burnaby, Canada )
Abstract (pdf)
Please see pdf.
- First Order Algebraic Differential Equations -- A Computer Algebraic Approach
by Yujie Ma ( Chinese Academy of Sciences, Beijing, P. R. China )
Abstract (pdf)
In this talk, we present a computer algebraic approach to first order algebraic
differential equations. We apply the algebro-geometric approach to their study
and derive corresponding computer algebra algorithms. The algebro-geometric
approach is used to obtain a bound on the degree of the rational solutions of
a first order algebraic differential equation of algebraic genus greater than
one, as well as a bound on the number of its rational solutions. Matsuda's and
Eremenko's algorithms are made explicit by computer algebra. The algebraic
general solutions of first order algebraic differential equations are studied
by means of birational transformations of algebraic curves, and an algorithm
is presented that computes an algebraic general solution of a first order
algebraic differential equation without movable critical points, whenever such
a solution exists. We also present a polynomial algorithm for the uniform
solutions of first order algebraic differential equations with constant
coefficients. All of the algorithms are implemented in Maple.
- On Computing Network Prestige
by Wai-Ki CHING ( The University of Hong Kong, Hong Kong )
Abstract (pdf)
The computation of network prestige is an important issue in studying networks
such as the WWW, social networks and epidemic networks. A number of iterative
methods based on computing the dominant eigenvector have been proposed for
computing the prestige of large symmetric or asymmetric networks. The
PageRank algorithm has been successfully applied in the
computation of ranking of webpages. In this talk, we propose a revised
PageRank algorithm for the computation of prestige of a general network
and extend it to handle the case when there are negative relations among
the members in the network.
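As a baseline for the revised algorithm discussed in the talk, the standard PageRank power iteration can be sketched in a few lines of plain Python (this is the textbook method, not the authors' revision, and the damping value 0.85 is the conventional default):

```python
def pagerank(adj, damping=0.85, tol=1e-12):
    """Prestige scores by power iteration on the 'Google matrix' of a
    small network given as an adjacency list (adj[i] = nodes i links to)."""
    n = len(adj)
    rank = [1.0 / n] * n
    while True:
        new = [(1.0 - damping) / n] * n      # teleportation term
        for i, targets in enumerate(adj):
            if targets:
                share = damping * rank[i] / len(targets)
                for j in targets:
                    new[j] += share
            else:                            # dangling node: spread uniformly
                for j in range(n):
                    new[j] += damping * rank[i] / n
        if sum(abs(a - b) for a, b in zip(new, rank)) < tol:
            return new
        rank = new

# Node 3 is linked to by all the others; node 0 by nodes 1, 2 and 3.
scores = pagerank([[3], [0, 3], [0, 3], [0]])
print([round(s, 3) for s in scores])
```

The scores sum to one, and the heavily linked nodes 0 and 3 dominate the sparsely linked nodes 1 and 2.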
- Math Authoring on xfy
by Masaki Kume, Atsushi Miyamoto, Hiroshi Kai and Matu-Tarow Noda ( Ehime University, Japan )
Abstract (pdf)
Xfy is a framework provided by JUSTSYSTEM for authoring and editing compound
XML documents. It enables us to handle multiple XML vocabularies, such
as MathML, SVG, and so on, in one workspace. Furthermore, it has an extensible
architecture (plugins and VCD) to manage foreign XML vocabularies. In this
talk, we provide two types of plugins for authoring MathML presentation
markup. One is a MathML editor and the other is a 2D/3D plotter for viewing
algebraic functions. When MathML documents are edited with the editor, the
plots are updated in real time. These plugins and xfy can be applied to
interactive educational content.
Young Researchers Session
- An Interactive Algorithm Animation System for the Buchberger Algorithm
by Nakayama Hiromasa and Kuwahara Kosuke (Kobe Univ., Japan)
Abstract
Computer algebra systems give an answer for a given problem after a heavy
computation. However, it is not easy to modify the procedure of the computation
interactively. Since the performance of the Buchberger algorithm changes
depending on what strategy we choose, it is worthwhile to build a
system that performs the Buchberger algorithm interactively. We propose
a new interactive and animated interface for the Buchberger algorithm.
In addition, our system aims to be fun to use.
- Combinatorial Criteria for Groebner Basis
by John Perry (North Carolina State Univ., USA)
Abstract
Groebner bases are an important tool of computer algebra, with applications
to many fields. Their computation usually requires many reductions of S-polynomials.
These reductions are computationally expensive. Bruno Buchberger discovered
that sometimes we can skip the reduction of some S-polynomials. Since his
two criteria consider only the leading terms of the polynomials, we call
them combinatorial. A question arises: are there other combinatorial criteria
that allow us to skip S-polynomial reduction? We provide the necessary
and sufficient conditions on three leading terms (the usual number used
in implementation), revealing that additional combinatorial criteria exist.
We also investigate how often these criteria allow us to skip an S-polynomial
reduction.
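Buchberger's first criterion is easy to state on exponent vectors: if the leading monomials of two polynomials are coprime, their S-polynomial reduces to zero and the pair can be skipped. A small Python illustration (monomials as exponent vectors; the talk's new three-term criteria are not reproduced here):

```python
def lcm_exp(u, v):
    """Exponent vector of the lcm of two monomials."""
    return tuple(max(a, b) for a, b in zip(u, v))

def coprime_criterion(u, v):
    """Buchberger's first criterion: the leading monomials are coprime,
    i.e. their lcm equals their product, so the S-polynomial reduces to
    zero and its reduction can be skipped."""
    return lcm_exp(u, v) == tuple(a + b for a, b in zip(u, v))

# x*y^2 and z^3 (variables x, y, z) are coprime: skip the S-polynomial.
print(coprime_criterion((1, 2, 0), (0, 0, 3)))  # True
# x*y^2 and y*z share the variable y: the criterion does not apply.
print(coprime_criterion((1, 2, 0), (0, 1, 1)))  # False
```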
- Factorization of Multivariate Polynomials by Extended Hensel Construction
by Daiju Inaba (Univ. of Tsukuba, Japan)
Abstract
The extended Hensel construction is a Hensel construction at an unlucky
evaluation point for the generalized Hensel construction, and it allows
us to avoid shifting the origin in multivariate polynomial factorization.
We have implemented a multivariate factorization algorithm which is based
on the extended Hensel construction, by solving a leading coefficient problem
which is peculiar to our method. We describe the algorithm and present
some experimental results. Experiments show that the extended Hensel construction
is quite useful for factoring multivariate polynomials which cause large
expression swell by shifting the origin. We compare our algorithm with
a conventional orthodox algorithm and an enhanced version using Wang's technique. We
found that, for typical polynomials which cause large expression swells
by the origin shifting, our algorithm is about 50 - 1000 times faster than
the orthodox algorithm and about 20 - 200 times faster than the enhanced
version.
- Combinatorial structures and Diophantine equations for the construction
of self-dual codes
by Stelios Georgiou (Univ. of Athens, Greece)
Abstract
Self-dual codes are among the most important families of linear codes.
In this paper, we introduce and use several methods to construct self-dual
codes over $GF(p)$, for some primes $p$. In particular, we employ Hadamard
matrices, weighing matrices and orthogonal designs to construct some
high-distance codes. We generalize the results of these methods and produce
generalized orthogonal designs. These new designs have led to many new
self-dual codes with parameters that were previously unknown. Furthermore,
we present an idea on how Diophantine equations over $GF(p)$ can be applied
to a direct construction of self-dual codes. These equations, together with
circulant and block-circulant matrices, are combined to form algebraic
systems over $GF(p)$. The interesting property of these systems is that
any of their solutions produces a linear code with the desired properties.
- Algorithm for local cohomology classes attached to an isolated hypersurface singularity - toward the computation of standard bases -
by Takayuki Abe and Shinichi Tajima (Niigata Univ., Japan)
Abstract
In treating a hypersurface having singular points, we often perform a concrete
analysis by computing a standard basis of the Jacobian ideal to solve a
membership problem. When the defining polynomial has no parameters, we can
perform various calculations with the standard basis algorithms of Mora and of Lazard.
On the other hand, when the defining polynomial has parameters, we face
various difficulties in actual computation. In this talk, using Grothendieck
duality, we develop a new method for computing standard bases. The key
idea is to use an algorithm for local cohomology. The method developed
is quite effective for polynomials containing parameters.
- Algorithms for partial fraction decomposition via differential equations
by Takumu Shoji and Shinichi Tajima (Niigata Univ., Japan)
Abstract
Using differential operators, we develop new algorithms for (complete)
partial fraction decomposition of rational functions. The new algorithms
are fast because they work in certain quotient fields, and they are quite
efficient and powerful, in particular when the rational function contains
factors of high multiplicity in its denominator. The high efficiency of the
algorithms is demonstrated by computer experiments.
- Hamiltonian Normal Form Computations Through Invariant Theory
by Guillem Huguet and Jesus F. Palacian (Univ. of Navarra, Spain)
Abstract
We classify all possible Williamson normal forms related to a quadratic
polynomial Hamiltonian of $n$ degrees of freedom, with $n$ arbitrary. Then,
given a semisimple part of the quadratic Hamiltonian,
we compute a fundamental set of invariants as well as a basis of its linearly
independent invariants for a given degree. We consider Hamiltonian functions
of the form
\[ {\cal H}(x) = {\cal H}_0(x) + {\cal H}_1(x) + \cdots, \]
where $x$ is a 2$n$-dimensional vector in the coordinates $x_1,\dots,x_n$
and respective momenta $X_1,\dots,X_n$. Each ${\cal H}_i$ is a homogeneous
polynomial in $x$ of degree $i+2$. We present a combinatorial method to
generate all possible normal forms corresponding to any ${\cal H}_0$ with
$n$ arbitrary. This classification is based on the type and number of indecomposable
eigenspaces of the matrix $A$ associated with the linear differential system
derived from ${\cal H}_0$, and the number of normal forms is closely related
to the number of partitions of the dimension of the system.
- Families of Factorizations of Linear Partial Differential Operators
by Ekaterina Shemyakova (RISC, Austria)
Abstract
The problem of factorization of linear partial differential operators is
well known due to its importance for solving partial differential equations.
It is shown that whenever there is a factorization of a given linear partial
differential operator (into an arbitrary number of factors of arbitrary
orders), there is a family of factorizations of the same operator (explicit
formulas for the coefficients are found). Hence, when looking for a factorization,
one may assume without loss of generality that some of the coefficients
are one and, therefore, simplify the problem.
Approximate Algebraic Computation
- The Nearest Multivariate System with Given Root Structure
by Scott Pope (North Carolina State University, USA)
Abstract
Let $f_1, \ldots, f_s$ be polynomials in $x_1,\ldots, x_n$ with finitely
many common roots. Assume that $f_1,\ldots, f_s$ have roots with multiplicities,
which can be described by the vanishing of certain derivatives of $f_1,\ldots,
f_s$ at the roots. Given the root multiplicities, we would like to recover
the roots of the system. However, even small perturbations of the coefficients
can completely destroy the above root structures. This is the reason that
in numerical computations handling the above systems is a major challenge:
convergence to the solution is slow and the output is unreliable, or no
output is returned. We propose an iterative method, which for a given (perturbed)
system $F_1,\ldots,F_s$ and given root structure, computes the nearest
system $f_1,\ldots, f_s$ which has roots with the given structure. The
method also computes the common roots of $f_1,\ldots, f_s$ simultaneously.
Similar results were only known previously in the univariate case \cite{Lihong,
Zeng, Kaltofen-Hitz}, and our result is a generalization of them to the
multivariate case.
- On the Location of Zeros of a Complex Interval Polynomial
by Hiroshi Sekigawa and Kiyoshi Shirayanagi (NTT, Japan)
Abstract
Given a univariate complex interval polynomial F, we provide a rigorous method
for deciding whether or not there exists a polynomial in F that has a zero
in a prescribed closed complex domain D. We use circular intervals and
assume that the boundary C of D is a simple curve and that C is a union
of a finite number of arcs, each of which is represented by a rational
function. When D is not bounded, we assume further that all of the polynomials
in F have the same degree. Examples of such domains are the outside of
an open circle and a half-plane with boundary. The decision method uses
the representation of C and the property that a polynomial in F is of degree
one with respect to each coefficient regarded as a variable.
- Polynomial root-finding with matrix eigen-solving
by Victor Pan (City Univ. of New York, USA)
Abstract
Numerical matrix methods are increasingly popular for polynomial root-finding.
This approach usually amounts to the application of the QR algorithm to
the highly structured Frobenius companion matrix of the input polynomial.
The structure, however, is routinely destroyed already in the first iteration
steps. To accelerate this approach, we exploit the matrix structure of
the Frobenius and generalized companion matrices, employ various known
and novel techniques for eigen-solving and polynomial root-finding, and
in addition to the Frobenius input allow other highly structured generalized
companion matrices. Employing polynomial root-finders for eigen-solving
is a harder task because of the potential numerical stability problems,
but we found some new promising directions, particularly for sparse and/or
structured input matrices.
- An Algebraic Method for Separating Close-root Clusters and the Minimum
Root Separation
by Tateaki Sasaki (Univ. of Tsukuba, Japan)
and Fujio Kako (Nara Women's Univ., Japan)
Abstract
Given a univariate polynomial over {\bf C}, we discuss two topics, an algebraic
method for separating a factor of mutually close roots from the polynomial,
and a reasonable formula for the minimum root separation, by assuming that
the close roots form well-separated clusters. The technique we use is very
original and effective; we move the origin near the center of a close-root
cluster; then we are able to treat the other roots collectively, reducing
the problem to a very simple one. Following this idea, we present a very
simple and stable algebraic method for separating the close-root cluster,
derive two lower-bound formulas for the distance between two close roots,
and obtain a fairly simple lower bound of the minimum root separation of
polynomials over {\bf C}.
- A fast rank-revealing method for Sylvester matrix
by Bingyu Li, Zhuojun Liu and Lihong Zhi (AMSS, China)
Abstract
We propose a fast algorithm for computing the numerical rank of a Sylvester
matrix and apply it to compute approximate GCDs of univariate polynomials
with floating-point coefficients. This fast rank-revealing method is based
on a stabilized version of the generalized Schur algorithm in [1]. All
computations can be done in $O(n^2)$ operations, improving on the cost of
the method in [2], where $n$ is the sum of the degrees of the polynomials.
- On the Approximate GCD in Initial Value Problems
by Stephen Watt (Univ. of Western Ontario, Canada)
Abstract
The computation of approximate greatest common divisors of polynomials
has been considered by a number of authors under various different assumptions.
Approximate GCDs have a number of applications, and have been used as an
approach to ill-conditioned algebraic equations [Noda and Sasaki, J Comp
and App Math 1991]. This paper examines the application of the approximate
GCD to initial value problems. We consider initial value problems of the
form $ \left [\sum_{i=0}^n a_i D^i \right] y(t) = f(t) $ where $a_i$ and
$y^{(i)}(0)$ are given constants. Under appropriate conditions, equations
such as this may be solved by integral transform methods. From the Laplace
transform of the initial value problem, we see that ${\mathcal L}[y(t)](s)$
is a rational function whenever ${\mathcal L}[f(t)](s)$ is a rational function.
Depending on $a_i$ and $y^{(i)}(0)$, there may be a non-trivial approximate
GCD between the numerator and denominator of ${\mathcal L}[y(t)](s)$. This
paper examines the consequences of this fact and shows how the approximate
GCD may be used to remove spurious singularities.
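The flavor of the computation can be conveyed by a toy tolerance-based Euclidean algorithm in Python (a naive sketch only: the function names and the eps threshold are our own, and robust approximate-GCD methods are considerably more careful than plain Euclidean division):

```python
def poly_divmod(num, den):
    """Quotient and remainder of polynomials given as coefficient lists,
    highest-degree coefficient first."""
    if len(num) < len(den):
        return [], list(num)
    num = list(num)
    q = [0.0] * (len(num) - len(den) + 1)
    for i in range(len(q)):
        c = num[i] / den[0]
        q[i] = c
        for j, d in enumerate(den):
            num[i + j] -= c * d
    return q, num[len(q):]

def approx_gcd(f, g, eps=1e-6):
    """Euclidean algorithm that treats remainders whose coefficients all
    fall below eps as zero; returns a monic approximate GCD."""
    def strip(p):
        i = 0
        while i < len(p) and abs(p[i]) <= eps:
            i += 1
        return p[i:]
    f, g = strip(list(f)), strip(list(g))
    while g:
        _, r = poly_divmod(f, g)
        f, g = g, strip(r)
    return [c / f[0] for c in f]   # make the result monic

# (x-1)(x-2) and (x-1.000000001)(x+5) share the factor x-1 approximately:
print(approx_gcd([1.0, -3.0, 2.0], [1.0, 3.999999999, -5.000000005]))
```

The returned polynomial is close to x - 1, even though the exact GCD of the perturbed pair is trivial.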
- An implementation issue on SNAP and significant digits
by Kosaku Nagasaka (Kobe Univ., Japan)
Abstract
In the approximate factorization and the approximate GCD of numerical or
empirical polynomials, we handle inexact coefficients. The algorithms constructed
so far work well on several CASs. From the practical point of view, bounding
errors by norms is not as effective as using floating-point numbers, since
we have a few kinds of significance arithmetic for floating-point numbers,
Mathematica's or Kako and Sasaki's for example. In this talk, for numerical
or empirical polynomials with coefficients with significant digits, absolute
irreducibility testing for bivariate polynomials and relative primality
testing for univariate polynomials are formulated without using norms.
Basically, our formulation is not so different from the original formulation,
but it will be more natural for those who use significance arithmetic.
We demonstrate the difference between our formulation and the conventional
one based on norms.
Computational Algebraic Structures and Engineering Applications
- Computational Approach to Nonlocal BVP by Multivariate Operational Calculus
by Ivan Dimovski and Margarita Spiridonova
Abstract
A large class of linear nonlocal Boundary Value Problems (BVP) for the
classical equations of mathematical physics in finite domains is considered.
It is assumed that a part of the boundary-value conditions are local and
the others are nonlocal. Using multivariate operational calculi, an
algebraization of each problem is made. Thus an explicit Duhamel-type
representation of the
solution is obtained, using one special solution satisfying simple boundary-value
conditions. The general solution is obtained as a multivariate convolution,
which could be used successfully for numerical computation of the solution.
The algorithms are implemented using the computer algebra system Mathematica.
- Hardware Implementation of Geometric Algebra Processor Core
by Biswajit Mishra and Peter Wilson
Abstract
The widespread use of Computer Graphics and Computer Vision applications
has led to a plethora of hardware implementations that are usually expressed
using linear algebraic methods. There are two drawbacks with this approach
that are posing fundamental challenges to engineers developing hardware
and software applications in this area. The first is the complexity and
size of the hardware blocks required to practically realise such applications
particularly multiplication, addition and accumulation operations. Whether
the platform is Field Programmable Gate Arrays (FPGA) or Application Specific
Integrated Circuits (ASICs), in both cases there are significant issues
in efficiently implementing complex geometric functions using standard
mathematical techniques, particularly in floating point arithmetic. The
second major issue is the complexity required for the effective solution
of complex multi-dimensional problems either for scientific computation
or for advanced graphical applications. Conventional algebraic techniques
do not scale well in hardware terms to more than 3 dimensional problems,
so a new approach is desirable to handle these situations. In this paper
we describe a scalable n-dimensional geometric algebra processor core architecture
realisable using an FPGA or ASIC platform. The designer can easily specify
the floating point resolution, the order of the computation and also configure
the trade-offs between IC area and speed. The VHDL has been synthesised
and optimised for an FPGA platform and is re-targeted to an ASIC platform.
A sub-pipelined approach has been used to reduce the area requirements
and keep the throughput at a rate useful for practical real time applications.
- Generalized Stewart Platforms and their Direct Kinematics
by Xiao-Shan Gao
Abstract
In this talk, we introduce the generalized Stewart platform (GSP) consisting
of two rigid bodies connected with six distance and/or angular constraints
between six pairs of points, lines and/or planes in the base and the moving
platform, respectively. GSPs can be considered the most general form
of parallel manipulators in a certain sense. We show that there exist 16
forms of 2D GSPs and 3850 possible forms of 3D GSPs. With the help of computer
algebra techniques, we give the upper bounds for the number of solutions
of the direct kinematics for all the 3D GSPs. We also obtain closed-form
solutions and the best upper bounds of real solutions of the direct kinematics
for a class of 1120 GSPs and all 2D GSPs.
- On Symbolic Geometric Computation with Conformal Geometric Algebra
by Hongbo Li
Abstract
The coordinate approach to symbolic geometric computation is limited by
tremendous difficulties coming from middle expression swell. The
invariant-theoretic method is applicable to projective geometry and affine
geometry, but is far from being well established, in that basic computing
problems like representation, expansion, contraction and factorization are
either open or overlooked, and reducing the middle expression swell is not
taken care of. Symbolic computation
in Euclidean geometry with Geometric Algebra gives rise to more problems
which are much more difficult to solve.
In this talk, we show that contrary to the common observation that invariant
algebra introduces succinct algebraic description of geometric problems
with the cost of significant complication in algebraic manipulation, by
means of new invariant frameworks, new guidelines and new techniques for
invariant computing, we can achieve amazing simplification in algebraic
manipulation.
Our work in this direction includes a new invariant framework called Conformal
Geometric Algebra, which is a hierarchy of geometric covariants and invariants
induced by the Grassmann product and Clifford product. It is currently
being used in computer vision, graphics and engineering in many research
institutions around the world, and plays the central role in our Euclidean
geometric computing. We further propose a new guideline for invariant computing
to control the size of the middle expression at every minimum step, called
BREEFS (Bracket-oriented Representation, Elimination, Expansion for Factored
and Shortest result). We establish a series of new powerful techniques for
invariant computing: expansion, contraction, factorization and geometric
transformation.
The new framework, guideline and techniques, when applied to symbolic geometric
computation, can lead to significant improvement in computational efficiency,
and thus lead to the solving of some hard geometric computing problems
which defy any efforts of using either coordinates or classical invariant
methods.
- Construction and Recognition of G-graphs
by Alain Bretto, Luc Gillibert and Bernard Laget
Abstract
An important part of computer science is focused on the links that
can be established between group theory and graph theory. Cayley graphs
establish such a link but have some limitations. This paper introduces
a new type of graph associated to a group: the G-graphs. We present an algorithm
that efficiently constructs these new graphs. Then we present the G-graph
recognition problem and we exhibit a new algorithm, based on the exploration
of the SmallGroups library from GAP [3], for solving this problem. With
this algorithm we establish a library of the most common G-graphs and we
show that many graphs, such as the generalized Petersen graphs P8,3, P4,1 and P12,5, are G-graphs. More details and properties of the G-graphs are given
in [1, 2].
References
[1] A. Bretto and L. Gillibert. Graphical and computational representation
of groups, LNCS 3039, Springer-Verlag pp 343-350. Proceedings of ICCS'2004.
[2] A. Bretto and L. Gillibert. Symmetric and Semisymmetric Graphs Construction
Using G-graphs. Accepted for the International Symposium on Symbolic and
Algebraic Computation (ISSAC 2005).
[3] The GAP Team, (06 May 2002), GAP - Reference Manual, Release 4.3. http://www.gapsystem.org
Computer Algebra and Coding Theory
- Littlewood Polynomials with High-Order Zeros
by Daniel Berend and Shahar Golan (Ben-Gurion University of the Negev,
Beer-Sheva, Israel)
Abstract
Let $S(N) = \{-1, 1\}^N$. A word $(a_0, a_1, \ldots, a_{N-1})$ in $S(N)$ is
an $m$-th order spectral-null word if the polynomial $a_0 + a_1x + a_2x^2
+ \cdots + a_{N-1}x^{N-1}$ is divisible by $(x-1)^m$. Denote by $S(N,m)$ the
set of all $m$-th order spectral-null words of length $N$. Any subset of
$S(N,m)$ is an $m$-th order spectral-null code (SNC) of length $N$. High-order
SNCs can be used for error correcting and detecting purposes. In particular,
the minimum Hamming distance of an $m$-th order SNC is at least $2m$. These
codes improve the reliability of digital communication over noisy partial
response channels. Let $N^{*}(m)$ be the minimal length of a polynomial
with $\pm 1$ coefficients divisible by $(x-1)^m$. In previous works, the
value of $N^{*}(m)$ was found for every $m < 8$. Here we prove that
$N^{*}(8) = 144$. Similarly, let $m^{*}(N)$ be the maximal power of $(x-1)$
dividing some polynomial of degree $N-1$ with $\pm 1$ coefficients. We
extend the range of $N$'s with known $m^{*}(N)$ from $N < 88$ to $N
< 168$.
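The spectral-null order of a given word is easy to test by repeated synthetic division by (x - 1), as in this Python sketch (a direct check only; determining N*(m) requires searching the huge space S(N) far more cleverly than brute force with this test):

```python
def spectral_null_order(word):
    """Largest m such that (x-1)^m divides a_0 + a_1 x + ... + a_{N-1} x^{N-1}.

    (x-1) divides the polynomial iff it vanishes at x = 1, i.e. the
    coefficients sum to zero; the quotient comes from synthetic division."""
    coeffs = list(word)              # lowest-degree coefficient first
    m = 0
    while len(coeffs) > 1 and sum(coeffs) == 0:
        partial = []
        acc = 0
        for c in coeffs[::-1]:       # synthetic division, highest degree first
            acc += c
            partial.append(acc)
        coeffs = partial[:-1][::-1]  # drop the (zero) remainder p(1)
        m += 1
    return m

# 1 - x - x^2 + x^3 = (x-1)^2 (x+1): a 2nd order spectral-null word.
print(spectral_null_order((1, -1, -1, 1)))  # -> 2
```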
- Groebner Bases Combinatorics for Binary Codes
by M. Borges-Quintana, M.A. Borges-Trenard ( Universidad de Oriente, Cuba)
and E. Martinez-Moro (Universidad de Valladolid, Spain )
Abstract
We show how many combinatorial properties of binary codes, such as finding
the codewords of minimal weight and the decomposition of all the codewords,
can be studied with the help of a Groebner basis of the monomial ideal
associated to the code. Moreover, there is a connection between cycles in
graphs and binary codes: the set of cycles in a graph is a binary linear
code, a certain vector space associated with the structure of the graph.
With the help of the Groebner basis we obtain all the minimal cycles of a
graph according to their lengths (the length of a cycle is the number of
edges).
- Hadamard Matrices and Self-Dual Codes
by I. S. Kotsireas and C. Koukouvinos
Abstract
1 Introduction
We propose an algebraic approach to the construction of Hadamard matrices
and self-dual codes. Hadamard matrices with specific structure (one circulant
core, two circulant cores [2], Williamson array with four and eight matrices,
orthogonal designs, Goethals-Seidel array) have been constructed with computational
algebra methods. The resulting sets of Hadamard matrices have been used
to establish new lower bounds for the numbers of inequivalent Hadamard
matrices in many orders. The doubling construction, in conjunction with
the use of the symmetric group, yields astronomically large new lower bounds
for Hadamard matrices of order $8t$. The Magma Databases for Hadamard and
skew-Hadamard matrices, contain matrices that were constructed in this
manner. The purpose of this paper is to present an application in Coding
Theory, based on a theorem of Tonchev.
2 Basic Coding Theory definitions and notations
Let $F = GF(2)$. A linear $[n, k]$ code $C$ over $F$ is a $k$-dimensional
vector subspace of $F^n$. The elements of $C$ are called codewords and
the weight $wt(x)$ of a codeword $x$ is the number of non-zero coordinates
in $x$. The minimum weight of $C$ is defined as $\min\{wt(x) \mid 0 \neq x
\in C\}$. An $[n, k, d]$ code is an $[n, k]$ code with minimum weight $d$.
A matrix whose rows generate the code $C$ is called a generator matrix
of $C$. The dual code $C^\perp$ of $C$ is defined as $C^\perp = \{x \in
F^n \mid x \cdot y = 0 \text{ for all } y \in C\}$. If $C \subset C^\perp$, $C$ is called
a self-orthogonal code. $C$ is called self-dual if $C = C^\perp$. Furthermore,
$C$ is called doubly-even if the weights of all codewords of $C$ are a
multiple of four. A self-dual code is called singly-even if there exists
at least one codeword whose weight is $\equiv 2 \pmod 4$.
A self-dual code is extremal if it has the largest possible minimum weight for the given length. For every doubly-even $[n, n/2, d]$ self-dual code, $n$ is a multiple of $8$ and $d \leq 4\lfloor n/24 \rfloor + 4$.
Conway and Sloane [1] give a list of the possible weight enumerators of binary extremal selfdual codes and details on the largest possible minimum weight for each length. The existence of some extremal self-dual codes is an open question in [1].
See [3] for more on self-dual codes.
3 A theorem of Tonchev
By $J_n$ we denote the $n\times n$ matrix with all its entries equal to $1$ while $I_n$ denotes the identity
matrix of order $n$.
Theorem 1 Let $H$ be a Hadamard matrix of order $n = 8t + 4$ such that the number
of +1's in each row and column is congruent to $3 \pmod{4}$. Then the following
matrix
$$(I_n, (H + J_n)/2),$$
generates a binary self-dual doubly-even code $C$ of length $2n$. The minimum
distance of $C$ is at least $8$ if and only if each row and column of $H$
contains at least seven +1's.
The above theorem of Tonchev [4] gives a simple criterion for extremality of codes arising from Hadamard matrices of orders 4, 12 and 20. Starting from a particular Hadamard matrix, one can transform it into many different (but equivalent) matrices by multiplying rows and columns by $-1$ so that all rows and columns contain a number of +1's congruent to $3 \pmod{4}$. A computer search showed that at least 79 inequivalent extremal doubly-even [40, 20, 8] codes arise from the three Hadamard matrices of order 20 [4].
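Tonchev's construction can be checked computationally in the smallest case $n = 4$. The sketch below (Python with numpy; the matrix $H = J - 2I$ is our own choice of a normalized order-4 Hadamard matrix, not one taken from [4]) builds the generator matrix $(I_n, (H + J_n)/2)$ and verifies that the resulting length-8 code is self-dual and doubly-even.

```python
import numpy as np
from itertools import product

n = 4
J, I = np.ones((n, n), dtype=int), np.eye(n, dtype=int)
H = J - 2*I                     # order-4 Hadamard matrix; every row/column has three +1's
assert (H @ H.T == n*I).all()   # Hadamard property

G = np.hstack([I, (H + J) // 2]) % 2        # generator matrix (I_n, (H+J_n)/2) over GF(2)
assert ((G @ G.T) % 2 == 0).all()           # rows pairwise orthogonal: C is self-orthogonal

codewords = [np.dot(c, G) % 2 for c in product((0, 1), repeat=n)]
weights = [int(w.sum()) for w in codewords]
assert all(w % 4 == 0 for w in weights)     # doubly-even
print(min(w for w in weights if w > 0))     # minimum weight -> 4
```

Since each row of this $H$ has only three +1's (fewer than seven), the theorem correctly predicts a minimum distance below 8; the code obtained is the [8, 4, 4] extended Hamming code.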
References
[1] J.H. Conway and N.J.A. Sloane, A new upper bound on the minimal distance of self-dual
codes, IEEE Trans. Inform. Theory, 36 (1990), pp. 1319-1333.
[2] I. S. Kotsireas, C. Koukouvinos and J. Seberry, Hadamard ideals and Hadamard matrices
with two circulant cores, Europ. J. Combin., (to appear).
[3] E.M. Rains and N.J.A. Sloane, Self-dual codes, in Handbook of Coding Theory, V. Pless
and W.C. Huffman, (Eds.), Amsterdam, Elsevier, 1998, pp. 177-294.
[4] V.D. Tonchev, Self-orthogonal designs and extremal doubly-even codes, J. Combin. Theory
Ser. A, 52 (1989), pp. 197-205.
- Frobenius Numbers and the Postage Stamp Problem
by Daniel Lichtblau (Wolfram Research)
Abstract
Given a set $A = \{a_1, \ldots, a_n\}$ of positive integers with gcd $1$, it is
not hard to show that all "sufficiently large" integers can be
represented as a nonnegative integer combination of elements of A. The
Frobenius number of the set is defined as the largest integer not so representable.
The Frobenius instance problem (also called the money changing or postage
stamp problem) is to determine, given a positive integer $M$, a set of nonnegative
integers $X = \{x_1, \ldots, x_n\}$ such that $X \cdot A = M$, or else show that no such
set exists. We will briefly recall how this can be addressed via toric
Groebner bases.
It is known that the Frobenius number problem is NP-hard in general. For dimension 2 it is trivial (Sylvester solved it two decades before Frobenius publicized the problem). In dimension 3 a very efficient method was found independently by Greenberg and Davison. For higher dimensions some quite effective methods are known for the case where one element of A is not too large (say, less than $10^7$).
Recent work has given rise to methods that are effective when the above restrictions do not hold, although the dimension must be bounded by 10 or so. It turns out that there is a way to recast this work using toric Groebner bases, wherein the "fundamental domain" for the set A is given by the staircase of the basis with respect to a particular ordering. It is reasonably efficient in dimension 4 or 5, when the elements in the set are as large as $10^{11}$ or so. We will illustrate this.
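For intuition, both the Frobenius number and the instance problem can be solved by brute-force dynamic programming when the numbers are tiny. The sketch below (plain Python, our own illustration, not the toric Groebner basis method of the talk) uses the classical example $A = \{6, 9, 20\}$ with Frobenius number 43.

```python
def representable(A, bound):
    """reach[m] is True iff m is a nonnegative integer combination of A."""
    reach = [False] * (bound + 1)
    reach[0] = True
    for m in range(1, bound + 1):
        reach[m] = any(m >= a and reach[m - a] for a in A)
    return reach

def frobenius_number(A, bound=1000):
    # valid as long as bound exceeds the true Frobenius number
    return max(m for m, ok in enumerate(representable(A, bound)) if not ok)

def frobenius_instance(A, M):
    """Return x with sum(x[i]*A[i]) == M, or None if no representation exists."""
    best = {0: [0] * len(A)}
    for m in range(1, M + 1):
        for i, a in enumerate(A):
            if m >= a and (m - a) in best:
                x = best[m - a][:]
                x[i] += 1
                best[m] = x
                break
    return best.get(M)

print(frobenius_number([6, 9, 20]))        # -> 43
print(frobenius_instance([6, 9, 20], 43))  # -> None (43 is not representable)
```

The dynamic program is exponential in the bit size of the input, which is exactly why the structured methods discussed in the talk are needed for large instances.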
Computer Algebra in Quantum Information and Computation
- An algorithm for decomposing unitary matrices using Cartan decomposition
by Yumi Nakajima, Yasuhito Kawano, and Hiroshi Sekigawa
(NTT Communication Science Laboratories, JAPAN)
Abstract
We provide an algorithm for decomposing an arbitrary unitary matrix into
a sequence of elementary operations, such as single-qubit rotations and
CNOT gates. The algorithm is based on Cartan decomposition in Lie algebra
theory. We also present a canonical decomposition of the quantum Fourier
transform (QFT) by our algorithm and show that our algorithm decomposes
the QFT into $O(n^2)$ elementary gates, which is similar to the well-known
QFT circuit.
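The $O(n^2)$ gate count can be sanity-checked against the textbook QFT circuit. The sketch below (Python with numpy, our own reconstruction of the standard circuit, not the authors' Cartan-based algorithm) builds the $n$-qubit QFT from $n$ Hadamards, $n(n-1)/2$ controlled phase rotations and $\lfloor n/2 \rfloor$ swaps, and compares the result with the DFT matrix.

```python
import numpy as np

def op_on(U, q, n):
    """Single-qubit gate U acting on qubit q (qubit 0 = most significant bit)."""
    M = np.eye(1)
    for i in range(n):
        M = np.kron(M, U if i == q else np.eye(2))
    return M

def cphase(c, t, phi, n):
    """Controlled phase: multiply the amplitude by e^{i*phi} when bits c and t are both 1."""
    d = np.ones(2**n, dtype=complex)
    for x in range(2**n):
        if (x >> (n - 1 - c)) & 1 and (x >> (n - 1 - t)) & 1:
            d[x] = np.exp(1j * phi)
    return np.diag(d)

def swap(i, j, n):
    """Permutation matrix exchanging qubits i and j."""
    P = np.zeros((2**n, 2**n))
    for x in range(2**n):
        bi, bj = (x >> (n - 1 - i)) & 1, (x >> (n - 1 - j)) & 1
        y = x & ~(1 << (n - 1 - i)) & ~(1 << (n - 1 - j))
        P[y | (bi << (n - 1 - j)) | (bj << (n - 1 - i)), x] = 1
    return P

def qft_circuit(n):
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    M = np.eye(2**n, dtype=complex)
    for j in range(n):                       # Hadamard, then controlled rotations
        M = op_on(H, j, n) @ M
        for k in range(j + 1, n):
            M = cphase(k, j, 2*np.pi / 2**(k - j + 1), n) @ M
    for i in range(n // 2):                  # final bit-reversal swaps
        M = swap(i, n - 1 - i, n) @ M
    return M

n, N = 3, 8
F = np.array([[np.exp(2j*np.pi*x*y/N) for x in range(N)] for y in range(N)]) / np.sqrt(N)
print(np.allclose(qft_circuit(n), F))        # -> True
```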
- Computer Algebra-Aided Tool for Simulation of Quantum Circuits
by Vladimir P. Gerdt and Vasily Severyanov
(Laboratory of Information Technologies, Joint Institute for Nuclear Research,
Russia)
Abstract
In [1] it was shown that elements of the unitary matrix determined by a
quantum circuit can be computed by counting the number of common roots
in $Z_2$ for a certain set of multivariate polynomials over $Z_2$. Given
a quantum circuit, the polynomial set is uniquely constructed. In this
talk we present a C\# package called QP (\underline{Q}uantum \underline{P}olynomials)
that allows a user to assemble a quantum circuit and to generate the multivariate
polynomial set associated with the circuit.
The generated polynomial system can further be converted into the canonical
Gr\"{o}bner basis form for the lexicographical monomial order. Gr\"{o}bner
bases form the most universal algorithmic tool of modern computer algebra
to investigate and solve systems of polynomial equations [2]. Construction
of the lexicographical Gr\"{o}bner basis substantially alleviates
the problem of root finding for polynomial systems. To construct such
a Gr\"{o}bner basis one can use efficient involutive algorithms developed
in [3]. Our QP package together with a Gr\"{o}bner basis software
provides a tool for simulation of quantum circuits. We illustrate this
tool with an example from [1].
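The core subproblem, counting the common roots in $Z_2$ of a set of polynomials over $Z_2$, can be illustrated by naive enumeration, which is exponential in the number of variables and is exactly why Gröbner basis methods matter. The toy system below is our own example, not the circuit encoding of [1].

```python
from itertools import product

def count_common_roots(polys, nvars):
    """Count the assignments in Z_2^nvars on which every polynomial vanishes.

    Each polynomial is given as a Python function of a bit tuple,
    evaluated modulo 2."""
    return sum(
        all(p(x) % 2 == 0 for p in polys)
        for x in product((0, 1), repeat=nvars)
    )

# toy system: x1 + x2 = 0 and x1*x2 + x3 = 0 over Z_2
polys = [lambda x: x[0] + x[1], lambda x: x[0]*x[1] + x[2]]
print(count_common_roots(polys, 3))   # -> 2, namely (0,0,0) and (1,1,1)
```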
[1] Christopher M. Dawson et al. Quantum computing and polynomial
equations over the finite field $Z_2$. {\em arXiv:quant-ph/0408129}, 2004.
[2] B. Buchberger and F. Winkler (eds.) {\em Gr\"{o}bner Bases and Applications}.
Cambridge University Press, 1998.
[3] V.P. Gerdt. Involutive Algorithms for Computing Gr\"{o}bner Bases.
{\em Proceedings of the NATO Advanced Research Workshop "Computational
commutative and non-commutative algebraic geometry"} (Chishinau, June 6-11, 2004), IOS Press, 2005.
- On generalized quantum Turing machine and its application
by Satoshi Iriyama and Masanori Ohya (Tokyo University of Science, Japan)
Abstract
Ohya and Volovich have proposed a new quantum computation model with chaotic
amplification to solve the SAT problem, which goes beyond the usual quantum
algorithms. In this talk, we generalize the quantum Turing machine by rewriting
the usual quantum Turing machine in terms of channel transformations. Moreover,
we define some computational classes of generalized quantum Turing machines
and show that the Ohya-Volovich (OV) SAT algorithm can be treated in this framework.
- Semidefinite Programming for the Equivalence of Finite Automata and Quantum Circuits
by David Avis(McGill Univ., Canada), Takeshi Koshiba(Saitama Univ., Japan),
Kazuo Iwama and Rudy Raymond(Kyoto Univ./ERATO Quantum Computation and
Information Project, Japan)
Abstract
Two deterministic finite automata (DFA) are equivalent if they accept exactly
the same language. It is well known that there exists an efficient algorithm
to check the equivalence of DFAs, which furthermore enables the minimization
of a DFA's states. With respect to probabilistic finite automata (PFA), two
probabilistic (quantum) automata are exactly (approximately) equivalent
if the two automata accept every string with exactly (approximately) the
same probability. Tzeng showed a polynomial-time algorithm for checking
their equivalence. His results have been extended by Koshiba for checking
the exact equivalence of two quantum finite automata (QFA). In this draft,
we show the Semidefinite Programming (SDP) formulation for computing the
difference in accepting probability between two finite automata on input
of length at most $n$. As a consequence, we can check the equivalence of
two FAs in polynomial time, thus unifying and simplifying Tzeng and Koshiba's
results. Next, we show how to extend our techniques for checking the existence
of quantum circuits whose output is a given quantum state and to reduce
their computational complexity by considering its Second-Order Conic Programming
(SOCP) formulation.
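The classical DFA case mentioned at the start can be made concrete: two complete DFAs are equivalent iff no reachable state pair of the product automaton disagrees on acceptance. A minimal sketch in Python (our own illustration; the automaton examples are invented):

```python
from collections import deque

def dfa_equivalent(dfa1, dfa2, alphabet):
    """Each DFA is (transition dict {(state, symbol): state}, start, accepting set).
    BFS over the reachable pairs of the product automaton."""
    start = (dfa1[1], dfa2[1])
    seen, queue = {start}, deque([start])
    while queue:
        p, q = queue.popleft()
        if (p in dfa1[2]) != (q in dfa2[2]):
            return False                    # a reachable pair disagrees
        for s in alphabet:
            nxt = (dfa1[0][(p, s)], dfa2[0][(q, s)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# A: parity of 'a's, two states.  B: counts 'a's mod 4, accepts even counts.
A = ({(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}, 0, {0})
B = ({(i, 'a'): (i + 1) % 4 for i in range(4)}
     | {(i, 'b'): i for i in range(4)}, 0, {0, 2})
print(dfa_equivalent(A, B, 'ab'))   # -> True
```

The SDP formulation of the talk generalizes this reachability view to probabilistic and quantum automata, where acceptance is a probability rather than a bit.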
- Comparison between quantum Turing machines with advice and circuits
by Harumichi Nishimura
(ERATO Quantum Computation and Information Project, Japan)
Abstract
The notion of advice, a piece of additional information to help a computation,
is often used in complexity theory to discuss the power of polynomial-size
circuits in terms of Turing machines. In this talk, we discuss the relationships
between quantum Turing machines with advice and circuits. In the classical
case, it is well known that Turing machines with polynomial-size advice
and polynomial-size circuits simulate each other exactly. We show that
quantum Turing machines with polynomial-size advice cannot simulate polynomial-size
quantum circuits with zero-error by reducing this simulation problem to
a relation between an infinite set of complex numbers and an extended field
of the rational numbers. In contrast, we show that, if quantum Turing
machines are allowed to have ``quantum'' advice, then that simulation is
possible. To present this, we discuss a decomposition of unitary matrices
using algebraic properties of polynomial-time computable numbers, and Nielsen's
arguments on measurement-based quantum computation. Also, extending Simon's
result we present a black-box problem such that it can be solved with probability
1 by polynomial-time quantum Turing machines while it cannot be solved
even by classical Turing machines with polynomial advice, i.e., classical
polynomial-size circuits.
Computer Algebra in the Biological Sciences
- Symbolic-Numeric Optimization for Biochemical Models by Quantifier Elimination
by Shigeo Orii (Fujitsu, Univ. Tokyo, Japan), Katsuhisa Horimoto (Univ.
Tokyo, Japan) and Hirokazu Anai (Fujitsu Laboratories, Japan)
Abstract
The sequencing of complete genomes allows analyses of interactions between
various biological molecules on a genomic scale, which prompted us to simulate
the global behaviors of biological phenomena on the molecular level. One
of the basic mathematical problems in the simulation is the parameter optimization
in the biochemical model for complex dynamics, and many optimization methods
have been designed. Here, we introduce a new approach to optimize the parameters
in biochemical models by quantifier elimination (QE), in combination with
numerical simulation methods. The optimization method was applied to a
model for the inhibition kinetics of HIV proteinase with ten parameters
and nine variables, and attained a goodness of fit to 300 points of observed
data of the same magnitude as that obtained by the previous optimization
methods, remarkably while using only one or two data points. Furthermore,
the utilization of QE demonstrated the feasibility of the present method
for elucidating the behavior of the parameters and the variables in the
analyzed model. The present symbolic-numeric method is therefore a powerful
approach to reveal the fundamental mechanisms of biochemical models, in
addition to being a computational engine.
- Molecular Loop Closure
by Evangelos A. Coutsias and Michael J. Wester (University of New Mexico,
Albuquerque, New Mexico, USA)
Abstract
Determining the three-dimensional structure of a molecule from its chemical
composition is the central problem of stereochemistry. Especially for large
macromolecules with complex topologies and unique compositions, such as
proteins and nucleic acids, the extreme complexity of the configuration
space makes this problem one of the grand challenges of our time. The recent
advances in the field of genomics have resulted in ever increasing numbers
of proteins whose sequence can be deduced from the genome, but whose structure
and function are not understood. Computer prediction has thus become an
increasingly alluring alternative to costly and time consuming experimental
structure determination, such as by crystallographic or NMR techniques.
In this talk, we will discuss the molecular loop closure algorithm for
finding the ensemble of possible backbone conformations of a chain segment
of a protein geometrically consistent with the preceding, intermediate
and following portions of the chain whose structures are given. This problem
can be reduced to finding the real roots of a 16th degree univariate polynomial,
whose solutions will yield sets of six dihedral angles. These sets of dihedral
angles can be substituted for those found in the original structure to
produce alternative conformations of the backbone segment. This methodology
is based on ideas from the robotics literature involving kinematics of
the equivalent rotor linkage for the most general case of oblique rotors.
We have implemented a version of this algorithm in Maple.
Computational Topology and Geometry
- Computing the Stratification of Actions of Compact Lie Groups
by T. Bayer (Technische Universitaet Muenchen, Germany)
Abstract
We provide a constructive approach to the stratification of the representation-
and the orbit space of linear actions of compact Lie groups contained in
$GL_n(\mathbb{R})$ on $\mathbb{R}^n$ and we show that any d-dimensional
stratum, respectively, its closure can be described by d sharp, respectively,
relaxed polynomial inequalities and that d is also a lower bound for both
cases. Strata of the representation space are described as differences
of closed sets given by polynomial equations while d-dimensional strata
of the orbit space are represented by means of polynomial equations and
inequalities. All algorithms have been implemented in SINGULAR V2.0.
- Determining the Topology of Real Algebraic Surfaces
by J.-S. Cheng, X.-Sh. Gao and M. Li
(Institute of Systems Science, AMSS, Academia Sinica, Beijing, China)
Abstract
In this talk, an algorithm is proposed to determine the topology of an
implicit real algebraic surface in $R^3$. The algorithm consists of four
steps: surface projection, projection curve topology determination, surface
patch composition, and combination of surface patches. The algorithm gives
an intrinsic representation for the topology of the surface. Some examples
are used to show that the algorithm is effective.
- On the Construction of Robot Navigation Function on Semi-Algebraic Sets
by D. Chibisov (Technische Universitaet Muenchen, Germany)
Abstract
The construction of a scalar-valued "navigation" function for the
specification of robot tasks
is a well-known problem. Given the initial and final position of a robot as well as a set of
semi-algebraic obstacles, the navigation function is required to rise in the vicinity of obstacles
in the direction towards them and to decrease monotonically along some path from the initial
to the final position, if and only if the path does not intersect any obstacle. In this way the
problem of calculation of the collision-free path can be solved in a computationally efficient
manner by reduction to the task of following the gradient of the navigation function. In the
present paper, we present a new family of analytic navigation functions and investigate their
properties for a large class of geometric optimization problems.
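As background, the best-known navigation functions are artificial potential fields in the style of Khatib: attractive toward the goal, repulsive near obstacles. The sketch below (Python with numpy, a generic potential-field illustration with invented coordinates and constants, not the analytic family proposed in the paper) follows the negative gradient and reaches the goal while staying clear of a circular obstacle.

```python
import numpy as np

goal = np.array([1.0, 0.0])
obst, radius, influence = np.array([0.0, 0.3]), 0.1, 0.6
eta = 0.005                      # strength of the repulsive term

def potential(p):
    """Quadratic attraction to the goal plus a short-range repulsive barrier."""
    attract = 0.5 * np.dot(p - goal, p - goal)
    d = np.linalg.norm(p - obst)
    repel = 0.5 * eta * (1/d - 1/influence)**2 if d < influence else 0.0
    return attract + repel

def gradient(p, h=1e-6):
    """Central finite differences; enough for a demonstration."""
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = h
        g[i] = (potential(p + e) - potential(p - e)) / (2*h)
    return g

p = np.array([-1.0, 0.0])
path = [p.copy()]
for _ in range(4000):            # follow the negative gradient
    p = p - 0.01 * gradient(p)
    path.append(p.copy())

print(np.linalg.norm(p - goal) < 1e-3)                        # -> True: reached the goal
print(min(np.linalg.norm(q - obst) for q in path) > radius)   # -> True: obstacle avoided
```

Simple potential fields can have spurious local minima; the point of analytic navigation functions such as those of the paper is to guarantee, by construction, that gradient following succeeds exactly when a collision-free path exists.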
- Understanding Volumes from an Algebraic-Topological Perspective
by R. Gonzalez-Diaz, B. Medrano, P. Real, J. Sanchez Pelaez
(Universidad de Sevilla, Spain)
Abstract
Roughly speaking, in this talk we show how to use an algebraic-topological technique
developed by the authors, called AT-model, in problems of topological analysis, control, simplification
and representation of 3D and 4D digital images.
- Origami Construction of a Morley's Triangle with Automated Proof
by T. Ida, H. Takahashi and M. Marin (University of Tsukuba, Japan)
Abstract
We show the origami construction of Morley's triangle together with the automated proof
of Morley's theorem and its generalization. Morley's theorem states that the three points
of intersection of the adjacent interior trisectors of the angles of any triangle form an equilateral
triangle. The whole process of origami construction and subsequent automated proof
of the correctness of the intended construction is performed in a streamlined fashion by our
Computational Origami System. We show that the computational origami system not only
simulates origami construction with the precision of numeric and symbolic computation, but also has
the power of symbolic constraint solving and proving. The automated proof uses Groebner
bases computation, and took over 16 hours to prove the generalized Morley's
theorem.
- Solving Cubic Equations by ORIGAMI
by S. Moritsugu
(University of Tsukuba, Japan)
Abstract
We show an algebraic proof of the method for solving cubic equations by ORIGAMI (paper
folding). Using ORIGAMI, we can solve construction problems that are unsolvable in
Euclidean geometry [2], such as angle trisection and doubling the cube.
In our formulation, first, we translate the geometrical conditions into polynomial relations
among the coordinates of points [1]. Second, we compute a Gröbner basis of the ideal and
solve the ideal membership problem. Consequently, cubic equations are solved as construction
problems, and drawing the trisector of a given angle and the cube root of 2 is clearly realized
by ORIGAMI.
References
[1] S.-C. Chou. Mechanical Geometry Theorem Proving. D. Reidel, Dordrecht, 1988.
[2] T. Hull. A Note on “Impossible” Paper Folding. American Mathematical Monthly,
103(3):240–241, 1996.
- Detecting Degeneracies in Robust Geometric Modeling
by J. Keyser and K. Ouchi
(Texas A&M University, USA)
Abstract
Detecting degenerate configurations is an important part of a robust geometric modeling
system. We focus specifically on the exact boundary evaluation problem. Most efficient methods
currently available for exact boundary evaluation rely on a general position assumption,
and fail in the presence of degeneracies. We describe a method for detecting degeneracies
based on the Rational Univariate Reduction.
A dominating computation within boundary evaluation is finding common roots of systems
of polynomials that describe the boundary of solids. Most degeneracies can be found as
degenerate configurations of this system of polynomials. We propose the use of the Rational
Univariate Reduction (RUR), also referred to as the Rational Univariate Representation, as
an exact method for finding the zero sets of polynomial systems without multiplicities.
In the RUR, every coordinate of every point in the zero set of a system of polynomials
is represented as some univariate polynomial evaluated at some root of the other univariate
polynomial. Together with the classical root-bound approach to sign determination of algebraic
numbers, we can perform exact computation over the points and curves, enabling us to detect
and handle degenerate situations smoothly.
In this talk, we will present our algorithm, along with implementation issues, examples,
and performance characteristics.
- Automatic Discovering of Geometric Theorems by Computing Groebner Bases
with Parameters
by D. Wang, L. Lin
(Academy of Mathematics and Systems Sciences, Beijing, China)
Abstract
A geometric statement of equality-type consists of two parts: hypothesis
and conclusion. Both hypothesis and conclusion can be expressed in terms
of polynomial equations in a number of free parameters $u_1, \cdots, u_m$
and a number of dependent variables $x_1, \cdots, x_n$. Typically, the hypothesis
is composed of
\[
\left\{
\begin{array}{l}
h_1(u_1,\cdots, u_m, x_1,\cdots, x_n) = 0,\\
\quad\vdots\\
h_r(u_1,\cdots, u_m, x_1,\cdots, x_n) = 0,
\end{array}
\right.
\label{eqn1}
\]
where the $h$'s are polynomials over a ground field $K$. The conclusion
is
\[
g(u_1,\cdots, u_m, x_1,\cdots , x_n) = 0
\label{eqn2}
\]
where $g$ is a polynomial over $K$. If the geometric statement is not generically
true, then by computing Groebner bases with parameters we can find the conditions
that the parameters should satisfy so that the conclusion (\ref{eqn2})
can be deduced from the hypothesis (\ref{eqn1}).
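A worked micro-example (Python with sympy; the parallelogram configuration and variable names are our own illustration): for the statement "the diagonals of a parallelogram meet at half the height", the conclusion polynomial is not in the hypothesis ideal as stated, but multiplying it by the parameter $u_1$ puts it there, revealing the nondegeneracy condition $u_1 \neq 0$.

```python
from sympy import symbols, groebner, expand

u1, u2, u3, x1, x2 = symbols('u1 u2 u3 x1 x2')

# parallelogram A=(0,0), B=(u1,0), D=(u2,u3), C=(u1+u2,u3);
# (x1,x2) lies on both diagonals A-C and B-D (collinearity conditions)
h1 = x1*u3 - x2*(u1 + u2)
h2 = (x1 - u1)*u3 - x2*(u2 - u1)

G = groebner([h1, h2], x1, x2, u1, u2, u3, order='lex')

g = 2*x2 - u3                      # conclusion: the intersection has half the height of C
print(G.contains(g))               # -> False: not a consequence for ALL parameter values
print(G.contains(expand(u1*g)))    # -> True: the conclusion holds whenever u1 != 0
```

Here $u_1 = 0$ is exactly the degenerate case in which the parallelogram collapses, so the computation recovers the expected side condition.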
Computer Algebra in Education
- List of talks is available in the session homepage.
Handling Large Expressions in Symbolic Computation
- Symbolic representation of polynomial systems for efficient manipulation
and evaluation
by Dan Bates (University of Notre Dame, USA)
Abstract (pdf)
Together with Andrew Sommese (University of Notre Dame) and Charles Wampler (GM Research & Development), and building on some early work of Chris Monico (Texas Tech University), I have been developing a new software package, Bertini, to numerically compute the zero- and positive-dimensional solutions of polynomial systems using a number of recent techniques as well as adaptive precision.
A fundamental design issue when planning Bertini was the internal representation of polynomial systems. We chose to use straight-line programs since they allow for very efficient evaluation, simple automatic differentiation, and polynomials in nonstandard forms. In this talk, I will discuss how Bertini parses polynomial systems into straight-line programs as well as how multi-homogenization and differentiation may be carried out automatically.
- Space-Efficient Evaluation of Hypergeometric Series
by Howard Cheng (University of Lethbridge, Canada)
Abstract (pdf)
Hypergeometric series are used to approximate many important constants, such as $e$ and Apéry's constant $\zeta(3)$. The evaluation of such series to high precision has traditionally been done by binary splitting followed by integer division. For evaluating such a series to $N$ digits of accuracy, the numerator and the denominator computed by binary splitting have size $O(N \log N)$ even though the reduced fraction may only have size $O(N)$.
In this talk, we show how standard computer algebra techniques including modular computation and rational reconstruction can be applied. The space complexity of our algorithm is the same as a bound on the size of the reduced fraction of the series approximation. We also show that the series approximation for $\zeta(3)$ is indeed a reduced fraction of size $O(N)$. The analysis can be related to our previous algorithm using partially factored integers (ISSAC 2000), which in turn allows our previous algorithm to be improved as well (Hanrot, Thome, Zimmermann). This technique is also applicable to a large class of hypergeometric series.
This work was done jointly with Barry Gergel, Ethan Kim (University of
Lethbridge) and Eugene Zima (Wilfrid Laurier University).
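To see where the $O(N \log N)$-size fraction comes from, here is binary splitting for the simplest hypergeometric example, $e = \sum 1/k!$ (plain Python, our own minimal sketch; the talk's contribution is the modular/rational-reconstruction refinement, which is not shown here). The returned fraction $p/q$ need not be reduced: $q$ is a full factorial.

```python
from fractions import Fraction
import math

def split(a, b):
    """Return (p, q) with p/q = sum_{k=a}^{b-1} 1/(a*(a+1)*...*k)."""
    if b - a == 1:
        return 1, a
    m = (a + b) // 2
    p1, q1 = split(a, m)
    p2, q2 = split(m, b)
    # combine the two halves: only O(log) levels of big multiplications
    return p1*q2 + p2, q1*q2

p, q = split(1, 30)                          # 29 terms of the series
approx = 1 + Fraction(p, q)
print(abs(float(approx) - math.e) < 1e-12)   # -> True
print(q == math.factorial(29))               # -> True: the unreduced denominator
```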
- Perturbation Expansion Techniques to Solution of Fluid Flow Problem Using
Symbolic Computation
by Ishak Hashim and M. N. Mohamad-Fadzil (Universiti Kebangsaan Malaysia, Malaysia)
Abstract (pdf)
Please see pdf.
- Frontier Computations in the Dynamics of one Dimensional Maps
by Ilias Kotsireas (Wilfrid Laurier University, Canada)
Abstract
The dynamics of one dimensional maps presents significant challenges for
Symbolic Computation. The archetype of one dimensional maps is the logistic
map. The study of the dynamics of the logistic map involves the analysis
of the bifurcation points and the superstable orbits, both of which are
algebraic numbers of high degree. The intricate interconnections between
bifurcation points and superstable orbits are revealed by means of Symbolic
Computation techniques such as Groebner bases and powerful factorization
algorithms, in conjunction with Symbolic Dynamics techniques such as the
MSS (Metropolis-Stein-Stein) algorithm. The efficient management of large
expressions resulted in the proof of the Bailey-Broadhurst conjectures
on the bifurcation point B4, and in some conjectural results towards determining
the degree of the bifurcation point B5.
Joint work with K. Karamanos.
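A minimal instance of the algebraicity claim (Python with sympy; our own illustration): the parameter of the superstable period-2 orbit of the logistic map $f(x) = r x(1-x)$ satisfies $f(f(1/2)) = 1/2$, a cubic in $r$ whose relevant root is $1 + \sqrt{5}$.

```python
import sympy as sp

r, x = sp.symbols('r x')
f = r*x*(1 - x)                                   # the logistic map
half = sp.Rational(1, 2)

# superstable period-2 condition: the critical point x = 1/2 returns after two steps
cond = f.subs(x, f.subs(x, half)) - half
poly = sp.factor(sp.expand(-16*cond))             # clears denominators
print(poly)                                       # factors as (r - 2)*(r**2 - 2*r - 4)

# r = 2 gives the superstable fixed point; the period-2 value is 1 + sqrt(5)
assert sp.expand(cond.subs(r, 1 + sp.sqrt(5))) == 0
print(sp.N(1 + sp.sqrt(5)))                       # about 3.23607
```

For period $n$ the analogous condition $f^n(1/2) = 1/2$ is a polynomial of degree exponential in $n$, which is why high-degree factorization becomes the bottleneck.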
- Output-sensitive Modular Algorithms for Row Reduction of Matrices of Ore
Polynomials
by Howard Cheng (University of Lethbridge, Canada) and George Labahn (University of Waterloo, Canada)
Abstract (pdf)
We consider the problem of row reduction of a matrix of Ore polynomials
with coefficients in Z[t], computing both the transformation matrix and
the transformed matrix. Such computations are useful for finding the rank
and left nullspace of such matrices, and computing the GCRD and LCLM of Ore
polynomials, to name just a few applications. As in any process that involves
elimination operations, there is a significant problem with intermediate expression
swell in such computations. In our case we propose a new modular algorithm
which is output sensitive, that is, the number of homomorphic images required
depends on the size of the output. Furthermore, this output sensitivity
does not come at the expense of any costly verification step using trial
division or multiplication.
- Some recipes for handling large expressions in polynomial system solving
by Marc Moreno Maza (University of Western Ontario, Canada)
Abstract (pdf)
Modular methods are well known techniques for controlling the swell of
intermediate expressions in symbolic computations. Sometimes, the output
of a computation is so large that additional techniques are needed. This
is often the case with the solution set of polynomial systems, for space
complexity reasons.
In this talk, we discuss various representations for such solution sets
that tend to be more compact than others, for instance, triangular decompositions
into non-normalized regular chains. Then, we discuss various strategies
for solving large systems, in particular for computing "all of the zeros"
of systems with infinitely many solutions.
- Handling large expressions symbolically in transition state theory
by Jesus F. Palacian and Patricia Yanguas (Universidad Publica de Navarra, Spain)
Abstract (pdf)
In this talk we show the normal form approach as a way of getting an analytical
handle on geometric objects, such as normally hyperbolic invariant manifolds
(NHIM), and the breakthrough this is in chemistry, as well as in celestial
mechanics problems. Computer visualization is used in order to "see"
these objects. We illustrate the technique through several examples borrowed
from chemistry and astrodynamics. Concretely, we show how to determine
analytically the transition state (TS) in chemical reactions. For that
we calculate the normal form and transform the original three-degree-of-freedom (3DOF)
Hamiltonian to a 0DOF one. These expressions need to be known very accurately,
so one carries out the computations to a high degree, therefore using very
large formulae. In fact, we are able to construct three asymptotic integrals
of the original Hamiltonian by inverting the normal form transformation.
Moreover, we calculate in the original system the three-dimensional NHIM,
its stable and unstable manifolds, as well as the transition state. We
compute trajectories that start on the NHIM in the five-dimensional energy
surface. In addition, we determine trajectories in either the forward or backward
stable and unstable manifolds associated to the NHIM. These trajectories
are simply chosen and computed from the normal form vector field. The normal
form transformation then allows us to visualize them in the original coordinates.
Thus, we have complete control and knowledge of the exact dynamical trajectories
near the TS in a 3DOF system. For the calculations we make use of the commercial
software Mathematica.
- Large Simplicial
Expressions in Algebraic Topology
by Rocio Gonzalez-Diaz and Pedro Real (Universidad de Sevilla, Spain)
Abstract (pdf)
Working in the context of Simplicial Topology, cohomology operations can
be expressed in terms of compositions of component morphisms of an Eilenberg-Zilber
chain homotopy equivalence and tensor product permutations. Taking into
account that a simplicial operator can be put into a canonical form
(this process can be called normalization of the simplicial operator),
such combinatorial formulation for cohomology operations admits a natural
simplification in terms of simplicial expressions consisting solely of
face operators. In order to give an efficient answer to this simplification
question, we present here some techniques for efficiently normalizing
particular large simplicial expressions.
- Evaluation Techniques and Symmetric Polynomials
by Eric Schost (Ecole Polytechnique, France)
Abstract (pdf)
Standard algorithms for dealing with symmetric polynomials are presented
using rewriting techniques. This is for instance the case for the "fundamental
theorem of symmetric polynomials", which states that any symmetric polynomial
is a polynomial in the elementary symmetric ones, and whose proof involves
rewriting using a suitable elimination order.
This kind of approach usually spoils useful features such as sparseness, so
its complexity is hard to control. By contrast, I will show how the straight-line
program representation of polynomials yields improved results. My first
focus is on polynomial invariants under the symmetric group, but other
actions will be discussed as well.
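The fundamental theorem in its smallest instances can be checked directly (Python with sympy; a toy illustration of the rewriting, unrelated to the straight-line program representation advocated in the talk): the power sums $p_2$ and $p_3$ rewritten in the elementary symmetric polynomials.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
e1, e2, e3 = x + y + z, x*y + y*z + z*x, x*y*z   # elementary symmetric polynomials

p2 = x**2 + y**2 + z**2
p3 = x**3 + y**3 + z**3

# Newton's identities give the rewriting promised by the fundamental theorem
assert sp.expand(p2 - (e1**2 - 2*e2)) == 0
assert sp.expand(p3 - (e1**3 - 3*e1*e2 + 3*e3)) == 0
print("power sums rewritten in elementary symmetric polynomials")
```

Note how the rewritten forms are denser than the originals; controlling this loss of sparseness is precisely the motivation for the straight-line program approach.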
- Growth of Formulas During Quantifier Elimination by Virtual Substitution
by Hitoshi Yanami (FUJITSU LABORATORIES LTD./CREST, JST, Japan)
Abstract
Recently symbolic computation methods have been gradually applied to solving
engineering problems. We have been turning our attention to quantifier
elimination (QE), which is a symbolic procedure of removing the quantified
variables from a given formula. Using QE as a basic tool, we have been developing
the Maple toolbox SyNRAC for solving real algebraic constraints. During a procedure
based on QE by virtual substitution, formulas grow larger, which could
sometimes halt the procedure. We discuss how we can circumvent this problem.
- Hierarchical Representations in Large Expression Generations
by Wenqin Zhou and David Jeffrey (University of Western Ontario, Canada)
Abstract (pdf)
In this talk we propose hierarchical representations to alleviate the large
expression swell problem that occurs during symbolic expression generation.
The problems we are trying to address exhibit large intermediate expression
swell and also produce large outputs.
We use hierarchical representations to make the sizes of expressions
more manageable during the symbolic expression generations. It turns out
that the run time and memory used during the computation are much less
than without hierarchical representations, and the outputs are
much more concise. We use a low-level package called LargeExpressions in
Maple to automatically generate the hierarchical representations for us.
The advantage of this package is that we can define the size of the expression
that we want to hide and the exact moment when we want to hide the complex
expressions.
In order to demonstrate our idea, we use the classical symbolic LU decomposition
problem as an application and build a high-level tool for doing symbolic
LU decomposition with hierarchical representations. We introduce several
strategies for pivoting, veiling an expression and zero-recognition in
our algorithm, which can be chosen based on different applications. We can
also include other strategies according to different users' desires. It
is very flexible and much faster than the existing LU decomposition
in Maple. We analyze LU decomposition complexity with and without the hierarchical
representation. Some benchmarks are given at the end of the talk.
Similar results can be obtained if we apply our method to compute determinants,
or symbolically solve linear systems, or generate large symbolic models
for multibody dynamic systems, etc.
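The veiling idea can be sketched in a few lines (Python with sympy; the names veil/unveil and the size threshold are our own invention, loosely imitating the spirit of Maple's LargeExpressions package rather than its API): during Gaussian elimination, any entry that grows beyond a size threshold is replaced by a fresh symbol, and the hierarchy can be unwound on demand.

```python
import sympy as sp

veils = {}

def veil(e, threshold=10):
    """Replace e by a fresh symbol if it is bigger than the threshold."""
    e = sp.cancel(e)
    if sp.count_ops(e) <= threshold:
        return e
    v = sp.Symbol(f'V{len(veils)}')
    veils[v] = e
    return v

def unveil(e):
    """Substitute veiling symbols back until none remain."""
    while any(s in veils for s in e.free_symbols):
        e = e.subs(veils)
    return e

a = sp.symbols('a0:9')
M = sp.Matrix(3, 3, a)
U = M.copy()
for k in range(2):                     # Gaussian elimination with veiled entries
    for i in range(k + 1, 3):
        f = U[i, k] / U[k, k]
        for j in range(k, 3):
            U[i, j] = veil(U[i, j] - f*U[k, j])

print(len(veils) >= 1)                               # -> True: some entries were veiled
pivots = U[0, 0] * U[1, 1] * U[2, 2]
print(sp.cancel(unveil(pivots) - M.det()) == 0)      # -> True: the hierarchy unwinds correctly
```

The final check uses the fact that the product of the pivots of (swap-free) Gaussian elimination equals the determinant, so unveiling the compact result recovers the full expression exactly.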
High-Performance Computer Algebra
- Overview of High-Performance Computer Algebra
by Jeremy Johnson
(Drexel University, Philadelphia, PA, U.S.A.)
Abstract
This talk surveys the features of modern processors that affect performance:
pipelining and hazards, superscalar execution, instruction-level parallelism,
vector instructions, and the memory hierarchy. Effective utilization
of these features can be more important than reducing the number of arithmetic
operations in obtaining high-performance code; however, code optimized
for one architecture may be inefficient on another architecture, leading
to portability issues.
We survey techniques for self-adapting code used to produce optimized, portable
numeric libraries: ATLAS (http://math-atlas.sourceforge.net) and OSKI (http://bebop.cs.berkeley.edu),
previously Sparsity, for linear algebra, and FFTW (www.fftw.org) and SPIRAL
(www.spiral.net) for the FFT and other signal processing algorithms. We then
discuss how these techniques may be used to obtain high-performance computer
algebra libraries.
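The empirical tuning strategy shared by these libraries can be illustrated with a toy example. The sketch below is plain Python with made-up candidate kernels, not code from ATLAS, OSKI, FFTW, or SPIRAL: several implementations of the same operation are timed on the target machine and the fastest is installed.

```python
# Sketch of the empirical-tuning idea behind self-adapting libraries: generate
# several candidate implementations of the same kernel, time each on the
# target machine, and install the fastest. The kernel here (summing a list
# with different strategies) is a toy stand-in, not any library's real code.

import timeit

def sum_simple(xs):
    total = 0.0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):
    return sum(xs)

def sum_blocked(xs, block=64):
    # Two-level summation; on real hardware, blocking tunes cache reuse.
    return sum(sum(xs[i:i + block]) for i in range(0, len(xs), block))

def autotune(candidates, data, repeats=5):
    """Return the candidate with the best measured running time."""
    best, best_time = None, float("inf")
    for f in candidates:
        t = min(timeit.repeat(lambda: f(data), number=20, repeat=repeats))
        if t < best_time:
            best, best_time = f, t
    return best

data = [float(i) for i in range(10_000)]
fastest = autotune([sum_simple, sum_builtin, sum_blocked], data)
# All candidates agree on the answer; only their speed differs.
assert all(abs(f(data) - sum_simple(data)) < 1e-6
           for f in (sum_builtin, sum_blocked))
```

Real systems search much larger spaces (blockings, unrollings, recursion strategies) and cache the chosen plan, but the select-by-measurement principle is the same.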
- Efficient Polynomial Computations from a Programmer's Perspective
by Xin Li
(ORCCA, University of Western Ontario, London, Canada)
Abstract
Our research purpose is to reduce the practical complexity and improve
the performance of exact symbolic computations with polynomials. We are
interested in studying known techniques such as:
1. using modular methods and asymptotically fast polynomial arithmetic
2. choosing adapted underlying data representation and appropriate algorithms
3. mixing high-level generic code and low-level machine dependent code
in a transparent way for the high-level end-user
We also aim at measuring precisely the impact of these various techniques
and their interactions. We choose Axiom as the implementation language.
Our Lisp, C and assembly code can be ported to other computer algebra systems.
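As a small illustration of technique 1 above, here is a sketch of a modular method: an integer result is computed from its images modulo a few primes and reconstructed by the Chinese remainder theorem. The primes and helper names are illustrative only.

```python
# Toy instance of a modular method: compute an integer result from its
# images modulo several pairwise-coprime primes, then reconstruct it by
# the Chinese remainder theorem. Primes here are illustrative.

from math import prod

def crt(residues, moduli):
    """Combine residues modulo pairwise-coprime moduli into one residue."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is Mi^{-1} mod m
    return x % M

# Compute 1234 * 5678 via images mod small primes, then reconstruct.
primes = [10007, 10009, 10037]
images = [(1234 % p) * (5678 % p) % p for p in primes]
result = crt(images, primes)
print(result)  # 7006652
```

The point of the technique is that each modular image fits in machine words, so the expensive multiprecision work is confined to the final reconstruction.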
- Efficient Implementation of Multivariate Polynomial Arithmetic Based on
Fast Fourier Transform
by Xin Li, Marc Moreno Maza, Eric Schost
(ORCCA, University of Western Ontario, London, Canada)
Abstract
The goal of this work is to provide fast algorithms with efficient implementation
for multivariate polynomials in a high-level language, namely AXIOM. The
main application that we have in mind is supporting Hensel lifting techniques
modulo big primes in order to solve systems of non-linear equations symbolically.
We focus on :
1. FFT-based univariate polynomial multiplication over finite fields
2. FFT-based multivariate polynomial arithmetic
3. "Big prime" arithmetic
Our code cuts across Axiom, Lisp, C and assembly. High performance is achieved by
selecting suitable data structures, using fast integer and floating-point
arithmetic, and understanding the restrictions of the compiler, memory
performance, and the processor's architecture.
Our implementation targets Intel-compatible processors running Linux.
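Item 1 above can be made concrete with a self-contained sketch. The Python code below is illustrative only, not the authors' AXIOM/C/assembly implementation: it multiplies polynomials over GF(p) with a number-theoretic transform (NTT) modulo the Fourier prime p = 998244353 = 119*2^23 + 1, for which 3 is a primitive root.

```python
# FFT-based univariate multiplication over a finite field: a number-theoretic
# transform (NTT) modulo the Fourier prime P = 998244353 = 119*2^23 + 1,
# whose multiplicative group contains elements of order up to 2^23.

P = 998244353
G = 3  # primitive root modulo P

def ntt(a, invert=False):
    """In-place iterative Cooley-Tukey NTT; len(a) must be a power of two."""
    n = len(a)
    j = 0
    for i in range(1, n):           # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w = pow(G, (P - 1) // length, P)   # primitive length-th root of unity
        if invert:
            w = pow(w, P - 2, P)
        for start in range(0, n, length):
            wn = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * wn % P
                a[k], a[k + length // 2] = (u + v) % P, (u - v) % P
                wn = wn * w % P
        length <<= 1
    if invert:
        n_inv = pow(n, P - 2, P)
        for i in range(n):
            a[i] = a[i] * n_inv % P

def poly_mul(f, g):
    """Multiply two polynomials with coefficients in GF(P)."""
    n = 1
    while n < len(f) + len(g) - 1:
        n <<= 1
    fa, ga = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    ntt(fa)
    ntt(ga)
    prod = [x * y % P for x, y in zip(fa, ga)]
    ntt(prod, invert=True)
    return prod[:len(f) + len(g) - 1]

# (x + 1)^2 = x^2 + 2x + 1, coefficients in increasing degree order
print(poly_mul([1, 1], [1, 1]))  # [1, 2, 1]
```

"Big prime" arithmetic in the abstract's sense refers to working with such FFT-friendly primes near machine-word size; in low-level code, the modular products above would use fast integer or floating-point tricks rather than Python's arbitrary-precision integers.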
- Using High-Performance Taylor Shift by 1 in Real Root Isolation
by Anatole Ruslanov, Jeremy Johnson and Werner Krandick
(Drexel University, Philadelphia, PA, U.S.A.)
Abstract
A common approach to high-performance computer algebra software consists
in using high-performance low-level routines as building blocks. However,
in a recent paper [1] we show that using a high-performance integer addition
routine as a building block does not yield a high-performance implementation
of Taylor shift by 1.
We first review the techniques from [1] and show why it is not sufficient in
general to expect high performance by plugging in calls to efficient kernel
routines (e.g., because of incompatible data structures, the inability to apply
high-level optimizations across calls to the kernel routines, and the need for
special instances of kernel routines). Then we investigate using high-performance
Taylor shift by 1 to obtain a high-performance implementation of the Descartes
method for polynomial real root isolation.
References
[1] J. R. Johnson, W. Krandick, and A. D. Ruslanov. Architecture-aware classical Taylor shift
by 1. In International Symposium on Symbolic and Algebraic Computation. ACM Press,
to appear.
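For readers unfamiliar with the operation, classical Taylor shift by 1 computes the coefficients of p(x+1) from those of p(x) by n rounds of prefix additions (repeated synthetic division). The Python sketch below shows the textbook quadratic-addition method, not the architecture-aware implementation of [1].

```python
# Classical Taylor shift by 1: given p(x) with integer coefficients, compute
# the coefficients of p(x+1) by repeated synthetic division, i.e. O(n^2)
# integer additions. This is the textbook method whose performance behavior
# the paper above analyzes.

def taylor_shift_1(coeffs):
    """coeffs[i] is the coefficient of x^i; returns coefficients of p(x+1)."""
    a = list(coeffs)
    n = len(a)
    # After round k, a[k] is final: it is the coefficient of x^k in p(x+1).
    for k in range(n - 1):
        for i in range(n - 2, k - 1, -1):
            a[i] += a[i + 1]
    return a

# p(x) = x^2  ->  p(x+1) = x^2 + 2x + 1
print(taylor_shift_1([0, 0, 1]))  # [1, 2, 1]
```

In the Descartes method, this shift is the dominant operation: each bisection step shifts the polynomial, and the coefficients grow, so the cost is dominated by long-integer additions.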
- Towards High-Performance High-Level Arithmetic Code
by Werner Krandick
(Drexel University, Philadelphia, PA, U.S.A.)
Abstract
The GNU-MP library [1] for multiprecision integer arithmetic achieves high
performance by providing hand-crafted assembly language routines for a
large number of architectures. By building on top of GNU-MP, computer algebra
systems such as Maple [3, 5, 4] try to obtain high performance in high-level
algorithms.
However, a recent paper [2] shows that this approach does not work for
Taylor shift by 1,
a relatively low-level operation. Moreover, relying on hand-crafted assembly code
- hides important algorithmic ideas,
- limits the portability to new processor architectures, and
- precludes benefits from optimizing compilers.
We review some of the GNU-MP routines for multiprecision integer addition
and report on ongoing efforts at Drexel University to replace the GNU-MP
routines with high-level programs.
References
[1] Torbjorn Granlund. GNU MP: The GNU Multiple Precision Arithmetic Library.
Swox AB, September 2004. Edition 4.1.4.
[2] J. R. Johnson, W. Krandick, and A. D. Ruslanov. Architecture-aware classical Taylor shift
by 1. In International Symposium on Symbolic and Algebraic Computation. ACM Press,
to appear.
[3] Maplesoft. Maple 9: Learning Guide, 2003.
[4] M. B. Monagan, K. O. Geddes, K. M. Heal, G. Labahn, S. M. Vorkoetter, J. McCarron,
and P. DeMarco. Maple 9: Advanced Programming Guide. Maplesoft, 2003.
[5] M. B. Monagan, K. O. Geddes, K. M. Heal, G. Labahn, S. M. Vorkoetter, J. McCarron,
and P. DeMarco. Maple 9: Introductory Programming Guide. Maplesoft, 2003.
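The kind of high-level replacement discussed above can be sketched as follows: multiprecision addition on arrays of 64-bit limbs with explicit carry propagation, mirroring at the algorithmic level what GNU-MP's low-level addition routine does. This is illustrative Python, not the Drexel replacement code or GMP's mpn_add_n.

```python
# Multiprecision addition as a high-level program rather than hand-crafted
# assembly: numbers are little-endian arrays of 64-bit "limbs", and the
# carry is propagated explicitly (illustrative Python, not GMP code).

LIMB_BITS = 64
LIMB_MASK = (1 << LIMB_BITS) - 1

def mpn_add(a, b):
    """Add two equal-length limb arrays; return (sum_limbs, carry_out)."""
    assert len(a) == len(b)
    result, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        result.append(s & LIMB_MASK)   # low 64 bits of the limb sum
        carry = s >> LIMB_BITS         # carry into the next limb: 0 or 1
    return result, carry

# (2^64 - 1) + 1 ripples a carry into the next limb.
limbs, carry = mpn_add([LIMB_MASK, 0], [1, 0])
print(limbs, carry)  # [0, 1] 0
```

The challenge the abstract points at is getting a compiler to turn such a loop into code competitive with assembly that exploits the hardware carry flag directly.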
Newton and Hensel techniques in scientific computing
- Height estimates for the equiprojectable decomposition
by Xavier Dahan (Ecole Polytechnique, France)
Abstract
Assume $k$ is a perfect field and let $V \subset {\overline{k}}^n$ be a
$0$-dimensional variety. Given an ordering of the coordinates $x_1 <
\cdots < x_n$, we introduced, in a previous work, the equiprojectable
decomposition of $V$. We showed that it could be encoded by a triangular
decomposition of $V$ and that this decomposition could be computed easily
from any triangular decomposition of $V$.
In this talk, we establish specialization properties for the equiprojectable
decomposition of $V$. We also provide height estimates for the coefficients
of the triangular decomposition encoding the equiprojectable decomposition
of $V$. Then, we deduce Hensel lifting techniques for computing the equiprojectable
decomposition of $V$. Finally, we report on a preliminary implementation
that shows the capacity of this approach to improve the practical efficiency
of triangular decomposition.
- Technical issues on lifting and a unified algorithm for factorization of
multivariate polynomials
by Maki Iwami (Univ. Tsukuba, Japan)
Abstract
Let $K$ be a number field of characteristic $0$. Let $K[\boldsymbol{u}],
K\{\boldsymbol{u}\}$ be the ring of polynomials and the ring of formal
power series over $K$, respectively, where $(\boldsymbol{u})$ is an abbreviation
of variables, $(u_1,\cdots, u_{\ell})$.
Since $K[x,\boldsymbol{u}]\subset K\{\boldsymbol{u}\}[x]$, it makes sense to
perform analytic factorization, that is, factorization over the ring of formal
power series with a fixed expansion point. The author has presented two
algorithms for multivariate analytic factorization: one utilizing the extended
Hensel construction [1], the other a multivariate expansion-base algorithm [2].
In this talk, the author suggests an algorithm from a unified viewpoint, which
comes from the identification of the ``weight of the expansion base'' with the
``slope of the Newton polygon'' [3]. This unified method blends the techniques
of the multivariate expansion base and the extended Hensel construction.
[1] M. Iwami: Analytic Factorization of the Multivariate Polynomial, Proc.
of the Sixth International Workshop on Computer Algebra in Scientific Computing,
pp. 213--225 (2003).
[2] M. Iwami: Extension of Expansion Base Algorithm to Multivariate Analytic
Factorization, Proc. of the Seventh International Workshop on Computer
Algebra in Scientific Computing, pp. 269--281 (2004).
[3] M. Iwami: Extension of Expansion Base Algorithm to Multivariate Analytic
Factorization including the Case of Singular Leading Coefficient (submitted
to CASC 2005).
- Evaluation Techniques for Polynomial System Solving
by Gr\'egoire Lecerf (Universit\'e de Versailles, France)
Abstract
In this talk we will present a recent probabilistic algorithm for solving
systems of polynomial equations and inequations. Our algorithm computes
the equidimensional decomposition of the Zariski closure of the solution
set of such systems. Each equidimensional component is encoded by a generic
fiber, that is, a finite set of points obtained from the intersection of
the component with a generic transverse affine subspace. Our algorithm
is incremental in the number of equations to be solved. Its cost is mainly
cubic in the maximum of the degrees of the solution sets of the intermediate
systems counting multiplicities. This behavior is reached thanks to certain
lifting methods that generalize the classical Newton operator. We will
make a short demonstration of our implementation called Kronecker, which
is written in the Magma computer algebra system.
- Improved Dense Multivariate Polynomial Factorization Algorithms
by Gr\'egoire Lecerf (Universit\'e de Versailles, France)
Abstract
Popularized by Zassenhaus in the seventies, several algorithms for factoring multivariate polynomials use a so-called lifting and recombination scheme. In this talk, we will present a new recombination method that requires a lifting up to precision twice the total degree of the polynomial to be factored. This method leads to nearly optimal algorithms for multivariate polynomial factorization and absolute factorization.
- Hensel Lifting via Groebner bases
by Daniel Lichtblau (Wolfram Research, USA)
Abstract
In this talk I will show how one may use Groebner bases over Euclidean
domains to perform Hensel lifting in some polynomial rings. The algorithm
is quite simple. Moreover, for the ring of univariate polynomials over
the integers, dedicated polynomial arithmetic code of around two dozen
lines can implement this method quite efficiently (it compares well to
the tree lifting method, which appears to be the most effective approach
known). We will also see how the Groebner basis approach to lifting may
be applied to bivariate polynomials over finite fields.
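For context, the simplest instance of Hensel lifting, lifting a root of a univariate polynomial by Newton iteration, can be sketched as follows. This is the classical construction, not the Groebner-basis method of the talk, and the function names are illustrative.

```python
# Classical Hensel lifting of a root: given r with f(r) = 0 (mod p) and
# f'(r) a unit mod p, Newton's iteration r <- r - f(r)/f'(r) doubles the
# p-adic precision at every step.

def hensel_lift_root(f, df, r, p, iterations):
    """Lift a simple root r of f modulo p to a root modulo p**(2**iterations)."""
    modulus = p
    for _ in range(iterations):
        modulus *= modulus                      # precision doubles each step
        correction = f(r) * pow(df(r), -1, modulus)
        r = (r - correction) % modulus
    return r, modulus

# Lift the root 3 of x^2 + 1 = 0 (mod 5) to a square root of -1 modulo 5^4.
f = lambda x: x * x + 1
df = lambda x: 2 * x
r, m = hensel_lift_root(f, df, 3, 5, 2)
print(r, m, f(r) % m)  # 443 625 0
```

Lifting a factorization rather than a root follows the same pattern, with the Bezout identity of the two factors playing the role of the derivative's inverse.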
- Toeplitz and Hankel Meet Hensel and Newton Modulo a Power of Two
by Victor Pan (The City University of New York, USA)
Abstract
We extend Hensel lifting for solving general and structured linear systems
of equations to the rings of integers modulo nonprimes, e.g. modulo a power
of two. This enables significant savings in word operations. We elaborate
upon this approach in the case of Toeplitz linear systems. In this case,
we initialize lifting with the MBA superfast algorithm, estimate that the
overall bit operation (Boolean) cost of the solution is optimal up to roughly
a logarithmic factor, and prove that degeneration is unlikely even when
the basic prime is fixed but the input matrix is random. We also
comment on the extension of our algorithm to some other fundamental computations
with possibly singular general and structured matrices and univariate polynomials
as well as to the computation of the sign and the value of the determinant
of an integer matrix.
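The Newton-Hensel lifting idea modulo a power of two can be sketched on a matrix inverse: starting from an inverse of A modulo 2, each step X <- X(2I - AX) doubles the precision. This toy Python version omits the structured (Toeplitz) machinery and the MBA initialization of the talk.

```python
# Newton-Hensel lifting of a matrix inverse modulo a power of two: if X is
# an inverse of A modulo 2^k, then X(2I - AX) is an inverse modulo 2^(2k).

def mat_mul(A, B, m):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % m
             for j in range(n)] for i in range(n)]

def lift_inverse(A, X, steps):
    """X is an inverse of A modulo 2; lift it to an inverse modulo 2**(2**steps)."""
    mod = 2
    for _ in range(steps):
        mod = mod * mod
        AX = mat_mul(A, X, mod)
        # R = 2I - AX (mod 2^(2k))
        R = [[(2 * (i == j) - AX[i][j]) % mod for j in range(len(A))]
             for i in range(len(A))]
        X = mat_mul(X, R, mod)
    return X, mod

A = [[3, 2], [1, 1]]          # det = 1, so A is invertible modulo any 2^k
X0 = [[1, 0], [1, 1]]         # an inverse of A modulo 2
X, mod = lift_inverse(A, X0, 3)
print(mat_mul(A, X, mod))     # [[1, 0], [0, 1]], i.e. the identity mod 2^8
```

For Toeplitz systems, the matrices in each step are represented by short generators, so each iteration costs nearly linear rather than cubic time; that compression is where the superfast behavior comes from.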
- An Approach to Singularity from Extended Hensel Construction
by Tateaki Sasaki, Daiju Inaba, (Univ. Tsukuba, Japan) and Kentaroh Katamati
(Iwate Prefectural Univ., Japan)
Abstract
Let $F(x,u_1,\dots,u_\ell)$, or $F(x,u)$ in short, with $\ell \geq 2$,
be a multivariate polynomial over {\bf C}. The generalized Hensel construction
breaks down at a point $(s_1,\dots,s_\ell) \in {\bf C}^\ell$, or $(s)$ in short,
if $F(x,s) = {\rm const} \times (x \!-\! \alpha)^n$. Such a point $(s)$
is called a {\it singular point for the Hensel construction}, or {\it singular
point} in short. The extended Hensel construction, or EHC in short, is
the Hensel construction at the singular point. It was invented by Kuo in
1989 for monic bivariate polynomials and by Sasaki-Kako in 1993 for monic
multivariate polynomials. The EHC gives the Puiseux-series roots for bivariate
polynomial $F(x,u_1)$ and, for multivariate polynomial $F(x,u_1,\dots,u_\ell)$,
the roots which are fractional-power series in the total-degree variable
$t$. In this talk, we clarify a relationship between the extended Hensel
factors and the singularity of the multivariate roots.
In order to obtain Hensel factors which are linear w.r.t.\ the main variable
$x$, we need to perform two different kinds of EHCs: in the first kind, the
initial factors are chosen to be in ${\bf C}[x,u]$, while in the second kind,
we introduce algebraic functions $\theta_1(u),\dots,\theta_s(u)$.
In both cases, rational functions usually appear in the coefficients of the
Hensel factors. The Hensel factors become singular at a point $(s)$ where
the leading coefficient disappears or denominators become zero. If the
leading coefficient disappears, then ``scaled root'' $\bar{\chi}(u \!-\!
s,t)$, with $t$ the total-degree variable, becomes infinity as $(u) \to
(s)$. Let $\cal{S}$ be a surface (a line in trivariate case) on which some
denominators become zero. The singularity structure of $x = \chi(u)$ changes
on $\cal{S}$; for example, the cube-root type singularity changes to the
square-root type one.
- Fast algorithms for Newton sums in small characteristic
by Eric Schost (Ecole Polytechnique, France)
Abstract
I consider the following problem: computing the coefficients of a polynomial
from the data of its first Newton sums, over a small finite field. In large
characteristic, the main tool for this task is Brent's idea of using Newton's
iteration for exponentiating a power series. This algorithm fails in small
characteristic because of divisions; I will show how adding only a few
bits of precision makes it well-defined again. As applications, I will
describe algorithms "a la Leverrier" for parallel linear algebra
over finite fields, and compare these results to those of notably Schonhage,
Kaltofen and Pan, Giesbrecht, and Pan. This is joint work with Alin Bostan,
Laureano Gonzalez-Vega and Herv\'e Perdry.
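The problem statement can be made concrete with the classical recurrence. The sketch below recovers the elementary symmetric functions of the roots from their Newton (power) sums via Newton's identities; the division by k in each step is precisely what fails in small characteristic, which motivates the talk.

```python
# Newton's identities: recover elementary symmetric functions e_1..e_n of
# the roots from the power sums p_1..p_n. The division by k makes this the
# large-characteristic method; it breaks down when k vanishes mod p.

from fractions import Fraction

def coeffs_from_newton_sums(p):
    """p[i-1] = sum of i-th powers of the roots, i = 1..n.
    Returns e_1..e_n, so the monic polynomial with those roots is
    x^n - e_1 x^(n-1) + e_2 x^(n-2) - ..."""
    n = len(p)
    e = [Fraction(1)]                       # e_0 = 1
    for k in range(1, n + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e.append(s / k)                     # the division that fails mod small p
    return e[1:]

# Roots {1, 2, 3}: power sums p_1 = 6, p_2 = 14, p_3 = 36.
print(coeffs_from_newton_sums([6, 14, 36]))
# [Fraction(6, 1), Fraction(11, 1), Fraction(6, 1)]
```

Over the rationals this gives (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6; over GF(2) or GF(3) the divisions by 2 and 3 are undefined, and the talk's contribution is a way to carry extra bits of precision so that the computation still goes through.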
Parametric and Nonconvex Constraint Solving
- List of talks is available in the session homepage.
Pen-Based Mathematical Computing
- List of talks is available in the session homepage.