My talk at JuliaCon 2018 (video)
Sep 3, 2018

JuliaCon 2018 was fantastic. All talks were recorded and posted (1) on the Julia YouTube channel. This is fortunate, because with parallel sessions, meeting with people, etc., most participants saw only a relatively small fraction of them live. These talks can be an excellent resource for learning more about Julia at many levels.

Here is the abstract for my talk, Symbolic mathematics in Julia (pdf), at JuliaCon 2018.

Please see below for updates and corrections to the content of the talk, in particular, regarding performance and features of Pari, Mathematica, and Maxima. The slides linked above have been corrected.

Having a recording of your talk is a great resource. You can review your performance like an athlete or a politician would. And by posting corrections, you can, in a sense, have a partial do-over.

### Polynomial benchmark, starting at minute 8:44

The task is to compute the coefficient of each term in the polynomial $q$ given by \begin{align} f & = x + y + z + 1 \newline p & = f^{20} \newline q & = p (p + 1) \end{align}
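To make the task concrete, here is a minimal sketch in Python (not one of the benchmarked systems), using a sparse representation that maps exponent tuples to coefficients. The exponent is reduced from the talk's $20$ to $5$ so it runs instantly, and the helper names `pmul` and `padd` are my own.

```python
from itertools import product
from collections import defaultdict

def pmul(a, b):
    """Multiply two polynomials stored as {exponent tuple: coefficient}."""
    c = defaultdict(int)
    for (ea, ca), (eb, cb) in product(a.items(), b.items()):
        c[tuple(i + j for i, j in zip(ea, eb))] += ca * cb
    return dict(c)

def padd(a, b):
    """Add two polynomials in the same representation."""
    c = defaultdict(int, a)
    for e, cb in b.items():
        c[e] += cb
    return dict(c)

# f = x + y + z + 1 in three variables
f = {(1, 0, 0): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 1}

n = 5                     # the talk uses n = 20
p = {(0, 0, 0): 1}
for _ in range(n):
    p = pmul(p, f)        # p = f^n, fully expanded

q = pmul(p, padd(p, {(0, 0, 0): 1}))   # q = p * (p + 1)
print(len(q))             # → 286, the number of terms in the expansion
```

With $n = 5$, every monomial of total degree at most $10$ appears with a positive coefficient, so `q` has $\binom{13}{3} = 286$ terms; the real benchmark at $n = 20$ produces $\binom{43}{3} = 12341$ terms.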

The following table of times required to execute the task replaces the one given in the recorded talk.

TaylorSeries.jl These benchmarks were omitted entirely from the talk. The first time, $0.05$s, illustrates the general tendency for performance to increase with the specificity of the data structure.

Mathematica In the talk I quote a benchmark time of $7.7$s for Mathematica 3, which was released in 1996. I was sent benchmark times for the same problem of $0.28$s and $0.12$s for a recent version of Mathematica on two different machines. These latter times are competitive with that for AbstractAlgebra.jl.

Note that, according to the user-facing semantics of Mathematica, the expressions entered are first interpreted as generic, purely symbolic expressions. The rewriting rules then return as the fixed point the expanded polynomial, which is again a generic expression. On the other hand, in AbstractAlgebra.jl the problem is encoded (simply) from the outset in a specialized, dense-polynomial data type. It is highly likely that Mathematica does the calculation using a similar dense-polynomial representation. We don’t know when or how this is done.
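To make "dense" concrete, here is a toy dense univariate multiply in Python. This is only an illustration of the representation; it says nothing about Mathematica's or AbstractAlgebra.jl's actual internals. Coefficients live in a flat array indexed by exponent, so arithmetic is plain array manipulation with no symbolic rewriting.

```python
def dense_mul(a, b):
    """Multiply two dense univariate polynomials.

    a[i] is the coefficient of x^i; the product has degree
    len(a) + len(b) - 2.
    """
    c = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            c[i + j] += ca * cb
    return c

f = [1, 1]                    # 1 + x
print(dense_mul(f, f))        # → [1, 2, 1], i.e. 1 + 2x + x^2
```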

Pari In the talk, I cast doubt on the validity of the Pari benchmark time of $1.5$s, when in fact this time is correct and valid for comparison. In brief, the number is correct because a very simple Pari program extracts all coefficients and powers in a time negligible compared to $1.5$s.

In the talk, I explain that Pari does not actually fully expand the polynomial, so it is unfair to compare the result to the other software systems, all of which compute the coefficients and powers for each term in the fully expanded polynomial. This is because Pari does not support true multivariate polynomials. Instead, a bivariate polynomial is represented as a univariate polynomial whose coefficients are themselves polynomials in a second variable.
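A toy Python sketch of this recursive representation (an illustration of the idea, not Pari's actual data structure): a bivariate polynomial is a list of coefficients in $x$, each of which is itself a list of coefficients in $y$.

```python
# (1 + y) + (2y)x + 3x^2, stored recursively:
p = [
    [1, 1],    # x^0 coefficient: 1 + y
    [0, 2],    # x^1 coefficient: 2y
    [3],       # x^2 coefficient: 3
]

def evaluate(poly, x, y):
    """Evaluate a recursively represented bivariate polynomial at (x, y)."""
    return sum(
        sum(c * y**j for j, c in enumerate(coeffs)) * x**i
        for i, coeffs in enumerate(poly)
    )

print(evaluate(p, 2, 3))   # → (1+3) + (2*3)*2 + 3*4 = 28
```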

The Pari manual has this to say about its multivariate-polynomial capability:

“this system does very badly compared to much more sophisticated systems like Axiom, Macsyma, Maple, Mathematica or Reduce on such manipulations (e.g. multivariate polynomials, formal integration, etc.)”

But later in the manual, you find a short recursive routine written in the interpreted language gp (written for use exclusively as a frontend to Pari) that extracts all coefficients and powers into a nested structure. Surprisingly, this routine runs so quickly on the benchmark polynomial that no time is returned by gp. Thus, it is fair to compare the time $1.5$s to other systems.
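In the same spirit as that gp routine (this is an analogue, not a translation), here is a short Python function that flattens the recursive list-of-lists representation into explicit (powers, coefficient) pairs, which is all that is needed to put the Pari result on the same footing as the fully expanded ones.

```python
def extract(poly):
    """Flatten a recursively stored bivariate polynomial.

    poly[i][j] is the coefficient of x^i * y^j; return a list of
    ((i, j), coefficient) pairs for the nonzero terms.
    """
    return [((i, j), c)
            for i, row in enumerate(poly)
            for j, c in enumerate(row)
            if c != 0]

# (1 + y) + (2y)x:
print(extract([[1, 1], [0, 2]]))   # → [((0, 0), 1), ((0, 1), 1), ((1, 1), 2)]
```

The work is a single pass over the stored coefficients, which is why the extraction time is negligible next to the $1.5$s expansion.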

Maxima As mentioned in the talk, Maxima should perform better than this. But the particular Maxima/Lisp combination I tried has a memory leak. If anyone has a good benchmark, please let me know.

### Other clarifications

• 08:16 “rendered as the operator +” should be “rendered as the word Plus”.

• 17:50 I mention W. B. Hart in connection with the OSCAR project. While Hart is involved in the project, there are many PIs, not to mention other contributors. This post lists five PIs who together lead the project. Furthermore, OSCAR is only part of a larger project.

• 18:12 I neglect to mention (but it’s on the slide) that Julia plays a central role in the OSCAR project!

• 19:00 I say that AbstractAlgebra.jl is the fastest. I mean it is the fastest on a set of related benchmarks!

• 20:53 I claim that Maxima has no useful TAB completion. This is not correct. The Maxima distribution includes rmaxima, which improves the command-line experience with readline (via rlwrap) and completion for a list of built-in functions.

• 21:13 There has in fact been some work on GUIs for Maxima, most of it quite a while ago. However, a Jupyter interface for Maxima has been written.

• 22:30 I may give the impression that S. Wolfram is the first to use “Orderless” and “Flat”. This may be true, but I don’t know it, and I doubt it.

• 23:55 “optimize this guy here” refers to //., which is ReplaceRepeated. It is unfortunate that the laser pointer is not visible in the video.

1. Some talks are missing from the playlist (including mine). The list appears to be truncated at 99 items. [return]
