Alex (Alexander if you're a publisher or my mother) is an Assistant Professor in the Faculty of Computer Science at Dalhousie University in Halifax, Canada. My research centers on improving the performance, quality, maintainability, usability, and capability of (scientific and mathematical) software. I am also affiliated with the Ontario Research Centre for Computer Algebra and the Symbolic Computation Laboratory at the University of Western Ontario.
Scientific. Algebraic. Symbolic. Software. Engineering.
My research takes a hardware-centric view of software performance engineering for scientific and data-intensive software. I am interested in high-performance computing, mathematical software design, and parallel computing, among other things. Applications of interest include simulation, gaming, graphics, physics, and astronomy.
When not in the office, you can find me rowing, cycling, or running around HRM.
Interested in Dalhousie Rowing?
My research interests focus on software engineering and software performance engineering for high-performance and scientific computing.
In a phrase: engineering for scientific computing. This includes the areas described below.
Scientific computing occurs at many levels and in many ecosystems, from exploratory computing and computational discovery (thanks Rob and Eunice), as in Jupyter and Python, to large-scale high-performance computing, for which you might use C++ or Julia.
One facet of my research is the interoperability, maintainability, and functionality of programming languages and ecosystems for scientific software. A particular interest is bringing symbolic computation into the mainstream (with or without users knowing it).
The BPAS (Basic Polynomial Algebra Subprograms) library is a symbolic computation library written in C/C++ for foundational operations on (multivariate) polynomials over various coefficient rings. This includes arithmetic, GCDs, (sub)resultants, and factorization.
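To give a flavour of these foundational operations, here is a minimal, self-contained C++ sketch of a univariate polynomial GCD over the rationals via the classical Euclidean algorithm. It is purely illustrative: the floating-point coefficients and function names are mine, not the BPAS API, and BPAS itself works with exact, multivariate arithmetic.

// Illustrative only: univariate polynomial GCD via the Euclidean algorithm.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

using Poly = std::vector<double>;  // poly[i] is the coefficient of x^i

// Drop (numerically) zero leading coefficients.
static void trim(Poly& p) {
    while (p.size() > 1 && std::fabs(p.back()) < 1e-9) p.pop_back();
}

// Remainder of a divided by b (b must be a nonzero polynomial).
static Poly rem(Poly a, const Poly& b) {
    trim(a);
    while (a.size() >= b.size() && !(a.size() == 1 && std::fabs(a[0]) < 1e-9)) {
        double c = a.back() / b.back();
        std::size_t shift = a.size() - b.size();
        for (std::size_t i = 0; i < b.size(); ++i) a[i + shift] -= c * b[i];
        trim(a);
        if (a.size() < b.size()) break;
    }
    return a;
}

// Monic GCD via the Euclidean algorithm: gcd(a, b) = gcd(b, a mod b).
static Poly gcd(Poly a, Poly b) {
    trim(a); trim(b);
    while (!(b.size() == 1 && std::fabs(b[0]) < 1e-9)) {
        Poly r = rem(a, b);
        a = b;
        b = r;
    }
    double lc = a.back();
    for (double& c : a) c /= lc;  // normalize to a monic polynomial
    return a;
}

int main() {
    Poly f = {-1, 0, 1};   // x^2 - 1
    Poly g = {2, -3, 1};   // x^2 - 3x + 2
    Poly d = gcd(f, g);    // expected: x - 1, printed as "-1 1"
    for (std::size_t i = 0; i < d.size(); ++i)
        std::cout << d[i] << (i + 1 < d.size() ? " " : "\n");
}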
This library includes advanced methods for solving systems of polynomial equations.
Triangular decomposition is a symbolic method for solving systems of polynomial equations. Geometrically, this method iteratively computes the intersection of hypersurfaces defined by each polynomial in the system. The output of this solver is an equidimensional decomposition of the variety of the polynomial system.
Of interest is the practical improvement of this method to minimize memory usage and execution time. The study of parallel algorithms and high-performance implementation techniques for triangular decomposition is ongoing.
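As a toy illustration of my own (not an example from the library), consider the system { x^2 + y^2 - 1 = 0, x - y = 0 } under the variable order x > y. Substituting x = y into the first equation yields a single triangular set:

\[
\{\, x^2 + y^2 - 1,\; x - y \,\} \;\longrightarrow\; T = \{\, 2y^2 - 1,\; x - y \,\},
\]

whose zero set is the two points (1/\sqrt{2}, 1/\sqrt{2}) and (-1/\sqrt{2}, -1/\sqrt{2}): solve 2y^2 - 1 = 0 for y, then x - y = 0 for x. For larger systems the output generally consists of several such triangular sets.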
Extracting parallelism from programs whose concurrency opportunities emerge dynamically (and vary with each problem instance) is a challenging task. Irregular parallelism describes this, along with cases where task and data dependencies cannot be determined a priori, or where there is load imbalance.
Taking this idea further, we get what I call unstructured parallelism: the case where the data and tasks to be executed concurrently are generated dynamically. A classic example that falls into this category is state-space search. I study ways to effectively exploit irregular parallelism in a problem-agnostic way.
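Here is a small, self-contained C++ sketch (my own illustration, not code from any of my projects) of this kind of dynamically generated task parallelism: an N-queens state-space search that spawns asynchronous tasks for branches near the root and recurses sequentially below a cutoff depth.

// The tasks (branches of the search tree) are created dynamically as the
// search proceeds; neither their number nor their cost is known in advance.
#include <cstdlib>
#include <future>
#include <iostream>
#include <vector>

static bool safe(const std::vector<int>& cols, int col) {
    int row = static_cast<int>(cols.size());
    for (int r = 0; r < row; ++r)
        if (cols[r] == col || std::abs(cols[r] - col) == row - r) return false;
    return true;
}

static long solve(std::vector<int> cols, int n, int parallelDepth) {
    int row = static_cast<int>(cols.size());
    if (row == n) return 1;
    long count = 0;
    std::vector<std::future<long>> tasks;   // dynamically generated work
    for (int col = 0; col < n; ++col) {
        if (!safe(cols, col)) continue;
        std::vector<int> next = cols;
        next.push_back(col);
        if (row < parallelDepth)            // spawn a task near the root...
            tasks.push_back(std::async(std::launch::async, solve, next, n, parallelDepth));
        else                                // ...otherwise recurse sequentially
            count += solve(next, n, parallelDepth);
    }
    for (auto& t : tasks) count += t.get();
    return count;
}

int main() {
    std::cout << solve({}, 10, 2) << "\n";  // 724 solutions for n = 10
}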
The first high-performance implementation of multivariate formal power series in a compiled language was developed in BPAS and later integrated into the computer algebra system Maple.
This work provides fundamental operations on power series as well as foundational algorithms such as Weierstrass preparation and Hensel's lemma, which, in this context, can be seen as a generalization of the classic Newton–Puiseux theorem.
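For context, here is the textbook statement of Weierstrass preparation (the standard theorem, not a description of the BPAS interface): if f \in k[[x_1, \ldots, x_n, y]] is such that f(0, \ldots, 0, y) is nonzero of order d in y, then f factors uniquely as

\[
f = \alpha \cdot p, \qquad p = y^d + a_{d-1}\, y^{d-1} + \cdots + a_1\, y + a_0,
\]

where \alpha is an invertible power series and each coefficient a_i \in k[[x_1, \ldots, x_n]] is a non-unit.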
The kernel launch parameters (i.e. the thread block configuration) of a CUDA kernel can influence execution time by orders of magnitude, yet thread block configurations are normally chosen by guesswork or intuition.
KLARAPTOR is a tool for dynamically finding optimal kernel launch parameters for CUDA programs. It is built on LLVM and uses compile-time analysis so that, at run time, it can automatically determine kernel launch parameters (i.e. thread block configurations) which produce (nearly) optimal kernel execution time for each individual kernel invocation.
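As a toy illustration (not KLARAPTOR itself, and with a hypothetical vector-addition kernel), the snippet below times the same CUDA kernel under two different thread block configurations; the block size is exactly the kind of launch parameter KLARAPTOR chooses automatically.

// CUDA C++ sketch: same kernel, two launch configurations, two timings.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

static float timeLaunch(int blockSize, const float* a, const float* b, float* c, int n) {
    int gridSize = (n + blockSize - 1) / blockSize;   // one thread per element
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    vecAdd<<<gridSize, blockSize>>>(a, b, c, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

int main() {
    const int n = 1 << 24;
    float *a, *b, *c;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMalloc(&c, n * sizeof(float));
    // Same kernel, two thread block configurations; the timings can differ widely.
    std::printf("block =  32: %.3f ms\n", timeLaunch(32,  a, b, c, n));
    std::printf("block = 256: %.3f ms\n", timeLaunch(256, a, b, c, n));
    cudaFree(a); cudaFree(b); cudaFree(c);
}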
CSCI 2134-01: T/R 16:00-17:30
CSCI 2134-02: M/W 14:30-16:00
I wrote an online, open-source textbook for discrete mathematics: Discrete Structures for Computing.
If you are a student (undergraduate, graduate, prospective graduate, CS, non-CS, math, physics, whatever) who is interested in any of my research areas, please reach out to me. I have both undergraduate and graduate positions available. For local Dalhousie students, come find me in my office and talk any time (see my public calendar linked below).
If you're not local, send me a real email (make it personable, not generic formal business language). Mention your academic or work background, what research areas you're interested in, any specific projects of mine you may want to work on, or any specific ideas you already have that relate to my areas. No transcript or CV needed, but a well-written email goes a long way as a first impression. Show that you're serious and that you've done your research. Due to the volume of email I receive (and my own attempt at work-life balance), only those who are a good fit may receive a reply.