Selected Completed Honours Theses: Evolutionary Computation
Lauren Galbraith
Impact of indexed memory on performance in linear genetic programming for classification tasks:
December 2023
Abstract: In this work, the performance of a Linear Genetic Programming (LGP) model in
the context of classification tasks is assessed with and without the use of indexed
memory. Using a virtual machine with a fixed instruction set, the model addresses
various classification problems. Read and Write instructions are introduced to the
model for accessing a single shared instance of memory. Ultimately, the incorporation
of indexed memory not only allows the model to achieve a more diverse distribution of
results, often corresponding to higher accuracy scores, but also reveals a strongly
task-dependent relationship between the volume and frequency of memory accesses.
Furthermore, the degree to which memory improves performance is shown
to be highly task-dependent, emphasizing the nuanced impact of memory on the
model’s adaptability and effectiveness in diverse classification scenarios.
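To make the mechanism concrete, the following is a minimal sketch, not code from the thesis, of how Read and Write instructions could expose a single shared indexed memory inside an LGP virtual machine. All names (Instr, Vm, MEM_SIZE, NUM_REGS) and the register-value-to-index mapping are illustrative assumptions.

```rust
// Hypothetical sketch of an LGP virtual machine with a fixed instruction
// set extended by Read/Write access to one shared indexed memory.

const MEM_SIZE: usize = 8;
const NUM_REGS: usize = 4;

#[derive(Clone, Copy)]
enum Instr {
    Add { dst: usize, a: usize, b: usize }, // r[dst] = r[a] + r[b]
    Write { src: usize, idx: usize },       // mem[r[idx] mod MEM_SIZE] = r[src]
    Read { dst: usize, idx: usize },        // r[dst] = mem[r[idx] mod MEM_SIZE]
}

struct Vm {
    regs: [f64; NUM_REGS],
    mem: [f64; MEM_SIZE], // the single shared instance of indexed memory
}

impl Vm {
    fn new(inputs: &[f64]) -> Self {
        let mut regs = [0.0; NUM_REGS];
        for (r, x) in regs.iter_mut().zip(inputs) {
            *r = *x; // seed registers with the instance's features
        }
        Vm { regs, mem: [0.0; MEM_SIZE] }
    }

    fn exec(&mut self, program: &[Instr]) -> f64 {
        for instr in program {
            match *instr {
                Instr::Add { dst, a, b } => self.regs[dst] = self.regs[a] + self.regs[b],
                Instr::Write { src, idx } => {
                    // Map the register value onto a valid memory index.
                    let i = (self.regs[idx].abs() as usize) % MEM_SIZE;
                    self.mem[i] = self.regs[src];
                }
                Instr::Read { dst, idx } => {
                    let i = (self.regs[idx].abs() as usize) % MEM_SIZE;
                    self.regs[dst] = self.mem[i];
                }
            }
        }
        self.regs[0] // register 0 taken as the classification output
    }
}

fn main() {
    let program = [
        Instr::Write { src: 1, idx: 0 },
        Instr::Add { dst: 0, a: 0, b: 1 },
        Instr::Read { dst: 2, idx: 0 },
        Instr::Add { dst: 0, a: 0, b: 2 },
    ];
    let mut vm = Vm::new(&[1.0, 2.0]);
    println!("output = {}", vm.exec(&program));
}
```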
Urmzd Mukhammadnaim
Reinforced linear genetic programming:
April 2023
Abstract: Linear Genetic Programming (LGP) is a powerful technique that allows a variety of
problems to be solved using a linear representation of programs. However, some limitations
remain, such as the need for humans to explicitly map registers
to actions. This thesis proposes a novel approach, Reinforced Linear Genetic Programming
(RLGP), which applies Q-Learning on top of LGP to learn optimal register-action assignments.
In doing so, we introduce a new framework, ‘linear-gp’, written in memory-safe
Rust that enables extensive experimentation in future work.
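As an illustration of the idea only, not the RLGP implementation itself, the sketch below uses a tabular, bandit-style Q-update to score register-action assignments rather than fixing the mapping by hand. The state/action framing (state = the register holding the winning value, action = the class label assigned to it) and all names are assumptions.

```rust
// Hypothetical sketch: learning a register-to-action mapping with a
// one-step tabular Q-update instead of a hand-specified assignment.

const NUM_REGS: usize = 4;
const NUM_ACTIONS: usize = 3;

fn q_update(
    q: &mut [[f64; NUM_ACTIONS]; NUM_REGS],
    reg: usize,
    action: usize,
    reward: f64,
    alpha: f64,
) {
    // Bandit-style update: no successor state, so no discounted term.
    q[reg][action] += alpha * (reward - q[reg][action]);
}

fn best_action(q: &[[f64; NUM_ACTIONS]; NUM_REGS], reg: usize) -> usize {
    (0..NUM_ACTIONS)
        .max_by(|&a, &b| q[reg][a].partial_cmp(&q[reg][b]).unwrap())
        .unwrap()
}

fn main() {
    let mut q = [[0.0; NUM_ACTIONS]; NUM_REGS];
    // Suppose register 2 held the largest value and class 1 was correct.
    q_update(&mut q, 2, 1, 1.0, 0.1);
    println!("register 2 now maps to action {}", best_action(&q, 2));
}
```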
John Doucette
Novelty-based Fitness Measures in Genetic Programming:
April 2010
Abstract: The utility of the class of non-qualitative fitness measures known as "novelty-based"
measures is considered with respect to Genetic Programming (GP). Previously used
benchmarks from GP and from other work on novelty-based search heuristics are
used. The resulting data suggest that novelty-based measures may be useful when
solution robustness is essential, but that overall, they may not be as powerful in the
context of GP as previous work suggests they are in other areas of machine learning.
Introducing factors from other machine learning models in which novelty-based
measures previously performed well either did not improve the performance of GP
under novelty-based measures or produced inconclusive results, depending
on the benchmark problem used.
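For context, a common form of novelty-based measure, though not necessarily the exact measure studied in this thesis, scores an individual by the mean distance from its behaviour descriptor to its k nearest neighbours in the population and archive. A minimal sketch, with all names illustrative:

```rust
// Hypothetical novelty score: mean Euclidean distance to the k nearest
// neighbours of `behaviour` among the descriptors in `others`.

fn novelty(behaviour: &[f64], others: &[Vec<f64>], k: usize) -> f64 {
    let mut dists: Vec<f64> = others
        .iter()
        .map(|o| {
            behaviour
                .iter()
                .zip(o)
                .map(|(a, b)| (a - b).powi(2))
                .sum::<f64>()
                .sqrt()
        })
        .collect();
    dists.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let k = k.min(dists.len()); // guard against small populations
    dists[..k].iter().sum::<f64>() / k as f64
}

fn main() {
    let pop = vec![vec![0.0, 0.0], vec![1.0, 1.0], vec![5.0, 5.0]];
    println!("novelty = {:.3}", novelty(&[0.5, 0.5], &pop, 2));
}
```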