Selected Completed Honours Theses: Evolutionary Computation
Bryce MacInnis (Code)
PyTPG: A distributed and multithreaded implementation of tangled program graphs for reinforcement learning:
August 2024
Abstract: Tangled Program Graphs (TPG) have been shown to be competitive with Deep Q-learning Networks (DQN) at reinforcement learning benchmarks such as OpenAI’s Gym tasks and games in the Arcade Learning Environment (ALE). This paper presents pytpg, a multi-threaded and distributed open-source implementation of the Tangled Program Graph genetic model. Previous single-threaded implementations of TPG are limited by their slow training performance. This paper provides details about the implementation of pytpg such that it can be reproduced by others, as well as a benchmark of its performance against its single-threaded counterpart. The benchmarks used for this paper are the CartPole-v1, LunarLander-v2, and CarRacing-v2 tasks from OpenAI’s Gym. The metrics used for benchmarking are throughput, time versus generation, and CPU utilization.
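The speed-up pytpg targets comes from the fact that fitness evaluation in TPG is embarrassingly parallel: each root team's Gym episodes are independent of every other agent's. The sketch below shows that pattern with Python's multiprocessing; it is an illustration of the idea only, not the pytpg API, and the agent.act() interface is a hypothetical stand-in.

    from multiprocessing import Pool

    import gym  # needs gym >= 0.26 (or gymnasium) for the 5-tuple step API


    def evaluate(agent):
        """Run one CartPole-v1 episode; return the agent's total reward."""
        env = gym.make("CartPole-v1")
        obs, _info = env.reset()
        total, terminated, truncated = 0.0, False, False
        while not (terminated or truncated):
            obs, reward, terminated, truncated, _info = env.step(agent.act(obs))
            total += reward
        env.close()
        return total


    def evaluate_generation(population, workers=8):
        # Episodes are independent, so each generation's evaluations map
        # cleanly onto a process pool; this is where the speed-up comes from.
        with Pool(processes=workers) as pool:
            return pool.map(evaluate, population)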
Lauren Galbraith
Impact of indexed memory on performance in linear genetic programming for classification tasks:
December 2023
Abstract: In this work, the performance of a Linear Genetic Programming (LGP) model in the context of classification tasks is assessed with and without the use of indexed memory. Using a virtual machine with a fixed instruction set, the model addresses various classification problems. Read and Write instructions are introduced to the model for accessing a single shared instance of memory. Ultimately, the incorporation of indexed memory not only allows the model to achieve a more diverse distribution of results, often corresponding to higher accuracy scores, but also reveals a strongly task-dependent relationship between the volume and frequency of memory accesses. Furthermore, the degree to which memory improves performance is also demonstrated to be highly task-dependent, emphasizing the nuanced impact of memory on the model's adaptability and effectiveness in diverse classification scenarios.
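To make the mechanism concrete, the sketch below is a toy register-machine interpreter extended with Read and Write instructions over one shared indexed memory, with indices computed from register values. The opcode names and program encoding are illustrative assumptions, not the thesis's actual instruction set.

    MEM_SIZE = 64
    memory = [0.0] * MEM_SIZE  # single shared instance of indexed memory


    def run(program, registers):
        """Execute (op, dst, src) triples over the register file."""
        for op, dst, src in program:
            if op == "add":
                registers[dst] += registers[src]
            elif op == "mul":
                registers[dst] *= registers[src]
            elif op == "read":
                # index derived from a register value, wrapped into range
                idx = int(abs(registers[src])) % MEM_SIZE
                registers[dst] = memory[idx]
            elif op == "write":
                idx = int(abs(registers[dst])) % MEM_SIZE
                memory[idx] = registers[src]
        return registers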
Urmzd Mukhammadnaim
Reinforced linear genetic programming:
April 2023
Abstract: Linear Genetic Programming (LGP) is a powerful technique that allows a variety of problems to be solved using a linear representation of programs. However, there still exist some limitations to the technique, such as the need for humans to explicitly map registers to actions. This thesis proposes a novel approach, Reinforced Linear Genetic Programming (RLGP), that uses Q-Learning on top of LGP to learn the optimal register-action assignments. In doing so, we introduce a new framework, ‘linear-gp’, written in memory-safe Rust that allows for extensive experimentation in future work.
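One plausible reading of that idea, sketched below in Python for consistency with the other examples (the thesis's linear-gp framework itself is Rust): keep tabular Q-values over (register, action) pairs and nudge them toward observed reward, so the register-to-action map is learned rather than fixed by hand. The update rule and interfaces here are assumptions, not the thesis's actual algorithm.

    import random

    N_REGISTERS, N_ACTIONS = 8, 4
    ALPHA, EPSILON = 0.1, 0.1
    q = [[0.0] * N_ACTIONS for _ in range(N_REGISTERS)]


    def select_action(winning_register):
        """Map the register with the largest program output to an action."""
        if random.random() < EPSILON:  # occasionally explore other assignments
            return random.randrange(N_ACTIONS)
        row = q[winning_register]
        return max(range(N_ACTIONS), key=row.__getitem__)


    def update(winning_register, action, reward):
        # Bandit-style Q-update toward the observed episode reward.
        q[winning_register][action] += ALPHA * (reward - q[winning_register][action])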
John Doucette
Novelty-based Fitness Measures in Genetic Programming:
April 2010
Abstract: The utility of the class of non-qualitative fitness measures known as "novelty-based" measures is considered with respect to Genetic Programming (GP). Previously used benchmarks from GP and from other work on novelty-based search heuristics are used. The resulting data suggest that novelty-based measures may be useful when solution robustness is essential, but that overall, they may not be as powerful in the context of GP as previous work suggests they are in other areas of machine learning. Introducing factors from other machine learning settings in which novelty-based measures had previously performed well either did not improve the performance of GP under novelty-based measures or produced inconclusive results, depending on the benchmark problem used.
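For context, the usual concrete form of a novelty-based measure is the sparseness metric from Lehman and Stanley's novelty search: an individual scores highly when its behaviour descriptor lies far from its k nearest neighbours in the current population and archive. A minimal sketch, assuming problem-specific behaviour descriptors have already been computed:

    import math


    def novelty(descriptor, others, k=15):
        """Mean Euclidean distance to the k nearest behaviour descriptors."""
        dists = sorted(math.dist(descriptor, other) for other in others)
        nearest = dists[:k]
        return sum(nearest) / len(nearest)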