Show simple item record

dc.contributor.advisor: Dick, Grant
dc.contributor.advisor: Cranefield, Stephen
dc.contributor.author: Williams, Paul James
dc.date.available: 2021-06-30T20:45:20Z
dc.date.copyright: 2021
dc.identifier.citation: Williams, P. J. (2021). Ensemble learning through cooperative evolutionary computation (Thesis, Doctor of Philosophy). University of Otago. Retrieved from http://hdl.handle.net/10523/12072
dc.identifier.uri: http://hdl.handle.net/10523/12072
dc.description.abstract: Building ensembles of classifiers is an active area of research in machine learning, with the fundamental goal of combining the predictions of multiple classifiers to improve prediction accuracy over that of an individual classifier. In theory, combining classifiers in an ensemble can improve prediction results by compensating for one classifier's weaknesses in certain areas with the better accuracy of other individuals in the same areas. Typical ensemble learning approaches require extensive computation to train and combine multiple models into a single solution. A key question in ensemble learning is: given total computational effort roughly equivalent to that of a single monolithic solution, can an ensemble learner achieve comparable or better performance? In this thesis, a comparison is made between a single complex monolithic agent and an ensemble of many simpler agents evolved using equivalent computational effort. To do this, a framework is constructed that enables the comparison of a monolithic approach using complex agents with an ensemble approach made up of simple agents. This framework is then applied to buying and selling stocks on a simulated stock market, comparing how well the two approaches classify stock data into decisions on when to buy and sell. The framework involves creating a population of agents. These are “decision making agents” (DMAs), which evaluate a data source and decide at each time step whether to trade or hold a stock. In many learning problems, such as the stock trading example used in this thesis, the suitability of a model is measured at a macroscopic level, aggregated over multiple decision actions. These problems are not well-suited to traditional learning methods, so evolutionary computation (EC) is frequently used to build machine learning models in these situations. Historically, most EC approaches use a single population to evolve a single solution.
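The abstract describes DMAs as decision trees built from logic operators over stock indicators, each emitting a trade/hold decision per time step. A minimal sketch of one such agent, assuming hypothetical indicator names and thresholds that are purely illustrative and not taken from the thesis:

```python
# Hypothetical sketch of a decision-making agent (DMA): a small decision
# tree whose internal nodes are logic operators (function primitives) over
# stock indicators (terminal primitives), and whose output at each time
# step is a trade/hold action. Indicator names ("short_ma", "long_ma",
# "rsi") and thresholds are assumptions for illustration only.

def simple_dma(indicators):
    """Return 'trade' or 'hold' for one time step.

    `indicators` maps indicator names to their current values.
    """
    # Root node: AND over two comparisons of indicator terminals.
    if indicators["short_ma"] > indicators["long_ma"] and indicators["rsi"] < 70:
        return "trade"
    return "hold"

# One time step of simulated indicator data.
step = {"short_ma": 101.5, "long_ma": 100.2, "rsi": 55.0}
print(simple_dma(step))  # trade
```

In the evolutionary setting, such a tree's structure and constants would be evolved by genetic programming rather than hand-written as here.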
A more recent branch of EC research emphasises the use of cooperative co-evolution, where the required solution is decomposed into several sub-components and multiple populations are used in parallel to evolve these simultaneously. There are strong analogies between the divide-and-conquer strategies of cooperative co-evolution and the building of ensembles in traditional machine learning. In this thesis, a cooperative co-evolution approach is used to evolve a solution: genetic programming evolves individual populations, which are then combined as an ensemble. The agents in the individual populations are evolved with a standard genetic programming approach, where the DMAs are decision trees made up of logic operators (function primitives) and stock indicators (terminal primitives). DMAs are used for both the monolithic and ensemble algorithms, but the size of the DMAs varies and the way they are evaluated differs. With the monolithic approach, only a single population is used, but the agents in that population evolve to have greater complexity than the agents in the ensemble approach. With the cooperative co-evolution ensemble approach, n populations are created and evolved independently, but they are evaluated together using majority voting. The agents used in the ensemble approach are allowed only 1/n of the nodes that the monolithic agent can have, reducing the ensemble’s total complexity to a level similar to that of the monolithic approach. With this framework, this thesis suggests that an ensemble of simple agents using variance reduction performs as well as, and in most cases better than, a complex monolithic agent. The variance reduction process resembles that of bagging, with majority voting within the ensemble damping down the behaviour of over-active, risky models to reduce the error component attributable to these risky actions. This variance reduction behaviour was not by design, but was an emergent property.
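The joint evaluation described above, where one agent drawn from each of the n independently evolved populations votes on the action, can be sketched as plain majority voting. The agents below are hypothetical stand-ins (simple threshold functions rather than evolved decision trees), used only to show the combination rule:

```python
from collections import Counter

def majority_vote(agents, observation):
    """Combine one agent per subpopulation by majority vote on the action."""
    votes = [agent(observation) for agent in agents]
    action, _count = Counter(votes).most_common(1)[0]
    return action

# Three hypothetical agents, one drawn from each of n = 3 populations.
# Real agents would be GP-evolved decision trees, each limited to 1/n of
# the monolithic agent's node budget.
agents = [
    lambda x: "trade" if x > 0.5 else "hold",
    lambda x: "trade" if x > 0.3 else "hold",
    lambda x: "hold",
]
print(majority_vote(agents, 0.6))  # trade (2 of 3 vote trade)
print(majority_vote(agents, 0.4))  # hold  (2 of 3 vote hold)
```

The bagging-like variance reduction the thesis reports arises naturally here: a single over-active agent voting "trade" is outvoted unless at least one other agent agrees.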
The robustness of these findings is examined under multiple conditions, including key parameters pertaining to ensemble learning. Among these are population size and ensemble size, which are examined in this work to gain insights into an optimal set of parameter values. To ensure that the insights into ensemble learning generalise beyond the examined stock trading problem, an alternative, unrelated problem suitable for a cooperative approach is then tested in a similar way. This is the Tartarus problem, in which agents use finite state machines (FSMs) to maintain internal state. Previous work applying cooperative co-evolutionary methods to the Tartarus problem focused on decomposition of a single FSM and met with limited success. The co-evolutionary approach used here instead builds an ensemble of smaller FSMs, each voting on the best action to take. This configuration reduces the computational effort in the mutation operator, allowing an ensemble with more total states to be used for the same overall computational effort. In this context, the approach improves on previous cooperative research and shows that some findings are transferable between applications when using the ensemble approach presented in this research.
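The Tartarus variant above replaces one large FSM with an ensemble of smaller FSMs that vote on each action. A minimal sketch, assuming hypothetical observations ("wall"/"clear") and actions ("forward"/"turn") that simplify the actual Tartarus grid world:

```python
from collections import Counter

class FSMAgent:
    """Minimal finite state machine.

    `transitions[state][observation]` gives (action, next_state).
    """
    def __init__(self, transitions, start=0):
        self.transitions = transitions
        self.state = start

    def step(self, observation):
        action, self.state = self.transitions[self.state][observation]
        return action

# Two tiny hypothetical FSMs; evolved FSMs would have many more states,
# but each ensemble member stays small, keeping mutation cheap.
fsm_a = FSMAgent({
    0: {"wall": ("turn", 1), "clear": ("forward", 0)},
    1: {"wall": ("turn", 1), "clear": ("forward", 0)},
})
fsm_b = FSMAgent({
    0: {"wall": ("turn", 0), "clear": ("forward", 0)},
})

def ensemble_action(agents, observation):
    """Each FSM advances one step; the majority action is taken."""
    votes = Counter(agent.step(observation) for agent in agents)
    return votes.most_common(1)[0][0]

print(ensemble_action([fsm_a, fsm_b], "clear"))  # forward
print(ensemble_action([fsm_a, fsm_b], "wall"))   # turn
```

Because mutation cost typically scales with the size of a single FSM's transition table, splitting the state budget across several small machines is what lets the ensemble carry more total states for the same computational effort.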
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: University of Otago
dc.rights: All items in OUR Archive are provided for private study and research purposes and are protected by copyright with all rights reserved unless otherwise indicated.
dc.subject: Coevolution
dc.subject: evolution strategies
dc.subject: genetic algorithms
dc.subject: emergent decomposition
dc.subject: ensemble learning
dc.title: Ensemble learning through cooperative evolutionary computation
dc.type: Thesis
dc.date.updated: 2021-06-25T23:54:54Z
dc.language.rfc3066: en
thesis.degree.discipline: Information Science
thesis.degree.name: Doctor of Philosophy
thesis.degree.grantor: University of Otago
thesis.degree.level: Doctoral
otago.openaccess: Open
otago.evidence.present: Yes