A Mechanism for Attending to 'Unusual' Object Properties, and its Role in a Model of Object Categorization
Gorman, Christopher

Cite this item:
Gorman, C. (2019). A Mechanism for Attending to ‘Unusual’ Object Properties, and its Role in a Model of Object Categorization (Thesis, Doctor of Philosophy). University of Otago. Retrieved from http://hdl.handle.net/10523/8922
Permanent link to OUR Archive version:
http://hdl.handle.net/10523/8922
Abstract:
Categorization is one of the most fundamental aspects of human cognition. From soon after birth, we form categories from most of our sensory inputs. Stevan Harnad (2005) famously said, “To cognize is to categorize: cognition is categorization”. Every concrete concept that features in our mental processes is an abstraction, the product of a categorization operation.
While categorization has been studied for decades, many important questions remain unanswered. One key question is how infants learn a hierarchy of categories. In this thesis, we argue that the mechanisms underlying hierarchical category development are related to the mechanisms controlling attention to object properties. Specifically, we propose that both mechanisms rest on a single operation, which we call property-level inhibition of return (IOR), inspired by the inhibition-of-return process in the visual system. Assume that a token object has been classified as a certain type T. The IOR operation compares the properties of the token object to the properties of the 'prototypical' object of type T, to identify what (if anything) makes it unusual as an instance of this type. We suggest that this IOR operation is what leads to the apprehension of isolated properties in an object. We also posit a role for the operation in learning a hierarchy of object categories: learning finer-grained categories involves systematic attention to the unusual properties of objects, so that sub-types of object with similar unusual properties are identified. Throughout, usual and unusual properties are defined by the statistical regularities encoded by category prototypes.
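To make the operation concrete, here is a minimal sketch of property-level IOR, assuming objects are described by named scalar properties and a fixed deviation threshold; the property names, values, and threshold are illustrative assumptions, not taken from the thesis.

```python
def unusual_properties(token, prototype, threshold=0.5):
    """Return the properties of a token that deviate from its category prototype.

    `token` and `prototype` map property names to values in [0, 1]
    (degree to which the property is present). Properties close to the
    prototype are 'inhibited'; only atypical ones are returned for attention.
    """
    unusual = {}
    for prop, typical_value in prototype.items():
        deviation = abs(token.get(prop, 0.0) - typical_value)
        if deviation > threshold:       # atypical for this category: attend to it
            unusual[prop] = deviation
    return unusual


# Hypothetical example: a token classified as type 'dog' with an unusual colour.
dog_prototype = {"four_legs": 1.0, "barks": 0.9, "green": 0.0}
token_dog = {"four_legs": 1.0, "barks": 1.0, "green": 1.0}
print(unusual_properties(token_dog, dog_prototype))   # -> {'green': 1.0}
```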
We first develop a proof-of-concept unsupervised neural network model of this hypothesis. The model is trained on token instances of objects, each belonging to a basic-level category and a subordinate-level category. Mirroring experimental data on human category learning, the model first learns the strong correlations that define the basic-level categories, and only later begins to learn the subtler correlations that define the subordinate-level categories. Without the inhibition operation, the network is only ever able to learn the basic-level categories.
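As a schematic illustration of the idea (not the thesis's actual architecture), the sketch below shows how an inhibition step could expose subordinate-level structure: once a token is matched to its basic-level prototype, the dimensions the prototype already explains are subtracted away, and a second set of prototypes learns from the residual. The prototype counts, learning rate, and initialization are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest(prototypes, x):
    """Index of the prototype closest to x (Euclidean distance)."""
    return int(np.argmin(((prototypes - x) ** 2).sum(axis=1)))

def train(tokens, n_basic=2, n_sub=4, lr=0.05, epochs=50):
    """Learn basic-level prototypes from tokens, then subordinate prototypes
    from the residual left once the basic-level prototype is 'inhibited'."""
    dim = tokens.shape[1]
    basic = rng.normal(0.5, 0.1, (n_basic, dim))
    sub = rng.normal(0.0, 0.1, (n_sub, dim))
    for _ in range(epochs):
        for x in tokens:
            b = nearest(basic, x)
            basic[b] += lr * (x - basic[b])      # strong, basic-level correlations
            residual = x - basic[b]              # inhibition: remove what the prototype predicts
            s = nearest(sub, residual)
            sub[s] += lr * (residual - sub[s])   # subordinate prototypes learn what remains
    return basic, sub
```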
The next model aims to give a more neurally plausible account of category learning than the first, using Hopfield-type neural networks. These networks reliably learn prototypes from abstract, distributed representations of token objects. We also show that, using information inherent in the network's execution, we can determine whether a given state of the network represents a token, a category, or is spurious.
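A minimal Hopfield-network sketch of this setup, assuming ±1 patterns, Hebbian storage, and asynchronous updates; the energy of a settled state is one example of 'execution-inherent' information, and using it to separate token, category, and spurious attractors is an illustrative assumption here rather than the thesis's exact criterion.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for a set of +/-1 patterns (one pattern per row)."""
    n = patterns.shape[1]
    weights = patterns.T @ patterns / n
    np.fill_diagonal(weights, 0.0)        # no self-connections
    return weights

def settle(weights, state, steps=500, rng=np.random.default_rng(1)):
    """Run a fixed number of random asynchronous single-unit updates."""
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(len(state))
        state[i] = 1 if weights[i] @ state >= 0 else -1
    return state

def energy(weights, state):
    """Hopfield energy; attractors (tokens, prototypes, spurious states) differ in depth."""
    return -0.5 * state @ weights @ state
```

In such a sketch, several token patterns from one category would be stored, a noisy probe allowed to settle, and the energy and overlap of the settled state read out as signals about what kind of attractor has been reached.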
Finally, we address an important precondition for our IOR-based model of category learning. The IOR operation only works if object properties are represented in a 'localist' scheme, in which different properties are represented by separate populations of neurons. The dominant model of visual object classification, the deep convolutional network, learns highly distributed representations of visual object properties. We present a variant of the deep convolutional network that uses Kohonen's self-organizing maps (SOMs) as its core learning mechanism, rather than error backpropagation (which is biologically questionable). The SOM-based network preserves some of the important properties of convolutional networks (hierarchical structure, spatial abstraction), but delivers localist representations of visual object properties rather than distributed ones. This model cannot yet reliably learn categories, but it provides an interesting avenue for further research.
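For reference, a minimal Kohonen SOM learning rule of the kind substituted for backpropagation is sketched below; the grid size, learning-rate schedule, and neighbourhood function are illustrative assumptions, and the convolutional wiring (applying such maps over local image patches at each layer) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0):
    """Train a 2-D Kohonen SOM: each input pulls its best-matching unit and
    that unit's grid neighbours towards it, yielding a localist map of the data."""
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in data:
            frac = 1.0 - t / n_steps
            lr = lr0 * frac                      # learning rate decays over training
            sigma = sigma0 * frac + 1e-3         # neighbourhood shrinks over training
            # best-matching unit: the node whose weight vector is closest to x
            dists = ((weights - x) ** 2).sum(axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood centred on the BMU
            d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            neighbourhood = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * neighbourhood[..., None] * (x - weights)
            t += 1
    return weights
```

The winning unit for a given input provides the localist property representation that the IOR operation requires, in contrast to the distributed codes learned by backpropagation-trained convolutional layers.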
Date:
2019
Advisor:
Knott, Alistair; Robins, Anthony; Szymanski, Lech
Degree Name:
Doctor of Philosophy
Degree Discipline:
Computer Science
Publisher:
University of Otago
Keywords:
Cognitive models; neural networks; categorization; prototype theory; Hopfield networks; self-organizing maps
Research Type:
Thesis
Languages:
English
Collections:
- Computer Science
- Thesis - Doctoral