Show simple item record

dc.contributor.advisorDeng, Jeremiah
dc.contributor.advisorWoodford, Brendon
dc.contributor.authorShah, Syed Munir Hussain
dc.date.available2014-05-06T20:36:09Z
dc.date.copyright2014
dc.identifier.citationShah, S. M. H. (2014). Adaptive foreground-background segmentation of complex video scenes (Thesis, Doctor of Philosophy). University of Otago. Retrieved from http://hdl.handle.net/10523/4799
dc.identifier.urihttp://hdl.handle.net/10523/4799
dc.description.abstractMoving object detection and visual analysis are active research topics due to growing demand in automatic sports video analysis and intelligent video surveillance. An early, precise distinction between foreground and background regions reduces the amount of data to be processed, thus speeding up subsequent high-level computer vision tasks. Background subtraction is a commonly applied method for foreground detection in complex video scenes. Typical real-world conditions such as illumination variations, dynamic backgrounds, camera shake, intermittent object motion, and sensor noise confound traditional background subtraction methods. Furthermore, most of these applications impose an additional real-time processing constraint. In this thesis, we present a set of methods and model enhancements to handle these issues. More specifically, we improve the existing CodeBook (CB) and Mixture of Gaussians (MoG) models to handle temporally irregular background motion and illumination changes. Two real-time, self-adaptive background models, i.e. the Self-Adaptive CodeBook (SACB) and the CodeBook-based Mixture of Gaussians (CB-MoG), are presented for reliable moving object segmentation in unconstrained videos. The proposed online models can automatically estimate sensor noise and model parameters. They also exploit spatial information and keep the model compact to achieve real-time processing without compromising accuracy. The CB-MoG background model combines the benefits of both the CB and MoG approaches and performs robustly on dynamic backgrounds. To address the limitations of the CB-MoG model, a Growing Neural Gas (GNG) algorithm is adopted and modified to form a new background model, which provides a number of advantages over traditional methods.
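For readers unfamiliar with the MoG family of models that SACB and CB-MoG build on, the following is a minimal single-pixel Mixture-of-Gaussians sketch in the classic Stauffer-Grimson style. The class name and all parameter values (the number of modes, learning rate, 2.5-sigma match test, and background weight ratio) are illustrative assumptions for this example, not the settings used in the thesis:

```python
class PixelMoG:
    """Toy per-pixel Mixture-of-Gaussians background model (grayscale)."""

    def __init__(self, k=3, alpha=0.05, var0=225.0, bg_ratio=0.7):
        self.alpha = alpha          # learning rate
        self.var0 = var0            # initial variance for newly created modes
        self.bg_ratio = bg_ratio    # weight mass treated as background
        # each mode is [weight, mean, variance]
        self.modes = [[1.0 / k, 128.0, var0] for _ in range(k)]

    def update(self, x):
        """Update the mixture with intensity x; return True if x is background."""
        matched = None
        for m in self.modes:
            if (x - m[1]) ** 2 <= 6.25 * m[2]:   # within 2.5 standard deviations
                matched = m
                break
        if matched is None:
            # no mode explains x: replace the least-weighted mode
            self.modes.sort(key=lambda m: m[0])
            self.modes[0] = [self.alpha, float(x), self.var0]
        else:
            matched[1] += self.alpha * (x - matched[1])
            matched[2] += self.alpha * ((x - matched[1]) ** 2 - matched[2])
        # decay weights, boost the matched mode, then renormalise
        for m in self.modes:
            m[0] = (1 - self.alpha) * m[0] + (self.alpha if m is matched else 0.0)
        total = sum(m[0] for m in self.modes)
        for m in self.modes:
            m[0] /= total
        # background = the highest-weight modes covering bg_ratio of the mass
        self.modes.sort(key=lambda m: -m[0])
        mass = 0.0
        for m in self.modes:
            if m is matched:
                return True
            mass += m[0]
            if mass >= self.bg_ratio:
                break
        return False
```

After many frames of a stable intensity, that value is classified as background, while a sudden departure (e.g. a passing object) is flagged as foreground.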
Firstly, the GNG-based algorithm can grow and shrink according to the data distribution, so the number of nodes does not need to be specified in advance. Secondly, its inherently adaptive mechanism can model changing data distributions. It should be noted that the proposed models do not require foreground-free training samples for initialisation. Moreover, a new framework for background subtraction algorithms and techniques to handle four key challenges are presented. A novel hierarchical SURF feature-matching algorithm is introduced to handle local illumination changes and suppress ghosts in the foreground map, showing an advantage over traditional shadow detection methods. Also, a new method to detect strong shadows is introduced, in which local statistics of image patches are used to differentiate between strong shadows and valid foreground objects. Furthermore, a frame-level method is presented to detect sudden or abrupt illumination changes in the scene. The proposed method can reuse most of the learned components and reduces a large number of false positives by quickly adapting to the changed conditions. We also present a frame-difference approach to detect paused objects, together with a new background update algorithm that prevents incorporating them into the background. Furthermore, the proposed voting-based scheme intelligently uses spatial and temporal information to further refine the foreground map. It should be noted that these techniques are general and can be integrated into any background model. The proposed models are extensively evaluated on challenging datasets; the results show that they effectively handle most of the difficult challenges in real-world videos and achieve significant performance improvements over existing methods.
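The frame-difference idea for paused-object detection mentioned above can be illustrated with a toy sketch: a pixel that differs from the background reference but stays stable across consecutive frames accumulates a "static" count and is eventually labelled paused. The function name, thresholds, counter length, and 1-D pixel lists are all assumptions for illustration, not the thesis's actual method:

```python
def classify(prev, curr, background, counters, diff_thr=20, static_frames=3):
    """Label each pixel 'bg', 'moving', or 'paused'; update static counters in place."""
    labels = []
    for i in range(len(curr)):
        if abs(curr[i] - background[i]) <= diff_thr:
            counters[i] = 0                      # matches the background reference
            labels.append('bg')
        elif abs(curr[i] - prev[i]) <= diff_thr:
            counters[i] += 1                     # foreground, but static between frames
            labels.append('paused' if counters[i] >= static_frames else 'moving')
        else:
            counters[i] = 0                      # foreground and still changing
            labels.append('moving')
    return labels
```

A 'paused' label can then gate the model update so a stopped object is not absorbed into the background, in the spirit of the background update algorithm described above.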
dc.language.isoen
dc.publisherUniversity of Otago
dc.rightsAll items in OUR Archive are provided for private study and research purposes and are protected by copyright with all rights reserved unless otherwise indicated.
dc.subjectBackground
dc.subjectsubtraction
dc.subjectforeground
dc.subjectvideo
dc.subjectprocessing
dc.subjectMixture
dc.subjectGaussian
dc.subjectCodeBook
dc.subjectNeural
dc.subjectGas
dc.titleAdaptive foreground-background segmentation of complex video scenes
dc.typeThesis
dc.date.updated2014-05-06T03:57:31Z
dc.language.rfc3066en
thesis.degree.disciplineInformation Science
thesis.degree.nameDoctor of Philosophy
thesis.degree.grantorUniversity of Otago
thesis.degree.levelDoctoral
otago.interloanyes
otago.openaccessAbstract Only


This item is not available in full-text via OUR Archive.



