Mottakin, K., 2022. Distributed On-line Training for Object Detection on Embedded Devices. Masters Thesis (Masters). Bournemouth University.
Full text available as:
PDF: MOTTAKIN, Khairul_M.Res._2022.pdf (3MB), available under a Creative Commons Attribution Non-commercial licence.
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via BURO@bournemouth.ac.uk. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.
Abstract
In this thesis, we develop a scalable distributed approach for object detection model training and inference using low-cost embedded devices. Examples of the use of such an approach include automatic beach litter collection with mobile robots and IoT-based intelligent video surveillance for traffic monitoring, indoor monitoring, and crime and violence detection. Another important part of this study is the use of embedded systems for distributed training in the scenarios mentioned above. These devices have brought about a revolution in technology because they consume little power, are small in size and have a low per-unit cost. Since data are distributed over devices, and to speed up training for object detection on embedded devices, we adopt Parameter Server based distributed training, in which each embedded device works as a node or worker and communicates through one or more parameter servers. We develop a distributed training approach that keeps in mind the resource constraints of embedded devices, e.g. limited memory, computational power and energy. The use of Transfer Learning, Data Parallelism and Model Parallelism in this study reduces the resource consumption of such devices. Besides, retraining the model with continuously streaming data helps to improve the accuracy of the model as well as to further reduce the resource consumption of embedded devices. We therefore combine distributed training with online learning or incremental learning, where our model not only predicts in real time but also learns in real time. Additionally, this incremental learning approach discards data once training has been done, saving a large amount of memory on devices. Concurrent use of Knowledge Distillation and an Exemplary Dataset during incremental training exhibits higher accuracy (up to 16%) compared to batch learning, and, most importantly, without forgetting old classes.
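The thesis code itself is not reproduced on this page; as a hedged illustration only, the toy sketch below (all names, data and values invented, not the thesis implementation) shows the synchronous parameter-server pattern the abstract describes: each worker computes a gradient on its own data shard, the server averages the gradients and broadcasts updated weights.

```python
# Hypothetical sketch of one synchronous parameter-server training loop.
# Toy model: y = w * x, trained with mean-squared error.

def worker_gradient(w, shard):
    """Gradient of MSE on one worker's local data shard."""
    return sum(2.0 * (w * x - y) * x for x, y in shard) / len(shard)

def parameter_server_step(w, shards, lr=0.01):
    """One round: workers push gradients, the server averages and updates w."""
    grads = [worker_gradient(w, shard) for shard in shards]  # push phase
    avg_grad = sum(grads) / len(grads)                       # server aggregation
    return w - lr * avg_grad                                 # broadcast new weights

# Data distributed over three simulated embedded workers; true relation y = 2x.
shards = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
    [(4.0, 8.0), (5.0, 10.0)],
]
w = 0.0
for _ in range(500):
    w = parameter_server_step(w, shards)
# w converges toward the true slope 2.0
```

In a real deployment the push/aggregate/broadcast steps would cross the network between embedded workers and the parameter server(s); here they are simulated in-process.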
Our experiments show that Distributed Training (using a 3-GPU system) reduces the training time of object detection models by up to 67% compared with a non-distributed system. We test the performance of our Distributed Incremental Training approach on the Fruits dataset, as well as its practical applicability on our own collected Cigarette Filter dataset. Some other associated issues, such as Catastrophic Forgetting and the Small Object Detection problem, have also been studied in this thesis.
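The exact distillation loss used in the thesis is not given on this page; as a hedged sketch of the standard knowledge-distillation formulation the abstract refers to, the code below (logits, temperature and weighting all invented) combines hard-label cross-entropy with a softened cross-entropy against the old model's outputs, which is what preserves old classes during incremental training.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with an optional temperature for softened targets."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_idx,
                      temperature=2.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and soft-target
    cross-entropy against the old (teacher) model's predictions."""
    hard = -math.log(softmax(student_logits)[true_idx])
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    soft = -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))
    # The T^2 factor keeps the soft-target gradients comparable in
    # magnitude across different temperatures.
    return alpha * hard + (1.0 - alpha) * temperature ** 2 * soft

# Invented logits for a 3-class detector head
loss = distillation_loss([2.0, 0.5, 0.1], [1.5, 0.8, 0.2], true_idx=0)
```

The exemplary-dataset component would additionally replay a small stored subset of old-class samples alongside each incremental batch; that replay buffer is omitted from this sketch.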
| Item Type: | Thesis (Masters) |
|---|---|
| Additional Information: | If you feel that this work infringes your copyright please contact the BURO Manager. |
| Uncontrolled Keywords: | parameter server; online learning; incremental learning; distributed machine learning; object detection; embedded device; transfer learning; small object detection; catastrophic forgetting; knowledge distillation |
| Group: | Faculty of Science & Technology |
| ID Code: | 37370 |
| Deposited By: | Symplectic RT2 |
| Deposited On: | 24 Aug 2022 14:09 |
| Last Modified: | 24 Aug 2022 14:09 |