Shoukat, M.A., Sargano, A.B., Habib, Z. and You, L., 2018. Automatic depth estimation from single 2D image via transfer learning approach. In: 2nd European Conference on Electrical Engineering and Computer Science (EECS), 20-22 December 2018, Bern, Switzerland, 589-594.
Full text available as:
PDF: 08910034.pdf (Accepted Version), 537kB. Available under License Creative Commons Attribution Non-commercial.
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via BURO@bournemouth.ac.uk. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.
Abstract
Nowadays, depth estimation from a single 2D image is a prominent task due to its numerous applications, such as 2D-to-3D image/video conversion, robot vision, and self-driving cars. This research proposes a novel automatic technique for depth estimation from a single 2D image via transfer learning of a pre-trained deep learning model. This is a challenging problem, as a single 2D image does not carry any explicit cues regarding depth. To tackle it, a pool of available images for which the depth is known is exploited, following the hypothesis that color images with similar semantics are likely to have similar depth. Along these lines, the depth of the input image is predicted from the depth maps of semantically similar images in the dataset, retrieved using high-level features of a pre-trained deep learning model followed by a classifier (i.e., K-Nearest Neighbor). Afterward, a cross bilateral filter is applied to remove fallacious depth variations in the depth map. To demonstrate the quality of the presented approach, experiments have been conducted on two publicly available benchmark datasets, NYU (v2) and Make3D. The results indicate that the proposed approach outperforms state-of-the-art methods.
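The abstract outlines a retrieval-based pipeline: high-level features from a pre-trained network, K-Nearest Neighbor retrieval of semantically similar RGB-D training images, fusion of their depth maps, and a cross bilateral filter to clean the result. The sketch below illustrates one possible realisation of that idea; it is not the authors' implementation. The ResNet-50 backbone, K = 5, median fusion, and OpenCV's joint bilateral filter (from opencv-contrib-python) are assumptions made purely for illustration.

```python
# Minimal sketch of a retrieval-based single-image depth pipeline.
# Assumptions (not from the paper): ResNet-50 features, K = 5 neighbours,
# median fusion, and cv2.ximgproc.jointBilateralFilter as the cross
# bilateral filtering step.
import numpy as np
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.neighbors import NearestNeighbors

# Pre-trained backbone with the classification head removed -> global features.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(rgb):
    """High-level feature vector for one RGB image (H x W x 3, uint8)."""
    with torch.no_grad():
        x = preprocess(rgb).unsqueeze(0)
        return backbone(x).squeeze(0).numpy()

def build_index(train_rgbs):
    """Fit a K-NN index over features of the RGB-D training images."""
    feats = np.stack([extract_features(img) for img in train_rgbs])
    return NearestNeighbors(n_neighbors=5).fit(feats)

def estimate_depth(query_rgb, knn, train_depths):
    """Retrieve semantically similar images and fuse their depth maps."""
    _, idx = knn.kneighbors(extract_features(query_rgb)[None, :])
    h, w = query_rgb.shape[:2]
    candidates = [cv2.resize(train_depths[i], (w, h)) for i in idx[0]]
    fused = np.median(np.stack(candidates), axis=0).astype(np.float32)
    # Cross (joint) bilateral filtering guided by the query image smooths
    # spurious depth variations while keeping edges aligned with the RGB input.
    guide = query_rgb.astype(np.float32) / 255.0
    return cv2.ximgproc.jointBilateralFilter(guide, fused, d=9,
                                             sigmaColor=0.1, sigmaSpace=15)
```

The filter parameters and the median fusion rule are placeholders; in practice they would be tuned on the training split of NYU (v2) or Make3D.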
| Item Type: | Conference or Workshop Item (Paper) |
| --- | --- |
| Uncontrolled Keywords: | depth estimation, 2D to 3D conversion, transfer learning, KNN-Framework |
| Group: | Faculty of Media & Communication |
| ID Code: | 33209 |
| Deposited By: | Symplectic RT2 |
| Deposited On: | 08 Jan 2020 11:52 |
| Last Modified: | 14 Mar 2022 14:19 |