Speeding up semantic segmentation for autonomous driving

Michael Treml, José Antonio Arjona Medina, Thomas Unterthiner, Rupesh Durgesh, Felix Friedmann, Peter Schuberth, Andreas Mayr, Martin Heusel, Markus Hofmarcher, Michael Widrich, Ulrich Bodenhofer, Bernhard Nessler, Sepp Hochreiter

Research output: Contribution to conference › Paper › peer-review

Abstract

Deep learning has considerably improved semantic image segmentation. However, its high accuracy comes at the cost of greater computational demands, which makes it unsuitable for embedded devices in self-driving cars. We propose a novel deep network architecture for image segmentation that keeps the high accuracy while being efficient enough for embedded devices. The architecture consists of ELU activation functions, a SqueezeNet-like encoder, followed by parallel dilated convolutions, and a decoder with SharpMask-like refinement modules. On the Cityscapes dataset, the new network achieves higher segmentation accuracy than other networks tailored to embedded devices. At the same time, the frame rate remains high enough for deployment in autonomous vehicles.
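The sketch below illustrates the parallel dilated convolution idea named in the abstract: several dilated 3x3 convolutions are applied to the same encoder feature map and fused, followed by an ELU non-linearity. This is a minimal PyTorch sketch under assumptions, not the authors' implementation; the dilation rates, channel counts, and sum-based fusion are illustrative choices.

```python
import torch
import torch.nn as nn


class ParallelDilatedBlock(nn.Module):
    """Hypothetical sketch of parallel dilated convolutions as described in
    the abstract. Dilation rates, channel sizes, and the fusion by summation
    are assumptions made for illustration, not taken from the paper."""

    def __init__(self, in_channels, out_channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        # One 3x3 convolution per dilation rate; padding=d keeps spatial size.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, out_channels, kernel_size=3,
                      padding=d, dilation=d)
            for d in dilations
        ])
        self.activation = nn.ELU()  # ELU activations, as stated in the abstract

    def forward(self, x):
        # Sum the parallel branches to keep the channel count fixed,
        # then apply the ELU non-linearity.
        out = sum(branch(x) for branch in self.branches)
        return self.activation(out)


if __name__ == "__main__":
    # Toy usage: a feature map such as a SqueezeNet-like encoder might produce.
    features = torch.randn(1, 128, 32, 64)
    block = ParallelDilatedBlock(128, 128)
    print(block(features).shape)  # torch.Size([1, 128, 32, 64])
```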
Original language: English (American)
Publication status: Published - Dec 2016
Externally published: Yes
Event: NIPS Workshop on Machine Learning for Intelligent Transportation Systems - Barcelona, Spain
Duration: 8 Dec 2016 → 8 Dec 2016

Workshop

Workshop: NIPS Workshop on Machine Learning for Intelligent Transportation Systems
Country/Territory: Spain
City: Barcelona
Period: 08.12.2016 → 08.12.2016