How to Ghost your Neural Network

Insights from Han et al.: “GhostNet: More Features from Cheap Operations”

Mark Cleverley
8 min read · May 25, 2020

If you want to modify top-of-the-line deep learning models to run twice as fast without dropping accuracy, I’d say you have a ghost of a chance.

Making image-based neural networks quicker is a big deal, especially when you consider where and how they’re applied. A self-driving car employs complex object detection and recognition nets to determine “are those pixels a piece of asphalt or a pedestrian?”.
The faster (and cleaner) the networks at play, the better.

While exorcising my convolutional neural network last week, I stumbled across a paper by some clever fellows from Beijing and Sydney universities proposing a new style of convolutional layer for CNNs.
Current production-standard models offer high performance, but they spend a decent chunk of their runtime on Floating Point OPerations (FLOPs). Han et al. determined that existing image-analysis networks carry a lot of redundancy, and that runtime can be slashed by replacing part of that computational load with cheap linear operations.
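The core idea is the “Ghost module”: a small ordinary convolution produces a handful of “intrinsic” feature maps, and cheap linear operations (in practice, depthwise convolutions) generate the remaining “ghost” maps from them. The article doesn’t include code, so here’s a minimal sketch of that idea in PyTorch; the class name, argument names, and default values are my own, not the authors’ implementation.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Sketch: ordinary conv makes 'intrinsic' maps, cheap depthwise
    convs make 'ghost' maps, and the two are concatenated."""
    def __init__(self, in_channels, out_channels, ratio=2,
                 kernel_size=1, dw_size=3):
        super().__init__()
        init_channels = out_channels // ratio           # intrinsic maps
        ghost_channels = out_channels - init_channels   # cheaply generated maps

        # expensive part, deliberately kept small
        self.primary_conv = nn.Sequential(
            nn.Conv2d(in_channels, init_channels, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True),
        )
        # cheap linear operation: depthwise conv over the intrinsic maps
        self.cheap_op = nn.Sequential(
            nn.Conv2d(init_channels, ghost_channels, dw_size,
                      padding=dw_size // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(ghost_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary_conv(x)
        ghosts = self.cheap_op(intrinsic)
        return torch.cat([intrinsic, ghosts], dim=1)

# quick shape check
x = torch.randn(1, 16, 32, 32)
print(GhostModule(16, 64)(x).shape)  # torch.Size([1, 64, 32, 32])
```

With a ratio of 2, half of the output channels come from the ordinary convolution and half from the far cheaper depthwise pass, which is roughly where the FLOP savings come from.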

For image classification & object detection, “GhostNet” yields similar or better performance 33%…
