New camera sensor borrows human retina tech to capture images at 1000+ FPS
Acquiring images as a succession of frames is one way to get the job done, but it is not the best way. One problem with this approach is that every pixel of a sensor is exposed for the same amount of time, which makes it hard to capture bright and dark regions of a scene simultaneously. IniLabs, a spinoff from the Institute of Neuroinformatics at the University of Zurich, has developed a new sensor, known as the VS128 DVS, which eliminates many of the limitations of conventional cameras.
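The principle behind each pixel can be sketched in a few lines. The model below is an assumption based on how dynamic vision sensors are generally described (it is not iniLabs' circuit): each pixel independently emits an ON or OFF spike whenever its log-intensity changes by a fixed contrast threshold. Because the comparison is logarithmic, a dim region and a bright region undergoing the same relative change produce the same events, which is what sidesteps the single-exposure-time problem.

```python
import math

# Sketch of a single event-camera pixel (assumed model: the pixel fires an
# ON (+1) or OFF (-1) "spike" whenever its log-intensity drifts by a fixed
# contrast threshold, independently of every other pixel).
def pixel_events(samples, threshold=0.15):
    """Yield (time, polarity) events for one pixel's intensity samples."""
    events = []
    ref = math.log(samples[0])              # reference level at last event
    for t, intensity in enumerate(samples[1:], start=1):
        level = math.log(intensity)
        while level - ref >= threshold:     # brightness increased enough
            ref += threshold
            events.append((t, +1))
        while ref - level >= threshold:     # brightness decreased enough
            ref -= threshold
            events.append((t, -1))
    return events

# A dim pixel and a bright pixel undergoing the same *relative* change
# produce identical spike trains -- the log scale is what gives the wide
# dynamic range described above.
dim    = pixel_events([1.0, 1.2, 1.44])
bright = pixel_events([100.0, 120.0, 144.0])
```

Both calls yield the same two ON events, even though the absolute intensities differ by a factor of 100.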
Rasterized images require a comparatively large amount of time and energy for processing because of all the redundant information they collect. The VS128 eliminates the need for expensive post-processing and compression by never acquiring redundant information in the first place. The key to achieving this is to do away with conventional frames and instead convert the information in a visual scene to a more supple currency — spikes.
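The redundancy argument can be made concrete with some back-of-the-envelope arithmetic. The numbers below are hypothetical, chosen only to illustrate the comparison; the event size and per-interval activity are assumptions, not iniLabs' figures.

```python
# Toy comparison of frame readout versus event readout
# (hypothetical numbers, just to illustrate the redundancy argument).
def frame_bytes(width, height, frames):
    """Every frame re-reads every pixel, changed or not (1 byte/pixel)."""
    return width * height * frames

def event_bytes(changed_pixels_per_interval, intervals, bytes_per_event=8):
    """Only changed pixels transmit, each as one timestamped spike."""
    return changed_pixels_per_interval * intervals * bytes_per_event

full   = frame_bytes(128, 128, 1000)    # 1000 frames from a 128x128 sensor
spikes = event_bytes(200, 1000)         # assume ~200 pixels change per interval
```

With these assumptions, a second of frames costs roughly ten times the bandwidth of the equivalent spike stream, and the gap widens further for mostly static scenes.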
We recently covered an intriguing new camera design known as the Curvace, which uses a neuromorphic design to meet the needs of surveillance drones. The Curvace has a huge FOV and fairly high resolution, but because it lacks true “spiking pixels,” it cannot match the speed of which the VS128 is capable. The video above shows an implementation of the new camera in which it controls a robot goalie. The robot’s brain uses less than 4% peak CPU load to achieve a reaction latency of less than 3 ms and an equivalent frame rate of 550 FPS. You can find more details about how the goalie is controlled at the jAER Open Source Project, and all the code is freely available.
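One way to see why event streams enable millisecond-scale reactions is that the position estimate can be nudged by every incoming spike instead of waiting for a full frame. The tracker below is a hedged sketch of that idea, not the actual jAER goalie code: each event pulls a running estimate of the ball position toward the event's coordinates.

```python
# Hedged sketch of event-driven tracking (NOT the jAER implementation):
# each incoming (x, y) event nudges a running position estimate, so the
# estimate updates at event rate rather than at a fixed frame rate.
class EventTracker:
    def __init__(self, mixing=0.1):
        self.mixing = mixing    # how strongly each event moves the estimate
        self.x = None
        self.y = None

    def update(self, ex, ey):
        if self.x is None:      # first event initializes the estimate
            self.x, self.y = float(ex), float(ey)
        else:
            self.x += self.mixing * (ex - self.x)
            self.y += self.mixing * (ey - self.y)
        return self.x, self.y

tracker = EventTracker(mixing=0.5)
for ex, ey in [(10, 10), (12, 10), (14, 10)]:   # a ball moving rightward
    pos = tracker.update(ex, ey)
```

Because each update is a handful of arithmetic operations, the low CPU load and sub-frame latency quoted above become plausible: the controller never has to process an image, only a trickle of coordinates.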
Even more astounding, under bright illumination a timing precision of 1 μs and a latency of just 15 μs can be achieved. Comparable performance from a conventional high-speed vision system would require acquisition at thousands of frames per second. The key is that only local, pixel-level changes, like those caused by movement, get transmitted, using processing similar to that found in the retina. One conserved principle used on the front end of many biological vision systems is to build so-called “center-surround” detectors out of synaptic structure. When organized into large-scale arrays, these make very efficient edge or motion sensors. Cells in the retina interposed between the photoreceptors and the transmitting (ganglion) cells mold themselves into uniquely optimized, spatially extended machines that integrate light signals at every spatial scale in the retina. The computation each of these cells might be said to perform is, in fact, its structure; we just have not yet fully decoded how it is mapped.
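The center-surround idea is simple enough to demonstrate directly. Below is a minimal one-dimensional sketch, an assumed simplification of the biology: each unit reports its center value minus the average of its neighbors, so uniform regions produce no output and edges produce a strong signed response.

```python
# Minimal 1-D sketch of a bank of "center-surround" detectors: each unit
# responds with (center - mean of its two neighbors). Uniform input gives
# zero everywhere; a step edge gives a strong paired response.
def center_surround(signal):
    out = []
    for i in range(1, len(signal) - 1):
        surround = (signal[i - 1] + signal[i + 1]) / 2.0
        out.append(signal[i] - surround)
    return out

flat = center_surround([5, 5, 5, 5, 5])          # uniform field -> silence
edge = center_surround([0, 0, 0, 10, 10, 10])    # step edge -> strong response
```

The uniform input yields all zeros while the step edge yields a −5/+5 pair straddling the boundary, which is exactly the "report only where something changes" behavior that makes such arrays efficient edge detectors.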
The VS128 was also designed to work with IBM’s “TrueNorth” computing architecture, which attempts to capture hypothesized principles of brain function in hardware. While potentially powerful, clear applications for this platform have yet to emerge. One area where the quick reaction capabilities of this kind of system could be put to use is microscopy, or even imaging of the brain itself. Scanning speed has been a traditional limitation of techniques like two-photon and sub-millisecond fluorescence microscopy. The standard camera used in these applications has been the electron-multiplying charge-coupled device (EMCCD), or, for point-scanning applications, the photomultiplier tube. New camera concepts, including scanless methods, would be invaluable for recording fast voltage transients at high resolution inside individual brain cells.
The VS128 currently costs $2,700 for a 240×180-resolution device. IniLabs is working to increase that resolution and to add color sensitivity. The sensor is integrated with a high-speed USB 2.0 interface, and the host software currently comprises more than 200 Java classes. Because the system is asynchronous, playback speed is decoupled from any fixed rate and under complete user control. As one of the more advanced vision toys ever brought to market, this new device could be ideal for many applications where you want to capture things that are happening fast, under varying light conditions, with high resolution — provided we can figure out how to harness it.
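That decoupling of playback from any fixed rate falls straight out of the data format: a recording is just a list of timestamped events, so replaying it at any speed is a matter of rescaling timestamps. The event tuple layout below is an illustrative assumption, not the actual jAER file format.

```python
# Sketch of why asynchronous, timestamped events decouple playback from
# any fixed frame rate: replay is just timestamp rescaling, so the same
# recording can run at 0.1x for slow motion or 100x for skimming.
# Event layout (timestamp_us, x, y, polarity) is an assumed format.
def rescale(events, speed):
    """Return the event stream replayed at `speed` times real time."""
    return [(t / speed, x, y, p) for (t, x, y, p) in events]

recording = [(0, 5, 5, 1), (1000, 6, 5, 1), (2000, 7, 5, -1)]
slow_motion = rescale(recording, 0.1)   # stretch 2 ms of data over 20 ms
```

No frames are dropped or interpolated at any speed; every recorded spike is simply delivered at a rescaled time, which is why playback rate can be a free parameter rather than a property baked into the recording.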