Technology

The DeepEyes algorithm is an enhancement of the Bayesian method: it uses bitwise comparisons and stores the training samples efficiently in the processor's fast cache memory.

Traditionally, solving a problem such as supervised classification ("classifying data with a teacher") requires multiplication operations. These multiplications consume roughly 10 times as many processor cycles as the bitwise logical operations we use instead.
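
As a rough illustration of that difference (the actual DeepEyes kernel is not disclosed here, so the function names, data layout, and the XNOR-plus-popcount matching below are assumptions), a conventional multiply-based score and a packed bitwise match count might look like this in C (using GCC/Clang builtins):

    #include <stdint.h>

    /* Illustrative sketch only: score_multiply() shows the conventional
     * multiply-per-feature scoring of a naive-Bayes-style classifier;
     * score_bitwise() shows the kind of bitwise comparison that can replace
     * it, with features packed 64 per machine word. */

    double score_multiply(const double *probs, const uint8_t *features, int n)
    {
        double score = 1.0;
        for (int i = 0; i < n; i++)              /* one multiplication per feature */
            score *= features[i] ? probs[i] : 1.0 - probs[i];
        return score;
    }

    int score_bitwise(const uint64_t *sample, const uint64_t *stored, int words)
    {
        int matches = 0;
        for (int i = 0; i < words; i++)          /* XNOR + popcount, no multiplies */
            matches += __builtin_popcountll(~(sample[i] ^ stored[i]));
        return matches;
    }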

On an Intel Core i5-2520M processor, on tasks such as supervised classification, we process an average of 2.6 data series values per clock cycle. For comparison, this figure usually ranges from 0.03 to 0.4 values per cycle.
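
A figure like "values per clock cycle" can be reproduced with a simple time-stamp-counter harness. The sketch below assumes an x86 processor and GCC/Clang; process_series() is a placeholder for whatever kernel is being measured and is not part of any published DeepEyes interface:

    #include <stddef.h>
    #include <stdint.h>
    #include <x86intrin.h>                  /* __rdtsc() on GCC/Clang for x86 */

    /* Hypothetical harness: process_series() stands in for the classification
     * kernel being measured. */
    extern void process_series(const uint64_t *data, size_t n);

    double values_per_cycle(const uint64_t *data, size_t n)
    {
        uint64_t start = __rdtsc();         /* read the time-stamp counter */
        process_series(data, n);
        uint64_t cycles = __rdtsc() - start;
        return (double)n / (double)cycles;  /* data values processed per clock cycle */
    }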

Our new approach required complex programming. In return, the DeepEyes core algorithm is an extremely fast calculation method: it can execute its calculations entirely within the processor's cache memory, the fastest memory available.
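
One way to see why cache-resident execution is feasible is a simple sizing check. The sketch below assumes bit-packed training samples and typical per-core cache sizes for this class of processor; the helper names and the 32 KiB / 256 KiB figures are illustrative, not values taken from the text above:

    #include <stddef.h>

    /* Sizing sketch only: 32 KiB L1d and 256 KiB L2 are typical per-core
     * figures for a Core i5-2520M, not numbers quoted in this document. */
    enum { L1D_BYTES = 32 * 1024, L2_BYTES = 256 * 1024 };

    /* With features packed one bit each, the whole training set can stay
     * resident in cache, so the scoring loop never has to touch RAM. */
    static size_t packed_training_bytes(size_t samples, size_t features_per_sample)
    {
        return samples * ((features_per_sample + 7) / 8);
    }

    static int fits_in_l1(size_t samples, size_t features_per_sample)
    {
        return packed_training_bytes(samples, features_per_sample) <= L1D_BYTES;
    }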

Our technology can therefore be implemented on any processor, from 8-bit microcontrollers to modern GPU platforms.

We are currently testing our algorithm on neural networks. Initial results show that overall operation speed increases by a factor of two.

Technology Quick Facts

  • Efficient: DeepEyes can combine up to ten recognition modules in one solution
  • Adaptable: The generic DeepEyes technology can be adapted to meet very specific recognition requirements
  • Safe: DeepEyes solutions can run standalone and offline during operation, preventing data and privacy breaches.
  • Flexible: Embedding, analyzing and enriching data is easy. Alerts can be sent to any device.
  • Hardware agnostic: DeepEyes solutions can run on existing digital cameras and hardware
  • Powerful: DeepEyes captures even tiny errors and anomalies in real time

Our technical team looks forward to answering your questions.