The DeepEyes algorithm is an enhancement of the Bayesian method: it relies on bitwise comparisons and stores the training samples in the processor's fast cache.
Traditionally, solving a problem such as “Classifying Data with a Teacher” (supervised classification) requires multiplication operations. These multiplications consume roughly ten times as many processor cycles as the bitwise logical operations we use instead.
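The article does not disclose how DeepEyes implements its bitwise comparisons, so the following is only a minimal sketch of the general idea in C: the binarized features of each training sample are packed into a 64-bit word, and a query is scored against the stored samples with XOR and a population count instead of per-feature multiplications. All identifiers and sizes (NUM_SAMPLES, classify_bitwise, the nearest-match vote) are illustrative assumptions, not the actual DeepEyes algorithm; __builtin_popcountll is a GCC/Clang intrinsic.

    #include <stdint.h>
    #include <stddef.h>

    #define NUM_SAMPLES 1024          /* assumed training-set size */

    /* Each training sample: 64 binarized features packed into one word. */
    static uint64_t train_bits[NUM_SAMPLES];
    static uint8_t  train_label[NUM_SAMPLES];

    /* Classify a bit-packed query by a nearest-match vote in Hamming distance. */
    int classify_bitwise(uint64_t query)
    {
        int best_class = -1;
        int best_match = -1;

        for (size_t i = 0; i < NUM_SAMPLES; ++i) {
            /* XOR marks differing bits; 64 minus the popcount counts matching bits. */
            int matches = 64 - __builtin_popcountll(query ^ train_bits[i]);
            if (matches > best_match) {
                best_match = matches;
                best_class = train_label[i];
            }
        }
        return best_class;
    }

The scoring loop touches only a small, contiguous table and uses no multiplications, which is what makes the cycle counts quoted above plausible.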
On an Intel i5 2520M processor running tasks such as “Classifying Data with a Teacher”, we process an average of 2.6 data-series values per clock cycle. For comparison, this figure usually ranges from 0.03 to 0.4 values per cycle.
Our new approach required complex programming, but the DeepEyes core algorithm executes its calculations entirely in the processor's cache memory, the fastest storage available to it.
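As a rough illustration of the cache argument (the sizes below are assumptions, not DeepEyes specifications), a bit-packed training table of the kind sketched above easily fits into a typical 32 KB L1 data cache:

    #include <stdint.h>

    enum { SAMPLES = 1024 };                    /* assumed training-set size */
    static uint64_t packed_features[SAMPLES];   /* 1024 * 8 bytes = 8 KB     */
    static uint8_t  labels[SAMPLES];            /* 1024 * 1 byte  = 1 KB     */

    /* Compile-time check that both tables stay within a 32 KB L1 data cache,
     * so the scoring loop never has to fall back to main memory. */
    _Static_assert(sizeof(packed_features) + sizeof(labels) <= 32u * 1024u,
                   "training tables should fit in a 32 KB L1 data cache");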
As a result, our technology can be implemented on any processor, from 8-bit microcontrollers to modern GPU platforms.
At the moment we are testing the algorithm on neural networks; first results show the overall operation speed increasing by a factor of two.
Our technical team looks forward to answering your questions.