Examples of LeapMind's core technologies, which underpin
our joint development projects as well as the Efficiera business
Extremely low bit quantization
Quantizing deep learning models to extremely low bit widths makes them significantly smaller and faster without sacrificing their performance (Figure 1).
Specifically, we reduce model size by representing the parameters of an inference model with 1 or 2 bits instead of the usual single-precision floating-point format (32 bits).
The limit of quantization without performance degradation is generally believed to be 8 bits, but LeapMind has succeeded in keeping the degradation negligible even with a combination of 1-bit weights and 2-bit activations (inputs), well below that limit (Figure 2).
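As a minimal sketch of what 1-bit weights and 2-bit activations mean, the following NumPy snippet binarizes weights to their sign with a per-tensor scale and snaps activations to four uniform levels. This is one common textbook scheme, not LeapMind's actual (proprietary) quantization method.

```python
import numpy as np

def binarize_weights(w):
    """1-bit weight quantization: keep only the sign of each weight,
    plus a single per-tensor scale so magnitudes are roughly preserved."""
    scale = np.abs(w).mean()          # per-tensor scaling factor
    return np.sign(w) * scale         # every value becomes -scale or +scale

def quantize_activations(x, bits=2):
    """2-bit activation quantization: clip to [0, 1] and round each value
    to one of 2**bits uniformly spaced levels."""
    levels = 2 ** bits - 1            # 2 bits -> 4 levels: 0, 1/3, 2/3, 1
    x = np.clip(x, 0.0, 1.0)
    return np.round(x * levels) / levels

w = np.array([0.7, -0.2, 0.05, -0.9])
x = np.array([0.10, 0.45, 0.80, 1.30])
print(binarize_weights(w))      # signs of w, scaled by mean(|w|) = 0.4625
print(quantize_activations(x))  # each value snapped to {0, 1/3, 2/3, 1}
```

A 32-bit weight shrinks to a single sign bit here, a 32x reduction in storage, which is why such models run efficiently on small FPGAs and ASICs.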
Optical Flow Estimation
Optical flow describes how each image pixel moves between consecutive frames.
By estimating the optical flow between successive frames, we can recover the relative motion of objects in the video footage.
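In practice the flow field is usually predicted by a neural network, but a tiny classical block-matching sketch in plain NumPy illustrates what the output means: for each block of the previous frame, find the displacement (dy, dx) that best matches the next frame.

```python
import numpy as np

def block_match_flow(prev, curr, block=4, search=2):
    """Coarse optical flow by exhaustive block matching: for each block of
    `prev`, return the displacement within +-`search` pixels that minimizes
    the sum of absolute differences (SAD) against `curr`."""
    h, w = prev.shape
    flow = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = prev[y:y + block, x:x + block]
            best, best_d = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy + block <= h and 0 <= xx and xx + block <= w:
                        sad = np.abs(ref - curr[yy:yy + block, xx:xx + block]).sum()
                        if sad < best:
                            best, best_d = sad, (dy, dx)
            flow[by, bx] = best_d
    return flow

rng = np.random.default_rng(0)
prev = rng.random((8, 12))         # previous frame: random texture
curr = np.roll(prev, 2, axis=1)    # same frame shifted 2 pixels to the right
print(block_match_flow(prev, curr)[0, 0])  # -> [0 2]: moved 2 px right
```

Note that blocks near the right edge cannot see the true match (it falls outside the search window), one of the limitations that learned flow estimators handle more gracefully.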
Pose Estimation
This technique estimates the positions of human joints and key points (neck, shoulders, elbows, wrists, ankles, etc.) and the connections between them from images.
Semantic Segmentation
This technology paints regions of the input image in different colors so that each pixel is assigned to the category it belongs to.
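The per-pixel category assignment can be sketched as follows: take the class with the highest score at each pixel, then look up a color for it. The scores, class names, and palette below are illustrative placeholders, not outputs of any real model.

```python
import numpy as np

# Hypothetical per-pixel class scores from a segmentation network,
# shape (num_classes, height, width). Class 0 = background, 1 = road, 2 = car.
scores = np.array([
    [[2.0, 0.1], [0.3, 0.2]],   # background scores
    [[0.5, 1.8], [0.1, 0.4]],   # road scores
    [[0.1, 0.2], [1.9, 1.7]],   # car scores
])

labels = scores.argmax(axis=0)       # per-pixel category index
palette = np.array([[0, 0, 0],       # background -> black
                    [128, 64, 128],  # road -> purple
                    [0, 0, 255]])    # car -> blue
colored = palette[labels]            # paint each pixel with its class color
print(labels)                        # -> [[0 1]
                                     #     [2 2]]
```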
Super Resolution
This technology improves image quality by producing an output image with a higher resolution than the input image.
Object Detection
This technology outputs a rectangle (bounding box) enclosing each object in the input image, and can be used to detect where people, faces, cars, signs, etc. are located.
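Detected rectangles are commonly compared with intersection-over-union (IoU), the standard overlap measure used both to score detections against ground truth and to discard duplicates. A minimal sketch, with boxes as (x1, y1, x2, y2) corner coordinates:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # overlap top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # overlap bottom-right
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 4x4 boxes overlapping on half their width: IoU = 8 / 24 = 1/3.
print(iou((0, 0, 4, 4), (2, 0, 6, 4)))  # -> 0.3333333333333333
```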
Object Tracking
This technology follows objects across consecutive image frames, outputting their locations without losing track of them.
Noise Reduction
This technology improves the quality of input images by removing noise from them. Unlike super resolution, the resolution does not change.
Image Classification
This technology recognizes what the input image shows and outputs the corresponding category label.
Anomaly Detection
This technique detects "anomalous" data points that behave differently from the majority of the data.
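One of the simplest concrete instances of this idea flags points that lie unusually far from the mean of the data, measured in standard deviations (a z-score). Real anomaly detectors are usually learned models, so this is only an illustrative baseline with made-up numbers.

```python
import numpy as np

def find_anomalies(x, threshold=3.0):
    """Return the indices of points whose z-score (distance from the mean,
    in units of the standard deviation) exceeds the threshold."""
    z = np.abs(x - x.mean()) / x.std()
    return np.flatnonzero(z > threshold)

data = np.array([10.1, 9.8, 10.0, 10.2, 9.9, 25.0, 10.1, 9.7])
print(find_anomalies(data, threshold=2.0))  # -> [5]: the outlier's index
```

A single large outlier inflates the standard deviation (here to about 4.97), which is why a lower threshold of 2.0 is used; robust statistics such as the median absolute deviation avoid this sensitivity.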