
Powerful Hardware, Better Intelligence

Mobilint provides integrated sensor fusion & deep learning SoC solutions.

Core technologies

Mobilint is a rapidly growing NPU startup based in South Korea. The company was founded in April 2019 by Dongjoo Shin, Ph.D., CEO of Mobilint. He has been researching DNN/AI ASICs (Application-Specific Integrated Circuits) since 2013 and received his Ph.D. from KAIST. Mobilint specializes in integrated sensor fusion and deep learning SoC solutions. The company provides customers with NPUs (Neural Processing Units) that offer the following key advantages:

Programmability


Mobilint’s NPU supports a wide variety of deep learning networks, thanks to years of accumulated hardware architecture and compiler technology. The NPU is built on a configurable and scalable architecture, and its neural network optimizer and compiler allow the hardware to run at peak efficiency.

High performance per cost


We design ASIC architectures optimized for deep learning algorithms. Our area optimization technology also reduces chip (silicon) area, and therefore cost.

Low power consumption


To reduce power consumption, we use off-chip memory access minimization, network-optimized dynamic fused-layer technology, and self-tuning dynamic fixed-point technology. Mobilint’s low-power technology has also been proven on a real chip.
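For readers unfamiliar with the term, the snippet below is a minimal, generic sketch of the dynamic fixed-point idea: choosing a fractional bit width per tensor from its observed value range and rounding values onto the resulting grid. It is illustrative only and does not represent Mobilint’s proprietary self-tuning implementation; the function name and parameters are hypothetical.

    import numpy as np

    def dynamic_fixed_point_quantize(x, total_bits=8):
        # Illustrative dynamic fixed-point quantization: decide how many of the
        # available bits describe the fraction, based on the tensor's value range.
        max_abs = float(np.max(np.abs(x))) + 1e-12
        int_bits = max(0, int(np.floor(np.log2(max_abs))) + 1)  # bits for the integer part
        frac_bits = total_bits - 1 - int_bits                   # one bit reserved for sign
        scale = 2.0 ** frac_bits

        # Round onto the fixed-point grid and clip to the representable signed range.
        q = np.clip(np.round(x * scale),
                    -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1)
        return q / scale, frac_bits

    # Small-range activations keep more fractional bits (finer resolution).
    acts = np.random.randn(1000).astype(np.float32) * 0.5
    deq, frac = dynamic_fixed_point_quantize(acts)
    print(f"fractional bits: {frac}, max abs error: {np.max(np.abs(acts - deq)):.4f}")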

End-to-end acceleration


Our NPU can handle the entire process involved in deep learning, including pre-processing and post-processing of sensor data. We use sensor fusion/ISP, voxelization/sampling/grouping, and pre/post-processing ASIC technology to enable end-to-end acceleration.
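As one concrete example of the pre-processing steps named above, the sketch below shows generic point-cloud voxelization of the kind commonly used ahead of 3D detection networks. It is a software illustration only, not Mobilint’s hardware pipeline, and the function name and parameters are hypothetical.

    import numpy as np

    def voxelize(points, voxel_size=(0.2, 0.2, 0.2), max_points_per_voxel=32):
        # Generic voxelization: bucket (x, y, z) points into a regular 3-D grid
        # and cap how many points are kept per voxel (simple sampling).
        voxel_size = np.asarray(voxel_size, dtype=np.float32)
        coords = np.floor(points[:, :3] / voxel_size).astype(np.int32)

        voxels = {}
        for pt, c in zip(points, map(tuple, coords)):
            bucket = voxels.setdefault(c, [])
            if len(bucket) < max_points_per_voxel:
                bucket.append(pt)
        return voxels

    # Example: 10,000 random LiDAR-like points in a 40 m x 40 m x 4 m volume.
    pts = np.random.rand(10000, 3).astype(np.float32) * np.array([40, 40, 4], dtype=np.float32)
    grid = voxelize(pts)
    print(f"{len(grid)} occupied voxels")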


Performance Benchmark

In April 2021, the latest results of the MLPerf™ Benchmark (v1.0), the leading industry benchmark for deep learning accelerator performance, were released. Mobilint submitted benchmark results for an edge inference NPU implemented on an FPGA (Xilinx Alveo U250). In this round, we achieved performance improvements of 1.12x to 2.16x over our previous v0.7 submission and recorded the highest performance result in South Korea.

Closed Division*

ResNet

Single Stream*: 17.27 ms

Offline*: 174 samples/sec

SSD-MobileNet

Single Stream: 6.98 ms

Offline: 463 samples/sec

Open Division*

ResNet

Offline: 891.70 samples/sec

SSD-MobileNet

Offline: 2,404.61 samples/sec

* Closed Division: Run under relatively strict rules; intended for direct performance comparison.

* Open Division: Run under relatively flexible rules; intended for showcasing new and exciting approaches.

* Single Stream: Measures the latency of a single query.

* Offline: Measures how many queries an accelerator can process per second.

MLPerf benchmark results vary by system configuration; for more detail, please visit here.
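For intuition on what the two scenarios measure, here is a minimal Python sketch; it is not the official MLPerf LoadGen harness, and run_inference is a hypothetical placeholder for a real accelerator call.

    import time

    def run_inference(sample):
        # Hypothetical stand-in for a single NPU inference call (~1 ms each).
        time.sleep(0.001)

    samples = list(range(200))

    # Single Stream: issue one query at a time and record per-query latency.
    latencies = []
    for s in samples:
        start = time.perf_counter()
        run_inference(s)
        latencies.append(time.perf_counter() - start)
    print(f"Single Stream mean latency: {1000 * sum(latencies) / len(latencies):.2f} ms")

    # Offline: submit the whole batch and measure aggregate throughput.
    start = time.perf_counter()
    for s in samples:
        run_inference(s)
    elapsed = time.perf_counter() - start
    print(f"Offline throughput: {len(samples) / elapsed:.1f} samples/sec")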

Applications

The Neural Processing Unit is applicable almost everywhere.

Autonomous Vehicle

Smartphone

IoT/Wearable Device

Surveillance Camera

Drone/Robot

Smart City

Smart Factory

Home Appliance


Join us!

Mobilint’s employees come from backgrounds in Business, Electrical Engineering, Mathematics, and Computer Science. More than half of them hold Ph.D. degrees, and our members come from industries such as academia, finance, electronics, and automotive. We are a team of motivated, passionate, and hard-working people who have come together to make the NPU successful. We are currently focused on designing and implementing cutting-edge NPU hardware and software.

We are always looking for talented Deep Learning Engineers, Software Engineers, and Hardware Engineers. If you are interested, please contact us at recruit@mobilint.co

More questions?

Please feel free to email contact@mobilint.co to reach us!