To accelerate next-generation AI capabilities through faster and simpler AI computation
Pioneering the practical integration of AI, we offer advanced computing power to businesses around the world.
The new standard in AI computing
As AI models, including large language models (LLMs), have grown enormously in size, they require ever greater computational power. At the same time, soaring semiconductor prices, supply shortages, and stagnating performance gains caused by the limits of semiconductor process technology have left many companies struggling to secure the computing power they need to accelerate AI-driven business. To resolve these issues, we at LeapMind provide both the hardware and the software needed to bring AI into society, as a one-stop solution.
Ultra low-power AI inference accelerator IP
"Efficiera", our state-of-the-art AI inference accelerator IP, achieves industry-leading PPA (Power, Performance, and Area), enabling practical AI models to run on edge devices.
Industry-leading power efficiency: 107.8 TOPS/W
World-class AI chip with superior cost performance
A chip dedicated to AI model training and inference, targeting 2 PFLOPS (petaflops) of computing performance and 10 times the cost-effectiveness of a GPU with the same performance. More information is posted on our TechBlog.
Designed for AI model training and inference
Open-source drivers and compilers
October 10, 2023
LeapMind's New AI Chip Paves the Way for Unprecedented Cost-Effective AI Computing
August 1, 2023
LeapMind's Ultra Low-Power AI Accelerator IP "Efficiera" Achieves Industry-Leading Power Efficiency of 107.8 TOPS/W
January 31, 2023
[Press release] Yoshitaka Ushiku Appointed as Technical Advisor