FuriosaAI

FuriosaAI designs and develops data center accelerators for the most advanced AI models and applications. Our mission is to make AI computing sustainable so everyone on Earth has access to powerful AI.

Our Background

Three misfit engineers, one each from the hardware, software, and algorithm fields and formerly of AMD, Qualcomm, and Samsung, got together and founded FuriosaAI in 2017 to build the world's best AI chips. The company has raised more than $100 million, with investments from DSC Investment, Korea Development Bank, and Naver, the largest internet company in Korea. We have partnered on our first two products with a wide range of industry leaders, including TSMC, ASUS, SK Hynix, GUC, and Samsung. FuriosaAI now has over 140 employees across Seoul, Silicon Valley, and Europe.

Our Approach

We are building full-stack solutions that offer the optimal combination of programmability, efficiency, and ease of use. We achieve this through a "first principles" approach to engineering: we start with the core problem, which is how to accelerate AI workloads.

https://www.furiosa.ai
51-200 employees

Growth Trajectory

FuriosaAI plans to expand its product line with new RNGD variants targeting specific AI workloads such as video and image generation. The company is also investing in Kubernetes integration to improve infrastructure management for AI deployments. Its market expansion strategy targets a broader range of data centers with power-efficient solutions while fostering partnerships within the Kubernetes community.

Technical Challenges

Optimizing tensor mapping and the compiler for efficient deep learning acceleration
Hardware testing and validation of RNGD
Managing ML inference workloads at scale
Complex hardware topology requirements
Limited hardware-aware scheduling
Fragmented container runtime support
Inconsistent device exposure methods
Achieving high-bandwidth interconnect for multi-chip deployments
Minimizing data movement between DRAM and processing elements

Tech Stack

Tensor Contraction Processor (TCP), Furiosa SDK, HBM3 memory, PCIe P2P, BF16, FP8, INT8, INT4, PyTorch 2.x, Containerization, SR-IOV, Kubernetes, Containerd, Docker, CDI, DRA, 5nm node, IAM, TSMC 5nm process, Samsung 14nm, Ultra Ethernet, PCIe Gen5, SRAM, 2.5D packaging, Google Analytics, CookieYes

Key Risks

Competition from established players like NVIDIA in the AI accelerator market could limit market share and pricing power.
Technical challenges in optimizing tensor mapping and the compiler for efficient deep learning acceleration could delay product development and deployment.
Market adoption risks associated with new chip architectures requiring extensive hand-tuning and model optimization.
Potential for regulatory scrutiny related to the energy consumption of AI hardware.
Difficulty in talent acquisition for specialized roles in hardware and software co-design.

Opportunities

Further development of the Tensor Contraction Processor architecture could unlock new levels of energy efficiency and performance.
Strategic partnerships with cloud platforms and AI software frameworks could expand market reach and integration capabilities.
Addressing the limitations of GPUs in terms of cost, power consumption, and cooling requirements could drive adoption in data centers.
Leveraging Kubernetes advancements like Dynamic Resource Allocation (DRA) and Container Device Interface (CDI) to simplify AI deployments.
Expanding the product line with RNGD-S and RNGD-MAX variants to target specific customer segments and use cases.
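The CDI point above can be made concrete: under the Container Device Interface, a vendor publishes a small JSON spec describing each device, and the container runtime injects the listed device nodes into containers that request them. Below is a minimal sketch of such a spec; the `furiosa.ai/npu` kind and the `/dev/rngd0` device path are illustrative assumptions, not Furiosa's published values.

```python
import json

# Minimal sketch of a CDI device specification for a hypothetical RNGD card.
# The vendor/class kind "furiosa.ai/npu" and device node "/dev/rngd0" are
# assumptions for illustration only.
cdi_spec = {
    "cdiVersion": "0.6.0",
    "kind": "furiosa.ai/npu",  # hypothetical vendor/class pair
    "devices": [
        {
            "name": "rngd0",
            "containerEdits": {
                # Device nodes the runtime injects into the container.
                "deviceNodes": [{"path": "/dev/rngd0"}],  # hypothetical path
            },
        }
    ],
}

# Runtimes such as containerd and CRI-O discover CDI specs from /etc/cdi;
# a workload then requests the device by its fully qualified name,
# e.g. "furiosa.ai/npu=rngd0".
print(json.dumps(cdi_spec, indent=2))
```

This keeps hardware-specific details (device nodes, mounts, environment variables) in a declarative file owned by the vendor, so the same spec works across container runtimes without bespoke runtime hooks.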