Supported Frameworks & Applications
SAKURA™ AI Accelerator
EdgeCortix SAKURA-II is an advanced AI accelerator providing best-in-class efficiency, driven by our low-latency Dynamic Neural Accelerator (DNA). SAKURA-II is designed for applications requiring fast, real-time Batch=1 AI inferencing, delivering excellent performance in a small-footprint, low-power silicon device.
SAKURA-II is designed to handle the most challenging Generative AI applications at the edge, enabling designers to create new content from disparate inputs such as images, text, and sound. Supporting multi-billion parameter language models like Llama 2, as well as Stable Diffusion, DETR, and ViT, within a typical power envelope of 8W, SAKURA-II meets the requirements of a vast array of edge Generative AI uses across vision, language, audio, and many other applications.
SAKURA™ Key Benefits
SAKURA™ Technical Specs
Performance | DRAM Support | DRAM Bandwidth | On-chip SRAM
60 TOPS (INT8), 30 TFLOPS (BF16) | Dual 64-bit LPDDR4x (8/16/32GB total) | 68 GB/sec | 20MB

Compute Efficiency | Temp Range | Power Consumption | Package
Up to 90% utilization | -40°C to 85°C | 8W (typical) | 19mm x 19mm BGA
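The headline numbers in the spec table imply the energy-efficiency figures below; this is a back-of-the-envelope sketch using only values stated above (TOPS/W is the usual comparison metric for edge accelerators):

```python
# Efficiency derived from the published specs: 60 TOPS (INT8),
# 30 TFLOPS (BF16), and 8W typical power consumption.
int8_tops = 60
bf16_tflops = 30
typical_power_w = 8

tops_per_watt = int8_tops / typical_power_w      # INT8 efficiency
tflops_per_watt = bf16_tflops / typical_power_w  # BF16 efficiency

print(f"{tops_per_watt} TOPS/W (INT8), {tflops_per_watt} TFLOPS/W (BF16)")
# → 7.5 TOPS/W (INT8), 3.75 TFLOPS/W (BF16)
```

Actual delivered efficiency depends on model, utilization, and workload; the spec sheet's "up to 90% utilization" figure is the relevant qualifier.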
Get the details in the SAKURA-II product brief
MERA Software Supports Diverse Neural Networks, from Convolutional Networks to the Latest Generative AI Models
Transformer Models:
DETR, DINO, Whisper Encoder / Decoder, DistilBERT, DistilBERT - SST2, Nano-GPT, GPT-2 - 150M, Distil-GPT-2 (HF), GPT-2 (HF) - 117M, GPT-2 (HF) - medium / large, GPT-2 XL (HF) - 1.5B, TinyLlama (HF) - 1.1B, Phi-2 (HF) - 3B, Open-Llama2 (HF) - 7B, CodeLlama (HF) - 7B, Mistral-v0.2 (HF) - 7B, Llama3 - 8B, ViT (HF) / CLIP / Mobile-ViT, ConvNextV1/V2 (HF), SegFormer, Roberta-Emotion, StableDiffusion V1.5

Convolutional Models:
ResNet 18, ResNet 50/101, Big YoloV3, TinyYolo V3, Yolo V5/V6/V8, YoloX, EfficientNet-Lite, EfficientNet-V2, SFA3D, MonoDepth - MiDaS, U-Net, MoveNet, DeepLab, MobileNet V1-V2, MobileNetV2-SSD, GladNet, ABPN, SCI
SAKURA™ Modules and Cards
SAKURA-II modules and cards are architected to run the latest vision and Generative AI models with market-leading energy efficiency and low latency.
SAKURA-II M.2 modules are high-performance (60 TOPS) edge AI accelerators in the compact M.2 2280 form factor, making them the best choice for space-constrained designs.
SAKURA-II PCIe cards are high-performance (up to 120 TOPS) edge AI accelerators in a low-profile, single-slot PCIe form factor. With single- and dual-device options, the best choice depends on the overall performance needed.
Explore our Complete Edge AI Platform
Unique Software
Proprietary Architecture
Deployable Systems
EdgeCortix Platform Solves Critical Edge AI Market Challenges
"Given the tectonic shift in information processing at the edge, companies are now seeking near cloud-level performance where data curation and AI-driven decision making can happen together. Due to this shift, the market opportunity for the EdgeCortix solution set is massive, driven by the practical business need across multiple sectors that require both low-power and cost-efficient intelligent solutions. Given the exponential global growth in both data and devices, I am eager to support EdgeCortix in their endeavor to transform the edge AI market with an industry-leading IP portfolio that can deliver performance with orders of magnitude better energy efficiency and a lower total cost of ownership than existing solutions."
"Improving the performance and the energy efficiency of our network infrastructure is a major challenge for the future. Our expectation of EdgeCortix is to be a partner who can provide both the IP and expertise that is needed to tackle these challenges simultaneously."
"With the unprecedented growth of AI/Machine learning workloads across industries, the solution we're delivering with leading IP provider EdgeCortix complements BittWare's Intel Agilex FPGA-based product portfolio. Our customers have been searching for this level of AI inferencing solution to increase performance while lowering risk and cost across a multitude of business needs both today and in the future."
"EdgeCortix is in a truly unique market position. Beyond simply taking advantage of the massive need and growth opportunity in leveraging AI across many key business sectors, it's the business strategy with respect to how they develop their solutions for their go-to-market that will be the great differentiator. In my experience, most technology companies focus very myopically on delivering great code or perhaps semiconductor design. EdgeCortix's secret sauce is in how they've co-developed their IP, applying equal importance to both the software IP and the chip design, creating a symbiotic software-centric hardware ecosystem. This sets EdgeCortix apart in the marketplace."
"We recognized immediately the value of adding the MERA compiler and associated tool set to the RZ/V MPU series, as we expect many of our customers to implement application software including AI technology. As we drive innovation to meet our customers' needs, we are collaborating with EdgeCortix to rapidly provide our customers with robust, high-performance, and flexible AI-inference solutions. The EdgeCortix team has been terrific, and we are excited by the future opportunities and possibilities for this ongoing relationship."