PyTorch, TensorFlow, and ONNX are familiar tools for many data scientists and AI software developers. These frameworks run models natively on a CPU or accelerated on a GPU, requiring little hardware knowledge. But ask those same folks to move their applications to edge devices, and suddenly knowing more about AI acceleration hardware becomes essential – and perhaps a bit intimidating for the uninitiated. There is a solution: MERA, the software compiler and runtime framework from EdgeCortix, takes the mystery out of connecting edge AI software with PyTorch, TensorFlow, and ONNX models running on edge AI co-processors or FPGAs.
In enterprise AI applications, the AI training platform often runs on server or cloud-based hardware, taking advantage of practically unlimited compute performance, power budget, storage, and mathematical precision – limited only by capital and operating expenses. AI inference may scale down to a physically smaller server-class machine built on similar CPU and GPU technology, or stay in the data center or cloud.
Edge AI applications often use different hardware for AI inference, contending with constraints on size, power consumption, precision, and real-time determinism. The host processor might have Arm or RISC-V cores or a smaller, more energy-efficient Intel or AMD processor. Models might run on an edge AI chip with dedicated processing cores or on a high-performance PCIe accelerator card containing an FPGA programmed with inference logic.
These edge-ready hardware environments may be mysterious territory for data scientists and others who are less hardware-centric, raising concerns such as learning another programming language, absorbing FPGA terminology, and deciding how to split work between a host processor and an accelerator.
Getting an AI application onto an edge platform shouldn't require leaping across a discontinuity. Ideally, edge AI application developers would perform model research in a familiar environment – PyTorch, TensorFlow, or ONNX on a workstation, server, or cloud platform – and then make an automated conversion for deployment on an edge AI accelerator.
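For instance, a model trained in PyTorch can be exported to ONNX with a few lines of standard PyTorch code, keeping the research workflow untouched (the ResNet-50 model here is just for illustration):

```python
import torch
import torchvision

# Load a pre-trained model and switch it to inference mode.
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")
model.eval()

# Trace the model with a representative input and export it to ONNX,
# a format MERA can consume alongside PyTorch and TensorFlow models.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```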
MERA powers a smooth transition to edge AI software. Developers work in familiar Python from start to finish instead of learning another programming language or FPGA terminology. MERA automatically partitions computational graphs between an edge device's host processor and its AI acceleration hardware, reconfiguring acceleration IP for optimal results.
At a high level, a typical MERA deployment process has these steps:
1. Start with a trained model from PyTorch, TensorFlow, or ONNX.
2. Calibrate and quantize the model for the edge target's arithmetic precision.
3. Compile with MERA, which partitions the computational graph between the host processor and the AI acceleration hardware.
4. Deploy the compiled model and run inference through the MERA runtime.
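A minimal Python sketch of that flow appears below. The API names (mera.TVMDeployer, mera.ModelLoader, and the Platform and Target identifiers) follow the general shape of EdgeCortix's published examples but should be treated as illustrative assumptions – consult the MERA documentation for exact signatures.

```python
import numpy as np
import mera  # EdgeCortix MERA package; API names below are illustrative

# Representative input matching the exported model's expected shape.
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Stage all compilation artifacts in a working directory.
with mera.TVMDeployer("deploy_resnet50", overwrite=True) as deployer:
    # Step 1: import the model exported from PyTorch, TensorFlow, or ONNX.
    model = mera.ModelLoader(deployer).from_onnx("resnet50.onnx")

    # Steps 2-3: quantize and compile; MERA partitions the computational
    # graph between the host CPU and the DNA acceleration hardware.
    deployment = deployer.deploy(
        model,
        mera_platform=mera.Platform.DNAF200L_10K,  # illustrative platform id
        target=mera.Target.Simulator,              # swap in a hardware target later
    )

    # Step 4: run inference through the MERA runtime.
    runner = deployment.get_runner()
    outputs = runner.set_input(input_data).run().get_outputs()
```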
With MERA, data scientists and Python-first developers can quickly get applications onto edge devices running EdgeCortix DNA IP. The development process is similar whether the hardware choice is an EdgeCortix SAKURA edge AI co-processor, a system-on-chip designed with EdgeCortix DNA IP, or an EdgeCortix Inference Pack on BittWare FPGA cards, as the fragment below suggests.
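Continuing the sketch above, retargeting might amount to changing a single compile parameter – again, the Target identifiers are illustrative assumptions rather than verbatim MERA names:

```python
# Continuing the earlier sketch: recompile the same imported model for
# different targets. Identifiers are illustrative, not verbatim MERA names.
for target in (mera.Target.Simulator, mera.Target.IP):
    deployment = deployer.deploy(
        model, mera_platform=mera.Platform.DNAF200L_10K, target=target
    )
```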
The latest release, MERA 1.3.0, is available for download from the EdgeCortix GitHub repository, with code files, pre-trained models in the Model Zoo, and documentation. Using MERA as a companion tool to EdgeCortix-enabled hardware, teams can worry less about edge AI hardware complexity and focus more on edge AI software robustness and performance.