Flex Logix PRs

Flex Logix To Speak at the 2021 Linley Spring Processor Forum on Two AI Inference Panels

MOUNTAIN VIEW, Calif., April 19, 2021 - Flex Logix® Technologies, Inc., supplier of the fastest and most-efficient AI edge inference accelerator and the leading supplier of eFPGA IP, announced today that its executives will be presenting on Day 4 and Day 5 of the 2021 Linley Spring Processor Forum. Topics will include high-performance inference for power-constrained applications, as well as the critical role of software in maximizing the throughput, accuracy, and power of inference solutions.

Following are the details of each Flex Logix presentation. For more information, or to view the presentations after they are delivered, please visit the Flex Logix website.

Session 6: Edge-AI Software
Presentation title: Why Software Is Critical for AI Inference Accelerators
Speaker: Jeremy Roberson, Technical Director and AI Inference Software Architect, Flex Logix
Date: Thursday, April 22
Time: 8:30 am – 10:30 am PT
Summary: In this presentation, Jeremy will discuss the importance of software in maximizing the throughput, accuracy, and power of an AI inference accelerator. He will examine how co-developing software with hardware enables architecture tradeoffs that maximize throughput/power for customer models. The software compiler must seamlessly translate data into meaningful results without requiring knowledge of the hardware's inner workings. Finally, he will discuss how, as CNN models continue to evolve, software adaptability will continue to drive throughput/power/cost improvements for broader adoption of AI functionality.

Session 9: Efficient AI Inference
Presentation title: High Performance Inference for Power Constrained Applications
Speaker: Cheng Wang, Sr. VP, Software Architecture Engineering, Flex Logix
Date: Friday, April 23
Time: 10:10 am – 11:40 am PT
Summary: In this presentation, Cheng Wang will discuss AI inference solutions for power-constrained applications, such as edge gateways, networking towers, and medical imaging devices. He will begin with the key considerations for hardware deployment, as these applications have tighter thermal constraints and usually lack space for a full-size PCIe card. This will lead into a brief overview of the M.2 form factor, after which he will discuss the role of an M.2 inference accelerator in system designs for such applications.