LogicTronix has collaborated with AMD and Prophesee to develop an event-based vision machine learning solution (Edge AI solution) for road traffic analytics within Smart City applications. The LogicTronix solution is based on the IMX636 Dynamic Vision Sensor (DVS) from Prophesee-Sony, a neuromorphic sensor. The IMX636 is a 0.92-megapixel event sensor, 1280 (H) x 720 (V), available in both MIPI and USB form factors. We leverage the MIPI form-factor IMX636 sensor from Prophesee for data acquisition (DAQ) and perform AI/ML processing within a single AMD-Xilinx FPGA device.

To capture event data (which is not frame-based), we use the AXI DMA IP within the Vivado IP pipeline. The MIPI IP driver and IP pipeline have been customized to handle non-frame-based event streams and to transfer the data efficiently to DDR memory for further processing.
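As an illustration, a minimal host-side sketch of interpreting such a DDR event buffer is shown below. The record layout is an assumption for illustration only: the real IMX636 output uses Prophesee's compressed EVT encoding, and the field names (`x`, `y`, `p`, `t`) are hypothetical, assuming the FPGA pipeline has already unpacked the stream into fixed-size records.

```python
import numpy as np

# Hypothetical decoded-event record layout (NOT the raw EVT wire format):
# we assume the FPGA pipeline has already unpacked events into fixed-size
# (x, y, polarity, timestamp) records before the AXI DMA writes them to DDR.
event_dtype = np.dtype([
    ("x", np.uint16),   # pixel column, 0..1279
    ("y", np.uint16),   # pixel row, 0..719
    ("p", np.uint8),    # polarity: 1 = ON (brightness up), 0 = OFF
    ("t", np.uint64),   # timestamp in microseconds
])

def read_events(ddr_buffer: bytes) -> np.ndarray:
    """Interpret a raw DDR buffer as an array of decoded events."""
    return np.frombuffer(ddr_buffer, dtype=event_dtype)
```

With this layout, a DMA'd buffer becomes a structured NumPy array that downstream histogram or ML stages can index by field.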

LogicTronix has developed both vision and non-vision solutions based on Prophesee’s IMX636 neuromorphic (event-based) sensing technology. In addition, we leverage neuromorphic computing algorithms and implement event-driven processing directly within FPGA platforms to achieve efficient, low-latency edge AI performance.

LogicTronix Vision Solution Based on Neuromorphic | Event Sensing Technology

  • We trained YOLOv7, YOLOv4, and a few more custom CNNs with the eTRAM and custom event-based datasets for the vehicle detection and tracking application.
  • For FPGA deployment, we perform model quantization and compilation, and then develop FPGA runtime–specific inference scripts to execute the model efficiently on the target hardware.
  • This solution can work with a single or multiple neuromorphic | event cameras from Prophesee and can achieve 50+ FPS in a single-camera setup.
  • The ML inference pipeline of this neuromorphic solution consumes 6 W on the FPGA.
  • We have also published our event-based vision ML work with the AMD Kria KV260 on GitHub: Kria-Prophesee-Event-VitisAI.

Neuromorphic ML application with FPGA and Prophesee IMX636

LogicTronix has also accelerated the sensor data capture pipeline (histogram generation) along with neural network acceleration to achieve higher performance and accuracy.

Fig. Histoframe and ML acceleration for neuromorphic edge AI solution with Prophesee IMX636
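As a software reference for the accelerated path above, histoframe generation can be sketched in a few lines of NumPy. The two-channel (per-polarity) layout and 16-bit counts are illustrative assumptions, not the exact FPGA implementation:

```python
import numpy as np

def events_to_histoframe(x, y, p, width=1280, height=720):
    """Accumulate a batch of events into a 2-channel histogram image
    ("histoframe"): channel 0 counts OFF events, channel 1 counts ON
    events, per pixel. This is a CPU reference of what the FPGA
    histogram-generation block computes in hardware."""
    histo = np.zeros((2, height, width), dtype=np.uint16)
    # np.add.at handles repeated (p, y, x) indices correctly,
    # unlike plain fancy-index assignment.
    np.add.at(histo, (p, y, x), 1)
    return histo
```

The resulting (2, H, W) tensor is a natural input format for the CNN stages described above.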

LogicTronix Neuromorphic Vision based Solution:

The LogicTronix Kria App, developed in collaboration with AMD and Prophesee, “Object Detection and Tracking with Event Vision-Based Sensors”: https://www.amd.com/en/developer/resources/kria-apps/object-detection-and-tracking.html


Advantages of Neuromorphic (Event-Based) Solutions

Neuromorphic (event-based) solutions, whether vision or non-vision systems, offer several significant benefits compared to traditional frame-based RGB imaging systems:

1. Low Power Operation for Edge AI

Event-based sensors enable ultra-low-power operation, making them highly suitable for edge AI applications where power efficiency is critical.

2. Ultra-Low Latency and High Performance

Compared to conventional RGB sensors, Dynamic Vision Sensors (DVS), or event-based sensors, can achieve extremely high temporal resolution, equivalent to up to 10,000 FPS as demonstrated by Prophesee. This enables very low-latency processing and superior performance in fast-moving or time-sensitive applications.
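To make the "10,000 FPS equivalent" concrete: slicing the event stream into fixed 100 µs windows yields 10,000 processing windows per second. A minimal sketch, where the window size and index-grouping scheme are illustrative choices:

```python
import numpy as np

def slice_event_windows(t_us, window_us=100):
    """Group event indices into fixed time windows. A 100 us window
    corresponds to 10,000 windows ("frames") per second."""
    bins = np.asarray(t_us) // window_us
    # positions where a new window begins
    edges = np.flatnonzero(np.diff(bins)) + 1
    return np.split(np.arange(len(bins)), edges)

# e.g. events at t = 0, 50, 120, 250 us fall into three 100 us windows
```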

3. Optimized FPGA-Based Edge Implementation

Leveraging power- and cost-optimized FPGA platforms is an effective approach for developing neuromorphic solutions.
FPGAs provide:

  • Flexible hardware acceleration
  • Efficient parallel processing
  • Low-latency edge AI execution

This makes them well-suited for event-driven workloads.

4. Neural Network Flexibility and Upgradability

FPGAs allow deployment of multiple neural network architectures, including:

  • Spiking Neural Networks (SNN)
  • Convolutional Neural Networks (CNN)
  • Hybrid architectures (e.g., sCNN)

This flexibility enables:

  • Easy upgrades
  • Deployment of new models
  • Feature expansion without major hardware changes

5. High Dynamic Range (HDR)

Event-based (DVS) sensors provide very high dynamic range, enabling reliable operation in challenging lighting conditions where traditional frame-based cameras struggle.

6. True High-FPS Processing with Event-Based Data

To fully exploit high frame-rate capabilities, systems must process event streams directly rather than converting them into frames.
Frame-based processing introduces latency and reduces performance, while native event-driven processing enables true high-speed, low-latency operation.
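As a toy illustration of the difference, an event-driven detector can react on the very event that crosses a threshold, instead of waiting for a frame interval to elapse. The ROI-trigger logic below is a hypothetical sketch, not the production pipeline:

```python
def roi_event_trigger(events, x0, x1, y0, y1, threshold=10):
    """Fire as soon as `threshold` events land inside the region of
    interest (ROI); the returned timestamp marks the reaction point.
    A frame-based system would only react after the next frame period."""
    count = 0
    for x, y, p, t in events:
        if x0 <= x < x1 and y0 <= y < y1:
            count += 1
            if count >= threshold:
                return t  # react immediately, mid-"frame"
    return None  # threshold never reached
```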



Other Neuromorphic Applications from LogicTronix

1. Driver Behavior Monitoring for Automotive Application

  • A non-frame-based application running at 100+ FPS for driver monitoring and safety.

2. Ultra-High Speed Counting Application

  • A non-frame-based application running at 100+ FPS for high-speed counting of medicines and other objects.
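One way such an event-based counter can work, sketched here as a hypothetical simplification: objects passing a virtual line produce bursts of events, and each quiet gap separates two objects. The gap length is an assumed tuning parameter, not a value from the deployed system.

```python
def count_bursts(timestamps_us, gap_us=500):
    """Count objects crossing a virtual line: consecutive events closer
    than `gap_us` belong to one object; a quiet gap starts the next."""
    count = 0
    last_t = None
    for t in timestamps_us:
        if last_t is None or t - last_t >= gap_us:
            count += 1  # new burst = new object
        last_t = t
    return count
```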

3. Event Optical Flow and Object Tracking


Would you like to know more? Contact us: