Agilex™ 7 FPGAs and SoCs M-Series – DDR5
- Adam Taylor
- Apr 6
- 3 min read
To support high-performance compute solutions, it is not enough to have high-performance logic; the ability to interface with high-performance memories is also required. As compute performance has increased, so too has that of volatile memories, which have evolved from SDRAM through DDR, DDR2, DDR3, and DDR4 to, most recently, DDR5. Each of these evolutions brings higher performance and lower power consumption. The DDR5 standard was released in July 2020 and defines a synchronous double-data-rate RAM that operates at 1.1V and provides between 4.8 and 8.8 GT/s per DIMM.
One very notable change between DDR4 and DDR5 is the channel architecture. In DDR4, a single 72-bit data bus is provided (64 data, plus 8 ECC) in a single channel. With DDR5, two channels are provided, each 40 bits wide (32 data plus 8 ECC). Two independent channels, of course, increase memory access efficiency. As such, performance increases result not only from the increase in transfer bandwidth supported but also from improved efficiency.
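The channel arithmetic above can be sketched quickly. This is an illustrative calculation only; the transfer rates used below are common DDR4/DDR5 speed grades chosen as assumptions, not figures from the article:

```python
# Sketch: raw data bandwidth of the DDR4 vs DDR5 channel layouts.
# ECC bits are excluded; rates (3200/4800 MT/s) are assumed examples.
def channel_bandwidth_gbps(transfer_mts, data_bits, channels=1):
    """Peak data bandwidth in GBps (MT/s x bytes per transfer x channels)."""
    return transfer_mts * (data_bits / 8) * channels / 1000

# DDR4: one 64-bit data channel per DIMM
ddr4 = channel_bandwidth_gbps(3200, 64)              # 25.6 GBps
# DDR5: two independent 32-bit data channels per DIMM
ddr5 = channel_bandwidth_gbps(4800, 32, channels=2)  # 38.4 GBps
```

At equal transfer rates the raw width is the same (2 x 32 = 64 data bits); the efficiency gain comes from the two channels being independently addressable.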
The Agilex 7 M-Series devices provide developers with the External Memory Interface (EMIF), implemented within the silicon of the FPGA. The EMIF provides support for DDR4, LPDDR5, and DDR5 memories using hard IP memory controllers. In the Agilex 7 M-Series, the EMIF supports up to 5.6 GT/s for DDR5, 5.5 GT/s for LPDDR5, and 3.2 GT/s for DDR4.
As with the HBM2E example we looked at previously, the EMIF can be connected to the Hard Memory NoC. However, depending on the application requirements, the developer can also connect to the EMIF directly from the FPGA fabric, bypassing the NoC if required by the application.

The Agilex 7 M-Series Development Kit, which we previously used to examine the HBM2E performance, provides developers with a bank of DDR components connected to Bank 3A.
To demonstrate the performance of the EMIF when working with DDR5, a demo application is provided. This demo application uses a test engine within the fabric, the Network on Chip, and, of course, the EMIF. The DDR5 devices used support up to 5600 MT/s, enabling 22.4 GBps of data transfer over the 32-bit channel.
The design of the test application limits the overall bandwidth to 17.6 GBps because the test engine runs at 550 MHz. In other applications, we have seen performance increase as the core frequency is raised.
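The two bandwidth figures above can be reproduced with simple arithmetic. The 5600 MT/s rate, 32-bit channel, and 550 MHz engine clock come from the text; the 32-byte data-path width at the test engine is an assumption chosen so the 17.6 GBps limit falls out:

```python
# Sketch of the bandwidth limits described above.
# Figures from the text, except NOC_WIDTH_BYTES, which is assumed.
DDR5_RATE_MTS   = 5600   # DDR5 transfer rate, MT/s
CHANNEL_BYTES   = 4      # 32-bit data channel
ENGINE_MHZ      = 550    # test-engine clock
NOC_WIDTH_BYTES = 32     # assumed data-path width at the test engine

ddr5_peak   = DDR5_RATE_MTS * CHANNEL_BYTES / 1000   # 22.4 GBps
engine_peak = ENGINE_MHZ * NOC_WIDTH_BYTES / 1000    # 17.6 GBps
```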

Testing the application on the board requires System Console to configure and run the tests. The tests report the detected efficiency of the link.
Once the design is downloaded, we use System Console to see the results of the calibration and the status of the test engine. Once we have confirmed that DDR5 has been calibrated successfully, we can execute the tests.

To run the tests, we need to launch System Console from a command line. The test itself takes a few seconds to run, and the result is displayed.

Once the tests are complete, we need to convert the results provided by System Console, which reports the performance of the NoC, to the performance of the DDR5 EMIF using the equations below:

Applying these equations to the generated results shows a read bandwidth of 17.1776 GBps and a write bandwidth of 16.808 GBps. This indicates that the EMIF bandwidths are very close to the theoretical maximum of 17.6 GBps.
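The conversion can be sketched as follows. The exact equations from the demo are not reproduced here, so this assumes EMIF bandwidth = NoC efficiency x engine clock x data width, with a 32-byte data path and efficiency values (97.6% read, 95.5% write) back-calculated so the outputs match the figures quoted above:

```python
# Hedged sketch of the NoC-efficiency-to-EMIF-bandwidth conversion.
# The data-path width and efficiency values are assumptions, not
# figures reported directly by System Console.
ENGINE_HZ       = 550e6  # test-engine clock, from the text
NOC_WIDTH_BYTES = 32     # assumed data-path width

def emif_bandwidth_gbps(noc_efficiency):
    """Convert a NoC efficiency fraction to EMIF bandwidth in GBps."""
    return noc_efficiency * ENGINE_HZ * NOC_WIDTH_BYTES / 1e9

read_bw  = emif_bandwidth_gbps(0.976)  # 17.1776 GBps
write_bw = emif_bandwidth_gbps(0.955)  # 16.808 GBps
```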
UK FPGA Conference
FPGA Horizons - October 7th 2025 - THE FPGA Conference, find out more here
Embedded System Book
Do you want to know more about designing embedded systems from scratch? Check out our book on creating embedded systems. This book will walk you through all the stages of requirements, architecture, component selection, schematics, layout, and FPGA / software design. We designed and manufactured the board at the heart of the book! The schematics and layout are available in Altium here. Learn more about the board (see previous blogs on bring-up, DDR validation, USB, and sensors) and view the schematics here.
Sponsored by Altera