This week, I wanted to dig deeper into the fabric that interconnects the field-programmable gate array (FPGA) and the hard processor system (HPS). I discovered the three main bridges that accomplish this task, how they are mapped and addressed, and which components oversee their timing and access.
The interface between the HPS and the FPGA is handled by AXI bridges, built on Arm's Advanced eXtensible Interface (AXI) protocol. Each bridge performs the data-width adaptation and clock-domain crossing needed to pass logic and data from HPS to FPGA and/or FPGA to HPS.
Figure 1. A visualization of the “FPGA Fabric” (Source: Intel® PSG)
There are two types of HPS-to-FPGA bridges: a high-throughput bridge and a low-throughput bridge. The high-throughput bridge can be configured for a 32-, 64-, or 128-bit data width. It's designed for high-bandwidth data transfers, with the HPS L3 interconnect acting as the master.
The lightweight (or lower-throughput) bridge is limited to 32 bits; however, it's optimized to minimize latency. Its primary function is to carry control- and status-register accesses to the FPGA, diverting that low-level traffic off the main HPS-to-FPGA bridge. A good analogy is shown in Figure 1, where both HPS-to-FPGA bridges are illustrated: one has a single (32-bit) lane but a higher speed limit, while the other has many lanes and moves more traffic (bandwidth) in the same timeframe.
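On the Cyclone V SoC family, both HPS-to-FPGA windows sit at fixed physical addresses in the HPS memory map: the full-width bridge at 0xC0000000 and the lightweight bridge at 0xFF200000 (a 2 MB window). A minimal sketch of how a soft peripheral's register address is derived from the bridge base plus the offset Platform Designer (Qsys) assigns — the offsets used below are made-up examples, not real GHRD assignments:

```c
#include <stdint.h>

/* Cyclone V SoC HPS physical memory map (fixed by the silicon): */
#define H2F_BRIDGE_BASE   0xC0000000u  /* high-throughput bridge, 32/64/128-bit */
#define LWH2F_BRIDGE_BASE 0xFF200000u  /* lightweight bridge, fixed 32-bit      */
#define LWH2F_BRIDGE_SPAN 0x00200000u  /* 2 MB window                           */

/* Physical address of a soft peripheral's register, given the offset
 * Platform Designer assigned to it inside the lightweight window. */
static inline uint32_t lwh2f_phys(uint32_t qsys_offset)
{
    return LWH2F_BRIDGE_BASE + qsys_offset;
}
```

So a PIO core placed at (hypothetical) offset 0x40 in the lightweight window would be reached at physical address 0xFF200040; bulk-data peripherals behind the full bridge are addressed the same way, starting from 0xC0000000.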
The third bridge handles FPGA-to-HPS data transfers. It lets FPGA masters reach HPS slave interfaces — functions or applications in the HPS program waiting for data input. It's configurable for 32-, 64-, or 128-bit data widths and is clocked from the HPS L3 master switch.
To tie these bridges together, I began by reading the Intel® Developer Zone's Golden Hardware Reference Design (GHRD) guide, which gives examples of how to set up the AXI bridges that make up the FPGA-to-HPS fabric. It was here that I truly came to appreciate how powerful the configuration wizards are: within six clicks, I had all three bridges configured and a usable device for configurable memory allocation. I also learned that the HPS sides of the bridges are mapped into the on-chip memory space to keep latency as low as possible, while the FPGA portions are mapped to slave-access memory locations, allowing memory to be written as data becomes available.
So, what does this all mean? As someone whose experience is with low-level, low-power microcontroller units (MCUs), I've had very limited opportunity to use bridges and layers like these, though they may be familiar to developers accustomed to very low-level Arm® MCU programming. Essentially, these bridges are a set of control registers and memory mappings accessed at very high speed, which makes them particularly useful in multi-thread, multi-core systems that need high-speed, multi-purpose data transfers. The idea of interconnects is common to all MCU enthusiasts, and using interconnects or bridges to offload tasks is familiar; accessing them as if they were memory or RAM, however, is novel. Simply put, the L3 layer is where the FPGA-to-HPS fabric is introduced, allowing data to transfer from one processor to the other. It frees the FPGA to take on tasks that would otherwise bog down the HPS, improving overall performance.
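To make the "accessed as if they were memory" idea concrete, here is a hedged sketch of how a Linux user-space program running on the HPS could touch a register behind the lightweight bridge: open `/dev/mem`, `mmap()` a page-aligned window over the bridge's physical address range, and use a volatile pointer so every access really crosses the bridge. The bridge base is the Cyclone V lightweight window; the register offset is hypothetical, and real use requires root privileges and a design that actually places a peripheral there:

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define LWH2F_BASE 0xFF200000u   /* Cyclone V lightweight-bridge window      */
#define REG_OFFSET 0x40u         /* hypothetical CSR offset assigned by Qsys */

/* mmap() requires a page-aligned file offset, so split the physical
 * address into an aligned base and the remainder within that page. */
static uintptr_t page_base(uintptr_t phys, uintptr_t page) { return phys & ~(page - 1); }
static uintptr_t page_off(uintptr_t phys, uintptr_t page)  { return phys &  (page - 1); }

/* Write `value` to the (hypothetical) CSR and return the readback,
 * or -1 on failure. Every load/store becomes an AXI transaction. */
static long poke_lw_register(uint32_t value)
{
    uintptr_t page = (uintptr_t)sysconf(_SC_PAGESIZE);
    uintptr_t phys = LWH2F_BASE + REG_OFFSET;

    int fd = open("/dev/mem", O_RDWR | O_SYNC);   /* requires root */
    if (fd < 0)
        return -1;

    void *map = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED,
                     fd, (off_t)page_base(phys, page));
    if (map == MAP_FAILED) {
        close(fd);
        return -1;
    }

    volatile uint32_t *reg =
        (volatile uint32_t *)((char *)map + page_off(phys, page));
    *reg = value;                 /* ordinary store -> AXI write to FPGA */
    long readback = (long)*reg;   /* ordinary load  -> AXI read from FPGA */

    munmap(map, page);
    close(fd);
    return readback;
}
```

The `O_SYNC` flag and the `volatile` qualifier keep the compiler and kernel from caching or reordering the accesses, which matters when a load or store is really a transaction across the fabric rather than a plain memory reference.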
Copyright ©2022 Mouser Electronics, Inc.