
When (and why) is it a good idea to use an FPGA in your embedded system design?

Started by stephaneb, 7 years ago • 12 replies • latest reply 2 years ago • 8368 views

Once we are done with this thread, the vision is for readers to get a basic understanding of when an FPGA should be considered in the design of an embedded system, and why adding an FPGA makes sense (or not) for certain applications.  Some possible talking points:

  • The pros and cons of using an FPGA
  • Typical applications
  • Personal experiences...  

Thank you for any insight you may share! #FAQ

Reply by adamt99, February 2, 2023

The skill of an engineer is selecting the right tool for the right job, and not just selecting an FPGA for FPGA's sake.

FPGAs, and the more recent heterogeneous systems-on-chip which fuse processor cores with programmable logic, should be used where your application demands the following:

  • Determinism - As you are using dedicated resources in the programmable logic, the latency of the computation is highly deterministic - much more so than with software, where shared resources (for instance, shared DDR memory) become a bottleneck. In my career I have created SIL4 FPGAs for both ground and space applications; FPGAs were chosen over software for those applications thanks to their highly deterministic nature.
  • Performance - As above, implementations use dedicated resources, which enables optimisation for performance rather than having to adapt to shared general-purpose units. One great example is machine learning, a hot topic at the moment: there is a move towards fixed-point maths, which is ideal for implementation in an FPGA. A GPU, however, is designed for floating-point operation, which hurts its performance and wastes power (see the fixed-point sketch after this list).
  • IO flexibility - With programmable logic it is possible to create any interface you desire, given the correct PHY. Often, if you are using logic-level signalling, an external PHY is not required and the signal can be connected directly to the FPGA IO. Thanks to this flexibility you can have as many SPI, I2C, GigE or CAN interfaces as your application requires - you are not tied to a specific number of each dedicated type as you are with processors. This IO is also great for legacy and bespoke applications and interfaces.
  • Rapidly Evolving Standards - Standards evolve quickly, especially in areas like Time Sensitive Networking and 5G where the standards are still in committee. Using an FPGA does not tie you down, as would be the case with an ASIC: as the standards evolve, so does your ability to implement them with an FPGA-based solution.
  • Security - Someone else has already touched on it, but when you want to implement high-end, high-grade encryption you will be using an FPGA or heterogeneous SoC.
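
To make the fixed-point argument concrete, here is a minimal sketch (my illustration, not from any particular product; the widths and formats are assumptions) of a deterministic fixed-point multiply-accumulate stage of the kind an ML or DSP datapath would use:

```verilog
// Illustrative fixed-point multiply-accumulate stage. Widths are
// example values. Every sample takes exactly one clock per stage,
// so the latency is fully deterministic - no caches, no bus arbitration.
module fixed_point_mac #(
    parameter DATA_W = 16,   // e.g. Q1.15 input samples
    parameter COEF_W = 16,   // e.g. Q1.15 coefficient
    parameter ACC_W  = 40    // 32-bit product + 8 bits of growth headroom
) (
    input  wire                     clk,
    input  wire                     rst,
    input  wire                     en,
    input  wire signed [DATA_W-1:0] sample,
    input  wire signed [COEF_W-1:0] coef,
    output reg  signed [ACC_W-1:0]  acc
);
    // Stage 1: registered multiply (maps onto a hard DSP block)
    reg signed [DATA_W+COEF_W-1:0] product;
    always @(posedge clk) begin
        if (rst)
            product <= 0;
        else if (en)
            product <= sample * coef;
    end

    // Stage 2: lossless accumulate - the result grows in width
    // instead of being rounded, unlike a floating-point unit
    always @(posedge clk) begin
        if (rst)
            acc <= 0;
        else if (en)
            acc <= acc + product;
    end
endmodule
```

Two clocks of pipeline latency, every time - that is the determinism argument in hardware form.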

It is interesting that a few people have mentioned the learning curve and implementation time. Yes, it is more complex than traditional SW; however, the tool chains have improved significantly over the last few years. They now provide several free vendor-supplied IP cores, often implementing complex functions, to get your design working as fast as possible. Coupled with tools like HLS, this does reduce development time. Tools like SDSoC for the Xilinx heterogeneous SoCs enable seamless movement of functionality from the processor cores to the programmable logic. I have been impressed with just how well this works.

When it comes to implementation (place and route), it can take a while. However, incremental compilation is available, and it reduces the time taken to implement a design significantly.

When it comes to prototyping (and indeed production developments) I am a big fan of the System on Module approach. For a few hundred dollars you can have a small board with everything you need - FPGA, memory, oscillators, power management - to get you going. This frees up your design team to focus on the added value of your solution.

Reply by MichaelKellett, February 2, 2023

10 replies already so I'll try to cover some new ground and dispel a few myths.

FPGAs have a huge dynamic range, from the tiny 48-pin Lattice iCE40 parts to the Xilinx UltraScale behemoths. They range in price (in 1k-off type volumes) from a few dollars to a few thousand dollars, and draw from less than 1 mA to 10 A or more.

The decision to use a big one will commit you to a million-dollar-plus development path. I can do more than 10 commercial projects a year using little ones.

No point offering advice here on the big ones - if you're spending that kind of money you'll have a team that can cope.

Little FPGAs (Lattice iCE40, Lattice ECP2, Lattice MachXO series, Altera MAX 10 and the like) can all be developed using free software tools and very cheap programmer/debugger hardware. Lattice really does have the biggest range of low-cost parts.

If you think you might need more than five 74xx-style logic chips, or may need programmable flexibility later, then consider an iCE40. I just used one to glue together a few audio chips and provide a weird interface (1.8 V, with a sync pulse in an unusual place in the data stream) to a third-party device. You could never do it with a processor, but you don't need a lot of logic - this design used about 1000 LUTs. That was enough to do the glue logic and configure the audio chips by SPI on start-up - no processor required.
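
For a flavour of what that kind of processor-free glue looks like, here is a hypothetical sketch of a ROM-driven SPI configuration master (the word width, word count and hex-file name are invented for the example - this is not MK's actual design):

```verilog
// Hypothetical start-up glue: a tiny state machine walks a ROM of
// register writes and shifts them out over SPI - no processor needed.
module spi_config #(
    parameter NUM_WORDS = 8,
    parameter WORD_W    = 16               // 8-bit address + 8-bit data
) (
    input  wire clk,
    input  wire rst,
    output reg  sclk,
    output reg  mosi,
    output reg  cs_n,
    output reg  done
);
    reg [WORD_W-1:0] rom [0:NUM_WORDS-1];
    initial $readmemh("audio_cfg.hex", rom);    // invented file name

    reg [$clog2(NUM_WORDS):0] word;
    reg [$clog2(WORD_W):0]    bitn;
    reg [WORD_W-1:0]          shift;

    always @(posedge clk) begin
        if (rst) begin
            word <= 0; bitn <= 0; sclk <= 0; cs_n <= 1; done <= 0; mosi <= 0;
        end else if (!done) begin
            if (cs_n) begin                     // load the next config word
                shift <= rom[word];
                mosi  <= rom[word][WORD_W-1];   // present MSB before first clock
                bitn  <= WORD_W;
                cs_n  <= 0;
            end else if (!sclk) begin
                sclk <= 1;                      // device samples MOSI on this edge
            end else begin
                sclk <= 0;
                if (bitn == 1) begin            // word finished
                    cs_n <= 1;
                    word <= word + 1;
                    if (word == NUM_WORDS-1) done <= 1;
                end else begin
                    shift <= shift << 1;
                    mosi  <= shift[WORD_W-2];   // next bit on the falling edge
                    bitn  <= bitn - 1;
                end
            end
        end
    end
endmodule
```

A write-only master like this is often all the configuration an audio codec needs at power-up.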

For more complicated stuff where you might need to do some maths, the Altera MAX 10s are nice - they cover a complete range from 2k to 50k LUTs, all available in 144-pin TQFP with 3.3 V single-supply operation. Built-in EEPROM for boot-up and, in my opinion, the nicest of the free toolsets. I've just used one to control the ADC and do the major grunt work of the maths in an 8-channel hybrid analogue/digital lock-in amplifier. The great thing about the FPGA is the ease with which you can add features like a sync signal to lock the switching power supplies to the ADC sampling rate (sketched below), while at the same time doing loss-free fixed-point maths (in this application the data expands as it goes through processing to 70 bits wide before it gets down to the speed at which the micro can munch it as floats). This job uses an Altera MAX 10 - about 6k LUTs as well as on-chip RAM and multipliers.
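
The supply-sync trick is tiny in logic terms. A hypothetical sketch, with division ratios invented for illustration:

```verilog
// Illustrative sketch: derive the ADC convert-start pulse and a
// switching-supply sync pulse from one master clock, so supply
// switching noise stays phase-locked to the sampling instants.
module sync_gen #(
    parameter CLK_PER_SAMPLE  = 100,   // e.g. 50 MHz clock, 500 kSPS ADC
    parameter SMPS_PER_SAMPLE = 2      // supply switches at 2x sample rate
) (
    input  wire clk,
    input  wire rst,
    output reg  adc_convst,            // one-clock pulse per sample
    output reg  smps_sync              // one-clock pulse per supply cycle
);
    localparam SMPS_DIV = CLK_PER_SAMPLE / SMPS_PER_SAMPLE;

    reg [$clog2(CLK_PER_SAMPLE)-1:0] cnt;

    always @(posedge clk) begin
        if (rst) begin
            cnt <= 0; adc_convst <= 0; smps_sync <= 0;
        end else begin
            cnt        <= (cnt == CLK_PER_SAMPLE-1) ? 0 : cnt + 1;
            adc_convst <= (cnt == CLK_PER_SAMPLE-1);
            smps_sync  <= (cnt % SMPS_DIV == SMPS_DIV-1);
        end
    end
endmodule
```

Because both pulses come from the same counter, their phase relationship is fixed by construction.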

Unlike one of the other contributors to this thread, I've never used a processor core on an FPGA, preferring to use separate micros communicating via SPI, memory ports etc. 

For me the key things are:

when you need or may need flexibility (which is often for 'political' rather than engineering reasons)

when you would otherwise need multiple chips of glue logic (the cost hierarchy in development is software, then HDL, then physical logic)

when there are complicated interfaces that standard micros don't support

when micros or DSPs aren't fast enough

when you only have time/money for one PCB spin and a lot of stuff to link up on it - if it's feasible to put an FPGA in as the core of your system, its flexibility can often mop up a great many problems that crop up.

MK 

Reply by oliviert, February 2, 2023

As a Xilinx employee I would like to contribute on the Pros ... and the Cons.

Let's start with the Cons: if there is a processor that suits all your needs in terms of cost/power/performance/IOs, just go for it. You won't be able to design the same thing in an FPGA at the same price.


Now if you need some kind of glue logic around it (IOs), or your design needs multiple processors/GPUs to reach the required performance, then it's time to talk to your local FPGA dealer (preferably a Xilinx distributor!). I will try to answer a few remarks I saw throughout this thread:

FPGA/SoC: The majority of the FPGA designs I have seen during my career at Xilinx include some kind of processor. In pure FPGAs (Virtex/Kintex/Artix/Spartan) it is a soft processor (MicroBlaze or PicoBlaze); in SoCs (Zynq APSoC and Zynq UltraScale+ MPSoC) it is the hard processor (dual Cortex-A9, or quad A53 + dual R5). The choice is now more complex: processor only, processor with an FPGA alongside, FPGA only, or an integrated processor/FPGA. The tendency is towards the latter, due to all the savings incurred: PCB, power, devices, ...

Power: Pure FPGAs are making incredible progress, but if you want really low power in stand-by mode you should look at the Zynq UltraScale+ MPSoC, which contains many processors and in particular a Power Management Unit that can switch different regions of the processors/programmable logic on and off.

Analog: Since Virtex-5 (2006), Xilinx has included ADCs in its FPGAs; initially these were limited to internal parameter measurements (voltage, temperature, ...), called the System Monitor. With the 7 series (2011), Xilinx included a dual 1 Msps/12-bit ADC with internal/external measurement capabilities. Lately Xilinx announced (production and public availability next year) very high-performance ADCs/DACs integrated in the Zynq UltraScale+ RFSoC: 4 Gsps @ 12-bit ADC / 6.5 Gsps @ 14-bit DAC. Potential applications are telecom (5G), cable (DOCSIS) and radar (phased array).

Security: The bitstream stored in the external flash can be encrypted; decryption is performed within the FPGA during bitstream download. Zynq-7000 and Zynq UltraScale+ support encrypted bitstreams and also secure boot of the processor.

Ease of Use: That is the big part of the equation. Customers need to take this into account to get the right time to market. In 2012, with the 7 series devices, Xilinx introduced a new integrated tool called Vivado. Since then a number of added features/new tools have been proposed:

  • IP Integrator (IPI): a graphical interface to stitch IPs together and generate bitstreams for complete systems.
  • Vivado HLS (High Level Synthesis): a tool that allows you to generate HDL code from C/C++ code. This tool will generate IPs that can be handled by IPI.
  • SDSoC (Software Defined SoC): This tool allows you to design complete systems, software and hardware on a Zynq (APSoC/MPSoC) platform. This is SDK with some plugins that will allow you to move part of the C/C++ code to the programmable logic (calling VHLS in the background).
  • SDAccel: an OpenCL (and more) programming-paradigm implementation. Not relevant for this thread.

There are also tools related to the Mathworks environment:

  • System Generator for DSP (aka SysGen): a low-level Simulink library (designed by Xilinx for Xilinx FPGAs) - like programming HDL with blocks. This tool can achieve even better performance (clock/area) than hand-written HDL code, as each block is an instance of an IP (from registers, adders, counters and multipliers up to FFT, FIR Compiler and VHLS IPs). Bit-true and cycle-true simulations.
  • Xilinx Model Composer (XMC): available since ... yesterday! Again a Simulink blockset, but based on VHLS. Much faster simulations; bit-true but not cycle-true.

All this to say that FPGA vendors have made a lot of effort to make FPGAs and derivative devices easier to program. There is still a learning curve, but it is much shorter than it used to be - comparable to that of any new device you want to program.

Reply by Bob11, February 2, 2023

The FPGA (and its baby brother the PLD) is simply another tool in the design engineer's toolbox. Most of my designs are for the professional audio space, and typically incorporate high-performance analog circuitry, a control processor, an FPGA, and a DSP on the same PCB. Once you've done it a few times it's not that hard to multitask between Verilog FPGA code, C/assembly DSP code and C++ processor code. Just make sure your development platform has as many cores as you can find to minimize the compile times, because you'll be re-compiling a lot.

The control processor is the conductor of the hardware symphony. It's always best to use the standard interfaces on the control processor (UART, USB, SPI, I2C, EMIF, etc.) because they're, well, standard, and typically the drivers are already written for you saving you tons of time and debugging effort. The DSP shines when it comes to parallelizing real-time algorithms at hardware rate (sample-rate conversion, FIR/IIR filters, ...) and which require floating point performance or fancy memory addressing (FFT butterflies and the like) with lots of user-tweakable parameters. The FPGA excels at gluing everything together, providing FIFOs, queues and mailboxes for command and control between the control processor and the rest of the system and at the same time the requisite high-speed high-bandwidth hardware pipes needed to keep the DSP buffers full and data streaming to and from multiple high-speed real-time hardware interfaces. The FPGA handles with ease interfaces that the control processor doesn't have (e.g. I2S, parallel LVDS, etc.), implements voltage-level conversions between onboard subsystems, and can manage high-speed interfaces (e.g. 10G Ethernet) that would swamp the internal buses and DMA engine of the control processor. 
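
As an illustration of the FIFO glue (my sketch, not Bob11's code; the width and depth are arbitrary), a minimal synchronous FIFO looks like this:

```verilog
// Minimal synchronous FIFO of the kind used for command/control
// mailboxes between a control processor and the streaming datapath.
module sync_fifo #(
    parameter W          = 16,
    parameter DEPTH_LOG2 = 4             // 16 entries
) (
    input  wire         clk,
    input  wire         rst,
    input  wire         wr_en,
    input  wire [W-1:0] wr_data,
    input  wire         rd_en,
    output wire [W-1:0] rd_data,
    output wire         full,
    output wire         empty
);
    reg [W-1:0] mem [0:(1<<DEPTH_LOG2)-1];
    reg [DEPTH_LOG2:0] wr_ptr, rd_ptr;   // extra MSB distinguishes full from empty

    always @(posedge clk) begin
        if (rst) begin
            wr_ptr <= 0;
            rd_ptr <= 0;
        end else begin
            if (wr_en && !full) begin
                mem[wr_ptr[DEPTH_LOG2-1:0]] <= wr_data;
                wr_ptr <= wr_ptr + 1;
            end
            if (rd_en && !empty)
                rd_ptr <= rd_ptr + 1;
        end
    end

    assign rd_data = mem[rd_ptr[DEPTH_LOG2-1:0]];
    assign empty   = (wr_ptr == rd_ptr);
    assign full    = (wr_ptr == {~rd_ptr[DEPTH_LOG2], rd_ptr[DEPTH_LOG2-1:0]});
endmodule
```

Crossing clock domains (processor bus on one side, sample clock on the other) needs the asynchronous variant with Gray-coded pointers, which vendors also supply as free IP.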

And, as Murphy tells us, it's always the case that your preferred control processor uses an Intel-style /RD, /WR external memory interface with active-low clock edges, while the hot-off-the-wafer sample-stock chip the factory rep just drop-shipped to you uses a Motorola-style R/W signal clocking on the rising edges. Oh, and the data bus is wrong-endian and the byte enables aren't working right according to the just-published errata. Not a problem when you've got an FPGA sitting between the two devices.
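
A hypothetical, much-simplified sketch of that kind of bridge - strobe translation plus a byte swap; a real one would also handle setup/hold timing and wait states:

```verilog
// Hypothetical glue: bridge an Intel-style bus (/RD and /WR strobes,
// active low) to a Motorola-style peripheral (R/W direction line plus
// an active-low data strobe), swapping byte order on the way through.
module bus_bridge (
    // Intel-style CPU side
    input  wire        cpu_rd_n,
    input  wire        cpu_wr_n,
    input  wire [15:0] cpu_wdata,
    output wire [15:0] cpu_rdata,
    // Motorola-style peripheral side
    output wire        per_rw,       // 1 = read, 0 = write
    output wire        per_ds_n,     // data strobe, active low
    output wire [15:0] per_wdata,
    input  wire [15:0] per_rdata
);
    assign per_rw   = cpu_wr_n;                    // write strobe low => write
    assign per_ds_n = cpu_rd_n & cpu_wr_n;         // strobe during either access
    // Fix the "wrong-endian" data bus by swapping bytes in both directions
    assign per_wdata = {cpu_wdata[7:0], cpu_wdata[15:8]};
    assign cpu_rdata = {per_rdata[7:0], per_rdata[15:8]};
endmodule
```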

The downsides to the FPGA are that they typically have multiple banks with multiple power rails and you invariably end up two pins short on the one bank with the voltage you need because for some inscrutable reason the manufacturer made two of the pins on that one bank input-only. Also, it's difficult to know whether you need to provide 100mA or 10A of some low core voltage the rest of your board doesn't use because the manufacturer-provided "power estimator" won't give accurate estimates until your FPGA design is basically complete, which is always long after your PCB has gone to fab. Last but not least, like a child most FPGAs enter the world as blank slates, and for FPGAs the process of filling them with useful knowledge takes time and money in the form of bootloaders, serial FLASH, etc., as well as consideration for the potential need to upgrade the firmware in the field while keeping in mind the requirements for robustness and security.

Reply by Kenwickvs, February 2, 2023

Many complex design constraints can only be met with an FPGA. For example:

  • Execution Speed - Concurrent, rather than sequential operations are required to meet timing.
  • Real Estate - Complex functions must absolutely fit in a confined space.
  • Proprietary IP - An interface or communication protocol is unique to the design specification.
  • Security - Hacking an FPGA to steal its IP is difficult.

Although not specifically addressed in the thread intro, most often the engineer's choice is between a microprocessor and an FPGA. This is somewhat blurred by soft uP cores, which can be included in an FPGA design. Fortunately, many new FPGA products offer high-performance hard-silicon micros interconnected with the fabric. A "win-win" for some applications.

FPGAs have their place and are worth the extra engineering investment for high-end designs. However, I always lean towards the uP (or uC) first if the design constraints permit.

Reply by martinthompson, February 2, 2023

I wrote a blog post on just this topic... http://parallelpoints.com/why-use-an-fpga/ - my advice is always “Avoid using an FPGA unless you have to”. And I say this as a great advocate of FPGAs!  

The development flow is slower and more painful than software (even embedded software, never mind PC software!). The iteration time is much longer, so you have to be prepared to both investigate your problem seriously outside of the FPGA (which means using a whole different set of tools as well as the FPGA tools) and to think really hard about your problems.  You can't just throw potential solutions at it to see what works - you won't have time to place and route or even simulate every option you can try.

To summarise (from the blog, and tweaked)

Use an FPGA if one or more of these apply:

  • you have hard real-time deadlines - i.e. measured in μs (or ns!) and it matters if you miss the deadline
  • you need more than one DSP processor (lots of arithmetic and parallelisable)
  • a suitable processor (or DSP) costs too much (money, weight, size, power)
  • Wacky interfaces that are not available on a micro that you would otherwise choose. This is getting less and less likely as interfaces standardise and micros are more flexible!

And for students, there’s one more:

  • Because your assignment tells you to :) Although ideally the task will be something that is at least vaguely representative of a reasonable FPGA task (not a traffic light controller or vending machine!)

Reply by Tim Wescott, February 2, 2023

Some additional points to ponder:

  • In general, in a professional environment, it's typical that there's no one who can do both FPGA design work and software design work at all, let alone both well.  (For instance, I can program embedded software all day long, but I can only do hobbyist-level FPGA work).
  • The dividing line between "should it be in an FPGA?" and "should it be in a processor?" can be fuzzy, in which case it's not a bad thing to let the makeup of your staff set the course.
  • Doing it with an FPGA always looks simple and direct to an experienced FPGA designer.
  • Doing it with a processor always looks simple and direct to an experienced embedded software designer.
  • The FPGA people and the embedded software people can get territorial, and try to put functionality into "their part" of the system just because.

I've seen stuff barely wedged into a DSP that worked easily on an FPGA.  Ditto, I've seen FPGA resources used profligately to service some moderately complicated calculation that needed to happen at 15750Hz or even 60Hz, that could be easily moved to a processor.

Reply by jmford94, February 2, 2023

MartinThompson's post sums it up pretty well, but I have a perspective that I want to share.  

I am working with really big (OK, there are bigger now!) parts like the Xilinx Virtex-6 parts, which cost $6-10k each. We use them for massively parallel processing of sensor data. The development of these is slow, tedious, and expensive. However, I'm also working with tiny FPGAs for doing really fast SPI I/O and very precise timing of A/D conversions. There are lots of smaller FPGAs (Xilinx's Spartan series, Lattice stuff) that are easy to program and cheap. (Lattice's iCE parts are ~$5 in qty 1, and 2 mm x 2 mm.) The development systems are now pretty much all free for small parts, and for selected larger parts. They all use Verilog or VHDL, and aside from the I/O available, the Verilog and VHDL are pretty portable.

Yes, they are mostly BGAs.  But they have development boards with the chip soldered down and pins available for hacking on.  It's actually quite affordable now to have a prototype run of a few boards built and populated.

From what I see:

FPGAs are more difficult to program and debug.

FPGAs can perform complex logic at hardware speeds.

FPGAs are very low power in terms of functionality per watt; specifically for DSP, they beat CPUs and GPUs by a very large factor in teraops/watt. The Lattice iCE chips draw a few hundred microamps at 1.2 volts.

FPGAs require more power supplies and a source of configuration bits, usually a flash memory or a download from a host micro.

Reply by Laszlo, February 2, 2023

Including an FPGA in a smaller project is a sure path to being doomed and never finishing the actual system.

The only realistic scenario for considering an FPGA is in medium to big projects, when prototyping of a custom core or a hardware-accelerated solution is needed. This means:

- Emulators - when a proprietary IC that is not yet available is being developed, and firmware development is already starting. The FPGA is loaded with the soft core and used to test the firmware early.

- Digital prototyping, before going into producing the proprietary IC.

- There are also special systems, produced in small volumes, which do need FPGAs, but those are not the topic here.

As an alternative, you can choose a microcontroller plus programmable logic in the same package - the Cypress PSoC or NXP (formerly Freescale) Kinetis family, for example.

  • The pros and cons of using an FPGA
    • Pro: Fast and concurrent execution
    • Con: High complexity, expensive tools, high current consumption, expensive PCBs
  • Typical applications
    • Prototyping
    • Small volume, very specialized systems
  • Personal experiences...
    • I've used just a small CPLD at some point to implement combinational logic (at that time in ABEL-HDL), and played around with the famous Spartan-3 board, but never actually used an FPGA in a finished product.
    • I did use FPGA-based emulators in firmware development; just a hint, those emulators usually start from a $10k price tag.

FPGAs are basically RAM-based devices: the netlist (bitstream) is loaded from a flash storage device into the FPGA before it actually starts. IP protection is therefore difficult in an FPGA-based solution.

Hope it helped.

Reply by rajkeerthy18, February 2, 2023

When we talk about FPGAs and their applications in embedded systems, we must consider two aspects: 1. FPGA vis-a-vis software, and 2. FPGA vis-a-vis traditional ASICs. The FPGA has evolved into a replacement for both embedded software and traditional ASIC hardware.

1. FPGA as an alternative to Software

When performance is a requirement, certain algorithms that the CPU takes a long time to complete can be offloaded to an FPGA engine via some high-speed interface. The FPGA certainly boosts performance and builds up the value of the system.

When versatility is a requirement, the FPGA is the solution, assuming we design for performance: implementation of different algorithms, use of numerical-analysis techniques, interfaces to various field IO such as RF ADCs, sensors, cameras - the list is endless.

This one is interesting, and it is a personal experience. For some designs the FPGA is the low-cost solution and adds value to the system design. As an example, some CPU architectures multiplex the address/data bus and provide local-bus access to low-speed devices such as flash. Here an FPGA is cheaper than the discrete latch solutions provided by some semiconductor companies (see the sketch below)!
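
A hypothetical sketch of that latch (signal names invented), replacing a string of '373-style discrete parts:

```verilog
// Demultiplex a shared address/data bus using the CPU's ALE
// (address latch enable) signal - the job of discrete '373 latches.
module addr_demux #(
    parameter AW = 16
) (
    input  wire          ale,      // address latch enable from the CPU
    input  wire [AW-1:0] ad_bus,   // multiplexed address/data bus
    output reg  [AW-1:0] addr      // demultiplexed address to flash etc.
);
    // Intentional transparent latch: follow the bus while ALE is high,
    // hold the captured address once ALE drops.
    always @* begin
        if (ale)
            addr = ad_bus;
    end
endmodule
```

And once the FPGA is on the board, the chip-select decode and wait-state logic come along almost for free.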

Certainly there is a learning curve, but high-level synthesis has changed the game. With some restructuring of the software code, a function which was previously in the software domain can now be ported to the FPGA. FPGA tools are much more robust now.

2. FPGA as an alternative to ASICs and off-the-shelf ICs

An FPGA is programmable hardware, meaning it is a "soft" system.

For years, FPGAs have been successfully deployed as routers/switches in the networking domain, which some time ago was the preserve of ASICs. The result: a flexible hardware system, built in less time, which can be fixed in the field. Barring a few applications that need extremely low power, and applications that need huge amounts of logic, most applications could use an FPGA. Perhaps after some time the only ASICs left in the world will be FPGAs, but that topic is for an "I have a dream" speech.

A note on Applications and System Integrity:

Just going through the UltraScale+ Zynq architecture: FPGAs will certainly be deployed in the low-power, high-performance applications that previously only ARM-based systems achieved, because the FPGA now has ARM CPUs embedded, with the ability to shut down power to different sections inside the FPGA. Considering DSP requirements, the FPGA certainly fits. Customizable computing: the FPGA provides the architecture to implement bit-slice-type CPUs, highly customized, and preserves the IP value of the company, which pays off in the long term. We cannot always design to cost; as a system designer it is worthwhile considering enhancing the IP value of the system while keeping the cost down - certainly a trade-off is possible. The FPGA also provides hurdles for reverse-engineering geeks and for unfair competition via dumping. Value preservation!

Conclusion:

As a system designer it is always safe practice to provide scope for an FPGA at the planning stage itself. This provides a bail-out if changes to the system pop up at a later point. The long-term benefits override any argument about the learning curve and the slower development phase. That does not mean the learning curve and slow progress are not issues: teams must employ an FPGA specialist now; just paying Paul is not enough, you must pay Peter too. Again, the long-term value must be emphasized.

Reply by antedeluvian, February 2, 2023

There are many reasons to use an FPGA and no doubt there will be many proponents in this thread. I have some experience with PLDs and none with FPGAs. These are the reasons that have convinced me to stay away. Some of these may no longer be valid and are just my prejudice.

1. There is a steep and prolonged learning curve. If you don't have several consecutive projects your experience fades, and not only is it hard to start a new FPGA project months or years later, it is difficult to remember what you did and how you measured the results in the original (this is true of many things).

2. I have seen paradigm shifts in the development tools, which only exacerbate the points in 1 above.

3. FPGAs can use an awful lot of current - no micropower circuits here.

4. FPGAs tend to be expensive compared to a cheap micro and some ICs - no doubt they are better suited to complex and high-speed applications.

5. Not many FPGAs have analog capability (or didn't when I looked at them).

6. BGAs - the curse of the prototyping community.

7. Tied to the learning curve and development tools: it is not always easy to understand and quantify a problem when it is internal to the device.