Part of the fun of FPGAs is that you get to choose exactly what size of integer or fixed-point objects you use.
Need to encode voltages between +12V and -12V with millivolt precision?
Sure, just use a fixed-point variable with a 5-bit integer part (-16 to +15) and a 10-bit fractional part (one part in 1024, so steps of just under a millivolt). No need to waste 32 bits when you only need 15.
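In C terms that encoding behaves something like this (purely an illustrative sketch: C has no 15-bit type, so an int16_t stands in for the 15-bit FPGA signal, and the helper names are made up):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical Q5.10 helpers: 5 integer bits (-16..+15) plus 10
 * fractional bits (steps of 1/1024 V, i.e. just under a millivolt).
 * An int16_t stands in here; on the FPGA the signal would be
 * declared exactly 15 bits wide. */
#define FRAC_BITS 10

static int16_t volts_to_q5_10(double v)  { return (int16_t)(v * (1 << FRAC_BITS)); }
static double  q5_10_to_volts(int16_t q) { return (double)q / (1 << FRAC_BITS); }

int main(void)
{
    int16_t q = volts_to_q5_10(-11.503);  /* encode -11.503 V */
    printf("raw = %d, decoded = %.4f V\n", q, q5_10_to_volts(q));
    return 0;
}
```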
Oh, and you need 37 PWM outputs? We can do that too.
If you realize you really needed 39, you can just add those extra two PWM outputs later.
Can you tell I love FPGAs?
I just can’t wait to see the Eureka moment when Sam discovers them. (Where’s that brain blown emoji when you need it?)
I’ve been wanting to build a 31-band third-octave vocoder in an FPGA for some time. I bought the FPGA and did a few tutorials, but haven’t gotten around to actually doing something with it. There are a lot of floating-point calculations going on in the vocoder; there are some 90 filters and envelope followers involved in my design, and it seems I have to convert the computation to fixed-point arithmetic. So that is where I need to make a start.
Some of Altera/Intel’s high-end FPGAs have floating-point hardware (Arria 10 and above), but they are very expensive, so that is probably not what you are using.
So as @Caustic said, fixed point lets you use integer arithmetic to do computation with fractions.
In FPGAs that means you can use the integer adders and multipliers that are available in the DSP hardware instead of implementing them in gates (which is very expensive even for just an integer multiplier).
You have to determine the range of values you want to encode and the precision that you need, like in my example above.
You try to choose those values so that the total number of bits is smaller than what is supported by the hardware multipliers available in the FPGA you intend to use.
For example, affordable FPGAs like the Cyclone II have 18-bit by 18-bit multipliers in hardware.
When you multiply two 18-bit integer numbers together, you get a 36-bit integer number.
All the bits represent parts of the integer.
But if your 18 bits represent a fixed-point number with, for example, 5 bits of integer part and 13 bits of fractional part, and you multiply two such fixed-point numbers using the same 18x18 hardware integer multiplier, you still get a 36-bit-wide result, but now the top 10 bits represent the integer part and the bottom 26 bits represent the fractional part. You then have to decide which part of that 36-bit result you want to carry to the next operation.
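If it helps to see it with concrete numbers, here’s a rough C sketch of that multiply (32/64-bit C types stand in for the 18- and 36-bit FPGA widths, and the operand values are made up):

```c
#include <stdint.h>
#include <stdio.h>

/* Two Q5.13 operands (5 integer bits, 13 fractional bits) multiplied
 * as plain integers give a Q10.26 result occupying 36 bits; shifting
 * right by 13 renormalizes the product back to Q5.13 for the next
 * stage, discarding the low fractional bits. */
#define FRAC 13

int main(void)
{
    int32_t a = (int32_t)( 2.75  * (1 << FRAC));   /* Q5.13:  2.75  */
    int32_t b = (int32_t)(-1.625 * (1 << FRAC));   /* Q5.13: -1.625 */

    int64_t product = (int64_t)a * b;              /* Q10.26 */
    int32_t renorm  = (int32_t)(product >> FRAC);  /* back to Q5.13 */

    printf("product = %.6f\n", (double)renorm / (1 << FRAC));  /* -4.468750 */
    return 0;
}
```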
Sorry, this is probably too complex a subject to succinctly explain in a forum post.
I think I understand. But does this mean, following your example, that from code block to code block the split/ratio (integer bits vs. fractional bits) may change? That could make things complicated to debug if there is an overflow or underflow and you want to find out where it happens?
Yes, unfortunately, it may change.
The int/frac split may even be different in the two numbers you are multiplying, for example when multiplying an audio sample by some coefficient before adding that product to another product.
To minimize loss of precision, intermediate results might have different encodings before you return to your system’s main encoding.
With audio, you can sometimes constrain your values to be between -1.0 and +1.0, and then the integer part never grows with multiplication (it will still grow when adding, though…)
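As a rough C sketch of that (16-bit Q1.15 stands in for the FPGA’s 18-bit signals, and the function names are made up), a multiply-accumulate that keeps everything in the ±1.0 range looks something like this:

```c
#include <stdint.h>
#include <stdio.h>

/* Q1.15: 1 sign bit, 15 fractional bits, values in [-1.0, +1.0).
 * Multiplying two such values can never leave that range, but the
 * sum of two products can, so the accumulate step saturates
 * (clamps) instead of silently wrapping around. */
static int16_t sat_q1_15(int32_t x)
{
    if (x >  32767) return  32767;   /* clamp to just under +1.0 */
    if (x < -32768) return -32768;   /* clamp to -1.0 */
    return (int16_t)x;
}

/* a*b + c*d: each Q1.15 x Q1.15 product is shifted back to Q1.15 */
static int16_t mac_q1_15(int16_t a, int16_t b, int16_t c, int16_t d)
{
    int32_t p1 = ((int32_t)a * b) >> 15;
    int32_t p2 = ((int32_t)c * d) >> 15;
    return sat_q1_15(p1 + p2);
}

int main(void)
{
    int16_t x = (int16_t)(0.9 * 32768);  /* ~0.9 in Q1.15 */
    /* 0.81 + 0.81 = 1.62 overflows the range, so the result clamps */
    printf("0.9*0.9 + 0.9*0.9 -> %.5f\n", mac_q1_15(x, x, x, x) / 32768.0);
    return 0;
}
```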
Ok, I see. In the vocoder there are a zillion multiplications and additions (lots of band-pass filters, moving-average filters, multiplications, and summation points), so I’d immediately go for constraining the values between -1.0 and +1.0 and only scaling them at the DA level.
You also might be fine with the loss of precision. I have no idea if the vocoder needs a specific precision or whether noticeable aliasing will occur, but it’s entirely possible that the aliasing at the end will sound just fine.
I haven’t thought about aliasing as I always see this as a bandwidth problem and since my sampling frequency is high enough, that should not occur. Or are you implying aliasing through precision loss? How does that come about?