I’ve been going through some csound examples and was inspired to extend one to make this morphing waveshaper proof of concept thing:
It’s a nice prototype - I think I should be able to port it fairly easily to Bela, which supports Csound natively. Getting it onto the Daisy might be a bit more work, but having this implementation as a guide will certainly help.
Next steps would be to add multiple detuned oscillators, and maybe external envelope control.
Well, then. While biking to work I pondered summing signals in a polyphonic synth. If one voice has peak values of ±1 (which in this case is the maximum output range) and an RMS level of about 0.7, then with two (uncorrelated) voices the peaks can reach ±2 while the RMS only grows by √2, to about 1.0; with three it’s ±3 and about 1.2; with ten, ±10 and about 2.2. We never want digital distortion, so the summed signal must be attenuated back into the ±1 range, but then a single played note only peaks at ±0.1.
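A quick numerical sketch of that reasoning (the frequencies and sample count here are arbitrary picks of mine, not from any synth): summing n uncorrelated sine waves makes the worst-case peak grow like n, while the RMS only grows like 0.707·√n.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

const double kPi = 3.14159265358979323846;

// Sum n sine waves at unrelated frequencies and measure the peak and
// RMS of the mix. Peaks can approach n, but RMS only grows as sqrt(n).
void measure(int n, double* peak, double* rms) {
    const double freqs[3] = {211.3, 389.7, 586.9}; // arbitrary, unrelated
    const int kSamples = 1 << 19;                  // roughly 11 s at 48 kHz
    double p = 0.0, power = 0.0;
    for (int i = 0; i < kSamples; ++i) {
        double t = i / 48000.0;
        double s = 0.0;
        for (int k = 0; k < n; ++k)
            s += std::sin(2.0 * kPi * freqs[k] * t);
        p = std::max(p, std::fabs(s));
        power += s * s;
    }
    *peak = p;
    *rms = std::sqrt(power / kSamples);
}
```

For one voice this reports an RMS around 0.71; for three voices the peak heads toward ±3 while the RMS only reaches about 1.22.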
How is this usually handled in polyphonic synths? Add a compressor/limiter/soft clipper to make single notes louder and tame full polyphonic playing without distorting the signal too much?
You could also have logic that lowers the volume of the notes when multiple are played, but you can never know in advance how many will be played. It would sound a bit stupid if you played long notes one by one and the already-sounding ones got quieter each time a new one was added.
I’m not sure what the “right” way to do it is, but here’s how I’ve handled it in a couple of different projects.
On my modal resonator synth thing, I just divide each single note’s value by the number of notes. I could maybe get away with the square root of the number of notes or something like that, but this thing can get quite resonant, so I prefer a little more margin.
to_out += notes[j]->Process(to_in) / NUM_NOTES;
After that, I also have the option of putting it through some soft clipping code with different models. Here’s an example:
to_out = tanhf(to_out) * INV_TANH_1;
So that avoids any nasty digital clipping, but no effort is made to ensure the signal is bandlimited, so you can get aliasing happening. But to me, digital clipping or aliasing are somewhat part of the sound of these kinds of things, and if you don’t push it too hard everything seems to sound okay.
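Putting those two snippets together, here’s a minimal self-contained sketch (the function name and argument layout are mine, not from the actual project):

```cpp
#include <cassert>
#include <cmath>

// Rescale factor so that an input of exactly 1.0 still maps to 1.0
// after the tanh; without it the soft clip also attenuates quiet signals.
const float INV_TANH_1 = 1.0f / std::tanh(1.0f); // ~1.313

// Average the voices for headroom, then soft-clip the sum.
// If every voice stays within +/-1, the averaged sum does too,
// so the output never exceeds +/-1 either.
float mix_and_clip(const float* voices, int num_voices) {
    float sum = 0.0f;
    for (int i = 0; i < num_voices; ++i)
        sum += voices[i] / (float)num_voices; // divide-by-N mixing
    return std::tanh(sum) * INV_TANH_1;       // smooth limiting
}
```

The nice property of the normalised tanh is that small signals pass through almost linearly (just slightly boosted) while anything approaching full scale gets rounded off instead of hard-clipped.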
One of the benefits of working with floating point is that you have stupid amounts of headroom, so you can sum whatever you like in the signal chain and as long as you ensure that it’s in the right range for the final output stage you’re not going to get any distortion.
In the Csound waveshaper above, when I added two extra oscillators I did much the same thing: divide each oscillator’s output by three before summing. One of the things Csound will do is tell you how many samples went out of range, so I just plugged everything in, wiggled all the knobs, made sure nothing was reported out of range, and called it a day.
Optimisation is a specialist skill now, though when I learned programming it was still considered to be the most important thing. Unrolling loops is great if your project is compute-bound, but do check to see if input or output is part of the problem. Using interrupt routines enables even the simplest processors to proceed with computation while waiting for slow data input or output to complete. A few bytes of RAM deployed as a buffer can make a big difference to execution, particularly speed of response.
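As a concrete illustration of that last point, here’s a minimal single-producer/single-consumer ring buffer of the sort you’d put between an interrupt handler and the main loop. This is a generic sketch, not tied to any particular MCU; real ISR code would also need to respect the platform’s rules about atomic access to shared indices.

```cpp
#include <cassert>
#include <cstdint>

// Minimal SPSC ring buffer: an ISR can push() incoming bytes while the
// main loop pop()s them at leisure. Size is a power of two so the
// index wrap is a cheap bitmask instead of a modulo.
struct RingBuffer {
    static const uint8_t kSize = 16;    // holds up to kSize - 1 bytes
    volatile uint8_t data[kSize];
    volatile uint8_t head = 0;          // written only by the producer (ISR)
    volatile uint8_t tail = 0;          // written only by the consumer (loop)

    bool push(uint8_t v) {              // called from the interrupt
        uint8_t next = (head + 1) & (kSize - 1);
        if (next == tail) return false; // full: drop rather than block
        data[head] = v;
        head = next;
        return true;
    }
    bool pop(uint8_t* v) {              // called from the main loop
        if (tail == head) return false; // empty
        *v = data[tail];
        tail = (tail + 1) & (kSize - 1);
        return true;
    }
};
```

Because each index is written by only one side, the ISR and the main loop never fight over the same variable, which is what makes this safe without disabling interrupts.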
And of course, try rolling the outermost loops back. If you have an outer loop that only executes, say, 10 times per second, the tiny amount of loop overhead time you’d save by unrolling it isn’t worth the excessive use of program memory.
Yeah, the problem is data I need to recalculate basically in real time, at the sample rate of the synth output. I’m trying to make an additive oscillator that sums eight sine waves and is also polyphonic. The Hagiwo version of that has just barely enough processing power to multiply (and normalize back) all the harmonics for one voice, and it loses lots of precision at different stages. I recalculate the whole wavetable at control rate so that the output sample rate can process four voices through their ADSR envelopes. Still, that’s not enough, so I do the wavetable calculation in four parts across four different control-rate ticks. And I don’t have enough RAM to save the whole table for the original sine wave, so I save just the first quarter of it and mathify the rest from that.
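The quarter-table trick is a classic and worth spelling out. Here’s a generic sketch (the 256-sample length and int16 format are my assumptions, not the actual project’s code): only indices 0..64 of the sine are stored, and the other three quarters come from mirror and sign symmetry.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

const double kPi = 3.14159265358979323846;
const int kTableLen = 256;
const int kQuarter = kTableLen / 4;   // 64
int16_t quarter[kQuarter + 1];        // first quarter, plus the peak at 64

// Fill only the first quarter of a full-scale int16 sine period.
void init_quarter() {
    for (int i = 0; i <= kQuarter; ++i)
        quarter[i] = (int16_t)std::lround(
            32767.0 * std::sin(2.0 * kPi * i / kTableLen));
}

// Look up any of the 256 positions of the full period from the
// stored quarter, using mirror symmetry and sign flips.
int16_t sine_at(uint8_t idx) {
    uint8_t q = idx & 63;             // position within the quarter
    switch (idx >> 6) {
        case 0:  return quarter[q];                       // rising, positive
        case 1:  return quarter[kQuarter - q];            // falling, positive
        case 2:  return (int16_t)-quarter[q];             // falling, negative
        default: return (int16_t)-quarter[kQuarter - q];  // rising, negative
    }
}
```

That cuts the RAM cost of the reference sine from 512 bytes to 130 at the price of a shift, a mask and a branch per lookup.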
Also, the oscillators in the Mozzi library I’m using don’t actually support changing wavetables (they are always constants), so I actually run a 256-sample-long saw wave in an oscillator and use its output to read the sine samples from my own array in RAM…
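The saw-as-index trick boils down to this: a ramp that runs 0..255 once per period is exactly a table index, so its output can address any wavetable sitting in RAM. A generic sketch without any Mozzi specifics (the 8.8 fixed-point phase accumulator is my assumption):

```cpp
#include <cassert>
#include <cstdint>

const int kLen = 256;
int16_t wavetable[kLen];      // rewritten at control rate by the synth

// A phase accumulator is effectively the "saw" oscillator: its top byte
// sweeps 0..255 once per period and serves directly as a table index.
struct PhasorOsc {
    uint16_t phase = 0;       // 8.8 fixed point
    uint16_t inc = 0;         // frequency, as phase step per sample
    int16_t next() {
        uint8_t idx = phase >> 8;  // the ramp value, 0..255
        phase += inc;              // wraps for free at 16 bits
        return wavetable[idx];
    }
};
```

So any oscillator that can only play a constant ramp can still be repurposed as a read head for a table that changes under it.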
If it’s stupid and it works, it’s not stupid. Except when it is.
@Krakenpine it’s definitely not stupid. It sounds rather ingenious. If you don’t already have an Arduino Mega2560 I would encourage you to procure one. At least this would give you space in which to explore the possibilities in search of a solution that can be scaled down to the Nano.
Alternatively, there seem to be a number of ARM chips that compete well against AVR and are being adopted by the Arduino software platform. I can’t advise on that, but I feel I need to point out that migration to ARM may be beneficial.
Well, I got something harmonics-based and polyphonic done. Sometimes it’s very buzzy, distorted, glitchy (I mean, there are some proper bugs in there) and aliasing, but technically it’s close to what I was aiming for. So that’s a MIDI keyboard into an Arduino Nano with a filter circuit on its PWM output, and that’s recorded straight from it.
I spent a whole fifteen minutes on the MIDI note-assignment and note-stealing logic, so it’s quite crude sometimes.
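For anyone curious, a crude note-assignment scheme like that can be as small as this: grab a free voice if there is one, otherwise steal the one that has been sounding longest. The voice count and the age counter are my assumptions, not the actual code:

```cpp
#include <cassert>
#include <cstdint>

const int NUM_VOICES = 4;

struct Voice { uint8_t note = 0; bool active = false; uint32_t age = 0; };
Voice voices[NUM_VOICES];
uint32_t note_clock = 0;   // monotonically increasing note counter

// Assign a note to a voice: prefer a free voice, otherwise steal the
// oldest sounding one. Returns the voice index used.
int note_on(uint8_t note) {
    int pick = 0;
    uint32_t oldest = UINT32_MAX;
    for (int i = 0; i < NUM_VOICES; ++i) {
        if (!voices[i].active) { pick = i; break; }    // free voice wins
        if (voices[i].age < oldest) { oldest = voices[i].age; pick = i; }
    }
    voices[pick].note = note;
    voices[pick].active = true;
    voices[pick].age = ++note_clock;
    return pick;
}

// Release whichever voice is holding this note, if any.
void note_off(uint8_t note) {
    for (int i = 0; i < NUM_VOICES; ++i)
        if (voices[i].active && voices[i].note == note)
            voices[i].active = false;
}
```

Oldest-note stealing is the simplest policy that usually sounds acceptable; fancier allocators steal the quietest voice or the one furthest into its release.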
The SoundCloud player is irritating. But yeah, the monophonic version basically has much better sound quality. Maybe I’ll just remake this with a Teensy or Daisy and have one potentiometer as a pseudo-bitcrusher, since I like the crunchiness of this 8-bittery, but I’d like to have it without the aliasing and other artifacts that come from way too short wavetables.