I calibrated the pitch CV today.
McGovern’s software takes notes in the range A1 to C8, the range of a standard 88-note keyboard, and maps them to 0 to 4095 mV output from the MCP4822 12-bit DAC. This voltage is then amplified by a factor of 1.77 to span 0 to 7.25 V, which is 1 V/oct: A1 to C8 is 87 semitones, or 7.25 octaves.
In principle, anyway; but if the actual gain of the output stage is different from 1.77, then the output will not be an accurate 1 V/oct.
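In code, the nominal mapping is nearly a one-liner. Here’s a sketch of the idea; the names and the rounding to the nearest code are my assumptions, not McGovern’s actual code:

```cpp
// Sketch of the nominal note-to-CV mapping (my names, not McGovern's).
#include <cstdio>
#include <cmath>

const double kDacPerNote = 4095.0 / 87.0;  // 47.069 DAC codes (mV) per semitone
const double kGain       = 1.77;           // nominal output stage gain

// Map a MIDI note (21 = A1 ... 108 = C8) to a 12-bit DAC code.
int noteToCode(int note) {
    return (int)std::lround((note - 21) * kDacPerNote);
}

int main() {
    for (int note = 21; note <= 108; note += 12) {       // the A's
        double vOut = noteToCode(note) * 0.001 * kGain;  // 1 LSB = 1 mV at the DAC
        std::printf("note %3d -> code %4d -> %.3f V\n", note, noteToCode(note), vOut);
    }
    std::printf("note 108 -> code %4d -> %.3f V\n",
                noteToCode(108), noteToCode(108) * 0.001 * kGain);  // C8: 7.248 V
}
```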
I took some measurements, starting by looking at A in various octaves and finding, for example, 5.965 V instead of 6 V for A7. I measured more voltages, converted the errors to cents (1 V = 1 octave = 1200 cents), and made a plot:
The horizontal axis is MIDI note number (21 = A1, 108 = C8) and the vertical axis is the difference in cents between the measured value and the correct one. Clearly my output stage gain was a little low, I thought, by about half a percent.
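The conversion is nothing fancy: at 1 V/oct, a millivolt is 1.2 cents. For instance, for that A7 reading (a quick illustration, not anything from the project code):

```cpp
// Converting a measured CV error to cents: 1 V = 1 octave = 1200 cents.
#include <cstdio>

int main() {
    double vIdeal    = 6.000;  // what A7 should be
    double vMeasured = 5.965;  // what I measured
    double cents = (vMeasured - vIdeal) * 1200.0;
    std::printf("A7 error: %.0f cents\n", cents);  // -42 cents
}
```

And minus 42 cents at 6 V works out to roughly 0.6% low, consistent with the estimate from the plot.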
So I went into the code and changed the constant of 47.069 (= 4095/87) used to calculate the output voltage to 47.069 × 1.005 = 47.304. At first this had the unfortunate side effect of making C8 produce about 0 V, because the code was keeping only the lowest 12 bits of a number larger than 4095; I changed it to clamp at 4095 instead.
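The fix amounts to something like this (again a sketch, not the actual code):

```cpp
// Sketch of the overflow fix at the new scale factor.
#include <cstdio>
#include <cmath>

const double kDacPerNote = 47.069 * 1.005;  // = 47.304

int main() {
    int note = 108;  // C8, the worst case
    int code = (int)std::lround((note - 21) * kDacPerNote);
    // 87 * 47.304 = 4115, which overflows 12 bits; keeping only the low
    // 12 bits wraps that to 19, i.e. nearly 0 V. Clamp instead:
    if (code > 4095) code = 4095;
    std::printf("C8 -> code %d\n", code);
}
```

With those changes in place, the new measurements looked like this: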
That looks a lot better. C8 is still quite bad, more than a quarter tone off, because it’s been clamped to the maximum voltage the DAC can produce, but that’s just one little-used note. In the middle range the errors are mostly below the 5 cent level, which should be fine for most purposes. But at the high end, and to a lesser extent at the low end, it’s worse, and there’s also a bad spot near the middle: MIDI note 52 is still about 10 cents flat.
All of which, I think, can be blamed on one thing: DAC nonlinearity. Per the datasheet, the MCP4822 has a typical integral nonlinearity of ±2 LSB, with a specified worst case of ±12 LSB; differential nonlinearity is ±0.2 LSB typical, ±0.75 LSB maximum. Here 1 LSB is 1 mV from the DAC, or 1.77 mV at the output, which is about 2 cents.
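Put in terms of cents, using the numbers above:

```cpp
// Back-of-envelope: the MCP4822's INL spec expressed in cents.
#include <cstdio>

int main() {
    double centsPerLsb = 1.77 * 1.2;  // 1.77 mV per LSB at the output, 1.2 cents per mV
    std::printf("1 LSB          = %.1f cents\n", centsPerLsb);         // ~2.1
    std::printf("typical INL    = +/-%.0f cents\n", 2 * centsPerLsb);  // ~4
    std::printf("worst-case INL = +/-%.0f cents\n", 12 * centsPerLsb); // ~25
}
```

So a worst-case part could be off by ±25 cents from nonlinearity alone, and a 10 cent bad spot is well within spec.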
Then it occurred to me to try swapping the two DACs on the board and seeing how the other one behaved. That turned out to be a bit of a mess because I bent a pin on the other DAC in the process, and trying to straighten it out and re-insert it only caused it to bend again. Finally I gave up and swapped in a new chip.
And surprise!
Now the scale is off by about 0.5% in the other direction! But all I changed was the DAC. So it must be the voltage scale of the DAC itself that’s the problem, or part of the problem. (I shouldn’t have been surprised. The datasheet says the internal reference voltage tolerance is about ±2%.)
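Either way the cure is the same: measure one note, compare it to what it should be, and rescale the constant accordingly. A sketch of the arithmetic, reusing the first DAC’s A7 reading as the example:

```cpp
// Sketch: deriving a corrected scale factor from one measurement.
// If a note that should read vIdeal actually reads vMeasured, the gain
// is off by vMeasured/vIdeal, so scale the constant by the inverse.
#include <cstdio>

int main() {
    double vIdeal    = 6.000;          // A7 target
    double vMeasured = 5.965;          // the first DAC's A7 reading
    double nominal   = 4095.0 / 87.0;  // 47.069
    double corrected = nominal * vIdeal / vMeasured;
    std::printf("corrected scale factor: %.3f\n", corrected);  // ~47.34,
    // in the neighborhood of the 47.304 I got by just bumping 0.5%
}
```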
I ended up changing the scale factor to 47.022, just 0.1% less than the nominal value, and got this:
Decidedly better than with the other DAC. Still a little ugly at the bottom and uglier at the top, but the midrange looks great.
But compare with what I think you’d see from a perfectly linear 12-bit DAC:
That’d be quite an improvement! In the real world maybe there’s a DAC with better specs I could use for MCVII, or maybe just selecting the best of several MCP4822s is good enough.
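That last plot, incidentally, is straightforward to simulate: with a perfectly linear DAC the only error left is rounding to the nearest code, at most half an LSB, or about ±1 cent. A sketch:

```cpp
// Simulating the error curve of a perfectly linear 12-bit DAC: the only
// error is rounding the exact (fractional) code to the nearest integer.
#include <cstdio>
#include <cmath>

const double kDacPerNote  = 4095.0 / 87.0;
const double kCentsPerLsb = 1.77 * 1.2;  // ~2.1 cents per LSB

int main() {
    for (int note = 21; note <= 108; ++note) {
        double exact = (note - 21) * kDacPerNote;  // exact code, fractional
        double cents = (std::lround(exact) - exact) * kCentsPerLsb;
        std::printf("%3d %+5.2f\n", note, cents);  // never exceeds ~1.1 cents
    }
}
```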