ChatGPT invents modules

I asked GPT to come up with ideas for Eurorack modules. I got one or two cool answers, though the majority of the ideas consisted mainly of rephrased sentences from my input. ^^
I wonder whether you think AI will be able to come up with genuine innovations in the near future, and whether you have already used AI to help with your electronics projects.

Sorry if this is the wrong category for this post. And for the bad English ^^

Can we please not do this?

It’s a text-processing algorithm. It generates output based on the likelihood of words appearing together, learned from the vast amount of text it was trained on. It has absolutely no conception of their meaning. We can flood this forum with useless posts like “I asked ChatGPT to come up with xxx”, but can we please, please just not? I see these abominations enough in my daily life. Thank you.


A better name for these things would be “Artificial Parrot”…


So I thought ChatGPT would be really useful for answering specific technical questions, because I’m new to DIY electronics and have a lot…

At first, it actually offered me schematics, explanations, etc. Then, while reading the datasheet for a component it mentioned, I realized the part didn’t work the way ChatGPT said it would, at all.

So I called it out, and it said its version of “oops,” and I stopped using it for that.

A few days later, I asked it to show me a circuit for an idea I had, and it said it couldn’t, and never had, offered information regarding schematics.

So, in short, it “lied” (really just copied humans, who are wrong often), and then gaslit me about it.

I agree with the others; it’s Mechanical Turk 2.0, and I bet a few experienced makers, a notebook, and a few D20s for randomizing would be a better way of doing what you’re talking about.

Having experimented a lot with all sorts of generative AI myself, and having developed what I think is a nuanced outlook on it, here are a few things I’m thinking:

First, did you see how @HAGIWO used ChatGPT last year to program the microcontroller of a module? As the code is very simple but full of boilerplate, it was exactly the kind of task at which an LLM excels.
But it’s more an indictment of C++ as a language than praise for the LLM.

I tried to use it the same way; it seems impressive, but ultimately you never know when you’ll start running into the Bullshit Wall, where it starts making things up.
Last year I had it write the code for an audio oscillator. I asked it whether that oscillator would have aliasing issues, then asked the computer to add MinBLEP to it. The computer obliged… but did it do it correctly?
That’s the thing: I didn’t have the skills to check at the time. I could have asked it to explain how aliasing and MinBLEP work… but I didn’t have the skills to verify that its explanations weren’t subtly wrong. No substitute for hearing it from a real expert.
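To make the aliasing worry concrete, here is a minimal Python sketch (an illustration, not anyone’s actual module code) of why a naive sawtooth needs something like MinBLEP: trivially sampling the waveform folds its harmonics above Nyquist back down as inharmonic partials.

```python
import cmath

fs = 44100           # sample rate
f0 = 3000.0          # sawtooth fundamental
N = 4410             # 0.1 s of audio -> 10 Hz bin spacing

# Trivially sampled sawtooth: the waveform jumps instantly, so its
# harmonic series doesn't stop at Nyquist -- everything above it
# folds back into the audible band as aliases.
saw = [2.0 * ((f0 * n / fs) % 1.0) - 1.0 for n in range(N)]

def mag_at(freq, x):
    """DFT magnitude of x at a single frequency (direct correlation)."""
    return abs(sum(s * cmath.exp(-2j * cmath.pi * freq * n / fs)
                   for n, s in enumerate(x)))

# The 8th harmonic (24 kHz) is above Nyquist (22.05 kHz) and folds
# down to 44100 - 24000 = 20100 Hz, which is NOT a multiple of 3 kHz.
fundamental = mag_at(3000, saw)    # a real harmonic: strong
alias = mag_at(20100, saw)         # folded, inharmonic partial: present
quiet = mag_at(20050, saw)         # nearby bin with no partial: empty

print(fundamental, alias, quiet)
```

A band-limited oscillator (MinBLEP, PolyBLEP, or additive synthesis up to Nyquist) suppresses that folded energy; with a naive saw it sits right in the audible range, which is exactly the kind of defect that’s hard to spot without the skills to check.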

Second, I’m reminded of the recent story of Willy’s Chocolate Experience [1], [2]: an off-brand Willy Wonka event a grifter cobbled together with the help of generative AI. The actor’s script was AI-generated nonsense. Even the text of the ads was AI-generated, without any effort to correct the text to make sense (text remains a weak point of current diffusion models, as they simply don’t think in words):

I think it really highlights the societal failure mode of generative AI. We were sold the promise of automating away unpleasant labor, of computer vision running the factories. Instead, the computer is automating the bullshit jobs assigned to humans: Willy’s actor having to learn a script full of nonsense. An acceleration of bullshit.

Outsourcing creativity to the models we have seems fundamentally impossible: the computer is designed to pick statistically likely outcomes with just a bit of randomness, to be dependably mediocre.
There’s a lot of debate about AI bias: AI exacerbates the societal prejudices found in its training corpus. Because LLMs are weak to prompt exfiltration, we generally know the default system prompts of the big LLMs, and they’re a litany of recommendations to avoid being racist… yet those instructions are only band-aids, as even something like AAVE dialect brings the prejudice back into focus.
Play with the tech enough and it becomes clear just how infuriatingly conservative its creativity is. Calling it “autocomplete” really undersells its tendency to revert to the mean. Glitches are how it achieves its spots of creative brilliance.
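The “statistically likely with a bit of randomness” point can be made concrete. A toy Python sketch of temperature sampling (hypothetical logits, not any real model’s API): the temperature knob trades the mean-reverting safe choice against the occasional glitch.

```python
import math
import random

def sample(logits, temperature=1.0):
    """Draw a token index from softmax(logits / temperature)."""
    # Low temperature sharpens the distribution toward the single
    # most likely token (reversion to the mean); high temperature
    # flattens it, letting unlikely "glitch" tokens through.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                           # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling over the categorical distribution.
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Hypothetical next-token scores: one clear favorite, two long shots.
logits = [2.0, 1.0, 0.1]
```

At low temperature the favorite wins essentially every time; the model’s default behavior sits close to that conservative end, which is what makes its creativity feel so mean-reverting.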

I just don’t see any path for the computer to come up with any of the concepts I’m exploring. The computer just doesn’t think that way.

Still, it’s a lot of fun to play with the tech and explore the computer’s dreams. It’s great for entertainment, and it’s a huge shame it’s sold as a personal assistant; there’s much territory to explore before the magic tricks become rote. I’m all in favor of open research as a counterpower to unchecked technocapital acceleration. I just won’t touch it for anything serious.


I bet I can type the copy/paste shortcuts for my previous code faster than I can ask ChatGPT to “write” it…


I had an image of a circuit diagram, in Chinese, and I asked it to translate the text and explain the circuit, and it actually did a pretty good job. I use it quite often at the day job, mostly for boilerplate things but sometimes for more complex things like debugging weird SQL. It’s no slouch, but you can’t blindly trust it, as it’ll make things up or leave out important security checks, etc.

I write C++ rarely, so it absolutely is a task where an LLM would outperform me. Writing code is 95% reading docs and 5% writing, so editor mastery only really matters once it’s code you can write from memory.

The Arduino Nano is a very underpowered device: writing this logic in Python, which is much terser and more expressive, would probably mean it would no longer work at audio rates, but only at control rates. You need C++, and all the boilerplate it entails, for this job.

In this specific scenario, the LLM can be thought of as a highly expressive preprocessor compiling to C++.

That’s really the problem here: the tech works in some real-world scenarios and fails badly in others. It wouldn’t be contentious if it never worked.
