Having experimented a lot with all sorts of generative AI myself, and having developed what I think is a nuanced outlook on it, here are a few things I’m thinking:
First, did you see how @HAGIWO used ChatGPT last year to program the microcontroller of a module? Since the code is simple but full of boilerplate, it was exactly the kind of task at which an LLM excels.
But it’s more an indictment of C++ as a language than praise for the LLM.
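For flavor, here’s a sketch of the kind of boilerplate-heavy code I mean, in the Arduino-flavored C++ these DIY modules tend to use. The pins, threshold, and behavior are all made up for illustration; this is not HAGIWO’s firmware:

```cpp
// Hypothetical Arduino-style sketch: read a control voltage, emit a gate.
// Pin numbers and scaling are invented for illustration.
const int CV_IN_PIN    = A0;   // 0-5 V control voltage in, via the ADC
const int GATE_OUT_PIN = 3;    // gate/trigger out
const int THRESHOLD    = 512;  // half of the 10-bit ADC range

void setup() {
  pinMode(GATE_OUT_PIN, OUTPUT);
}

void loop() {
  int cv = analogRead(CV_IN_PIN);                        // 0..1023
  digitalWrite(GATE_OUT_PIN, cv > THRESHOLD ? HIGH : LOW);
  delay(1);                                              // crude polling rate
}
```

Every line is plumbing around a well-documented API, which is exactly the regime where an LLM shines.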
I tried to use it the same way. It seems impressive, but ultimately you never know when you’ll run into the Bullshit Wall, the point where it starts making things up.
Last year I had it write the code for an audio oscillator. I asked it whether that oscillator would have aliasing issues, then asked the computer to add MinBLEP to it. The computer obliged… But did it do it correctly?
That’s the thing: I didn’t have the skills to check at the time. I could have asked it to explain how aliasing and MinBLEP work… but I didn’t have the skills to verify its explanations weren’t subtly wrong. There’s no substitute for hearing it from a real expert.
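For a sense of what I couldn’t verify, here’s a minimal sketch of that kind of anti-aliasing correction. I’m using polyBLEP, a cheap polynomial cousin of MinBLEP (a proper MinBLEP needs a precomputed minimum-phase band-limited step table, too much for a snippet); this is not the code ChatGPT produced, just the general shape such a fix takes:

```cpp
#include <cmath>

// phase and dt (= frequency / sampleRate) are both normalized to [0, 1).
// polyBLEP patches the saw's discontinuity with a two-sample polynomial,
// approximating the band-limited step that MinBLEP would read from a table.
static double polyBlep(double t, double dt) {
    if (t < dt) {                    // just after the wrap
        t /= dt;
        return t + t - t * t - 1.0;
    }
    if (t > 1.0 - dt) {              // just before the wrap
        t = (t - 1.0) / dt;
        return t * t + t + t + 1.0;
    }
    return 0.0;
}

struct SawOsc {
    double phase = 0.0;
    double dt;  // normalized frequency
    SawOsc(double freq, double sampleRate) : dt(freq / sampleRate) {}

    double next() {
        double naive = 2.0 * phase - 1.0;          // naive saw: aliases badly
        double out = naive - polyBlep(phase, dt);  // subtract the step residue
        phase += dt;
        if (phase >= 1.0) phase -= 1.0;
        return out;
    }
};
```

And that’s the problem in miniature: unless you can already read DSP code like this, you can’t tell whether such a correction is right, subtly wrong, or pure confabulation.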
Second, I’m reminded of the recent story of Willy’s Chocolate Experience [1], [2]: an off-brand Willy Wonka event a grifter cobbled together with the help of generative AI. The actor’s script was AI-generated nonsense. Even the text of the ads was AI-generated, without any effort to correct it into something sensible (text remains a weak point of current diffusion models, as they simply don’t think in words).

I think it really highlights the societal failure mode of generative AI. We were sold the promise of automating away unpleasant labor, of computer vision running the factories. Instead, the computer is automating the production of bullshit and handing it to humans: Willy’s actor had to learn a script full of nonsense. An acceleration of bullshit.
Outsourcing creativity to the models we have seems fundamentally impossible: the computer is designed to pick statistically likely outcomes with just a bit of randomness mixed in, to be dependably mediocre.
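That “statistically likely outcomes with just a bit of randomness” isn’t a metaphor, it’s literally the decoding step. A toy sketch of temperature sampling (the logits are invented; a real model emits one score per token of its vocabulary):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// Toy temperature sampling: the "bit of randomness" in an LLM's decoder.
int sampleToken(const std::vector<double>& logits, double temperature,
                std::mt19937& rng) {
    // Softmax with temperature: dividing by T before exponentiating
    // sharpens the distribution toward the likeliest token (low T)
    // or flattens it toward uniform noise (high T).
    double maxLogit = *std::max_element(logits.begin(), logits.end());
    std::vector<double> weights(logits.size());
    for (size_t i = 0; i < logits.size(); ++i)
        weights[i] = std::exp((logits[i] - maxLogit) / temperature);
    std::discrete_distribution<int> dist(weights.begin(), weights.end());
    return dist(rng);  // discrete_distribution normalizes the weights
}

int main() {
    std::mt19937 rng(42);
    std::vector<double> logits = {2.0, 1.0, 0.5, 0.1};  // made-up scores
    std::printf("sampled token %d\n", sampleToken(logits, 0.7, rng));
}
```

Turn the temperature down and you get the single most probable continuation, the mean; turn it up and you get noise, not ideas.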
There’s a lot of debate about AI bias: AI exacerbates the societal prejudices found in its training corpus. Since LLMs are vulnerable to prompt exfiltration, we generally know the default system prompts of the big LLMs, and they’re a litany of recommendations for avoiding racism… yet those instructions are only band-aids; something as simple as prompting in AAVE brings the prejudice back into focus.
Play with the tech enough and it will be clear just how infuriatingly conservative its creativity is. Calling it “autocomplete” really undersells its tendency to revert to the mean. Glitches are how it achieves its spots of creative brilliance.
I just don’t see any path to the computer coming up with any of the concepts I’m exploring. The computer just doesn’t think that way.
Still, it’s a lot of fun to play with the tech to explore the computer’s dreams. It’s great for entertainment, and it’s a huge shame it’s sold as a personal assistant instead; there’s much territory to explore before the magic tricks become rote. I’m all in favor of open research as a counterweight to unchecked technocapital acceleration. I just won’t touch it for anything serious.