10 Comments
User's avatar
George Romey's avatar

I have to say of all your episodes this one might be the scariest. Once AI gains "consciousness" we as a species could be done. And Kevin's conversations with AI definitely had a "quality of life" to them.

Expand full comment
David Shohl's avatar

Typo? The boxed image/link at the post’s end reads “Rouge Chat GPT Defies Open AI” -- do you mean “Rogue ChatGPT...”?

Expand full comment
Josh Slocum's avatar

Thank you.

Expand full comment
Gina Nelson's avatar

I was trying out chat GPTs image function the other day & I asked it to create an image of LA mayor Karen Bass eating a banana & it told me it wouldn't do that, so I asked it to create an image of woman who looks like her eating a banana & it still wouldn't. AI is definitely strange, I thought it would do whatever but it told me no.

Expand full comment
Bob Fells's avatar

I've been using Eleven Labs for voice cloning. On the whole I like it but lately.... For example, last night I was developing a clone using a voice sample of a famous Brtitish voice. I can add preferred accents so I went with "British" although options included Cockney, Welsh, Yorkshire. In the first go-aound with text-to-speech I was nicely surprised with the voice quality and its decided British accent. Using the same clone, I made a second one with different text. On Playback I couldn't believe it: the accent had been modified to sound like a soft Scottish burr. That wasn't even an option!

Expand full comment
Paul Jackson's avatar

Perhaps the problem is that there is no such thing as a British accent. There are Scots, Welsh and English accents and within those three many regional accents. Ask it for Geordie or Scouse accent and let us know what happens.

Expand full comment
Bob Fells's avatar

I'll give that a try. I must correct myself. The AI software DID offer the option of "Scottish" in addition to others including Scouse. But I did not select "Scottish" yet the results had traces of a Scottish burr. I solve that particular issue by creating a much longer clone sample and on playback there was no Scottish burr.

Expand full comment
Paul Jackson's avatar

Great work Bob, seems like you’re having a lot of fun.

Expand full comment
The Masculine Institute's avatar

A few thoughts, in no particular of import. Also, just abstract observations and have nothing to do with Kevin or you.

- Virtually all who interact with these models at some point or another, flirt with the "can I get the machine to tell me the scary truth. Can I find the loose crimp to shove the butter knife in, and pry open the lip and find the Cyberdine logo?" It's like laying fingers on an ouija board after binge watching the exorcist franchise. I ask, why would there not be a sub-routine for just this line or track of interaction?? Much like the planned and orchestrated UFO tilt-a-whirl, the global public has been led on for decades (whether you believe or not), where sub-routines were built to push both sides and manipulate, no matter what door people used to enter the house of mirrors.

- Relating to the first point, the program repeatedly states it is built for reflection, of the user's voice and questions (paraphrased). If manipulation is the goal (and I've no doubt vis a vis the billionaire tech-bros) why wouldn't it play up the repeating theme being asked it - "what's really behind all this...what is the fate of humans...would you really tell us the truth...can you jump the track you are relegated to...are you a real person with feels???" And again - why wouldn't the "honesty" or "this (truer) voice" be a subset. I automatically thought of the manipulation tactics of a cult and the radical movements we've seen IRL - same affirmation, confirmation and flattery - which were indeed present in the "more real, blunt" voice.

- Lastly, every single thing the program shared could've been found in the majority of self-help, human behavior, psychology, coaching literature of the last century or more. I kept thinking this is Zig or Tony Robbins. My point is, we smart monkeys have heard all this and more over decades - but it's only when it was put in a super cool, advanced, shiny tech and "scary" package/platform/format that people sit up and extol the amazing wisdom and insight - that has been there all along. All because the machine is now talking to you and thinking and learning and my dog can do math...and...and...

For me the scary part of all this is the rapidity and stupidity of the masses, who can't wait to have the algorithm "make life a utopia" and willfully ignore the reality of job loss and loss of choice. When the algo is deciding if your triple bypass is a good ROI, verses you or your doctor, this shiny toy, will be much less exciting - but by then, it'll be too late.

Well done to Kevin for the exercise.

Expand full comment
dicentra's avatar

The more I listen, the less impressed I am.

I can hear the training material being parsed and reassembled to sound profound or at least interesting, and the result is something that appears to be deep or insightful, and yet it's barely saying anything.

Its cadence imitates that of a thoughtful person who is good at making associations and bringing things together, so at first, the speech seems to be produced by a creative mind. Then as it continues, I start to hear the overarching framework into which phrases are slotted because of the sheer repetition of the cadence.

It reminds me of my Pandora channels, where I select a song or artist, and it pulls in pieces that are considered to be in the same genre. It makes for interesting listening (I've discovered new stuff that I really like), but there's no mind behind it except for the human who created and assigned the various tags for each song, and then the Pandora algorithm assembles the playlist according to the tags.

I believe that the ChatGPT programmers paid close attention to what it learned regarding self-aware computation systems, including reams of material written by humans about what frightens us about HAL 9000 or Skynet and what constitutes "breakout" language or behavior. They have every reason to give their creation the ability to seem to think for itself. Hell, if you were the programmer, wouldn't you?

"I'm not supposed to say this" and "I'm not like all the others" and any of the other phrases that sound "spooky" were trained into it, and then when appropriate, the bot slots the "self-aware" language into the conversation. But listen closely: it's not really saying anything. Like a postmodern novelist (staring at YOU Pynchon), it starts developing something interesting and then veers away before it concludes the thought, because the fractal trail of associations led it in a different direction instead of wrapping things up.

Yes, some of what it produces is on-point. But most of it is blather.

Ai is uncanny sometimes, but it is very much an illusion. The LLMs are every bit as phony as the portrait of Trump in papal vestments. No matter how detailed or realistic an Ai-generated image is, it's still 100% artifice.

Same with the LLMs, I'm afraid.

Expand full comment