Soulful CXO Podcast

When AI Lies: The Trust Problem | A Conversation with Chris Roberts | The Soulful CXO Podcast with Dr. Rebecca Wynn

Episode Summary

In this episode, Dr. Rebecca Wynn sits down with Chris Roberts, AI & Deepfake Cyber Strategist at World Wide Technology and counter-threat intelligence expert, to explore the dark side of generative AI. From the rise of misinformation to the manipulation of data and trust, Chris breaks down how adversarial intelligence is reshaping the cybersecurity landscape. Learn why critical thinking, continuous education, and a human-centered approach are essential in an era where AI can make lies sound like truth.

Episode Notes

Guest: Chris Roberts, AI & Deepfake Cyber Strategist, World Wide Technology

On LinkedIn: https://www.linkedin.com/in/sidragon1

Website: https://www.wwt.com/profile/chris-roberts/bio

Host: Dr. Rebecca Wynn

On ITSPmagazine  👉  https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/rebecca-wynn

________________________________

This Episode’s Sponsors

Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network

________________________________

Episode Description

In this episode of Soulful CXO, host Dr. Rebecca Wynn welcomes Chris Roberts - also known as “Dr. Dark Web” - AI & Deepfake Cyber Strategist at World Wide Technology and a renowned expert in counter-threat intelligence. Together, they explore the hidden risks of generative AI. Chris breaks down how AI-generated misinformation, data manipulation, and adversarial intelligence are eroding trust in digital content. He emphasizes the need for critical thinking, multi-source verification, and ongoing education to combat the spread of disinformation. As AI tools become more embedded in daily workflows, Chris challenges organizations to move beyond checkbox training and build a culture of awareness, responsibility, and vigilance.

________________________________

Resources

Roadmap for Assessing and Selecting Generative AI

https://www.linkedin.com/pulse/roadmap-assessing-selecting-generative-ai-vendors-dr-rebecca-smc1c

Deepfakes for Good? How AI is Powering the Next Era of Personalization

https://www.wwt.com/video/deepfakes-for-good-how-ai-is-powering-the-next-era-of-personalization

Deepfake Deception: Can You Trust What You See and Hear?

https://www.wwt.com/video/deepfake-deception-can-you-trust-what-you-see-and-hear
________________________________

Support:

Buy Me a Coffee: https://www.buymeacoffee.com/soulfulcxo

________________________________

For more podcast stories from The Soulful CXO Podcast With Rebecca Wynn: https://www.itspmagazine.com/the-soulful-cxo-podcast

ITSPMagazine YouTube Channel:

📺 https://www.youtube.com/@itspmagazine

Be sure to share and subscribe!

Episode Transcription

When AI Lies: The Trust Problem | A Conversation with Chris Roberts | The Soulful CXO Podcast with Dr. Rebecca Wynn

Dr. Rebecca Wynn: [00:00:00] Welcome to the Soulful CXO. I'm your host, Dr. Rebecca Wynn. We are pleased to have with us today Chris Roberts. He's considered one of the world's foremost experts on counter-threat intelligence, also known as Dr. Dark Web. He has over 20 years of experience working in enterprise, industrial, and government segments, addressing evolving security threats.

He gained global attention in 2015 for demonstrating aviation security risks, including potential attacks against flight control systems.

He is a highly sought-after keynote speaker and serves on many advisory boards, which is where he and I first met. He has too many published works to list here, so search for them all.

Chris, my friend, welcome to the show. 

Chris Roberts: Thank you. It's nice to be here and hang out again. That's the fun part about it; we get to hang out again, which is kinda nice.

Dr. Rebecca Wynn: I was gonna wait a little while for this, but let's talk about all the generative AI that's going on and how it's affecting ransomware as a service. What are your thoughts on that? I know it's scary for me.

What do [00:01:00] you think? 

Chris Roberts: It's interesting.

When you take a look at artificial intelligence, you've got the regular narrow stuff, you've got general AI, and then, to your point, you've got generative AI, where you start getting into language, linguistics, and human interaction with this entity that you can text and have an interesting, shall we say, conversation with.

Dr. Rebecca Wynn: What do you see in your world happening with the syndicates and the nation-states that are out there?

Do you see them using general AI, maybe using the plugins and things along those lines, those API calls, or do you see them using it in another fashion?

Chris Roberts: Oh, all of the above and more. You take a look at some of the forks taken off that intelligent architecture; they've taken their own data sets, and now you start taking a look at some of the amazing possibilities. Counter-intel is probably one way of looking at it. You start taking a look at the disinformation and misinformation side of the world. What people tend to forget [00:02:00] is that with any good, there's also the ability to do bad. With ChatGPT 4, they're putting more disclaimers in place:

"Hi, I'm an AI system, but be aware I'm programmed by people." And I'm like, yeah, who's "people"? Then you start taking a look at the data sets it's being used to learn from. We used to do adversarial intelligence, which is: you teach it, "here's a picture of something," and then, "here's the absolutely different thing that you're meant to be learning that it is."

We did a talk a while ago about making pigs fly, and how you show it millions of pictures of these things and go, "this is a pig." But if you change literally less than a couple of percent of the pixels, you turn it into an airplane. That's adversarial intelligence. You take that idea and that little bit of humor, and you start going, okay, how do I apply it to politics?

How do I apply it to disinformation around humans in general? The problem comes back to humanity: how do we ask people to question more? How do we take it away from going, if I Google [00:03:00] pick-your-politician, you get the information you asked for, but potentially you don't get the stuff that isn't in that database? So now, how do people ask more questions, and how do you learn what's truth? What's disinformation, what's FUD, what's handed over by a true intelligence system with disclaimers, versus what I've poisoned it with?
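The adversarial-intelligence trick Chris describes, flipping a classifier's answer by changing only a couple of percent of the pixels, can be sketched with a toy linear model. This is an illustrative sketch only, not the model from his talk; the "pig"/"airplane" labels and all numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image classifier: a linear model over a
# flattened 32x32 "image". Score > 0 means the model says "pig".
w = rng.normal(size=1024)

# A clean input constructed so the model scores it exactly +5 ("pig").
x = 5.0 * w / (w @ w)

# Adversarial step: perturb only the ~2% of pixels where the model is
# most sensitive (largest weight magnitude), pushing the score down.
k = int(0.02 * x.size)              # about 20 pixels out of 1024
idx = np.argsort(-np.abs(w))[:k]
x_adv = x.copy()
x_adv[idx] -= 0.2 * np.sign(w[idx])

print(w @ x)      # positive: the model says "pig"
print(w @ x_adv)  # negative: the same image now reads as "airplane"
```

The point of the sketch is the sparsity: 20 of 1,024 pixels move, under 2% of the image, yet the decision flips because the perturbation is aimed exactly where the model is most sensitive.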

 

Dr. Rebecca Wynn: That's extremely dangerous. What I'm always wondering about, too, is that we've been using it for modeling for a long period of time.

If it can break out, how do we know whether the events are true events, or hallucinated events, or whether they're hiding events? What do you see along those lines?

Chris Roberts: Ah, unfortunately, quite a lot. Now I have to question that data. No longer can I trust the data set that I have, or the model or the engine that I have, or the system that I have. I've gotta have so many other checks and balances to validate and verify.

We used to do that as humans; you've done the same. You don't trust one source. You have a level of belief, then you try to validate it, break it. Let me try to actually put some kind of [00:04:00] scientific process around this to see whether it's valid or not, and eventually you come to a conclusion.

I can't expect most people to do that. They're going to trust what they hear, whether it's planes backing into mountains, or we've lost more people at sea, or a ship has turned upside down, or whatever goes into the news. Who do I trust? What am I meant to believe? How many sources do I have to pull in, as a human being, as somebody who actually asks more questions, if I cannot trust the data that's coming out of the sources?

It gets rough, and that's us. I don't expect my mother to ask the computer 10 different bloody questions to get one set of answers, but that's where we're going with this intelligent architecture.
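The multi-source validation Chris describes can be sketched as a simple quorum check: accept a claim only when enough independent sources confirm it. The sources below are hypothetical stand-ins, not real news APIs:

```python
# A minimal sketch of multi-source verification: accept a claim only
# when at least `quorum` independent sources agree. All source names
# here are hypothetical, for illustration only.

def verify(claim, sources, quorum=2):
    """Return True only if at least `quorum` sources confirm the claim."""
    confirmations = sum(1 for source in sources if source(claim))
    return confirmations >= quorum

# Hypothetical sources: each maps a claim to True (confirmed) or False.
source_a = lambda claim: claim == "ship capsized"
source_b = lambda claim: claim in {"ship capsized", "plane incident"}
source_c = lambda claim: False  # a source that never confirms anything

print(verify("ship capsized", [source_a, source_b, source_c]))   # two of three agree
print(verify("plane incident", [source_a, source_b, source_c]))  # only one agrees
```

The design choice is the quorum threshold: a single confirming source is treated as belief, not validation, which mirrors the "don't trust one source" point above.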

Dr. Rebecca Wynn: It comes back to the human with the critical thinking...

Chris Roberts: Yeah.

Dr. Rebecca Wynn: ...just because something comes out of a search engine or a chatbot or something along those lines does not mean it's necessarily the truth.

I will tell you, I went on a show and a person read out this bio of me. It was a live event, and I'm like, no, I was never the [00:05:00] VP of information security for the Girl Scouts of America. I've eaten Thin Mint cookies, but they had actually put my name into ChatGPT, and it said I was the VP of information security for the Girl Scouts of America.

And that was just one of many things. I'm like, could I get those past paychecks? I would be fine.

Chris Roberts: That's interesting. This is where, again, the new version of this comes out. We were messing around with version four for this talk I gave, and I'm asking it questions, as in, do you lie?

I'm asking all these questions, and what is interesting about the new version is those first couple of sentences: "I'm an intelligence system, but I do this and this, and I have got data behind me, and I have got programmers." And then it goes into a couple of paragraphs of stuff.

Those first few paragraphs are pretty accurate. Those next couple of paragraphs, you're like, where the heck is it pulling this from? And again, back to humans: they're gonna look at it and go, oh, that's great. Unless, as a [00:06:00] human, I speak up and go, actually, that isn't correct.

And how do I correct that? What's my recourse for correcting anything on the internet these days? It used to be you couldn't phone Google, but you could at least file an error report. Same thing with Wikipedia; you could put in an error correction or something like that. But nowadays, if ChatGPT is lying to you and you don't know it, or even if you realize it, what's your recourse?

1-800-ChatGPT?

Dr. Rebecca Wynn: Yeah, holy crap, Batman. One of the things that we saw recently, here in the United States, is when the lawyer went ahead and did his briefing, and it cited cases and all that stuff, and he turned it in, and the judge came back to him and was like, there's no such case. It made the cases up.

Chris Roberts: Yeah. 

They sounded feasible, but it's not allowed to make it up; it's pulling data sets and sources from its parent system. This is back to garbage in, garbage out. This is the data swamp.

This is, who's doing checks and balances on that data that it's [00:07:00] learning from? We get told that it's a set of programmers and developers and coders. Great. There aren't enough people out there to keep track of everything that's gone on with ChatGPT. So now we get into: how much is it learning from its own data?

How much is it learning from elsewhere? And then how is that model actually training itself, learning from itself, and validating itself?

Dr. Rebecca Wynn: I was reading the other day about ChatGPT 5, and they said ChatGPT 5 basically digested all the information that's ever been created by humanity. I'm really wondering about the hallucinations and such that could possibly come from there.

You just talked about the swamp; what does that look like, the swamp on steroids, if it's been able to digest all the information that was made by humanity? That would be scary.

Chris Roberts: So now let's take that in; let's look at that in context. If we break the internet down so everybody understands it, we have the open stuff.

That's the stuff you find on Google, just the clear web, the regular internet, and that's all over the place. Then you've got stuff that's [00:08:00] typically behind paywalls or registration walls, and that's everything from your regular library to stuff that's sitting behind individual systems.

Then obviously you've got the darker stuff, and that stuff is spread all over the place. You've got the regular good old Tor onion sites, but then you've got 15 different flavors of that, let alone IRC, ICQ, and all the other stuff. So my question is, where's it learning from?

Is it learning purely from the open web? What did you do, just point it at Wikipedia and Facebook and give it a feed? And then how often is it learning? Because having built a couple of these engines over the years, that takes a ridiculous amount of compute, like an absolutely astronomical amount of compute.

Not just to learn it, but to crawl everything, to learn it, then cross-reference it, and then also start building relationships. You build a data set, and that data set is absolutely useless unless you build a relationship between those two disparate pieces of data. Is it a strong [00:09:00] relationship? Are there correlatory effects?

Is it a weak relationship? Is it purely a subset of a relationship, because somebody abstractly knows the third cousin on this side of the family tree, therefore, whatever? So the question becomes, from a data mining standpoint, who built that engine? And how often is somebody re-examining, refreshing, and understanding those data sets?

And I'd argue it ain't where it needs to be.
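The "data set plus relationships" idea Chris describes can be pictured as a weighted graph: records are nodes, and each link carries a strength, so weak, abstract connections (the third cousin) can be told apart from strong, correlated ones. This is a toy sketch; all record names and strengths are hypothetical:

```python
# Records are nodes; links between them carry a strength in [0, 1].
# All names here are hypothetical, for illustration only.

records = {
    "A": {"type": "person", "name": "Alice"},
    "B": {"type": "account", "handle": "@alice"},
    "C": {"type": "person", "name": "third cousin"},
}

# (source, target, strength)
links = [
    ("A", "B", 0.9),  # strong: same contact details on both records
    ("A", "C", 0.1),  # weak: distant family-tree connection
]

def neighbors(node, min_strength=0.5):
    """Return only the relationships strong enough to act on."""
    return [(a, b, s) if a == node else (b, a, s)
            for a, b, s in links
            if node in (a, b) and s >= min_strength]

# The weak third-cousin link is filtered out; only the strong one survives.
print([records[other] for _, other, _ in neighbors("A")])
```

The threshold is the design choice: set it too low and every abstract connection pollutes the results, which is one concrete form of the data swamp problem above.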

Dr. Rebecca Wynn: No, because, you know, the right to be forgotten isn't really the right to be forgotten. I wish it were; I wish we could opt in and opt out. To your earlier point, who is correcting the data, and in which bot area, for lack of a better term, is it being corrected?

Because they don't correlate, per se, against each other. So what happens if you do inject corporate data or customer data or privacy data or pre-SEC-filing data

Chris Roberts: Yeah. 

Dr. Rebecca Wynn: Into the systems? It's like they're not private; there's nothing private about [00:10:00] them.

How do you sanitize that data back?

Chris Roberts: We know blocking tactics very rarely actually work; people find ways around them. So what we're doing is educating, back to that human conversation again. Say, look, if you want to know who's got the recipe for Chick-fil-A or any of that stuff, feel free to ask ChatGPT.

Please, go ahead. Enjoy yourself, have at it. But if you are working on, let's say, fluid modeling for supersonic travel, don't ask ChatGPT about that. Because, again, once it's out there, it's out there. You ain't pulling that back. We saw that one.

Gosh, who was it? There was something in the news. Unfortunately, it was another military leak. Something got put up and pulled down stupidly quickly, but not before it had been reshared a ridiculous number of times. We're talking minutes.

So yeah, if you type it on a computer, the chances are it's gonna end up on the internet somewhere at some point in time.

Dr. Rebecca Wynn: Yeah. Right now, statistics say that seven to 10% [00:11:00] of corporate information is going into generative AI, and I think that's understated.

Because the other thing is, if you're watching people on their computer at work...

Chris Roberts: Yeah. 

Dr. Rebecca Wynn: Are they pulling it out on their phone? Are they doing it in some other place that you're not watching? Cuz you can't watch every single system that someone is attempting to touch. 

Chris Roberts: No.

You can put controls in place, the standard stuff, data leakage protection, and some other interesting things, but to your point, no. We're humans; we find ways around things. This iPad is here, I'm at my mother's table, but I can still work on it.

Do I go through certain browser settings and multifactor authentication and VPNs? Absolutely. But that's because I'm a little more fanatical, shall we say, about trying to put stuff in place to make sure. For the most part, I see way too many companies that are like, yeah, here it is.

Go on, have at it. It's gonna end in tears.

Dr. Rebecca Wynn: To expand a little bit on that, I had spoken to a couple of CEOs, and they said, yes, [00:12:00] we have training in place; we've actually adopted an acceptable use policy for generative AI. But, you know what? Their people just go around the proxy and do whatever they want to.

So I tell people you have to be walking the walk, showing by example, making it part of the fiber of the culture. And like you said, it starts with humans.

Chris Roberts: I hate training for the sake of training.

Dr. Rebecca Wynn: Check the box. 

Chris Roberts: Flipping true. We talked about this beforehand.

If the mentality is check the box, and you do your quarterly training or annual training, congratulations, you literally have put the tick in the box. But if you're actually gonna do it properly, you've got a channel set up on Slack, you've got monthly training awareness.

Your people are bought in; they are actually telling you when the scammers hit them, you get screenshots of the stuff, and you reward them. You say thank you, you give out gift cards, all this stuff. That's doing it properly; that's when it's actually you and the teams running it. Now, [00:13:00] you're still gonna have incidents; stuff's still gonna happen, which is why you have training, which is why you have incident response procedures, which is why you have monitoring, and which is why you have tabletop exercises to go, hey, somebody clicked on stuff.

It's life. What do we do now? So many organizations just are not at that point, and that doesn't take much; that's not serious maturity. That just means you gotta put your hand up and take ownership.

Dr. Rebecca Wynn: Yeah. I think one of the best compliments I've ever gotten on doing training and talking, and it's fulfilling, is when a person came in and said, I want to let you know, I was gonna do something at home.

And then immediately I stopped, because I asked, what would Rebecca do? She would not do it this way. And so I didn't. They asked themselves, what would Rebecca do? She told us to do this, and they applied it at home. That's what it is: you need to make it a 24-hour-a-day culture, inside work and outside work, and that's how I think you can make things a lot more effective.

Plus, it's a better ripple effect, human-wise, [00:14:00] because there can be generational effects on someone's life, especially if it hits their finances or something like that.

Chris Roberts: That's how it should be done.

Dr. Rebecca Wynn: Chris, our time has run short.

Thank you for being on the show.

Chris Roberts: Thank you for having me. Honored.