Soulful CXO Podcast

Master Class: How to Spot AI-Generated Content | A Conversation with Michele Stuart | The Soulful CXO Podcast with Dr. Rebecca Wynn

Episode Summary

In this FREE MASTER CLASS, we dive deep into the complexities and potential dangers of AI, with valuable insights and practical advice. Learn how to spot AI fakes, how to proactively train yourself and your team, which practical tools to use, and how to better protect your privacy, with coverage of deepfake awareness, cyberbullying, and the legal issues involved. Don't miss out on this eye-opening episode!

Episode Notes

Guest: Michele Stuart, CEO, Keynote Speaker, OSINT Trainer, JAG Investigations

Website: https://www.jaginvestigations.com/

LinkedIn: https://www.linkedin.com/in/michele-stuart-jag
 

Host: Dr. Rebecca Wynn

On ITSPmagazine  👉  https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/rebecca-wynn

________________________________

This Episode’s Sponsors

Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network

________________________________

Episode Description

In this episode of the Soulful CXO, host Dr. Rebecca Wynn welcomes back Michele Stuart, CEO of JAG Investigations. Her expertise lies in Open-Source Intelligence (OSINT), counterintelligence, insurance fraud investigations, financial investigations, threat assessments/mitigation, due diligence, organized retail crime, and corporate and competitive intelligence. She provides consulting and training services to federal, state, and local law enforcement agencies, military intelligence communities, Fortune 500 companies, and the financial and insurance industries. Additionally, she has served as an instructor at Quantico (FBI Academy) for international training in OSINT, is a keynote speaker, and teaches classes for many professional organizations. Learn how to spot AI fakes, how to proactively train yourself and your team, which practical tools to use, and how to better protect your privacy, with coverage of deepfake awareness, cyberbullying, and the legal issues involved.

________________________________

Resources

AI Generated or Human?
https://www.whichishuman.com

Reverse Image Search
https://www.tineye.com

Face Recognition Search Engine & Reverse Image Search
https://www.pimeyes.com

AI Art Generator
https://www.starryai.com

________________________________

Support:

Buy Me a Coffee: https://www.buymeacoffee.com/soulfulcxo

________________________________

For more podcast stories from The Soulful CXO Podcast With Rebecca Wynn: https://www.itspmagazine.com/the-soulful-cxo-podcast

ITSPmagazine YouTube Channel:

📺 https://www.youtube.com/@itspmagazine

Be sure to share and subscribe!

Episode Transcription

Master Class: How to Spot AI-Generated Content | A Conversation with Michele Stuart | The Soulful CXO Podcast with Dr. Rebecca Wynn

[00:00:00] Dr. Rebecca Wynn: Welcome to the Soulful CXO. I'm your host, Dr. Rebecca Wynn. We are pleased to have back with us Michele Stuart. Michele is the CEO of JAG Investigations, with over 34 years of investigation experience. Her expertise lies in open source intelligence, OSINT, counterintelligence, insurance fraud investigations, financial investigations, threat assessments and mitigation, due diligence, organized retail crime, and corporate and competitive intelligence. She provides consulting and training services to federal, state, and local law enforcement agencies, military intelligence communities, Fortune 500 companies, and the financial and insurance industries.

Additionally, she has served as an instructor at Quantico, the FBI Academy, for international training in OSINT, is a keynote speaker, and teaches classes for many professional organizations, which is where she and I [00:01:00] first met well over 13 years ago; I took one of her classes then. In 2017, Michele's dedication to the field led her to collaborate with the Pennsylvania Office of Homeland Security.

Together, they developed a groundbreaking Keeping Kids Safe program, which educates administrators, principals, teachers, parents, and others on the risks associated with online and social media activity, predator grooming, and the dangers of application and cellular security. Michele, it's such a delight to have you back on the show with us today.

So you asked me to get straight into it after I gave that great bio about you. We've recently seen in the news, especially here in Arizona, stories about autonomous cars: all the lawsuits against them, cars driving into lakes, running into vehicles, crossing traffic, and things along those lines.

It seems to me like a malicious attacker could probably take over those cars and cause real havoc [00:02:00] for everybody. But what do you see in that area? What are your thoughts?

[00:02:04] Michele Stuart: A lot of it is the software, the AI that's being utilized in these smart cars, smart devices, and everything.

I don't necessarily always think it's a malicious type of attack. I think it's a learning process the program, the software, still has to go through in and of itself. But I'm sure you guys have seen videos of these cars driving, unfortunately, the wrong way. I just saw one last week where a Waymo car was coming head on, going in the completely wrong direction from where it was supposed to. Or in Austin.

I don't know if you saw that one, but it actually made me laugh. There was a whole bunch of those cars all stuck together, and they caused this huge traffic jam because none of them would move: two of them were facing each other, and neither one would move. And so it just created this huge coordinated, or uncoordinated, traffic jam.

It's funny to watch the videos, but it's scary in that same sense, because you don't have control of the vehicle, [00:03:00] right? It's the software that's supposed to be going through its guidelines. But the one I saw recently was a Tesla, and it didn't read the flood zone sign.

It just took the individual off the road and drove him right into the water. I've been trying to watch and study these, and watch some videos explaining why these things are happening, and it's more of a software issue. On one in particular, the car wanted to go back to the right side of the road, but that side of the road had construction on it.

So the driver kept pulling it back to the left side. It's trying to do what it's supposed to do, but it just doesn't recognize the environment that it's in, and that it needs to stay on the left side of the road. I think this is going to keep happening, just because of the constantly changing, fluctuating [00:04:00] environments these vehicles have to go through.

Personally, I won't do it. I won't ride in one. I want to have control. Maybe it's because I'm a control freak, I don't know, but I want to control my vehicle. I've never been inside one of them, and I know some people who really enjoy them, but I still like the feel of the wheel right beneath my hands.

[00:04:55] Dr. Rebecca Wynn: Yeah, I think there's another story here in Arizona, I think it was downtown Phoenix, where a landscaper's truck had trees in the back, and a Waymo car behind it kept thinking it was about to run into trees, so it kept stopping in the middle of rush hour traffic. It thought there was a real tree there instead of a tree on top of a truck, which I thought was funny, because the truck would finally move, the car would move, and then it would stop again because of the tree.

I'm glad it's safety first, but I thought it was interesting for the people trying to get through rush hour, going, oh my God, please get that truck out of the way.

[00:04:55] Michele Stuart: The other thing I had to laugh about, I actually just saw a couple of days ago: people were [00:05:00] taking traffic cones and putting them on the hood of a Waymo.

Because it would just stop. It freezes them, because the car thinks it's in an area it's not supposed to be in. So they were doing this all across town: taking these construction cones, putting them on these cars, and just freezing them all over the place, then taking videos of it and putting them online.

And I was like, oh my gosh. I thought it was funny. It's not funny for them, but it is funny.

[00:05:27] Dr. Rebecca Wynn: It almost seems like a Halloween or April Fools' Day kind of joke, not an all-the-time thing. But we are seeing AI get more and more into cars, into tires, and things along those lines.

It seems like what can be used for good can be used for evil. Have you seen anything along those lines with some of the other AI components coming into cars, even our personal cars?

[00:05:51] Michele Stuart: I don't know if you've heard of it, or if the listeners have heard of it, but I just sat through some training on smart tires.

[00:06:00] There are a couple of different components to smart tires: not only the associated application that gives you tire pressure information and the tread on the tire, but now it actually has geolocational information. It's usually within the valve, and I hope I'm saying this right,

because I'm not really a mechanical person, but that's where the chip is going to be located. And there was a case where a high-end vehicle had all of its tires stolen, and they were actually able to track the tires down because of the geolocational information within the smart tires. It's just wild that we have so many capabilities and so many things that can be utilized with AI. I have always said that I think technology is a beautiful thing, but I also think it can be a very dangerous thing.

And it's not just [00:07:00] cars. We have it now in our medical fields, in our financial institutions, in our learning capabilities. It's going everywhere. Everybody has their own opinion on it; I think it's moving pretty fast. And I like it. I've utilized it in my presentations, and I've utilized it to create outlines, training products, and material for me.

But in that same sense, I'm very wary of it also, because I see the negative things that can happen when people utilize it wrong. We've all heard about deepfakes, but there's so much more involved than just deepfakes when it comes to some of this stuff.

[00:07:45] Dr. Rebecca Wynn: Can you explain a little bit more about what is going on?

Not only with deepfakes, but in the music industry we end up asking whose voice is being used for what, and is it okay to use it? Can you clone a voice? Can you not clone a voice? Things along [00:08:00] those lines. Can you explain a little bit more about what's going on there, and why people should be paying attention?

[00:08:05] Michele Stuart: Yeah, a couple of things I can bring in. I've had to work some of these cases already, where somebody calls somebody, and I know you've heard this, it's been all over the news, and in the background you hear that person's voice: daddy, help me, or mom, they kidnapped me.

They're going to kill me if we don't pay the money. The person isn't there, obviously. And this actually happened to somebody I know: her father received a call saying that the Mexican cartels had his daughter, and unless he paid a certain amount of money, they were either going to rape her or kill her, or whatever they said to him.

And he didn't realize it was fake. Thankfully he called his wife, and she said, she's standing right here. But the voice in the background was definitely his daughter's. So we [00:09:00] have these really bad things that can happen now with technology and with these AI types of systems. And not only that, we also have these things called generators.

I talk about this in my class. The thing about generators that makes them scary for me is that they look so real. I can make it look like two people were talking over Instagram, right? Over Facebook, over Twitter. I can make a fake text message and make it look like I got a screenshot from you.

It could be you discussing something inappropriate about work, or something racial, or it can make it look like somebody is disclosing something proprietary. These generators, when you're looking at them, and I've seen them, they look real. It looks like a true text message between two individuals. I can make it look like a text message between you and me, [00:10:00] but I'm the person in there typing everything: I'm typing my response to you, but I'm also typing your response to me.

Now I can ruin your job in seconds, really: take the screenshots and go to your HR. And how many people in HR, or even in security, know about these generators? They're very easy to find. Unfortunately, just Google it; there's a generator for everything. So now we have to look at not only the deepfakes, the videos of people saying things they're not saying, but also this ability through AI to create communications that never happened, on numerous platforms.

Those are the things that get me, because as an investigator, when it comes to trying to protect the public, it's getting difficult. You have to make sure you have a second or [00:11:00] third piece of evidential material that backs up what you're seeing originally.

But then look at it this way too, Rebecca. The one that, as a mom, makes my heart hurt is the ability that's out there now for cyberbullying. We're not even talking about going anonymous and creating fake Twitter accounts; now they can create a whole fake conversation.

A conversation that never existed. All of a sudden they put that out there, and now it's a whole new realm of cyberbullying. These kids can't just come out and say, I never said that, because it looks like they did. It's a dangerous game. There are so many things AI can assist with in a good, ethical way, but so many bad things can come from it too.

[00:11:51] Dr. Rebecca Wynn: Yeah, we're seeing that a lot in the recent court cases that have been televised, because people are like, I took a screen capture, [00:12:00] I made a PDF of it, and that's what I handed over to some legal person or investigator. And I'm always like, where's the date and time? Where's the metadata that proves you didn't go in and alter it? Because you can always delete a part of that chat.

Oh, yeah. What are the sentences before or after? Or if you have an image, you can totally falsify the time and date, make an image, and make a PDF. You don't know it's from that person. And even if it was from that phone, how do you know that someone else didn't have access to my phone,

directly or indirectly, maybe through software to do that? So I tell people, it's really easy for someone to put that up on social media, but does it act like the person, does it sound like them? Even before that, does it seem like something they would do? That used to get you in the right area, but now with AI,

my writing is out there, your writing is out there; you can make something sound like person X and the way they speak. [00:13:00] So it can really be falsified. I think that's one thing where people really have to pause. You have to give the person the benefit of the doubt way more than we used to. We were always supposed to give the benefit of the doubt;

we didn't always do that, and I'll admit I've been guilty of that. But now you really have to give the benefit of the doubt, because you should assume it's falsified first, in my opinion.
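For listeners who want to try the metadata check Rebecca describes, here is a minimal Python sketch, assuming the Pillow library is installed; the file name is a placeholder. It prints whatever EXIF tags survive in an image, such as DateTime and Software. Note that screenshots and re-saved images often carry no EXIF at all, so an empty result proves nothing by itself.

    # Minimal sketch: dump an image's EXIF metadata with Pillow
    # (pip install Pillow). "suspect_image.jpg" is a placeholder.
    from PIL import Image
    from PIL.ExifTags import TAGS

    def dump_exif(path: str) -> None:
        """Print human-readable EXIF tags, if any survive in the file."""
        exif = Image.open(path).getexif()
        if not exif:
            print("No EXIF metadata found; screenshots and re-saved "
                  "images often strip it, so absence proves nothing.")
            return
        for tag_id, value in exif.items():
            # Map the numeric tag ID to a name such as DateTime or Software
            print(f"{TAGS.get(tag_id, tag_id)}: {value}")

    dump_exif("suspect_image.jpg")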

[00:13:21] Michele Stuart: But here's the thing too. We can say that, and I think most of the people who listen to your podcast probably have a technical background and understand it.

But a really high percentage of people don't understand this. They don't know that it's out there, or maybe they've only heard of it. I'm not saying nobody's ever heard of AI; everybody's heard of AI. But they don't know how it can actually be utilized, and how easy it is to utilize. One of the examples I do in my class: I created a fake TikTok capture using a photograph that was years and years old, and not even from our [00:14:00] country.

I made it into kind of a violent situation, and I asked my class, how many of you think this is real? More than half of my class raised their hands. Not one person asked: did anybody right-click on the photograph and run an image recognition,

or a facial recognition? If they had, and that's the very first thing I would have done, right-click on it or cut it out and run an image recognition, it would have immediately come back that it was a ten-year-old photograph out of another country that had made the news.

But most people don't do that. They look at it, they see it, they believe it. They forward it, they comment on it. And now this one thing that took me seconds to do has created its own life, and it's out there everywhere. I think it's really important that we have more education on this for people, especially within work environments.

Because [00:15:00] I've been called in now on a couple of different cases where somebody was accused of saying something they never said. And by asking a couple of very specific questions, especially on a text-to-text, I was able to show that the situation never occurred, that what they were looking at was a generated communication.

So I think it's going to take education, not only in work environments but in school environments, because think of it again: not only the cyberbullying, but bomb threats made to look like somebody else did it. It's what I said: I love technology, but in that same sense,

I also don't like it, because of the dangers behind it.
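Michele's right-click-and-check advice can be approximated offline with a perceptual hash, which survives resizing and recompression. The sketch below assumes the Pillow and imagehash packages are installed, and both file names are placeholders; a real reverse image search such as TinEye does this comparison at web scale rather than between two local files.

    # Sketch: compare two local images with a perceptual hash
    # (pip install Pillow imagehash). File names are placeholders.
    from PIL import Image
    import imagehash

    suspect = imagehash.phash(Image.open("viral_post.jpg"))
    original = imagehash.phash(Image.open("archive_photo_2014.jpg"))

    # Hamming distance between the two hashes; small values mean the
    # images are near-duplicates even after resizing or recompression.
    distance = suspect - original
    print(f"Hash distance: {distance}")
    if distance <= 8:  # common rule-of-thumb threshold, not a guarantee
        print("Likely the same underlying photograph.")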

[00:15:42] Dr. Rebecca Wynn: I'm hoping the trend we've been seeing a little bit of continues, where news agencies are trying to validate whether artificial intelligence was in play, or where someone has posted something as news, saying, hey, here's the picture of this wreck, or something along those lines.

And I'm like, wait a minute, this picture is from two years [00:16:00] ago, this is from five years ago, someone spliced and diced it, and now we're making this the story. I'm hoping that comes into play a lot more going forward. I think that'll help. But it's too easy for anybody now to throw information out there and make people believe it.

And part of it is good journalism: I saw it, but did I go ahead and get it verified? Versus five people just picked it up off the newswire, and because it was in five different newswires, I'm going to assume it's the truth.

[00:16:31] Michele Stuart: Oh my gosh, just wait. Look at how propaganda is being used, how AI is being used in war, right?

And in countries that are having issues. Just wait until elections come here in the United States, on both sides. It's going to be so filled with propaganda: this fake information, these fake videos, these fake people saying things they're not saying. It's going to be really difficult for [00:17:00] people in general

to understand what is real and what isn't, and that's sad. It really is. I just wish we could go back to the good old days sometimes.

[00:17:12] Dr. Rebecca Wynn: Most definitely. I wanted to touch back a little bit on deepfakes. Can we talk a little bit about voice and things along those lines?

We also see CEOs and CIOs and CFOs and people like that having their voices cloned. I've also recently been consulting with people who allowed AI to translate their voice into Japanese and other languages.

[00:17:36] Michele Stuart: Yeah. 

[00:17:36] Dr. Rebecca Wynn: I saw that and went, by the way, that is not the translation, and you guys literally just put it out on LinkedIn and different places like that.

I'm just telling you, if I were you, I would pull it down, but I wanted to bring it to your attention. What do you think about things like that? Especially when, at times, you and I have both had to validate things, both of us have had to save people's jobs because they were falsely accused. It's not only biometrics, it's the facial [00:18:00] recognition.

And then there's also a lot of people making fake AI images. I recently went through a test myself: a hundred different images of quote-unquote people, and the whole thing was, which one is a real person and which one is AI? I had one of the highest scores on picking out the AI, but I will tell you, it was not high. What do you see along those lines?

[00:18:24] Michele Stuart: I'm sorry to talk over you. There's a site that I would tell you, and anybody who's listening, to go to. It's called whichfaceisreal.com, and it gives you two photographs: one is an AI-derived photograph, and one is a real image.

I always tell everybody I love the site. I'm on it all the time when I'm on hold, or on a Zoom I don't really care about, and I'm always playing it. The reason is that you teach your brain. It's like when you learn how to shoot a gun: it becomes muscle memory.

When you learn [00:19:00] how to swing a baseball bat, when you learn how to throw a football, everything becomes muscle memory. It's the same with our brain: we have to teach it what to look for. And there are little indicators, but as AI keeps developing, obviously, they're catching on to these things.

But when it comes to trying to recognize whether a photograph is AI or real, one of the first things you should look at, not necessarily the best, because it depends on the photograph, but one of the first things I always tell everybody, is people's earlobes.

Look at the photograph's earlobes, because if you have an earlobe that genetically goes up, the earlobe on your other side should also go up before it touches your face, right? But sometimes in these generated photos, you'll see one that goes up and one that ties into the face. Not that that can't happen, because I just had a gentleman come up in my class and say, I'm a mutation.

I'm like, what? And he said, look at my ears. Sure enough, he had one that went up and one that [00:20:00] tied down into his face. But normally those earlobes are going to look the same, right? The second thing, for women especially, is to always look at the earrings. AI has a very hard time matching earrings.

Those two indicators are among the things I always train my classes on first: look at earlobes, look at earrings. The third thing is eyebrows. AI has gotten better with eyebrows, but sometimes you'll see a lady who has a very defined eyebrow, and then over here it goes straight.

Now, I don't know about you, but I want both of my eyebrows to look the same. There are little indicators. And then for men, which is really weird because I haven't seen it that much on women, but I've seen it more often on men, you'll see pixelation of their hair right around here (forehead) and right here (temple).

You'll see these little bitty fine dots. I see that more [00:21:00] in men, but I don't know why. With women, the thing I do notice, and you'll notice this too if you're looking, trying to determine if it's AI: say I have long hair on both sides.

Most of the time, women like to have the same style on both sides. I do have a friend who has long hair and then it's short on one side, so there are exceptions, but look at that with AI, because you might see someone with very long, straight hair, and all of a sudden it's kinky curly over here.

Again, that could be an indicator of AI imagery. One thing I would always say is that I run my photographs through a facial recognition system too, and/or an image recognition. For image recognition I really like tineye.com, because it's looking at the image as a whole. For facial recognition I use a program called pimeyes.com; it's a paid program. As I always say in [00:22:00] my trainings, it's like the onion, right? An onion has layers, and everybody has to remember what movie that came from, because it's Shrek. I love that movie. You have that outer onion, which is normally your social media profiles, but the inner onion becomes different websites.

It could be PDFs, it could be whatever, somebody else posting that picture of them. So I really like PimEyes because it goes through all those multiple layers. But AI imagery is getting better, and it's learning from its mistakes. What you'll see a lot of is they're starting to do more sideways views, so you don't get the ability to look at two ears. The other thing with imagery is that it used to be predominantly the face; you would just get the headshot. Now we have the [00:23:00] ability to take that AI-created headshot

and create multiple bodies for it, multiple scenes: put them on the back of a truck, drinking a Coke, walking down an alley with a coffee in their hand, looking out at the sunset. There are a couple out there people have obviously heard of, like Midjourney, but there's one I really like called starryai.com. It allows not only realistic-looking pictures, but also gives you the ability to create old-time black and whites, and pictures that look like paintings; people take their face, and it makes it look like a painting.

Stuff like that. And there's generated.photos, again, which will create generated photographs and then let you add a body to that photograph, that head. It's getting to where you can really [00:24:00] put together an entire figure. And then, of course, we all know the deepfake, taking somebody's face and putting it on something else. But that whichfaceisreal.com, you should actually go to it. It's pretty fun to play.

[00:24:15] Dr. Rebecca Wynn: I know the two places I always look are the corners of the eyes, because even if you've had a bunch of Botox, it's really hard to get both eyes the same at the corners, since most people's aren't the same even with a lot of Botox, and the corners of the mouth.

Because you always smile slightly off; your face is not symmetric, it's asymmetric. Those are the two things I always take a look at, but I'm going to check out the ears, because I usually don't pay attention to the ears.

[00:24:38] Michele Stuart: Yeah, and a perfect example of it is glasses. The glasses should have the same reflection on both sides.

So always look at the glasses too. 

[00:24:48] Dr. Rebecca Wynn: We do a lot of things on our phones, so I would be remiss not to ask about them. What do you think people should do to more proactively protect themselves on their personal phone and also on their work phone?

[00:24:59] Michele Stuart: Okay. [00:25:00] My answer to this: stop downloading all these applications, oh my gosh. First of all, read the permissible purposes, because a lot of people don't even do that. And do I always think that all the permissible purposes are spelled out within the application? No, I don't, because it also depends on who the developer is, or exactly what country it's coming from.

So my big thing is, I always say, be very careful with what applications you're putting on your phone. I'm anti-application, not because I don't like applications, but because I want my phone to be as secure as possible. Second, look not only at the applications going onto your phone, but remember your phone is constantly being updated.

You need to go back through all of your privacy settings and look at what's been added and what has changed or defaulted back. I know Apple had a big [00:26:00] update, and one of the things that went viral, that everybody was talking about, was the journaling feature, and how you should go in there and disable it.

So it's not just the applications we have to mind. Every time there's an update, I go back through the privacy settings and make sure they're where I want them to be. If something shows up that I don't understand, I Google it: what does this have to do with my phone?

If I disengage it, what is it going to do to my phone? People need to be very proactive, not only about the applications, but also by looking at their settings. If you don't understand what something is, Google it, figure it out, find out what it does to your phone and whether disengaging it will affect the phone negatively. But one of the things you and I have talked about in the past, when it comes to these applications, is that a lot of them collect location information, right?

And the thing I always ask is, where is that information going? [00:27:00] They can say all day long that they're not selling our information, but there are a ton of sites that share information, a ton of applications that share the information they pull with other applications.

You really have to look not only at the application you're putting on your phone, but at who it's sharing your information with. There's a grocery store app, and I don't want to name the grocery store, you guys can Google it and figure it out, there's an article about it, but it talks about how, when you purchase things from this particular grocery store, especially within the app,

it shares that information not only with Facebook, but with Google, with Pinterest, and with Snapchat. So what you're purchasing at the grocery store doesn't stay within your app. It's being shared with other avenues, and it becomes part of that advertising identification number.

It becomes directly marketable data. So there's a [00:28:00] whole thing behind all these apps, and you've got to be really careful with them. Oh my God, I could talk about apps all day long, but just be very careful with what you put on your phone.
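As one concrete way to act on Michele's read-the-permissions advice, here is a hypothetical Python helper that lists what an installed Android app has actually been granted. It is a sketch, not a vetted tool: it assumes adb from the Android SDK platform-tools is on your PATH, that a device is connected with USB debugging enabled, and the package name is a placeholder.

    # Hypothetical sketch: list an Android app's granted permissions
    # via adb. Assumes adb is installed and a device is connected.
    import subprocess

    def granted_permissions(package: str) -> list[str]:
        """Return dumpsys lines for permissions marked granted=true."""
        out = subprocess.run(
            ["adb", "shell", "dumpsys", "package", package],
            capture_output=True, text=True, check=True,
        ).stdout
        # dumpsys prints lines like
        # "android.permission.ACCESS_FINE_LOCATION: granted=true"
        return [line.strip() for line in out.splitlines()
                if "permission" in line and "granted=true" in line]

    # "com.example.groceryapp" is a placeholder package name.
    for perm in granted_permissions("com.example.groceryapp"):
        print(perm)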

[00:28:09] Dr. Rebecca Wynn: I think it's a really good point that if you're going to upgrade, take a look at what's in it.

And if the developer just says, I'm not going to tell you what I'm changing, I'm like, then I'm not upgrading. If they do that consistently, there's a different application that's going to be more transparent about what they're doing from a security and privacy perspective; if you need an app, look at those ones. It may cost you something.

It may not cost you something, but I say there's no free lunch. As you said, if there's a free lunch, behind the scenes it theoretically means they're selling your data behind your back and making money off it. So sometimes paying for something gives you a little better control; if not, use another app. Also, if you're not using apps, turn them off, and have something behind the scenes that watches for you. What I mean by watch is something that says: you haven't used this app in a month.

You haven't used it in two weeks. Do you still need it? If you don't need it, get rid of it. You can always put it back if you need it at a [00:29:00] future time.

[00:29:01] Michele Stuart: This can be unrealistic too, but I also say, if you have the capability, have more than one phone. You have one phone that you want to make absolutely secure, especially for work environment situations, and then a second phone that you don't care about as much,

and you have some more applications on that one. But we have to be very careful with that too, because a lot of people use their personal phones for their work, and I always say that, to me, is a fine line you've got to draw. It depends, again, on what you're worried about,

whether it's losing that job, or different types of information being leaked out. But having more than one phone is another option.

[00:29:43] Dr. Rebecca Wynn: I do that with a tablet when we travel to conferences and speaking engagements, and it's under a bogus name, a bogus email, a throwaway email address, and all that.

Then if somebody actually gets it, so what? They'd know something I might have been reading or had downloaded. And I reset it all the time: factory [00:30:00] reset every time. So you can do things like that, but I think the key point here is that you have to be the one who takes ownership of protecting you.

Don't be naive enough to think a corporation out there is going to take that upon themselves.

[00:30:16] Michele Stuart: I agree. I really do. 

[00:30:18] Dr. Rebecca Wynn: Unfortunately, our time has totally flown by, so I want to thank everybody for joining us for this session. Please go ahead and look at the description; we will have the links there.

I'll put links down there to the applications that Michele has recommended, and she'll give me some other ones to put down there as well. Her contact information will be there too. Please go ahead and LIKE and SUBSCRIBE. The Soulful CXO Insights newsletter comes out every other week.

Remember, I do at least one article every other week as well, so please go ahead and look at that. Michele, thank you so much for coming on again and sharing your insights. You're always a pleasure to have on. [00:31:00]

[00:31:00] Michele Stuart: Thank you. I appreciate that.