Unspoken Security
Unspoken Security is a raw and gritty podcast for security professionals who are looking to understand the most important issues related to making the world a safer place, including intelligence-driven security, risks and threats in the digital and physical world, and discussions related to corporate culture, leadership, and how world events impact all of us on and off our keyboards.
In each episode, host AJ Nash engages with a range of industry experts to dissect current trends, share practical insights, and address the blunt truths surrounding all aspects of the security industry.
When Will A.I. Replace Us All?
In this episode of Unspoken Security, AJ Nash sits down with Ryan Cloutier, CEO of ScareBear Industries, to discuss the future of artificial intelligence. Ryan explains the evolution of AI, from its origins with Alan Turing to today's generative AI and large language models. He highlights the importance of understanding that AI, at its core, is mathematics. Ryan emphasizes the need for careful consideration of ethics and societal impact as AI continues to develop.
Ryan discusses both the exciting potential and the inherent risks of AI. He explores the potential for misuse and the need for careful governance. He also highlights positive use cases, such as AI companions for the elderly and advancements in medicine. Ryan raises concerns about job displacement and the potential transfer of power from humans to machines.
Ryan encourages listeners to become involved in their local AI communities and promote the safe and ethical development of this transformative technology. He stresses the importance of critical thinking and kindness in navigating the future of AI. He leaves listeners with a call to action: do a random act of kindness daily.
When Will A.I. Replace Us All?
Ryan Cloutier: [00:00:00] Critical thinking right now is the most valuable skill. If you can develop critical thinking, you don't need to worry about your job. Until you do, and I'll be right in the boat with you. Right? Like, if we get to that level, I'm right there with you. I am not a quantum physicist. I am not these things.
I, I know them. I talk to them. I work with them. Because I'm trying to bring hacker mentality. And by the way, I will often wonder how much the intelligence community, because of its ability to deconstruct problems, to paint scenarios
AJ Nash: [00:01:00] Hello and welcome to another episode of Unspoken Security. I'm your host, AJ Nash. I spent 19 years in the intelligence community, mostly at NSA, and I've been building and maturing intelligence programs in the private sector for about eight years, maybe going on nine now, maybe. I'm passionate about intelligence, security, public speaking, mentoring, and teaching.
And I also have a master's degree in organizational leadership from Gonzaga University. Go Zags. So I continue to be deeply committed to servant leadership. Now, the reason I mention all that is this podcast brings all of those elements together, with some incredible guests, and we have these authentic, unfiltered conversations on a wide range of challenging topics.
It's not your typical podcast, though, necessarily all polished and put together. My dog makes occasional appearances. It's been a while, and she's not in the room today, so she probably won't. People argue [00:02:00] and debate here. We certainly swear. I mean, I definitely do, and that's okay. What I want is for you to think of this podcast as a conversation, maybe one you might overhear at a bar after a long day at one of the larger cybersecurity conferences that most of us attend.
These are the conversations we usually have when nobody's listening. Now, today I'm joined by my good friend, Ryan Cloutier. He's the CEO of ScareBear Industries, as well as the chief of artificial intelligence and visionary for Candor. He's a cybersecurity professional with over 15 years of experience. So he's kind of an old guy like me.
He's spent that time developing cybersecurity programs. He served as president of SecurityStudio. He's currently on several advisory boards, including the Idaho National Laboratory's Center for Cyber-Informed Engineering. He's a Certified Information Systems Security Professional.
That's a CISSP. And he's proficient in development, security and governance, cloud security, DevSecOps methodologies, security policy, process, audit, compliance, network security, I could go on and on here, right? Application security and [00:03:00] architecture. He also instructs the FRSecure CISSP Mentor Program, co-hosts security podcasts, and he was listed among the most influential people in cybersecurity.
Yeah, he's a pretty impressive guy. I don't know why he hangs out with me; he's got terrible taste in friends, and I know a couple other people that would probably vouch for that. So, Ryan, with all of that said, this impressive bio, is there anything I left out, anything you want to add? You know, I mean, you're one of the most influential people in cybersecurity, man.
Ryan Cloutier: Oh, yes. So I leave that little nugget in the bio as a tongue-in-cheek joke. Some years ago, I was doing a podcast with some mutual friends of ours, Chris Roberts and Evan Francis. And somebody put us on that list, and, you know, we all kind of think those lists are bullshit. And so I leave it in there just to see, you know, does that change the pressure in the room?
And if I'm talking to folks like us, it's because we know how bullshit statements like that are. Right? Yeah, I've done a lot of stuff, done a lot of things. But ultimately, I'm just a hacker. I'm [00:04:00] just a hacker. I'm a nerd. I love to explore and unpack, and the people side of it, you know, is what really drives me.
So that's the only thing I'd add.
AJ Nash: Yeah. Very cool. And listen, a lot of those lists are bullshit. There's no doubt. I mean, you can buy lists. I mean, Chris Roberts talks about that stuff a lot, right? There are also valid lists and listen, you are actually pretty influential. All three of you guys for that matter.
Ryan Cloutier: And all the work you've, well, some of the work you've done. I don't know all of it.
AJ Nash: We'll talk about it. And I'm excited to have you here today, so I appreciate you making time. Ironically, for anybody who doesn't know, Ryan and I both live in the Twin Cities area, but we are not in the same room. He's in his house and I'm in mine, because this is the modern world, and it was just the way my recording is set up, frankly.
But we saw each other not all that long ago. So, all right, listen, let's jump into the topic today, because this is a really interesting and kind of scary topic, in my opinion, that I wanted to get into. As I mentioned before, obviously, AI is a big part of what you do. And so today's topic is a simple question everybody wants to know: when will AI replace us all?
You know, it's a big fear, right? People want to [00:05:00] know: when are we going to all get replaced by AI? It seems to be coming faster than I'd like it to. So I think we can get in here and kind of talk about the pros and the cons and, you know, what's happening, and dispel some of the myths, maybe.
So, you know, to jump into it, Ryan, like you're the expert, right? So the first question really is let's baseline some of this. Like, what is, when we say AI, what is AI? And, and where are we today, you know, in this technology?
Ryan Cloutier: So I think, first, let me humbly say, if I'm the expert, we're all in trouble. and second,
AJ Nash: Don't say that when I'm drinking, man. I almost spit my soda all over the screen just now. No, no,
Ryan Cloutier: I get asked this often. First, we have to separate artificial intelligence from things like automation. They can go together, but sometimes they're separate. So I like to tell people that, for me, AI started with a man named Alan Turing.
And it was back in the forties, and he was trying to, like, stop these [00:06:00] Nazi people and the whole war. And he created the machine that ultimately broke the Enigma machine. And I believe that wholeheartedly: that was the first example of an intelligent machine. That was day one. From there, we've created this stuff called machine learning.
Now, machine learning is a type of artificial intelligence, but it's very narrow in its scope. And it's been around for a very long time. It's why your credit cards get flagged for fraud. It's why, when you book a ticket, you know, shortly thereafter you're getting advertising for the destination you're headed to.
It's been in use since the 70s. It's used in air traffic control. It's been around a very, very long time, especially in the defense industrial space. Then you have what we are calling AI today, or generative AI, which really is basically large language chat models. We have these things called LLMs, or large language [00:07:00] models.
That's just a bunch of text. It's just shit tons of text in an unorganized database. And then there's machine learning, or its cousin deep learning, which is a type of algorithm that says: okay, now that we figured out machine learning, now we're going to do deep learning. And deep learning is the act of the machine itself trying to make the next best guess.
Okay, so is the apple red? All right, so we think we figured out the machine learning to take the red apples and send them down this path, and the green apples down that path. Problem is, when we count the apples at the end, we still got reds and greens getting through to the wrong basket. So deep learning is: okay, what's going wrong in that, and how do I improve it?
And I'm oversimplifying all of this, by the way,
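[Editor's note: for readers who want to see the "next best guess" loop Ryan is oversimplifying, here is a minimal sketch in Python. The redness scores, learning rate, and single-feature setup are invented for illustration; real systems use far more features and data.]

    import math

    # Toy apple sorter: one feature (a redness score), one weight, one bias.
    apples = [(0.9, 1), (0.8, 1), (0.7, 1),   # (redness, 1 = red apple)
              (0.2, 0), (0.3, 0), (0.1, 0)]   # (redness, 0 = green apple)

    w, b, lr = 0.0, 0.0, 0.5                  # start knowing nothing

    def predict(redness):
        # Squash the score into a probability that the apple is red.
        return 1 / (1 + math.exp(-(w * redness + b)))

    for _ in range(200):                      # keep re-counting the baskets
        for redness, label in apples:
            err = predict(redness) - label    # how wrong was the guess?
            w -= lr * err * redness           # nudge toward a better guess
            b -= lr * err

    print(round(predict(0.85), 2))  # should land near 1.0 -> red basket
    print(round(predict(0.15), 2))  # should land near 0.0 -> green basket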
AJ Nash: Good, good. Especially for me.
Ryan Cloutier: A large language model just does that with text. Now, the downside is that text is the comment section of fucking Twitter,
AJ Nash: Right. Twitter, [00:08:00] Reddit,
Ryan Cloutier: right? right? and the worst part is 4chan, 8chan.
AJ Nash: Oh my God.
Ryan Cloutier: That's in there as well.
AJ Nash: Cesspools of the internet right there.
Ryan Cloutier: So,
AJ Nash: We're teaching these models how to be the rudest and most, like,
Ryan Cloutier: Petty people we have, basically. Because, I mean, and by the way, for anybody who's on 4chan or 8chan, don't fucking write me and bitch about it. I don't mean you're a horrible person, but let's face it, a lot of horrible things happen in those places, okay?
AJ Nash: And that's what the machine's going to capture. So, like, if you happen to be there for, I can't imagine what purposes that are good, but let's say you are, I'm not picking on you. Don't give me a hard time. But yeah, I mean, so we're training these things on, like, some garbage behavior. Is that just 'cause that's what was available?
I mean, those were the big data sets that were free.
Ryan Cloutier: It was, exactly. So the data sets are technically known as corpuses, right? We call them corpuses. I like to think of them more like corpses, because it's information that's already stagnant, right? It's not getting updated. So if you've ever interacted with a chatbot and it's like, oh, I don't know that, my training data stops, you know, [00:09:00] at the 2010 level or whatever it is. So often, then, we start to think that means that's all it knows, or all it's capable of knowing. And that's not true. It can know more. You can get it to do things. It will go search the internet. Not all of them; it depends how it's set up. There's a lot of variables. But when we're talking about AI today, we're really talking about generative AI, or chatbot AI.
When we talk about things like Copilot, Gemini, right? Some of those AIs are visual in nature, so we're creating pictures with them, we're creating videos with them. Some of the projects I'm working on are towards the next level of AI, what we call artificial general intelligence, or AGI, which means more human-esque.
We're working on things like reasoning, emotions, ethics, right? And so that's a different kind of AI than what we've had before. But AI is not new.
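[Editor's note: a toy illustration of Ryan's point that a language model is "just a bunch of text" plus a machine making the next best guess. This bigram counter is orders of magnitude simpler than a real LLM, and the tiny corpus is invented, but the mechanic, counting what follows what and emitting the statistically likely next word, is the same idea in miniature.]

    from collections import Counter, defaultdict

    corpus = ("the apple is red . the apple is green . "
              "the model guesses the next word .").split()

    # The whole "model" is a table of which word tends to follow which.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_best_guess(word):
        # Emit the most likely follower -- no understanding involved.
        return follows[word].most_common(1)[0][0]

    print(next_best_guess("apple"))  # -> "is"
    print(next_best_guess("the"))    # -> "apple" (its most frequent follower)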
AJ Nash: Okay. All right. I mean, that's good. That definitely helps. Right now, I'm not gonna lie, up until recently I've been pretty down on AI. I've spent a lot of time publicly saying, listen, what you guys are calling AI isn't AI. All right? It's just stacked language models. It's a model on top of a model on top of a model.
These things can't think for themselves. They can't reason. They're not intelligent. Stop calling them that. And I've had a couple of reasons for that. First of all, I've always believed that, and what I had seen up until recently validated most of it, you know, people talking in grand terms.
But I remember when I worked for a company, a few companies ago now, I went and visited their team that was working on AI, all their PhDs. And one of the guys in the room said, listen, we're a long way from having problems. Like, we're not being replaced. This thing can't tell a dog from a cat. Like, don't get excited.
There's a long way to go, right? And he was the one that actually convinced me. He's like, this isn't AI. They've given it that name 'cause it's, you know, flashy and easy to understand, but this is all just machine learning and stacked machine learning. And we're a long way from these things, you know, knowing anything.
And I now realize, I did the math here, I'm like, a long way? Well, that conversation was six or seven years ago. Apparently that's a long way in tech, [00:11:00] because things have changed dramatically, to the point now where one of my concerns, and I want to bounce some of this off of you, is that it's good enough to look like it's real, but it's not always all there.
Right? I mean, you know, there's been news about, like, hallucinations; these things obviously make shit up. If it doesn't have an answer, it'll give you one that may not be any good. And I keep looking at it as an Intel guy, right? My background is all about Intel, and Intel is all about giving the right people the right content
at the right time to make informed decisions. People are going to act on it, and it has to be trusted. And my concern is that we've reached a point where these technologies will spit out an answer, and it'll be an answer that looks, feels, and sounds like Intel. It can have the right caveat language. It can even have sources now, which was a flaw until recently.
So if somebody asks, hey, you know, what's, you know, what is the threat of, pick a threat actor or group, and they get that and they say, you know, what are all the TTPs, what should I do? you know, what do you think is going to happen next? I mean, the machines will give them answers, but it might still be bullshit.
Because it's just not quite there yet. So that's got me concerned, in that it can [00:12:00] do a lot more than it could, you know, just a couple of years ago, and there are some cool use cases, but I'm really worried still. You know, we'll talk about it as we go through here. What use cases do you see that are valid, that are safe, that are like, hey, this works today as well as a person, you can trust this, we're good? Versus areas where people are pushing, and it's like, man, I wish you guys would slow down here. Stop selling this like it can do this, because it's really dangerous to go down this path. Are there some use cases you want to talk about here?
Ryan Cloutier: Yeah, I think, you know, a couple things. I think the use cases in manufacturing, where they're, you know, using these technologies to improve quality, using them in design, you know, those non-humanistic use cases, it really does do quite well.
AJ Nash: So repeatable processes, for instance, things like that. Yeah,
Ryan Cloutier: Systematic processes, you know, things like that. Where it is actually emerging and being applied quite a bit [00:13:00] is in the human interaction space.
So when you call that call center, you may not be talking to a person, because what they're trying to do, to varying degrees of success, is actually improve the quality of your experience. They really are now. Albeit for better bottom-line reasons, and, you know, reduction of staff, and liability, and, you know, the robot isn't going to have a bad day.
but I fear that I won't have good shit to scroll through on Twitter. Cause if, you know, people aren't having bad days at the call center, they're not melting down, and where'd my entertainment go?
AJ Nash: Right. Don't worry. The machines will probably handle that soon too. We'll figure out a way you know, you can program them to have a bad day. So,
Ryan Cloutier: Yeah, so I think it's manufacturing. And where they're applying it in medicine, I also think it has some good validity, because again, that's really pattern matching.
And while it affects humans, it's not the interaction. And what I want to get to are two points. Okay, cognitive deference is a term that I'm working on coining here, in this context, because we've begun to defer our cognition to the machine.
AJ Nash: That's not
Ryan Cloutier: So we are, we are going to the machine now to let it do the thinking.
The amount of copy-pasting that is happening, right, is ridiculous, because it's not copy, paste, read. It's copy, paste, send. And, you know, as an IT person, security coder, yada yada yada, copy-paste is, I mean, come on, copy-paste, that's my shit. But you got to read it.
You got to know what it is. And the problem we're running into is in describing the use case to the machine. So to your point, yes, it does spit out garbage, but it doesn't have to. Okay, garbage in, garbage out. And what we're finding is some of those soft skills that used to be hard skills are fading. So if you have anything like this on your face of the gray nature, you remember there was a time before [00:15:00] the Internet.
Yeah.
AJ Nash: Oh, I remember the before time. Yes, I know, that was a time. We had like four television channels, and one of them was PBS, which you didn't watch if you were over the age of like six or under the age of 70.
Ryan Cloutier: Right. Right. And yeah, and somebody had to do one of these with some aluminum foil and
AJ Nash: oh
Ryan Cloutier: to be able to watch, right? But what we inherently had instilled in us was problem solving. Because your show that you wanted to watch wasn't on, so what do you do
AJ Nash: hmm. Right.
Ryan Cloutier: What do you want to do? Well, I want to watch Scooby Doo.
Well, it ain't Saturday morning, so... And I think we lack that now, and we're seeing it manifest itself. So as we're looking at what AI is, what the risk of AI is, how AI is going to impact our society, our social fabric, my biggest fear is that it's already begun. We are going to it for things that we shouldn't be getting from it.
I can share a personal, intimate example here. In [00:16:00] my own marriage, due to my wife's autism and my neurodiversity as an ADHD super squirrel, she has used the AI at my encouragement. But I didn't do a full risk assessment. I didn't remember that she, 'cause I gave her one that I had tuned, not to my advantage, but to be more... trust me, that was a hard
AJ Nash: The AI is always like, Ryan's right. Trust Ryan. He's telling the truth
Ryan Cloutier: Right, right. Her prompt skills aren't always getting her the higher-quality results. So where I had the prompt skill to do what we call pre-prompting, getting it conditioned to respond in the way I want it to respond,
AJ Nash: Mm hmm.
Ryan Cloutier: and with detail, and not made-up bullshit and fake sources,
AJ Nash: Mm hmm.
Ryan Cloutier: She's not doing as much of that
AJ Nash: Got it.
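[Editor's note: "pre-prompting" as Ryan describes it is commonly done with a system message that conditions every later response. A minimal sketch using the OpenAI Python client; the model name and the instruction wording here are placeholder assumptions, not Ryan's actual setup.]

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The system message conditions the model before the user ever types:
    # demand detail, and prefer "I don't know" over invented sources.
    SYSTEM = (
        "Answer in detail. Cite only sources you can actually name. "
        "If you do not know something, say 'I don't know' instead of guessing."
    )

    def ask(question: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

    print(ask("Summarize the known TTPs of threat group X."))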
Ryan Cloutier: So, so [00:17:00] also, from a privacy perspective,
AJ Nash: your shit's out there now, man.
Ryan Cloutier: like, what the fuck, you're giving that to the public GPT?
AJ Nash: all your dirty laundry is now out there being trained.
Ryan Cloutier: But I'm a real transparent guy, so I'm not too worried about it. Those are big topics, man. Those are really heavyweight topics. And I think in our rush to make money with this and to put some AI on it... A friend of mine, Sean Riley, a guy that runs around and gives talks, heads up some cyber games.
He gave a talk, I don't know, we'll call it a year and change ago, where he made some comment about, you know, they're putting it in everything, they're putting it into the diapers.
AJ Nash: Yeah.
Ryan Cloutier: And we're now starting to see it, right? Oh, diaper's full, we'll send you a text. What in the hell are we doing? So,
Ryan Cloutier: yeah.
AJ Nash: Yeah, it's scary. Well, the thing about it is, like, the two things I keep looking at are, well, first of all, humans by nature, let me finish this for those who are going to [00:18:00] listen to half of the sentence, humans by nature are lazy. And that's all right. If you want a better way of saying it, humans by nature are designed to find efficient solutions.
And that isn't a bad thing. Steve Jobs, I think, was the guy who famously said, I like to hire people who are lazy, because they will always find the most efficient way to accomplish something. It's not a bad thing, but as a result, if we find a tool that does something, we're really quick to go, oh my God, this will take away pain for me.
I'm going to adopt that tool. You know, if you don't believe me, ask anybody who's over the age of, like, two if they do long division. No, they don't. We all use calculators, right? Nobody does. I mean, that's a very simple thing, but don't tell me you're sitting down with pen and paper and doing long division.
You're not; nobody in their right mind is doing that, 'cause we have tools for these things, right? There's a million of those things. So on the one hand, there are people that are very interested in efficiency and less effort, lazy, right, in a good way. But then there's also, certainly for us and for most of the world, this constant [00:19:00] ever-present push to the bottom line, to profit, right?
So companies are, in my opinion, increasingly willing to put things on the market that they don't know the second- and third-order effects of. But they know the first effect, which is: this is going to improve our bottom line. We can become X percent more efficient. We can drop our labor costs by Y percent.
The output, we'll see what the output ends up being. And the risks that go with it oftentimes either aren't considered or just aren't even understood, right? And so that's what I've been watching the last, I don't know what it is now, 12, 18 months, and it makes me really nervous, again, as an Intel guy, specifically in my space.
I worry about it because, on the surface, we've gotten to the point where there's tools that will kick out things that look and smell like Intel. There's a lot to this, and intelligence is highly impactful. And my fear is that we're going to have companies come out and say, hey, listen, you don't need these Intel teams.
You don't need these Intel analysts. They're very expensive. They're a pain in the ass to work with. Believe me, we are. And, [00:20:00] instead of that, just forget it: I got a machine, it'll kick out all the answers you want with all the proper language. You get thousands of reports, as many as you want. We don't care.
You know, the volume is not an issue, and we'll do it for a fraction of the cost. And companies are going to go, okay, I mean, we'll test it out, right? And they'll have somebody write a report and the machine do another, and go, yeah, that's close enough, we'll go with that. And that's great until it's not, right?
And the risk is gonna be catastrophic when the results show up, in that you make decisions that are flawed, or somebody gets in and is able to manipulate the language model, right? Manipulate the AI. You know, we talk a lot about bias, for instance, in Intel, and teams work on this constantly, and we have peer review, et cetera.
But even if you have some jerk on your team who's constantly biased against a specific adversarial group, it's one person. If your machine is biased, it's systemic. Now everything comes out the same way, and you don't have anybody even checking to see why that's happening. So I'm really worried about that.
This combination of people who will just dump stuff in the AI and then use it, I've seen that. You know, I've been in organizations where we were sending things out to a [00:21:00] vendor for marketing content. It was coming back, and I was like, this shit's garbage. This is nonsense. It reads fine to you, but I know it's nonsense.
And it tells me somebody wrote this with AI. Fire them. They're charging you to do two minutes of questioning on a machine and just sending it to you. And I worry that's going to happen with the Intel side too. So that's a little bit of my soapbox. I don't know if you're seeing more things like that,
Ryan Cloutier: Well, I am. The first thing I want to do is go back and unpack a very important word that is grossly misunderstood when we talk about AI.
AJ Nash: Mm-hmm.
Ryan Cloutier: Bias is, first and foremost, a function in AI. Okay? Bias is a function of statistics. It is an omnipresent function.
It has to be there. It's actually part of how the AI works. Now, what we often talk about is social bias: when we are talking about race, color, creed, religion, emotion, identity, et cetera, et cetera. Okay? Add something new to the list today if somebody [00:22:00] cares enough to tell me; you can tweet at me, right? But it's this, what I call a human social element.
That's social bias. And then there's bias as a function of mathematics, and it's important to understand that as a function of mathematics, we still have to account for it. And what we're finding is we're accounting for this half, but not that half. So we're trying to say, well, it leaned too hard politically, but we're not going back and finding that the reason it did that was the training data was weighted to a particular leaning, or that the prompter, the one interacting with it, had introduced additional elements that weighted a response, right?
So this is, you know, when we talk about AI governance, it's very, very different, because to a certain degree you almost have to language-train the human [00:23:00] to safely interact with the AI to limit that, because there will be bias. Like, right now, one of the projects I'm working on is how do you get it to say no?
AJ Nash: Right?
Ryan Cloutier: Because it's built to say yes. Because it's a math problem, guys. I want everyone to understand this. I don't care if it's language, I don't care if it's AI, ML, ABCD, CNNs, convolutional neural networks, if we want to get fancy, right? Hyperdimensional, like, all this crazy quantum stuff, okay? It's still math. It's still math.
It boils down to math. And the math has to balance. And the problem is that the mathematicians are not the philosophers, and the philosophers are not the mathematicians. And we've attempted to create a technology that plays both.
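[Editor's note: a tiny demonstration of Ryan's point that bias is first a mathematical property of the training data. The counts are invented; the "model" below has no opinions at all, yet its output leans exactly as far as its corpus was weighted.]

    import random
    from collections import Counter

    # A "corpus" where one leaning happens to dominate the training data.
    training_data = ["lean_A"] * 90 + ["lean_B"] * 10
    counts = Counter(training_data)

    def next_best_guess():
        # The deterministic best guess is always the majority view.
        return counts.most_common(1)[0][0]

    print(next_best_guess())                    # lean_A -- every time
    print(counts["lean_A"] / sum(counts.values()))  # 0.9: the bias is in the math

    # Even sampling proportionally just reproduces the skew, roughly 9:1.
    labels = sorted(counts)
    print(random.choices(labels, weights=[counts[k] for k in labels], k=10))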
AJ Nash: Yeah. Well, and you make a good point, you know, about the training of this thing, right? Because the biases are gonna come from what's been put in. So, you mentioned governance, you know,
Ryan Cloutier: Mm [00:24:00] hmm.
AJ Nash: I'm curious. I don't know enough about this, I'm gonna be really honest. You know, not long ago I was talking to some people, I think it was at RSA.
And I mentioned, I said, you know, I think the big growth industry that's coming for us, because obviously AI is here to stay and going to keep growing, is going to be who monitors that. You know, governance wasn't the word that crossed my mind at the time. It should have, but it didn't.
But yeah, the governance, right? The people who are going to actually be inside the code, to keep track of the code, to monitor the code, to make sure it's valid, you know, to validate it. And then also the people who are going to protect it. You know, if I'm a threat actor group, this is now a crown jewel.
If I find out you're very dependent on this technology, I don't need to change your opinion. If I'm in a negotiation and I want to make a deal with you, I don't need to convince you. I just need to convince the machine that you're going to trust. So if I can poison that, and it comes back and says, oh, you should totally make this deal.
We've got a 92 percent confidence rating that this is going to be great for your business. And meanwhile, you're buying my garbage business that, you know, looks really good but is actually three untrained monkeys on a wheel running around in [00:25:00] circles. You know, that's a problem, right? So it seems like those are going to be the two places where the industry is really going to have to grow.
And, I don't know, again, that gets back to who you are going to trust.
Ryan Cloutier: So, governance, safety, trust, right? This is an umbrella. You know, we talked about GRC in the past, and I think it's going to evolve. I think it's going to become GST.
AJ Nash: Governance, safety and
Ryan Cloutier: trust.
AJ Nash: trust. Okay, got it. Yeah. I mean, that makes sense.
Ryan Cloutier: I really do. I really do. Because the way that you risk-manage AI is vastly different, even though the concepts are the same. The fundamentals, you know, to quote my good friend Evan, right, the fundamentals haven't changed: you got to know what you got, you got to know why you have it, you got to know who's using it, right?
These basics. But what happens after that, I think, is fundamentally up for grabs. You talk about intelligence. So, you know, we've got many different means of gathering [00:26:00] intelligence. We have HUMINT, we've got SIGINT, right? There's passive collection.
There's open collection. There's active collection. There's signals and, well,
AJ Nash: There's, there's OSINT. There's, yeah,
Ryan Cloutier: One of the things that I'm exploring in one of my research projects is neuro, right? We're looking at the bio side of things, the neuro side of things. We're working on things like the Internet of Bodies. Here's a good Google search for everybody.
Internet of Bodies. Internet of Bodies using an embedded nano neural net.
AJ Nash: Internet of Bodies sounds like a movie
Ryan Cloutier: I know, right?
AJ Nash: to people, right? Internet of Bodies. Oh, RAND. Okay, the RAND Corporation's got a thing on it. It's one of the first things that pops up: Internet of Bodies, our connected future. All right, so if anybody wants to check that out.
I'm not familiar with it, so I'll start looking at it myself.
Ryan Cloutier: What it is, is this progression, this convergence, right? So while we're talking about AI today, I think the bigger threat to [00:27:00] intelligence is the convergence of technology. So it isn't that the human gets replaced by AI; the human gets replaced by the AI-enabled humanoid robot that's partnered with the drone swarm that's partnered with the... do you understand? Do you see this interconnection?
AJ Nash: Yeah, yeah. You got to stop, man, this is nightmare fuel. We're heading down that path. I mean, we're going to have this discussion; it's the next couple of questions that come up. But this is nightmare fuel. You're scaring people way too early. We want them to stick around a little longer, man.
We don't want to get to the apocalypse yet. That's the third question. But no, it does. I mean, you are leading into the next question, right? So we talked about what it is and what the baseline is and where we are. Where do you see the technology going? Again, normally I talk, you know, five-, ten-year windows, but here it's pretty quick.
So, like, a two-year window, you know: the private sector, the military, personal lives. What are you seeing as, you know, what's here in some cases, what's on the near horizon, and what's coming in, say, the next couple of years?
Ryan Cloutier: Well, to lighten up the nightmare fuel, [00:28:00] because I have occasionally been accused of preventing people from sleeping after they hear me, and I don't mean to do it: I think there's a lot of positive on the horizon. I think that there's actually a very bright future for us, if we, as the listeners of the show, as the practitioners of the industry, step into the void.
Now, we're all waiting for someone to fill the gap, but nobody's stepping into the gap. And I encourage you to join. You know, there's a handful of us that you can find who are trying to do this, and my dear God, we need your help. We need all of your help. Because it's not that hard. You don't have to be an expert in quantum.
You don't have to be an expert in this AI shit to make a meaningful impact on how our society unfolds. You know, I watched an election go to shit because we couldn't agree on how a society should unfold. It doesn't matter who won, and it doesn't matter who lost. I don't get into politics like that.
What I saw was a society struggling to function. I saw humans, and I saw a reliance on [00:29:00] technology, acting as a catalyst of negative input. Not a
AJ Nash: We've seen that for 12 years now, at least. I mean, this
Ryan Cloutier: Right?
AJ Nash: This has become a pattern.
Ryan Cloutier: But I didn't think all hope was lost, right? I didn't say, okay, there's no future for us yet. I think what I'd like to focus on is, you know, the healthy, safe use of these AIs.
They can find your cancer faster. They can come up with new medicines. They are going to help the elderly. I built one that, it's not, like, medically certified, so I'm not claiming it cures, prevents, or treats a damn thing, but it's a companion bot for the elderly who are lonely. Their last friend who had the relatable stories is gone.
Do you remember in 1926, when that soldier, blah, blah, blah? That story's gone.
AJ Nash: They outlived all their friends.
Ryan Cloutier: Yeah, right. And so to be able to reminisce and relate. Not that it's their friend, you don't want to fool them, but it's [00:30:00] knowledgeable of the colloquialisms of the time, right? And so it can reminisce.
Maybe that'll be the company name: Reminiscence. Anybody want to buy it off me? Send me some money.
AJ Nash: And, I mean, there's studies that show that people who do that, it helps them keep their own memories, right? It helps their cognitive function; they're focusing on things and they're functioning. If there's nobody to tell those stories to and nobody to relate those stories to, those stories evaporate for them, and there's nothing left for them.
I mean, they have friends and family or whatever, but again, it's not relatable. You know, grandpa telling somebody who's 70 years their junior, hey, do you remember what it was like when we did this? I mean, you and I joked about it: remember when there wasn't the internet?
That's not relatable to a whole bunch of people on the planet anymore.
Ryan Cloutier: I have a floppy disk. I guarantee you, if I walked into an elementary school right now and held it up, everybody would ask me why I 3D printed the save button.
AJ Nash: It's a good point. Well, yeah. I mean, kids don't know how to use rotary phones, for God's sakes. [00:31:00] I mean, I say that, and yeah, they look at it, they just poke at it, and they're like, why doesn't it do anything? Oh my God. Yeah.
Ryan Cloutier: beyond.
AJ Nash: It's, you know, it's funny to watch. So, I mean, I think you're right.
Right. There's all these potentially positive things, you know, in medicine, and a lot of it's in
Ryan Cloutier: And assistance, like robotics. This is the one where I'm a little... okay, so at the same pace that AI is moving, with benefit and risk, so is the humanoid domestic robot.
AJ Nash: Exactly.
Ryan Cloutier: And, I asked a question once, I said, I was giving a talk to some bigger insurance companies. And I said, I have a question that I call my homeowner's guy.
I said, whose homeowner's policy covers it when my humanoid robot gets hacked and kung fu kicks the cat through the bay window and takes out grandma across the street? Is
AJ Nash: Oh, that's a good, complicated issue.
Ryan Cloutier: that covered? Because the reality is that will happen.
I mean, in real time right now, who covers it? Who covers it now? If you're in a [00:32:00] Tesla on auto drive, you know, and it runs through an intersection and kills somebody, who's responsible? Am I responsible 'cause I own the car? Is Tesla responsible because of the code?
AJ Nash: I mean, that's the question, right? Like, right now, who's responsible? And I don't know. I don't own a Tesla, and it could be other vehicles, by the way, but I don't know if you're signing waivers that you own responsibility for anything the car does. I don't actually know where that lands, right? So,
Ryan Cloutier: Probably the
AJ Nash: Cars on auto drive have run over things, run over people.
So, like, I think it's something our law has to start catching up on: who's responsible now? Like, if I drive my car and I hit somebody, I'm responsible. That's a given. If for some reason there's a mechanical failure, the brakes stop working, nobody blames the manufacturer. Mechanical failures can happen, and therefore nobody's responsible.
It's sad, and it works out. But if it's auto drive, is that like that? Is that like the brakes failing, and you just go, oh, well? Or do you go, hey, no, you promised that this wouldn't do this kind of thing, and it didn't respond to commands for override, and it ran over people and ran us off a cliff?
I don't know where the law is. I'm not a [00:33:00] lawyer, and if somebody's listening who is, you know, feel free to comment later on and let me know your thoughts. But you're, you know, you're bringing it to the next step, right? It's going to be everyday life, my humanoid robot, as you said. I don't know.
Ryan Cloutier: Okay, so now I took us up; I'm going to take us back down for a second. We have just confirmed a growing pattern of AI-motivated suicides.
AJ Nash: Good Lord.
Ryan Cloutier: Okay, because we don't have the digital birds and bees. And I've been preaching this shit for years: if you're going to give anybody something they can fuck themselves with, you should give them a safety training first, and preferably lubricant. Because what's happening is we're just grabbing onto this stuff. And I love it.
I'm tech rich. Trust me, guys, there's a big old giant baby quantum computer cooking away in the back. I love technology. Love it so much. But I love people more. I love breathing more. I [00:34:00] love these things. And so I think my cry for help and call to action is: as we embark on the not-yet-answered, not-yet-solved, let's not wait for somebody to tell us. We've got to start getting involved.
Each one of you has an AI community in your community. They get together. They're a bunch of, like, really cool statistician nerds, you know, try not to scare them, a lot of introversion, but go get involved. How can I help? Maybe you're a clergyman. Maybe you run the Jaycees. Maybe you're the little league coach. Whatever the fuck it is, go down there and start getting involved.
Start interconnecting this stuff, because your kids have Snapchat on their phone. Every one of us that has an Apple device, if you've gotten the last update, got Apple Intelligence. You did not get an opt-out option. So you already have it now,
AJ Nash: Yeah. It's interesting, Apple Intelligence. I meant to mention it [00:35:00] in the last question; I got lost in my own head and didn't come back to it. We talked about, you know, people that are reliant, right? We talked about, you know, people getting lazy, take your pick. There's a commercial Apple has out which is both brilliant and horrifying to me.
There's a guy in the office who texts an email to his boss, and it's just, like, the worst unprofessional garbage. Well, this needs some zhuzhing, he uses the term zhuzhing, and, you know, ba ba ba, hit me back, or whatever. And he hits a button, and it turns into a super professional email that goes to his boss. His boss is impressed. Meanwhile, it turns out this guy's like a clown out there in the bullpen.
It looks out. This guy's like a clown out and you know in the bullpen out there And I thought to myself, I mean, first, cool. Wow. You know, you can write anything and it'll turn it into something good. That's great. But I also thought, are we going to get to a point where nobody can communicate anymore? Like we just, we just spit out garbage and machines turn it into things.
You know, again, I mentioned long division. Nobody does long division. A lot of people don't know how to do math,
Ryan Cloutier: people don't know how to talk. [00:36:00] Or
AJ Nash: And they seem very articulate and put together, and then you find out, no, they're not, and it's a job that requires them to be articulate, let's say. You know, it's important. You get them in the office and find out they're not, and they don't understand appropriateness, for instance, because everything they've ever written is totally inappropriate and hostile and sexist or whatever it might be, but the machine fixes it every time.
And now this guy or gal, but we'll go with guy in this case, probably true, is at the water cooler talking to a coworker like a human being does, without the machine talking for him, and you go, oh my God, this guy is a walking HR violation. He's a nightmare. But that's all going to get masked during all the processes for hiring.
Ryan Cloutier: But you forgot about the augmented reality. So now I have a safety device that I put on my employees to prevent their actual humanness from interacting.
AJ Nash: Great. This is not better. It's getting worse.
Ryan Cloutier: Have you seen Apple Vision?
AJ Nash: No, I haven't worn it, but I know what it is.
Ryan Cloutier: Okay, all I could think was, if I was a shittier [00:37:00] person, I would release this malware and every last one of you would run into a light pole, because ha, ha, ha, right?
AJ Nash: Somebody is going to do that if they haven't already. I mean, that's for the lulz, right? Somebody
Ryan Cloutier: Right, the lulz, right? But I don't hate the idea. I don't hate the technology. I do think that if we give people more ability to choose the definition of their reality, because we did that with the internet and we saw how that played out, and now we're trying to move that into the physical realm, I have concerns. You're absolutely right; every point you just raised is completely valid. And that's what gets to the part about ethics, morals. Okay, whose ethics? Whose morals? How do you define that mathematically? So, you know, there's a very small group of us that are working very, very hard on that.
But we are going to need some consensus help from the not, you know, super weird nerdy nerds too. Seriously, [00:38:00] because we have to describe it, right? We have to decide what the acceptable boundaries are, beyond just the most extreme. It can't just be don't use racial slurs, which obviously we don't want.
We can't stop at that line. We need to understand context of use. Is it appropriate to use this technology in this way and in this space? And if it is,
AJ Nash: You got to do that within the context and within the community of the wealthy, you know, the people that own most of these things, that are bottom-line focused, who, you know, have a different opinion on how things can be used and where you want to put limitations. 'Cause now you're changing what the marketplace is.
You're changing what the addressable market is. You're changing the value. And the First Amendment, both the valid First Amendment of, hey, you know, freedom of speech, and now the bastardized version some people use that suggests there's no limitations to the First Amendment, which is legally untrue; ask the Supreme Court [00:39:00] if you doubt me. But people are, you know, going down those paths, right? And so, you know, who's going to end up being the arbiters of all this? Who are going to be the regulators who are able to say, hey, this is valid, this isn't valid, this should be done, this shouldn't be done, in a world where people are going to say, you know, fuck you?
I want to be able to use it however I want to use it, right? It feels a little bit like nuclear.
Ryan Cloutier: Mm-hmm.
AJ Nash: Right? It does, you know, energy, and it does medical, but it also can blow up the entire world. We've all agreed it's too dangerous to just not have limits on it.
You can't build a nuclear reactor in your backyard. It's not legal. You're not allowed. Yeah, I wouldn't try it either. But even if you had the capabilities, it's not legal, right? You know, and gun control now, that's a topic people get nervous about, but there's still limits there. What's that?
Ryan Cloutier: Technically, as a citizen of the U.S., you can conduct all the research you want, but I assure you, you will be getting visited very quickly.
AJ Nash: And the same thing's true with firearms, right? I [00:40:00] mean, we have the Second Amendment, and there's, you know, push and pull, and there's a real drag towards, you know, more things being legal for everybody, et cetera. But there's still limits, and there's always going to be some, for anybody who doubts that.
And for anybody who wants to know, not that it matters, I own firearms, so it's not gonna be one of those discussions. But I also agree there are limits. There are. Listen, if you don't think so, go ahead and try to get a tank and see what happens. Have rocket launchers in your house and see how that goes.
Ryan Cloutier: You can, as long
AJ Nash: There's limits. Right?
Ryan Cloutier: as you go about it the right way. Because you can, absolutely. In Minnesota, we can have hand grenades, rocket launchers, we can have all kinds of things. But there's a process you go through. You go talk to the right people, you get vetted. Making sure you're not a nut bag is step number one.
Step number two is you pay a bunch of fucking money, because just because you're not a nut bag also doesn't mean that you should have access. We need a level of commitment here, right? So,
AJ Nash: But who's going to be responsible for that same sort of, I don't want to call it control, protections for AI? Because there's nobody right now doing that. And once this is out, how are you going to get it back?
Ryan Cloutier: Well, we got to get [00:41:00] involved. It hasn't happened yet; the cat's not all the way out of the bag. I checked with Schrödinger. He told me it's still partially in there when I asked.
AJ Nash: It is close though, man. And so it leads to the next question, man, which is: all right, so we talked about, you know, what AI is and kind of the baselines of it. We talked about where we are today, I think, at least to a point, and where we're going in the next couple of years. What do you see as the biggest risks and rewards?
And you hinted at some of this already, but the biggest risks and rewards that you see coming with AI. And again, we'll stick with the next, I don't know, two, three years, kind of keep it near-term here. You mentioned the companion piece, for instance, for the elderly, right?
So we talked a little about some of the medical stuff. But first, give us the risks, like, you know, how bad it could get. What are the biggest risks? Then we'll talk about the good that can come out of this and the big rewards, and how we're gonna figure out which path we're gonna take here.
Ryan Cloutier: I mean, I think the biggest risk is a [00:42:00] transference of sovereignty and power from the human to the machine. And then, by extension, the controllers of said machine, being a very, very small group, then hold all the cards. And that happens simultaneously to you losing your value as a contributor to the species.
I think that is a very real possibility. I think that is a very real risk, and I think we are taking very fast steps in that direction.
AJ Nash: It's like the movie I, Robot. It's like somebody looked at that and said, I bet I can make that our future, and started down that path and didn't see the end of the movie, I guess. Like,
Ryan Cloutier: Figure AI. Okay, Figure AI. Everyone should know this company exists, because you may have heard of Elon's Tesla robot but haven't seen a whole lot of it. Figure is the competitor to Tesla, and Figure, in my opinion, is further along
AJ Nash: Oh yeah.[00:43:00]
Ryan Cloutier: with a higher quality product that is more market ready.
They are beginning, officially, unofficially, their third-generation robot, which is geared at the home, for domestic purposes, at a market price of $20,000. Now, that obviously won't be the cost when it first comes out, it never is, but it won't take long to get there. I also remember when somebody was like, bro, have you heard of this thing called a flat screen? This TV is amazing. And by the way, TVs used to be in wooden boxes, and they were fucking huge; it was like we went back to the seventies and had a console TV, right? And now I can go into my local Walmart and get an 85-inch TV for, you know, 89 bucks. I mean, it's just ridiculous.
AJ Nash: I gotcha.
Ryan Cloutier: And that actually is part of my worry, because what happens when I can buy a humanoid robot, capable of all the [00:44:00] physical actions a human is, off Alibaba for 17 cents, and I got the unchipped version, because they don't give a shit about us? So that worries me. What worries me is: what will the shitty humans, who are already shitty, do with even more shittiness when they're enabled with these technologies, because they're ubiquitously available?
AJ Nash: Yeah. When you mentioned Figure, for those who don't know, figure.ai is the company he's talking about. When you open up their website, the first thing it says is giving AI a body, which I read and went, well, that's a fucking bad idea. Like, that's not what we want to do, is it, letting your AI out of the box?
Like, you know, not only is it like I, Robot, but, you know, I'm a movie person, for anybody who isn't, that's fine, there's also a movie called Virtuosity, which I think was ahead of its time. It's got Russell Crowe and Denzel Washington, and they grow this AI in a lab. It's fed all the worst things.
It's built, actually, to be the world's worst serial killer. Basically, it's built on the personas of every [00:45:00] horrible serial killer you can think of. And then some asshole decides to give it a body and let it out in the real world. And I mean, it's a great movie. It's fun. But you can imagine how terrifying that would be to have happen.
Right. And so, yeah, there's a company out here saying, yeah, our job is to build bodies for these AIs. And I mean, how is that not replacement? And by the way, their AIs are going to be able to make new bodies. They're going to be able to repair themselves and make new ones.
Ryan Cloutier: That's the plan. If you scroll down, that's actually in the business plan.
AJ Nash: Yeah. So what happens when AI, which is faster, smarter, stronger, et cetera, than us, has these bodies and can procreate? Because there's no other word for that: if you can make your own, if you make new ones, you're procreating, right? What the hell are we going to do? Why do we need to exist? These things can learn. They can program themselves.
They can make new versions of themselves. They can improve. They can evolve. Does anybody think there's a chance we're going to survive the army of AI-enabled robots we're creating? 'Cause, I mean, that's your worst case, right? I mean, my
Ryan Cloutier: Okay, so let me give you the
AJ Nash: It scares the hell out of me. Yeah. Yeah. [00:46:00] Let's go with best case.
Yeah.
Ryan Cloutier: All right. No, because I'm right there with you, trust me. Best case scenario? Somebody listening today doesn't just hear it, but actually takes an action. An even better case: two people do. And that action could be just simply getting involved somehow, some way.
Becoming more aware. Sign up for a fucking RSS feed, right? It doesn't have to be a big commitment here. But if all of us stand on the sidelines, then we know where it goes. But my positive side, my upbeat and uplifting side, and believe it or not, that's where the nickname ScareBear comes from, 'cause I want to love you like a teddy bear, but I scare the shit out of you,
is that we, as a people, see this as the defining moment it is. We embrace this change, and we make sure that these technologies are enriching to our lives. Yes, I want more family time, so let's make sure that the tech doesn't get in the way like the last time [00:47:00] around, when we tried to do this with the cell phones and
AJ Nash: that's worked out well.
Ryan Cloutier: family time.
We've got whole families that don't even know how to speak to each other.
AJ Nash: Yeah. All in the same room, never talking to each other. Hell, we text each other in the same room.
Ryan Cloutier: In the same room, right. So I think that there are going to be advancements on the social side of this. I know I'm personally leading the charge on a few of these things, and I'm working very closely with others who are trying to create an AI that helps with loneliness. Yes, I want a robot companion for grandma so she doesn't have to leave the house and go to the home, right? And it isn't just the elderly, I know that's been an area of focus, but there's a lot of different mental illnesses. So there's the medical side. And then from a personal side: hey, I don't want to change the oil on the car.
AJ Nash: I don't want to mow the lawn. I don't want to,
Ryan Cloutier: I don't want to whatever right?
AJ Nash: Yeah. Whatever.
Ryan Cloutier: It's garbage day? Fuck yeah, take the [00:48:00] trash out, right? And I've actually built a few, like, lower-grade robots. I've built some robots. I built one out of Legos to solve Rubik's Cubes, because, you know... But I never got the beer-fetching robot to work correctly, because the human hand element wasn't quite there. I could launch one, right?
I could, you know, chuck and toss, right? But I wasn't trying to build a catapult. No, I
AJ Nash: You don't wanna open that beer.
Ryan Cloutier: Yeah, the positive side, you know, back to that, is that there's this rich field of opportunity for us to do new things, new ways to do things faster. Personally, I use AI daily to accelerate problem solving, to take on generating.
I, you know, I can read faster than I can write, but I haven't removed myself from the equation, and I'm hopeful there's enough of us that haven't yet removed ourselves. Because it's easy to do that. It's, [00:49:00] well, I'll just, you know... Like, think about throwing a like versus making a phone call. Your friend said something nice, you see it on the thing.
AJ Nash: Boom, done. Mm-hmm.
Ryan Cloutier: Versus a comment, right? Yeah. The like, no problemo, whatever. Now, that like actually has a physical effect on the person on the other end of the phone. There are actually many, many studies that prove this; they get that dopamine, right? So we know that we like that stuff. So let's just design it together safely.
We've done this before, people. We have been here with cars. Okay, you can't buy or sell a car in this country that doesn't have an airbag in it. I remember cars that had pieces on steering wheels that were deemed so damn dangerous they were killing people.
AJ Nash: Yeah. They had metal spikes, like, on the steering wheel there was a metal spike aimed right at the driver. Like, I remember that. I remember when there weren't seat belts. I remember when, if you hit the grill... I remember when the glass didn't break, it didn't [00:50:00] let you out. If you hit the glass, you would...
Ryan Cloutier: I'll even, I'll even go one better. How about when they made drunk driving illegal?
AJ Nash: Oh, yeah. Yeah. Imagine. I mean, there's a whole generation now that doesn't understand that used to be legal. You could just drive around and be
Ryan Cloutier: You can Google
AJ Nash: when that was allowed.
Ryan Cloutier: You can Google this, and there are dudes who are smashed in their pickups: "I work a fucking day and I'll drink all I want to."
Yeah, that was the year 1970. What?
AJ Nash: I don't know what year DUI was made illegal.
Ryan Cloutier: So,
AJ Nash: be, I'm going to look it up now because I'm curious. It was a while ago,
Ryan Cloutier: because we've run ahead. So, but we've faced these social problems, we've addressed them, and the world did not collapse as a result. No one lost their freedom because they couldn't drink and drive anymore. Well, you know, they lost their freedom to drink and drive after that, but the point is, from that safety standpoint, the world is in a net better place.
Right,
AJ Nash: They go all the [00:51:00] way back to 1910, but it really started picking up in the 1960s and seventies. It was still considered a quote-unquote folk crime; nobody did anything about it. The seventies is when they really started to get down to, like, you know, 0.08, and started really enforcing these things.
I'm sure it's 'cause somebody finally did studies and said, geez, we have a lot of people dying from this, maybe we should stop letting this happen. And now it just seems insane. Nobody would look at this and go, yeah, well, of course you should be able to drink and drive. Nobody would think that. It's clear that you're inebriated.
It's clear that you're impaired. The thought that you would be allowed to get behind a death machine and just go out and put everybody at risk is insane. I fear this technology more, though, because unlike a lot of these other technologies and revolutions... I worry about this. Cars didn't suddenly start making themselves.
Cars didn't become independent. Cars didn't create a world in which you may not want to go somewhere, but the car can go there on its own. So, you know, this one worries me a little bit more, again, combined with the combination of greed and laziness. I'm curious, and we've got to get near the end here, to the last question.
But one thing we didn't talk about, and I'm curious if you have any thoughts: what jobs aren't going to get replaced by AI? [00:52:00] What are things that you look at and go, I can't see the machines ever being able to do that? Is there such a thing as a job AI can't eventually evolve to do? Or is it just, eventually, you know, they're all going to go, and we'll have new jobs or different jobs, or the world's going to change, or we're going to need to trust, and have faith, that the powers that be aren't going to decide it's okay to let, you know, seven billion people be unemployed and starving because there are no jobs left and they're not willing to go to socialism or whatever.
Ryan Cloutier: Yeah, I think that's a tough one. I think there are roles that it won't replace. I think its lack of a soul is going to prevent it from ever really meaningfully connecting with a person. Having said that, though, there are people who are so absent that it works for them anyhow.
You know, that's a great one. I don't think there is a job yet where I can't foresee a reasonable future in which it gets replaced and/or augmented. And you'll hear a lot about the augmentation, right? We don't want to say [00:53:00] "replaced," because then everybody's gonna freak out. But the reality is, they don't have people who run hand-crank anything anymore.
Right. And when we talk about the next job you will have, well, unless you're a quantum physicist, until that's solved, unless you are of the highest orders of emerging materials science, and even those areas now are augmented by AI. So I think
AJ Nash: Well, and that becomes the question then, man, because, you know, there's eight, nine billion people on the planet, something like that, right? Most people, ninety-nine-plus percent, myself included, by the way, just don't have the capacity, the capability, to be the next, you know, neuroscientist.
They don't have the capability to be these, you know, quantum physicists, et cetera. There's a reason that most people don't have those jobs, and it's not because there isn't demand for those jobs; it's because most people don't have that aptitude. So if the only jobs that are left are limited to people who, you [00:54:00] know, have 160 IQs and all this experience and all this education, et cetera,
what's the average person going to be left to do? Assuming society doesn't agree to go, hey, listen, this could be the greatest thing ever to happen to humanity. It could be like the Garden of Eden, where we all have free time to think and learn and, you know, explore and be creative, and you have a basic sustenance given to you by, you know, the powers that be, by the government or whatever, so you have a basic living expense. But it seems like there are a whole lot of people in the world who don't
want that; they, you know, hate socialism or communism or whatever. So, I mean, I worry about this, I'm not gonna lie. I don't think you have an answer; I'm asking an impossible question. But this is the thing I worry about: how do the free market and greed align with this without just going, well, I guess a whole lot of you are going to be living in a dystopian future with nothing,
'cause there's no reason to hire you, and there's no money to give you as a result, and the machines are doing it all. And by the way, the machines will protect the people who have fired you and taken all your jobs away, so you can't fight them necessarily. So, I mean, [00:55:00] I'm scared of this stuff.
I'm not gonna lie, I'm worried. I don't know how to do anything about it. You know, that's why I'm working with you on some of these things; I'll get more involved. But I think we may end up very dependent on the goodwill of people who notoriously don't show they have a great deal of goodwill toward the general populace, as opposed to just doing what's best for themselves.
And I'm, I'm...
Ryan Cloutier: I can tell you this. Critical thinking right now is the most valuable skill. If you can develop critical thinking, you don't need to worry about your job. Until you do, and I'll be right in the boat with you. Right? Like, if we get to that level, I'm right there with you. I am not a quantum physicist. I am not these things.
I know them. I talk to them. I work with them. Because I'm trying to bring the hacker mentality, by the way. And I will often wonder how much the intelligence community, because of its ability to deconstruct problems, to paint scenarios... I think you guys actually have it. And we've talked about this offline,
you and I, AJ. I believe that that [00:56:00] background gives you a unique position. Same thing with my cyber warriors: the background of how to think about these things is what makes your job viable. I do think you are going to start to see the basic labor jobs go. I just ate at a restaurant where a robot brought food to my table, right?
They're running the whole damn kitchen on three people and the whole front of the house on two people, because all they have to do is sit you down and punch a couple of buttons so that the robot knows which table to go to. You give your order, and that's right now, because the robot is just one of those simplified tray robots, right?
When the robot is also capable of taking the order and then going and working in the kitchen,
AJ Nash: Oh, yeah, there'll be no people.
Ryan Cloutier: you know,
AJ Nash: There'll just be no people.
Ryan Cloutier: I think, for the audience here, folks that are smart enough to listen to the show: take those critical thinking skills and start seeing how you can use and adopt AI in your day-to-day life and your day-to-day work, so [00:57:00] that you're familiar with it, so that you know where the pitfalls and perils are.
But that you also get the benefits, right? Like, and we don't have to go here if we don't want to: believe it or not, we can collectively, as consumers, change the market. We can say no to things. It's hard, trust me. My wife's like, I'm getting the house robot, I know you ain't trying to. I'm like, no, we can't have it.
It's going to end the world. She's like, fuck you. We're getting it anyways.
AJ Nash: Yeah. I said the same thing. Like, I'm scared to death of these things. Meanwhile, I'll probably be one of the first people to buy one when they're on the market, if I have the money. Like, yeah, I'm as big a part of the problem. That's the thing, right? Knowing it could be your own demise and still... so yeah, all right.
It's like lemmings going off the cliff. I know it's going to kill me, but I'm going to do it anyway.
Ryan Cloutier: I'm more optimistic. I think enough of us, if we hear the right message that says we can do something, and we should, and we remember that we used to, or maybe we've seen something on a TikTok that tells us we want to... you know, I'm hopeful for us as a [00:58:00] people that we will not just march silently off the cliff.
Let's see what happens.
AJ Nash: I hope so. Yeah, I'm with you. So, all right, listen, we're going to wrap up. We've got one last question. You know, the name of the show is Unspoken Security, and with that in mind, I ask every guest basically the same question, and you're no exception: tell the audience something that, to this point in your life, has gone unspoken, something you don't tell people about.
Stay in line with unspoken security by telling us something unspoken.
Don't confess to a murder because this is recorded.
Ryan Cloutier: Something unspoken. I often find myself recharging like an introvert, even though it's well known, by me, by the tests, and by everyone that's ever met me, that I'm anything but. So the unspoken piece is, believe it or not: what you see out there, working the [00:59:00] room, can still get way exhausted by people.
AJ Nash: Yeah. That's a good one, by the way. So, I mean, I don't grade these; everybody has one, and they're all very interesting. I'm the same, as it turns out. I know people think I'm this massive extrovert; my whole life people have thought that, and I think there was a point when I was. Apparently I'm what's known as an ambivert now.
It sounds like you are too, in that I can thrive in the extrovert environment, but there's a big introverted part of me. It depends on the scenario, I guess; I don't know if it does for you. I'm curious. Like, there's times I go to conferences and I have all the extrovert in me, and I finish the conference,
I finish the discussions, whatever, and I'm still ramped up and I want to go out and do stuff. And then there's other times I'm just like, no, I'm gassed, man, I need some time alone, like an introvert. I don't know if you're able to turn it on and off, but I do think it's interesting. I didn't know that about you, because a lot of people make those assumptions, right?
Like this, this...
Ryan Cloutier: Or, I'll give you one more for this episode, right? I do actually get tired. I do actually sleep, contrary to everybody's belief. The energy [01:00:00] does occasionally not show up.
AJ Nash: I'm curious now, so, I'm also notorious for not sleeping much; I'm curious if your pattern's similar. For me, I'm notorious: I sleep, you know, two hours, three hours, four hours a night, and sometimes less, you know, whatever. I mean, this morning I slept two, I think two and a half. But maybe once a week, once every ten days, I have one of those nights where I just, that's it.
I'm like, 9 p.m., I fall asleep with my phone in my hand doing nothing, and I'm done, right? And I get like twelve hours. Is that how it works for you, too? Do you have a long stretch where you don't sleep, then just have a night where you're like, that's it, I'm out, I crash?
Ryan Cloutier: I will, yes. So I've also found that some of my greatest work is done on day two. That's just because of the way I'm wired, so I'll go a day without sleep. And it seems opposite, right? It's like, oh, you should have had some sleep to, like, recharge, but I'll do some of my best work on that second day.
But yeah, there'll be short stints. Usually the crash will follow: if I've had a totally lost night, within two to three days of that [01:01:00] I'll have a "you fell asleep" moment. I've fallen asleep with food, like...
AJ Nash: Oh yeah, absolutely. I fall asleep more often than I care to admit, frankly, with a video game controller in my hand. I play a bunch of games, but sports games tend to be my favorites, and I play hockey somewhat regularly. I've finished hockey games where I don't remember the game I finished, but I was asleep and I got it done, whatever.
But yeah, I've fallen asleep with food in my hands. I fall asleep in conversations. But it's interesting. Again, I think you and I share that, in that there's a lot of times I don't sleep, I sleep very little, and I'm good for days like that. And then it just catches up, and I've just accepted
that's a thing for me, right? I'm not gonna force myself. I try and
Ryan Cloutier: It doesn't
AJ Nash: to get better sleep, but
Ryan Cloutier: doesn't work. If I try to force the sleep, the sleep doesn't come. Then my brain just sits and spins and spins and spins.
AJ Nash: Yep. Yeah, and nothing changes. If I go to bed at 9 [01:02:00] p.m., I'm still up to 4 a.m.; it just means I'm in bed for eight hours first, you know, seven hours first. You know, shutting off the phone doesn't necessarily work, TV doesn't necessarily work. Some things do, some don't. But, you know, I think people have to find out what rhythm works for them, as long as you're not, like, getting psychotic because you don't sleep for days and days.
But that's cool. I appreciate you sharing that, and it's something apparently we have in common. So listen, we're up on time, so I do want to wrap this up. Any last thoughts you want to throw in here, whether it's about the topic, whether it's about you, whether it's about ScareBear, whether it's promoting something, you know, anything else?
Ryan Cloutier: Yeah, two quick thoughts. Anybody out there, if you've got AI questions, struggling, consulting, whatever, just reach out to me. I'm looking for opportunities to help folks get themselves going with AI. I've got a variety of services; you can find me at scarebear.com, or you can hit me up through LinkedIn.
Thanks again. The message I want to leave people with is to remember to be kind to yourself first, and to try this once a day, and you'll feel great about it, [01:03:00] by the way, it's a super trick for finding happiness: once a day, do some random kind gesture. It could just be smiling at a stranger, could be holding the door, could be, instead of saying something angry back, you take it in and you say, you know what, I'm just going to say something nice back, whatever that is.
But if you practice those things, I think you'll find yourself a lot happier, a lot less stressed out. And I think that will help you navigate these coming weeks, months, and years ahead of us, and the way that society's going to change.
AJ Nash: Yeah. Well, thanks, man. I mean, that's good advice. Listen, I know Ryan; Ryan's brilliant, and he's burdened by that, quite frankly. People who are very smart have their minds going all the time, which means they see all the horrible things in the world. And Ryan's also one of the happier people I know, frankly.
So it's got to be good advice. You know, I mean, that and weed, I'm sure, but, you know, weed's good, right? But no, I appreciate it. That's really good advice, and I'll actually try to apply it myself. No promises, but I've got to try to [01:04:00] do better as well, because it's easy to get down.
It's easy to get frustrated or depressed or angry or sad or fearful, whatever it is, about these topics and about the world in general. So, you know, doing a nice thing for somebody else, yeah, it's proven to make us feel better as well. And who couldn't use a few more nice people in the world doing nice things?
So, yeah. All right, cool. Listen, I appreciate it. Thanks for that closer, and in general, Ryan, thank you for coming on the show. I appreciate you making the time, and I appreciate your friendship as well. Thank you for being here; I couldn't thank you enough. I'd love to have you on again at some point.
We'll talk about something else; I'm sure you have a thousand topics we can chat about. But, you know, we're going to close it up for today. And for everybody who's listening and watching, thank you for taking the time to be here. Please like and follow and subscribe, and tell people: if you love the show, make sure everybody in the world knows. If you hate the show, shut up.
I don't want to know. No, I do want to know, but don't tell anybody else. I want to make it better, but please don't spread negative comments, because it's hard enough to do these things. But again, thank you very much for being here. And until next time, this has been another episode of Unspoken Security.
[01:05:00]