In this episode of Security & GRC Decoded, Raj Krishnamurthy, CEO of ComplianceCow, sits down with Walter Haydock, CEO of StackAware, to explore the risks, challenges, and opportunities of AI security and AI governance, risk, and compliance (GRC).

💡 Key Topics Discussed:
✅ DeepSeek AI Privacy Risks & Compliance Challenges
✅ ISO 42001 & AI Governance Best Practices
✅ Building a Business Case for AI Security Programs
✅ Regulatory Challenges in AI Security & Risk Management
✅ How Security & GRC Teams Can Adapt to Rapid AI Developments

🚀 This episode is packed with insights for security leaders, compliance professionals, and AI governance experts navigating the evolving world of AI-driven security.

🎙️ Security & GRC Decoded is brought to you by ComplianceCow.
🔔 Subscribe now for expert insights on AI security, risk management, and compliance automation!
📢 Learn more about ComplianceCow & how we can help your GRC teams!

💡 Connect with Walter Haydock 💡

For more insights on AI security, governance, and compliance, follow Walter Haydock:
🔗 LinkedIn: https://www.linkedin.com/in/walter-haydock/
📖 Blog: https://blog.stackaware.com/
📷 Instagram: https://www.instagram.com/walter.haydock/
🌐 Company Website: https://stackaware.com/

Stay updated on AI risk management, compliance automation, and emerging security threats by checking out his latest content! 🚀

⏳ Timestamps & Key Moments (approximate)
(Click to jump to a specific section!)

0:00 Show Intro
0:30 Walter Haydock Introduction
2:00 DeepSeek Controversy & AI Security Risks
4:45 The Evolution of AI SaaS & Security Challenges
7:40 Walter’s Background (Physical Security to AI Governance)
10:55 AI, GRC & Security Governance
14:10 AI & Compliance (ISO 42001)
17:35 Building a Business Case for AI Security & Compliance
21:45 AI GRC in Practice (Common Pitfalls)
26:40 AI in Security Operations (SOC Automation & Threat Detection)
31:00 Advice for Security Leaders (Inheriting GRC Programs)
35:15 AI Risk Management (Traditional vs. AI-Driven)
38:50 Closing Thoughts & Resources
41:30 Outro


Raj: Hey, welcome to Security & GRC Decoded. I'm Raj Krishnamurthy, your host, and today we have the awesome Walter Haydock as our guest. Walter is the CEO of StackAware, a practitioner, and a well-known figure in the AI security, governance, risk, and privacy space. Let's decode. Walter, welcome.

Walter: Thanks a lot — really appreciate you having me on, Raj.

Raj: Thank you very much, Walter. Let me start with something controversial — I don't know if it actually is controversial: what do you think of DeepSeek, and how do you think companies should react to it?

Walter: I think there are three key takeaways from a security and compliance perspective.

The first is the origins of DeepSeek, in terms of the intellectual property that was used to develop it. The second is the implications of using DeepSeek — specifically the SaaS version — just looking at the terms and conditions and the privacy policy. And the third is the failure modes of DeepSeek, again specifically the SaaS version, which I think there's been some recent news on. So I'll take that in three parts.

On the first piece, I think it's interesting that OpenAI is alleging, or suggesting, that some of their intellectual property was used to train the model. It's certainly possible that happened — every piece of technology is going to build off its predecessors, and really it's a question of how closely they match those predecessors. I'll leave that for the lawyers. But certainly, from a cybersecurity perspective, it's important to protect your intellectual property and make sure you have mechanisms to prevent others from using it for their commercial gain.

commercial gain the second piece would be looking at the use of the SAS version

of Deep seek in its intended intended approach so first of all it trains by

default on the uh on all the prompts that you give it and they had some very

interesting language in the terms and service that said terms of service that said something like you know only to a minor degree will we improve our product

based on your prompts I’m not really sure what that means so I would basic basically expect that all your data is

being trained on if you’re submitting it to to the SAS version and then also you

know their their data is is stored in China so that China has different laws

regarding what the government can do uh gaining access how they can leverage that data so that’s something very

important to keep in mind and China has uh a pretty serious history of stealing

intellectual property and forcing its technology companies to assist in that process so that’s the second

The third consideration is a failure mode of using the DeepSeek SaaS application. Wiz recently released a report — I think they called it "DeepLeak" — showing that a lot of the backend infrastructure was actually exposed to the internet, and they could pull out unencrypted prompts and things like that from the DeepSeek infrastructure. So those are the top three issues I would call out from looking at DeepSeek. There are layers and layers of considerations below that, but that's the general picture.

Raj: The platform has been down — I was trying to use it yesterday and the day before, and I keep getting 503s. And I saw the post as well. But do you see, especially with the open-source model, the likes of AWS and Azure — Bedrock, for example — embracing these models anytime soon?

Walter: Potentially, yes. The open-source side of things is a little grayer. For the SaaS version, I would advise any of my customers: just don't do it. That would be an unacceptable risk unless you have a completely public use case. My best example is using it as a meme or joke generator — okay, that's fine. If you are fine with 100% of your content being public and being trained on, then go for it. But for the vast majority of customers, that's not the case.

For the open-source version it's a little more nuanced, because of the power of DeepSeek — the capabilities of the model — and the fact that it is free to use. That is a huge potential business advantage. A key thing to look at would be whether it's doing any sort of callbacks — whether there's any telemetry that the open-source version is trying to send back to the model trainers and developers. That would be important to examine. In some circumstances I could see the risk/reward making sense for some companies, if they're confident that they can prevent any data callbacks to DeepSeek the company. And maybe some of the major hyperscale cloud providers will offer it in their platform-as-a-service offerings, like Bedrock, which you mentioned — I could see it. There are certainly other considerations, like intellectual property: if OpenAI somehow wins some sort of judgment and gets them to pull down that model or change the licensing, that could be a risk as well.
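The callback check Walter describes can be automated. Below is a minimal sketch, assuming the open-weights model is served by a local process (for example, an Ollama or vLLM daemon) and that the `psutil` package is installed; the PID and allowlist are illustrative, not specific to DeepSeek:

```python
# Hedged sketch: watch a locally hosted open-weights model server for
# unexpected outbound connections ("callbacks" or telemetry). Exercise the
# model with test prompts while this runs; any hit below means the process
# opened a connection beyond localhost.
import time
import psutil

SERVER_PID = 12345                      # PID of the model-serving process (example)
ALLOWED_REMOTE = {"127.0.0.1", "::1"}   # loopback only: no egress expected

def outbound_connections(pid: int):
    """Yield established connections from the process to non-allowlisted hosts."""
    proc = psutil.Process(pid)
    for conn in proc.connections(kind="inet"):
        if conn.raddr and conn.raddr.ip not in ALLOWED_REMOTE:
            yield conn

if __name__ == "__main__":
    for _ in range(60):                 # poll for about a minute
        for conn in outbound_connections(SERVER_PID):
            print(f"UNEXPECTED EGRESS: {conn.raddr.ip}:{conn.raddr.port} "
                  f"(status={conn.status})")
        time.sleep(1)
```

A stricter variant of the same idea is to run the model in a network-isolated container so callbacks are impossible rather than merely detected.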

Raj: Thank you very much, Walter. Let's talk about you. How did you get started in security and GRC, and why the focus on AI?

Walter: I've been in the security world since I started my career — I was just in a different type of security. I was in the physical security game as a Marine Corps officer, and then, as an intelligence officer, the cyber aspect of conflict and of intelligence collection became increasingly apparent in its importance. Over time I reskilled and upskilled and learned more about the cybersecurity and compliance world. My first professional experience with it was working on Capitol Hill, where I was a staff member for the House Homeland Security Committee, overseeing what is now the Cybersecurity and Infrastructure Security Agency — it wasn't called that when I was there. I really developed an appreciation for the importance of cybersecurity to US national security, and decided I wanted to pivot into the private sector. I worked at a big company as a product security guy for a couple of years and cut my teeth learning how things worked and how software development ran, then worked at a security-specific vendor, also as a product manager. In 2022 I decided to start my own company.

We actually did a pretty hard pivot. StackAware started as a vulnerability management platform — in fact, we still use our vulnerability management tooling internally — but we don't sell it anymore, because there just wasn't market demand for it. With that said, at the end of 2022, with the launch of ChatGPT, there was an explosion of interest in artificial intelligence. Earlier in my career I had been on the fence about whether to go the data science route or stay on the security side of things, so I had an interest there. I saw there was a lot of fear, uncertainty, and doubt about AI out there, and thought there was an opportunity to fill that void with a services-based approach. So we changed the delivery method and the subject matter of StackAware in 2023, and we've been cruising along on that focus since then. It's been going well.

Raj: Fantastic to hear. What is your view of security, governance, risk, compliance, and privacy, and the intersection of all that with AI?

Walter: It's a broad topic, and I guess I earn my living trying to answer that question. None of the fundamentals change, in my opinion. You need to understand your risk appetite — based on business considerations, regulatory considerations, ethical considerations. You need to establish that before you really do anything else, and then you can build your GRC program around your risk appetite to ensure that you stay within it.

AI presents a lot of opportunities to exceed your risk appetite in unanticipated ways because, as I mentioned, there's fear, uncertainty, and doubt out there, and folks have difficulty wrapping their heads around how these systems work. They're not magic, but some folks interpret or treat them as if they are. Recently I saw a gentleman claim that as soon as you provide information to any AI system, it's no longer confidential. That's not necessarily true — there are plenty of SaaS AI tools that have done a pretty good job of maintaining confidentiality, and they're not entirely different from normal information systems. There are nuances: for example, ChatGPT and DeepSeek by default train on their inputs, so that's an important consideration. I went back and forth with this gentleman, and he kind of corrected himself: well, anything you find on the App Store is not necessarily going to protect your confidentiality. That's also not true — you can get the ChatGPT Team version on the App Store; you need to pay a subscription for it, but they have guarantees of confidentiality and they don't train on your data. There are some nuances in terms of retention, but it's a complex topic, and AI adds a wrinkle that is certainly worth investing in and learning about. The fundamentals still apply.

Raj: Okay, got it. Marc Andreessen, if you remember, initially said that software is eating the world; he has since changed that tune — AI is eating the world. As organizations embrace this, how do you see AI governance fitting into the larger GRC strategy for any company? What would your advice be?

Walter: That's an interesting question. I wrote an article about this: there are basically five people or groups who can lead AI governance efforts. There are security teams, which I think have taken the lead right now — the primary point of contact between StackAware and its customers is security teams. There are privacy teams, who are a little more focused on the regulatory aspects and the protection of individuals' personal data; they're less concerned about safeguarding intellectual property. There are legal teams, who obviously focus on legal risk — intellectual property assignment, contractual restrictions — and there are some arguments for them being in charge. There are the data scientists and AI engineers themselves, potentially: they're the most technically capable and will understand the capabilities of AI systems the best, but having them manage the risk may not be the best idea, because they're going to be more focused on the business outcome. And then there's the possibility that an AI-governance-specific function will develop, which has pluses and minuses. The pluses are that there's essentially no conflict of interest — it's focused on one narrow problem. The minuses are that it adds overhead and more people at the table. But I have certainly seen organizations establish AI governance functions staffed with full-time AI governance professionals, so that's certainly an option. It remains to be seen what the best way to do it is.

My recommendation is to have somebody lead the risk advisory and management efforts, and have that person be separate from the business owner who's actually making the risk decisions. There should be an adviser and an implementer, and then there should be a decider. The adviser should not be the business leader, because that would remove an external check — or not a check exactly, but a neutral, external source of advice — from the business leader. Conversely, the security leader should not be making the decisions, because that's not how business works: if we wanted everything to be super secure, we'd just shut everything down and delete all our computers.

Raj: And the separation of duties, as you rightly put it, also gives you a check and balance. What do you think is broadly happening in the industry today around security, GRC, privacy, and AI?

Walter: There's been a heavy push from AI-security-specific vendors, and that's fine — StackAware is one of them. Specifically on the tooling side, there are unique considerations for AI, some wrinkles that I mentioned, and there's certainly going to be a need for technical solutions to some of these problems. But I think the vast majority of problems in security are process problems — on the risk assessment side of things — rather than technical problems. The technical solutions we have available are pretty amazing; the key is making sure they're applied correctly and used consistently. So I think we still have a way to go in terms of frameworks for managing AI risk, and once we nail those, the technical, product-focused people can be more impactful. They're certainly developing good technologies, but making sure we're using them in the right context is going to be the most important thing.

Raj: And I know you co-authored the HITRUST AI governance framework — am I right? Or what was your role in that, Walter?

Walter: I didn't co-author it — I would certainly not claim that I was even a major commentator on it. I worked with the HITRUST organization on the AI security certification. They authored it, taking a ton of feedback from a huge slew of industry and public policy people, and put it out at the end of last year. There are no currently certified organizations. I am working — this is public, so I can say it — with one of my customers, Bold Health, to pursue the HITRUST AI security certification, so we're working on that as we speak.

It is a very narrowly tailored framework, specific to security: it doesn't tackle responsible AI, it doesn't tackle transparency, it doesn't tackle ethics, it doesn't tackle unlawful or undesirable bias. It only focuses on security, which I think is a benefit, because it's very clear about where it begins and ends, and it gives a set of firm requirements that companies need to meet. It's probably the most specific specification I've seen — and it is a specification — which you can contrast with something like ISO 42001, which is way higher level and much more broadly focused. ISO 42001 includes security and privacy, but it also covers a bunch of other things, like transparency and data provenance.

Raj: Talking of ISO 42001 — how was it to get ISO 42001 certified for StackAware, and what challenges did you face?

Walter: StackAware was ISO 42001 certified at the end of 2024 — in October of 2024. It was a challenge, because it was a new framework and we were actually only the second company our auditor had taken through the process, so we were learning a lot along the way. For example: what are the requirements for AI impact assessments? ISO 42001 talks about them and gives some recommendations, but it doesn't give you an example AI impact assessment, and the document that specifically lays out how to do them, ISO 42005, is still in draft — it hasn't been approved yet. So figuring that out was certainly interesting. Then there was understanding the scope of the AI management system: would we include third-party systems? We did, because StackAware doesn't build any proprietary models — we don't even fine-tune any models right now — so figuring out the scope was important. And figuring out how to present the information to auditors was certainly something to work through. Eventually we settled on an approach where we would essentially give them access to our Google Drive to see certain things. StackAware makes most of its AI management system publicly available, but some things we obviously can't share, like our risk register. So we found a way to provide essentially role-based access, if you will, to the auditor, so that they could see the inner workings of our AI management system.

Raj: Got it. And for folks who are not familiar with ISO 42001 — how should they think about it, and why should they think about getting ISO 42001 certified? Can you describe that a little bit?

Walter: I'll get right to the what's-in-it-for-me side of things. ISO 42001 is almost unique in that there are already laws on the books — for example, in Colorado — that specifically state that if you're ISO 42001 compliant, you will get safe harbor under certain provisions of, in that case, Colorado's AI Act. That's one example. And — this hasn't been finalized or confirmed, but I predict — ISO 42001 will become what is known as a harmonized standard under the European Union AI Act. What that means is that if you are in compliance with the ISO 42001 standard, you will have a presumption of conformity with certain provisions of the AI Act. Again, that hasn't been finalized; that's my prediction about how it will turn out. So those are two quick benefits.

More broadly, ISO 42001 gives you a system for complying with AI-related data privacy and cybersecurity regulation. It gives you a way to develop an AI governance program for your organization that looks at responsible AI architecture, data provenance, quality, bias — all the things that are key to developing safe, effective, and responsible artificial intelligence systems. Most importantly, ISO 42001 tells you how to build an AI management system and gives you a process for managing AI-related risk.

Raj: Got it. And how do you think practitioners should react to these changes you're talking about, and how should they adapt?

Walter: From a regulatory perspective, the laws are quickly piling up. I mentioned Colorado; in the United States, New York City has its own local-level law about artificial intelligence, specifically as it relates to hiring, and Texas is developing a very comprehensive responsible AI governance act — it hasn't passed yet, but it's in process right now. So there's a big slew of regulations coming. There's also a whole set of threat vectors coming — things that were not on the radar of security professionals previously: data poisoning, sensitive data generation, unintended training by AI systems. These are new on the radar of a lot of security teams. So understanding how your data can be impacted, and making sure that you have a flexible compliance program that can identify these risks — whether they're technical, regulatory, or what have you — is going to be really critical for companies that want to prosper in the coming decade or so.
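One concrete control for the unintended-training and data-leakage vectors mentioned above is screening prompts before they leave your environment. A minimal sketch follows, assuming a regex pre-filter is acceptable as a first pass; the patterns are illustrative and far from exhaustive, and `send_to_model` is a hypothetical placeholder for your AI provider call:

```python
# Hedged sketch: screen outbound prompts for obvious sensitive data before
# they reach a third-party AI service. Real deployments would use a DLP
# product or a vetted detection library rather than these toy patterns.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the labels of any sensitive patterns found in the prompt."""
    return [label for label, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

def guarded_send(prompt: str) -> str:
    findings = screen_prompt(prompt)
    if findings:
        # Block (or redact and route for review) rather than silently submit
        # data the provider might retain or train on.
        raise ValueError(f"Prompt blocked; possible sensitive data: {findings}")
    return send_to_model(prompt)  # hypothetical call to your AI provider
```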

Raj: You talked about the emergence of AI governance, and you articulated the security risk side and the compliance side of it as well. If somebody asked you how to build a program, what would your advice be?

Walter: I would recommend starting with a framework. The two most popular ones are the NIST AI Risk Management Framework and ISO 42001. The NIST AI RMF is essentially a US government standard — mandatory for the US government, not mandatory for anyone else, and there's no way to be certified against it — but it does provide a series of high-level recommendations for how to structure your governance program. That's certainly one option, especially for US-based organizations that don't need a formal external certification. The second is ISO 42001, which we discussed. A key piece of ISO 42001 is that it matches up relatively well with ISO 27001, the information security management system standard, and even ISO 27701, the privacy information management system standard. So if you have a whole host of regulatory compliance requirements, using either of those as your basis is a good idea. On the NIST side, you can likewise tie the AI RMF together with the NIST Cybersecurity Framework and the NIST Privacy Framework. Both families — the US-based one under NIST and the international one based on ISO — give you a good place to start. The key is implementation: both of these frameworks are high level, so the question is how you are actually going to achieve the objectives.

Raj: And talking of implementation — how can we use AI itself toward this implementation? Any ideas or suggestions?

Walter: StackAware obviously uses AI heavily. We make sure that our customer data is not being trained on and is not being indefinitely retained — those are the types of things that are top of mind. With those safeguards in place, we can use artificial intelligence to do a lot of cool stuff. For example, using it to brainstorm about potential threat vectors is a great use of the tool. You can summarize meeting notes and pull out key risks that maybe you hadn't thought about in the moment — looking at it after the fact, you get a better understanding of your risk surface.
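As an illustration of that brainstorming use case, here is a minimal sketch using the OpenAI Python client; the model name and system description are assumptions for the example, and — per the safeguards Walter mentions — you would confirm your plan's no-training and retention settings before sending anything sensitive:

```python
# Hedged sketch of threat-vector brainstorming: ask a hosted LLM to
# enumerate risks for a system you describe. Assumes the `openai` package
# and an OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

system_description = (
    "A customer-support chatbot with read access to order history, "
    "deployed behind SSO, calling a third-party LLM API."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a security analyst brainstorming AI-specific "
                    "threat vectors (e.g., prompt injection, data leakage, "
                    "unintended training). Be concise and concrete."},
        {"role": "user",
         "content": f"List the top threat vectors for: {system_description}"},
    ],
)
print(response.choices[0].message.content)
```

The output is a starting point for a human-led risk assessment, not a substitute for one.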

Walter: You can also use AI at a technical level. In security operations centers, there are tons of AI SOC startups out there automating a lot of the detection of, and response to, potentially malicious actors. It's useful in vulnerability management, too: the Exploit Prediction Scoring System (EPSS) is a predictive AI tool that leverages a proprietary formula trained on a lot of data that vendors provide for free to FIRST, the Forum of Incident Response and Security Teams. So AI is both a force multiplier for security teams and a challenge for security teams — the key is understanding how to use it securely and responsibly.

Raj: On EPSS, Walter — it's a 30-day exploit-probability model, right? Is there an open-source model? Because the EPSS model is not open: it's open to use, but the model itself is not open, right?

Walter: Correct, yes — the EPSS model is not open source; the outputs of the model are open. Essentially, FIRST is running it like a free SaaS app, in that they host the EPSS model and make an API available for calling to get its results. But no, to my understanding there is no open-source model that does the same. That is primarily because of the sensitivity of the data: the inputs to EPSS are attempted-exploitation events, and the providers of that data have concerns about potentially exposing it. They do apply a lot of security controls, like anonymization — they strip off the IP addresses associated with these exploitation attempts — but there are still concerns that opening it up too much could turn into a vulnerability itself.
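The outputs Walter describes really are freely queryable: FIRST publishes current EPSS scores through a public API at api.first.org. A small example of calling it (the CVE ID is just an illustration; requires the `requests` package):

```python
# Fetch the current EPSS score and percentile for a CVE from FIRST's
# public API (https://api.first.org/data/v1/epss).
import requests

def get_epss(cve_id: str) -> dict:
    """Return the EPSS record (score, percentile, date) for one CVE."""
    resp = requests.get(
        "https://api.first.org/data/v1/epss",
        params={"cve": cve_id},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    if not data:
        raise LookupError(f"No EPSS record for {cve_id}")
    return data[0]

record = get_epss("CVE-2021-44228")  # Log4Shell, as an example
# `epss` is the estimated probability of exploitation in the next 30 days.
print(f"{record['cve']}: EPSS={record['epss']} percentile={record['percentile']}")
```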

Raj: For practitioners who are thinking about setting up the AI governance — the AI GRC model — that you talked about, I think one of the challenges is that they have to build a case to their leadership. Any advice on how listeners can effectively build a business case and take this to their leadership?

Walter: I'd say there are three broad buckets of AI-specific risk that businesses are focused on, and it depends on what type of business it is. The first is confidentiality risk: are we leaking sensitive information to third parties? That's a big risk that some companies are primarily focused on. More B2C companies are focused on reputation risk: somebody abusing a chatbot, getting it to agree to something or say something that damages us — or even obligates us contractually, which is possible. We've seen the incident with Air Canada, where someone essentially got the chatbot to tell him something about the refund policy that didn't turn out to be true; that person sued Air Canada, won, and forced Air Canada to honor what its chatbot said it would do. The third kind of risk people are focused on is litigation: they're worried about being sued for training on data, or for using products that have been trained on certain types of data. Different organizations have different concerns, but generally they fall within those three buckets. I would pick whichever bucket is most salient to your organization, do a deep dive, and present it to your leadership: here are the risks, here's what's happened, here's what could happen, here's my assessment of the probability and the impact — and, as a business leader, what do you think we should do?

Raj: Okay. And for somebody who's inheriting a program — with some pieces of it existing, or maybe not — is your advice any different? Are there things they can specifically do?

Walter: I would say the general approach is the same. If you're taking over a brownfield program, so to speak, rather than building it from scratch, your approach may need to be different, because there will be different business units with certain interests and certain equities that you'll need to respect. So getting the lay of the land, understanding what the priorities of the various business units are, and then aligning your recommendations around those would be my biggest proposal. That's true for a greenfield program too, but with a greenfield program you'll have more opportunity to shape that conversation than if you're coming in on top of an existing program.

Raj: Are you seeing any innovations in the space as you look ahead?

Walter: Specifically from an AI governance perspective, or just an AI perspective that has governance implications?

Raj: The governance perspective would be a good start, and then you can talk generally about what you're seeing from the AI perspective.

Walter: I think the existing GRC-automation state of the art catching up is going to be important — I know there are some GRC providers that offer ISO 42001 modules. There are, like I said, the technical security tools that attack AI-specific risks. And I think autonomous SOCs will eventually be able to alleviate a lot of the burden on security teams.

Raj: SOC here as in the security operations SOC?

Walter: Yes, security operations centers — not Sarbanes-Oxley, and not System and Organization Controls.

Raj: So many acronyms!

Walter: So many acronyms, yes. Beyond those, where I see the biggest innovation is potentially agentic security or governance AI — AI systems that interact with other systems and autonomously enforce your policies, procedures, and regulatory requirements internally. I could see that. On the flip side, agentic AI from a business perspective is going to be a huge risk surface — a massive risk surface. Another red line I would draw, with very few exceptions: I would advise all of my clients not to enable the computer-control capabilities of Anthropic right now. You just don't know what the agent is going to do, and it's too risky — unless you have a very locked-down system where you know there's no way it can access sensitive data, it can't corrupt anything, and it can't embarrass you. Only under those very controlled circumstances would I say it's appropriate.
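For teams that do experiment despite that red line, the "very locked-down system" Walter describes can be approximated by isolating the agent's runtime. A hypothetical sketch using the Docker SDK for Python (`pip install docker`); the image name and command are placeholders, not any vendor's agent product:

```python
# Hedged sketch: run an experimental agent in a container with no network
# access and a read-only filesystem, approximating the locked-down setup
# described above.
import docker

client = docker.from_env()

output = client.containers.run(
    image="my-agent-sandbox:latest",   # hypothetical image containing your agent
    command=["python", "agent.py"],
    network_mode="none",   # no egress: the agent cannot call out or leak data
    read_only=True,        # it cannot corrupt the filesystem
    cap_drop=["ALL"],      # drop all Linux capabilities
    mem_limit="512m",      # bound resource use
    pids_limit=64,
    remove=True,           # clean up the container when it exits
)
print(output.decode())
```

The design point is that containment, not trust in the agent's judgment, is what keeps the blast radius small.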

So with agentic AI — or maybe a step back from that, something that is just talking to another system — there's a lot of business value there, but there's also a lot of risk, because we're going to need to rethink what it means to be secure. AI agents are going to make mistakes; it's just going to happen. They're going to do something that's wrong, and business leaders need to understand that and understand what their risk appetite is. Then the security teams can implement the controls that crank down on the functionality, or open it up, as decided by the business.

Raj: And how long do you think security and GRC teams have? What is your time-horizon prediction?

Walter: They have negative time. They're behind — they're behind already, and they're always behind. It's kind of impossible for them not to be, because there's nothing to secure until business and technology leaders invent something, and then you need to secure it. Obviously, baking in security from the beginning is an important approach, and something I'm a big fan of, but it's always challenging if you're trying to iterate quickly and get a prototype out there. It's good that security teams have threat modeling sessions, design reviews, code reviews, and things like that — those are absolutely important — but until there's something to review, there's nothing to review. Security is just always going to be at least one step behind the business. Security teams should get involved and get up front with the business as much as they can, but at the same time understand that the business is going to need to do things at a certain pace — and just be prepared for it.

Raj: What would you tell your younger self, or a new person who's entering this field — especially security GRC — right now?

Walter: I would say focus on hands-on learning; focus on developing skills through projects. Courses are great, certifications are great — I've got tons of certifications, and they're good frameworks — but you're still going to have to learn a certain tool. Get really good at a certain tool, figure it out, and focus on that, because there's going to be increasing demand for highly specialized people who can outperform AI in certain contexts, and less demand for people who are just generally knowledgeable about high-level security concepts. ChatGPT can probably pass the CISSP exam right now, so having a CISSP is not going to be a huge distinguishing mark for you — and I say that as someone who has a CISSP myself. I think being highly skilled in a certain tool is going to be very important. Focus less on credentials — that would be my advice.

Raj: That's great advice. We're almost at the end of the program, Walter, and I would love to get your recommendations: audiobooks, books you've read or listened to, mentors you look up to or follow — anything you can share with our listeners today?

Walter: Somebody I follow closely on the AI governance side is Tristan Broth — he's pretty active on LinkedIn. I also follow Chris Hughes on LinkedIn quite a bit. And I'm reviewing a draft of a book for someone right now that I think is good, but I don't know if he wants me to plug it publicly yet, so I will hold off — I may follow up and send it to you for the show notes if he says it's okay. That's going to be an interesting book. So those are my biggest recommendations. The availability of information is just continuously increasing, and there's a lot of garbage and a lot of very high-level fluff out there. So find the folks who are writing or speaking about the very specific use cases that you're focused on, and listen to every piece of information they can provide — because there are going to be some leaders out there who can see the future.

Raj: I think that's fantastic advice. And on that note, Walter, this was a great discussion — great having you as a guest. Thank you very much.


Listen on your favorite platform and stay updated with the latest insights, stories, and interviews.


Want to see how we can help your team or project?

Book a Demo