Is it time to rethink SOC 2? (Spoiler: Adam thinks so—and he’s got the receipts.)
In this insightful episode of Security & GRC Decoded, Adam Brennick, Director of Security Risk & Compliance at Cockroach Labs, joins Raj to challenge the status quo of SOC 2, compliance culture, and how GRC teams should operate in a modern, engineering-driven world.
With a unique perspective from leading both security and GRC functions, Adam shares why today’s compliance efforts often miss the mark—and how we can fix that. From his hot takes on “a la carte” SOC 2 to building automation-first programs that actually reduce risk, Adam brings clarity, conviction, and practical wisdom to the mic.
Key Takeaways:
✅ Why SOC 2 should be customizable—and how that shift would improve both trust and transparency
✅ How GRC, security, and trust functions intersect (and where they often break down)
✅ The role of “vibe coding” and AI in enabling GRC engineering
✅ Real-world strategies for building a balanced, high-impact GRC team
✅ How to make a bulletproof business case for compliance automation using data (not just complaints)
Take Action:
→ Reflect on your own compliance program: Is it outcome-driven or check-the-box?
→ Re-evaluate how your GRC, security, and engineering teams collaborate
→ Share this episode with teammates who care about making compliance actually matter
👉 Follow Security & GRC Decoded for fresh insights on how to make your GRC program faster, smarter, and more resilient.
🎙️ Security & GRC Decoded is brought to you by ComplianceCow. Discover how ComplianceCow helps teams move from reactive compliance to proactive control automation.
🚀 Liking the show? Leave a rating and review to help us grow and keep bringing you bold GRC conversations.
💬 Connect with Adam Brennick:
💼 LinkedIn: https://www.linkedin.com/in/adam-brennick-959352158/
🌐 Company: https://www.cockroachlabs.com/
Hey, hey, hey, welcome to Security and GRC Decoded. I’m your host, Raj Krishnamurthy, and today we have the awesome Adam Brennick with us. Adam is responsible for security risk and compliance at Cockroach Labs. Adam, welcome to the show.
Adam Brennick (00:35.415)
Thanks for having me, psyched to be here, looking forward to our conversation today.
Raj Krishnamurthy (00:39.028)
Yeah, absolutely, Adam. We usually start with a shock-and-awe question. So let me ask you this: what is one controversial opinion that you hold?
Adam Brennick (00:52.271)
I have a hot take about SOC 2, which in GRC could mean several things, I think. But my hot take in the GRC space is that for a SOC 2 audit, all of the criteria should be optional for the pursuing organization. And what I mean by that is, today, a company decides they want to go SOC 2, they want to get a type 1, type 2, whatever they feel.

They're forced to do the common criteria, right? Out of the gate. They have to complete the entire common criteria, all the sections there. And I think that what happens there is it pressures these organizations to accelerate the SOC 2 control set they have in their environment. That can lead to the kind of commodity space, right? Not throwing shade at the commodity SOC 2 space, but…

What it does is, instead of developing an internal control set that they can scale and use to cross-map across other frameworks, they adopt a platform that says, wait till everything turns green, call an auditor, and you've got SOC 2. What happens there is you get instilled with these practices that become check-the-box exercises, right? It's like, why am I doing security awareness training? It's good, and also we need it for SOC 2, right? Or, I have endpoint verification on my laptop. SOC 2 says we have to do it, so we're doing it, right? As opposed to saying, I'm going to design a security outcome that addresses these requirements. So what I think here is that it would reduce that barrier of entry for SOC 2, where a smaller organization that can't do everything in the common criteria and just wants to get it done to make a deal can adopt what they can achieve.

The auditors can audit against what they have there, and they can scale that program over time, which can show that maturation that everyone wants to see from a provider they're working with. And then, on the flip side, it would also force the other organization to actually read the SOC 2 report, which I know is just a… That might be a hot take in and of itself, right? Like, reading a SOC 2 report. But you wouldn't know, right? You would have to go through and determine what controls are in scope, what they have been assessed against, and then that can…
Raj Krishnamurthy (02:59.99)
No.
Adam Brennick (03:13.281)
trigger that questionnaire, the lovely questionnaire that everyone wants, and say, listen, in your SOC 2 you only covered common criteria points 1, 3, and 6, so I need to know more about your logging practices, if you have any. And then you can have that engagement, and maybe be a design partner for these controls, and you can design them over time effectively, as opposed to just shoving something in because SOC 2 said we had to do it.

The line I heard one day, and I didn't coin this, is that there's nothing more permanent than a temporary firewall rule. And in GRC, there's nothing more permanent than a control. I think more often than not, once a control is in there and you start doing something, the reason we do it becomes "because of SOC 2," right? So that's my hot take. I don't know if it's controversial or not, but I just feel like it's a way to…

allow more entry points into the SOC 2 environment, encourage that trust relationship between service provider and consumer, and foster more collaboration through that vendor risk process.
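The cross-mapped internal control set Adam contrasts with platform-driven SOC 2 can be pictured as plain data: controls an organization actually runs, each mapped to the framework criteria it supports, from which the "a la carte" scope of a report falls out. Here is a minimal Python sketch; the control IDs, descriptions, and criteria mappings are illustrative assumptions, not an authoritative SOC 2, ISO 27001, or PCI DSS mapping.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """One internal control, cross-mapped to the framework criteria it supports."""
    control_id: str
    description: str
    mappings: dict = field(default_factory=dict)  # framework name -> criteria IDs

# A tiny illustrative control set (IDs and mappings are made up for the sketch).
controls = [
    Control("IC-01", "Security awareness training for all staff",
            {"SOC2": ["CC1.4"], "ISO27001": ["A.6.3"]}),
    Control("IC-02", "Endpoint verification on employee laptops",
            {"SOC2": ["CC6.8"]}),
    Control("IC-03", "Centralized audit logging, 90-day retention",
            {"SOC2": ["CC7.2"], "PCI-DSS": ["10.2"]}),
]

def in_scope_criteria(controls, framework):
    """Return the criteria this control set actually covers for one framework,
    i.e. what an 'a la carte' report could declare in scope."""
    covered = set()
    for control in controls:
        covered.update(control.mappings.get(framework, []))
    return sorted(covered)

print(in_scope_criteria(controls, "SOC2"))  # ['CC1.4', 'CC6.8', 'CC7.2']
```

An auditor (or a report reader) could then test exactly the declared criteria, and the same control set cross-maps to other frameworks without rework.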
Raj Krishnamurthy (04:20.21)
That's a very interesting perspective. But what do you say to people who say SOC 2 has standardized the controls language, if you will, to a certain extent? And if you try to create this a la carte menu, it's going to be less standardized, completely free-form, and that makes it more difficult.
Adam Brennick (04:40.811)
Sure, I think that makes sense. But I think as a community, like in a group of auditors, I mean, if you're an auditor, say you're on the other side of the table here (I'm not an auditor, I've never been an auditor, besides internal audit), I feel like when you're entering a SOC 2 or any audit, it's incumbent on you to understand the testing procedures that are done for a control beforehand. So if you're establishing those testing procedures, and you and the auditors are aligned on that, and that can be codified in the document,

then I think it takes some of that ambiguity out of it. As opposed to saying, no, there's one approach, it's standardized, we're doing this approach, and that's how you have to do it. Because I've seen that happen before, right? Where it's like, no, for our SOC 2 audits or whatever, there's a standard approach to assessing that, and this is how we're gonna audit you for it. And that can adopt practices that aren't designed for your business, right? Like, to reduce that business risk that you're designing the controls to address.

So I would push back on "there's standardization here" to say that if we apply standardized audit practices to organizations, then the outcome is not going to be the same for every organization, because their objectives aren't always the same. And fundamentally, at the end of the day, all of this, beyond customer trust, is to reduce business risk for your organization. So I think if we have that more flexible approach, it gives

businesses the opportunity to properly address that business risk and meet their company goals, as opposed to, our business goal is to make money, get a SOC 2, get it out of the way, and then we'll move on with our lives.
Raj Krishnamurthy (06:18.424)
Got it. So what you're basically saying is controls have to be contextual, and they need to be commonsensical given your business context.
Adam Brennick (06:26.935)
100%, yeah. You have to do things that make sense for your business. And I'll probably say this a ton: security outcomes, right? That's kind of what I focus on in the GRC space. You don't do compliance for the sake of compliance; you do it because there's some security outcome from it. So to me, that's a lot of the goal there: this reduction of risk and a security outcome from anything you do inside of your GRC or security practice in your organization. So it's defensible, right? Like, if you say, I'm doing it and I don't know why, I don't know what the outcome is, I just know that we get a SOC 2 report or a PCI DSS AOC at the end of it, if that's my defense, then I don't feel like I have much of a leg to stand on with anyone. And I know if I was on the other side, I would be like, that doesn't make sense, can you dig deeper here? So when you have that security outcome right at the end, that makes it defensible. And you can have that back and forth with someone. And maybe you're wrong, maybe you're right, but at least you can have a formative conversation that you feel is beneficial for both sides.
Raj Krishnamurthy (07:22.092)
What is your current role, Adam, and maybe walk our listeners through your journey?
Adam Brennick (07:26.743)
Yeah, absolutely. So my formal title is Director of Security Risk and Compliance at Cockroach Labs. That sounds lofty. My day-to-day role is management of our security engineering and our compliance and risk teams at Cockroach Labs. In practice, right, they are different teams and they do serve different functions in the organization.

But it's, I think, beneficial to have some blended oversight there. Because something that was happening was I would be asked questions in the organization: hey, Adam, can we do this? Can we execute this control? Can we implement this new design or change to the architecture of our environment? I would be pulled in because I would generally be kind of a focal point for the conversation. Working in compliance, they would say, hey, are we going to mess compliance up here by doing this? Are we going to break anything in our controls for PCI or HIPAA or SOC 2? I would say no, but we need to have security involved in this. And they're like, aren't you security? Aren't you the person that's doing this? Isn't this the security sign-off? I'm like, no, no, no, it's two completely different things. I felt like it added a lot of friction to the process: people would feel like they were done with their security gate and have to be stopped while I looped someone else in from another team to go through that.

So when my boss moved up, I said, hey, I think it would be great if I could actually manage the security engineering and compliance teams in that way. Still the focal point, but I can divert to whichever team I feel needs to be involved in the design and architecture process, to keep things moving more efficiently in the organization and reduce some of that back and forth and friction inside the company. So that was an opportunity I jumped on. I advocated for myself, and I was able to, you know, move into this position, which has been great for me: to do, to learn, to grow, to have more experience. This is my first time, I would say, managing an engineering team specifically. So I'm learning a lot, I'm growing in that space a lot. A lot of bumps and lumps, I'm going to be honest, right? It's been a learning experience for me, but…
Adam Brennick (09:51.614)
You only learn from those bumps and lumps. You don't learn from constant success. So yeah, it's been a super fun ride here at Cockroach. I've been here four and a half years. Prior to this, I was at a company called MobileIron, which was mobile device management software. I think a lot of folks in the space are familiar with MobileIron, used it at some stop along the way. I was primarily focused on SOC 2
Raj Krishnamurthy (10:13.336)
Yep.
Adam Brennick (10:19.383)
for the product, and then supportive of our FedRAMP environment there. It was a brief stint; they got acquired by Ivanti, and I developed my own exit strategy there, onto Cockroach. A few steps along the way: at IGT, I was the program manager for their PCI DSS compliance space. I did a little consulting, basically supporting merchant services companies becoming PCI DSS compliant, in some capacity for a while.

And I was an IT manager for about six and a half years at Flextronics, now called Flex, in Austin. That's where I got more of my technical capabilities, because there was a lot of hands-on systems engineering, cloud storage (or private cloud storage), SAN engineering, and things like that. Like most folks, it's not a linear path, I feel like, in security and GRC. It's not generally like, you know, I finish high school and say, I'm doing security or compliance, and I just march forward and do that. I think you find your way in there based off of other people's opinions of the work you do, and it becomes interesting. It becomes a really cool, unique spot to operate in. And then once you get in there, you usually don't see a lot of people move out, right? Which is a really cool thing.
Raj Krishnamurthy (11:34.752)
I love the story. I think the story of progression is beautiful, particularly the opportunity that you took from GRC to security; it's a very, very inspiring story, and I love it. Just so that our listeners have a little bit more context, Adam, what does Cockroach Labs do, and how do you see compliance or GRC and security fitting into what Cockroach Labs does?
Adam Brennick (11:56.598)
Yeah, absolutely. So Cockroach Labs' flagship product is CockroachDB. It's a distributed SQL database. The moniker Cockroach comes from it being indestructible. I think one of our core business drivers in the product is resiliency. It's a hyper-resilient product. It's distributed SQL, right? So it's very easy to scale the product by adding nodes that automatically join a cluster, with low overhead
Raj Krishnamurthy (12:08.098)
Mmm.
Adam Brennick (12:24.031)
and operational costs for the database itself. So really, really slick tech, cool product. Our founders are ex-Googlers; Spencer Kimball and Peter Mattis actually wrote GIMP, the image manipulation software. When I was applying, I was like, no way, I used that all the time in college, this is incredible. So really, really great, you know, technical depth and knowledge here at the company. We have two offerings of the product. So folks can

buy the binary and run it on their own commodity hardware or in their own cloud environments, which we call self-hosted. It's a self-hosted version of the product. And then we also have a managed offering, akin to Mongo Atlas, the RDS offerings in AWS, or Spanner in GCP, where customers can put their workloads in a public cloud environment of their choice. They can choose AWS, GCP, or Azure.

And then our teams manage the operational efficiency of the database on the back end, and the security pieces, which my team oversees, providing the perimeter security and then the interior security of the day-to-day operations of our teams there. So yeah, it's an amazing product. I tinker with it all the time. I'm in the process of building out some internal automations with some Cockroach on the back end there. So if folks are interested, you can go to cockroachlabs.cloud. We have a basic tier, which is free; folks can spin up clusters and practice. And we also have Cockroach University. You get free certifications, highly recommended. If you're interested in distributed SQL, take a look at it, because yeah, it's a sick product.
Raj Krishnamurthy (14:11.182)
And in your role as the head of security and GRC, do you primarily deal with customers? Can you describe the nature of that work, how you work with customers?
Adam Brennick (14:24.139)
Yeah, 100%. So we get involved with customers in various manners. I'll start on the security side, and then I'll move over to the GRC side here. On the security side, we will interact with customers for various things, I guess, that will pop up. First is, hey, we found a vulnerability, right? So like a disclosure program. And that can be

customers who we have a relationship with, who will raise that through internal channels, or just the security researchers out there who say they stumbled across a bug. I still consider that a customer, because they could be running the software, even though some of that stuff is spam. But in those instances, we'll take all the data and run it through. We have some internal data banks of commonly discovered vulnerabilities that have actually already been triaged, and we provide answers to them, which can be as simple as:

thank you for the information here, but here's the fix and here's the details. Up to, sometimes with customers, they may identify a true positive vulnerability, and we'll work directly with them to triage and get a fix. Which can be as simple as something bespoke to their environment that we help them with, up to, hey, this is actually a bug in the product, and we can submit a technical advisory to repair the product in whatever manner.

Next, kind of blending the security and compliance space, is getting on phone calls, right, with customers, whether that's a prospect or an existing customer, to answer questions they have. A database is the lifeblood of pretty much any application, right? Like, you can't have an application without a database on the back end. So teams are always keen, right, to talk to us if they have questions about configuration, specifically in the database,

product roadmap items that are coming up, getting new feature requests shared with our PM team to get things into the product that might not be there, and then also making defense statements if there's something in the product that we haven't prioritized yet, as to why we might not. Now, my team isn't specifically product security, in that we have a separate team that actually writes product security features into the database itself. We're more on the security engineering piece, but we work very closely with that team, right?
Raj Krishnamurthy (16:29.902)
Mmm.
Adam Brennick (16:46.369)
We can help them threat model and go through exercises to make determinations about how some of these features might work in the real world while the database is operational. So we spend a lot of time interfacing. We're trying to get more field-CISO-ey, like my boss, Mike Guien, who oversees everything security, compliance, and corporate engineering and IT.

You know, we're trying to figure out, right, like, how can we be more field CISO? How can we engage more with customers and be more proactive about it? I think it's something that we want to do, but it's not easy, right, to figure out how you evolve that practice. How do you be proactive? Because security is usually an incoming thing, right? On the security side, I won't say rarely, and I think it's becoming more common, but it's still unique where it's, hey, our head of security wants to talk to you. And it's like,
Raj Krishnamurthy (17:22.606)
Totally.
Raj Krishnamurthy (17:39.49)
Mm-hmm.
Adam Brennick (17:39.958)
What is this? Did we get breached? Why do they want to talk to me? But I think it's a good way to foster that trust, right? And put a face to the name, right? Like, I know Mike, I know Adam at Cockroach. I know them, and I can, one, talk to them if I have questions, but two, just grow my professional network and have more resources to talk to about things that come across my mind. So we're still figuring that out. We're spending time on podcasts, writing blogs to…
Raj Krishnamurthy (17:47.127)
Yeah.
Adam Brennick (18:08.767)
interact with customers a bit more. Then the last piece of that skews more toward the compliance team, who manages the customer questionnaires and the audits that get done, right? So we do get audited by our customers, which is very much like any other regulatory or compliance framework audit, right? Where an auditor for the customer will be assigned to our team, and we'll work with them directly to provide evidence to them beyond what's provided in our

reports, whether it's SOC 2, PCI, HIPAA, whatever they're concerned with. And we'll work directly with them, shipping evidence to them, getting on calls, asking questions, going a little bit deeper into our security posture and validating that our controls meet their controls. Then we get the normal, hey, here's a spreadsheet, fill this out. I still feel like that's interaction with customers, though

there's a layer of disconnect there, because you're more or less dealing with emails and back-and-forths, so you lose some of that human element. But there's still interaction with the customer, because we might chat with someone through a portal or via email, going back and forth. So I'd say that's the gamut on the team here: how we interact with customers, and also the forward thinking about how we can be more proactive about that.
Raj Krishnamurthy (19:27.086)
Okay, so when you had this hot take, that SOC 2 has to be common sense and it has to be contextual, was that more from the perspective of you being a vendor and the way that you look at your customers, or was it more of being a customer in the way that you deal with vendors, or is it both?
Adam Brennick (19:47.778)
Can I be honest with you? I had had like three bourbons, and I don't know why it popped into my mind. But yeah, I had had a couple of drinks and I was on my computer, and there was someone new on our team who I was getting up to speed on what SOC 2 is, the whys and the hows, and I was reading through everything here, and I was…
Raj Krishnamurthy (19:52.216)
Ahahaha
Adam Brennick (20:14.999)
thinking back to, you know, trying to implement SOC 2 here specifically, because really, prior to Cockroach, I'd always inherited a SOC 2. This was kind of my first time actually trying to build out a SOC 2 program. And I just remember reading through it and being like, I don't love this idea. I don't love having to implement this thing just because you say I have to, and trying to navigate it and figure out this weird thing. Because SOC 2 has that

leniency; it doesn't have to be the letter of the law there. And I was thinking about her joining the team and being like, if I was in her shoes, what questions would I ask? And the question would be: why do we have to do all this? Why do we have to do everything here? I don't feel like everything in here, like control 1.1, is as important as 9.1, right? I'm just plucking numbers out of the air there. But I feel like 9.1 is way more important and reduces risk in our organization, like, tectonically more

than this other control. So why are they weighted equally? Why are they the same? So yeah, I don't know. It was a mix of hot take and a couple of drinks, and just trying to reposition myself as someone who hasn't been doing this for like 10 years. It's like someone says: if you could build a computer today, it probably wouldn't be a laptop. But the laptop has been the laptop, and it'll be the laptop. So I was trying to just put myself in that…
Raj Krishnamurthy (21:18.307)
Makes sense.
Adam Brennick (21:41.752)
Hey, if I was starting fresh, right? If I was doing something new, what would be the questions that I would ask? And this is one of them: why do I have to do everything here? Like PCI DSS, right? I can N/A things. I can say, this isn't applicable, I'm not doing this, right? Nope, nope, nope, nope, nope. And if it's defensible, I can. And yeah, for me with SOC 2, it's like, why can't I just say I only want to do this section of it? So yeah.
Raj Krishnamurthy (22:02.072)
Makes sense. And you have an interesting role, Adam, because there are three functions; these are typically three distinct functions, right? Security, GRC, trust and assurance. You seem to have all three. So from that perspective, how do you see the intersection of GRC and security? And maybe you can add privacy to that circle as well. How do you see that intersection?
Adam Brennick (22:27.935)
Yeah. I like to think, you know, there's the "security isn't compliance" argument, right? Like, security does not equal compliance. I understand the spirit of that argument, but I disagree with it, in the sense that I feel like compliance should be the floor. It's the lowest you're willing to go, right? Like the baseline. Yeah, this is the lowest I'm willing to promise you that I will do for you, right? Because…
Raj Krishnamurthy (22:47.448)
Best of luck.
Adam Brennick (22:56.215)
Because ultimately, that compliance piece is for your customers, more or less. No one says, I have no customers, I'd like to do an audit. It's not going to happen. This is for the customer. So it's that baseline. It's the lowest rung we're willing to go to here. And then your security piece is where you operationalize that baseline. And then you figure out, in what other areas that we aren't auditing against
Raj Krishnamurthy (23:02.392)
No.
Adam Brennick (23:23.457)
do we need to put energy and effort, right? It's kind of like the CIA triad on the security side, where you're specifically thinking about data. And then the compliance piece, right, is that baseline, that policy-driven work that can help drive the work that security does. So if we talk about it from a Venn diagram standpoint, I think the commonality, where the overlap would be, is, I would say, trust and, like, policy, right?

Because policies can't be written in a vacuum. You can't have your compliance team writing policy, delivering those governance docs for your organization, and then just spraying them out to the organization and saying, here's how you have to do work. There's distinct collaboration between those two teams when you're writing those policies and issuing them to the organization, because not only security but, I think, broadly the organization needs to have a say in that,

because they're the ones that have to abide by it, right? So if you're writing a policy in a vacuum, if your GRC team is all the way on the right side of the Venn diagram and they're just authoring things in a vacuum and then pushing them out, no one will adopt it, first of all, right? I can't do it, I'm not doing this, get out of here, right? And then when you're trying to audit against it, nothing's going to align, because you say in your policy you're doing X, and nobody is doing that here. So it falls down.

The trust piece between the two, I would say, and probably the assurance piece as well, right, like trust and assurance, is that the security teams, I feel, have more visibility into the data that's being generated that supports your customer trust and how the compliance team assesses, right, the overall health of the program. So it's making sure that the teams are aligned on:

we're implementing this new security control, whatever it is. How can we test that? How can we get the data for it? And how can we trust that the data is accurate and timely? So I think there's that healthy relationship between the two teams: how does the security team, who's more involved in, you know, collecting the data or working in the systems that are supportive of the compliance controls, get that trusted data to the compliance team, to then help make
Adam Brennick (25:52.182)
decisions on: this control is ineffective, or the data we get doesn't align to policy, right? Like, there's something misaligned here. So I see that as the Venn diagram, and then privacy being kind of the underlying thing that undergirds, I guess, all of this, in that we all have our own data that is in various systems everywhere. So I think both

security and compliance need to be mindful and understanding of data privacy, the principles of data privacy, and how it affects not just the business but the people, right, on the other end there. Because the privacy piece is there: I think it's on the compliance team to understand the regulatory bodies, how do we meet the standards so we don't have litigation

that hits us; the security team being able to help bridge the gap between regulatory and technical, right, on the implementation piece; and then all together working to ensure that we're instilling a privacy program that hits those marks, whether it's data minimization, you know, obfuscation, encryption, right, kind of all those core tenets of data privacy there. So it's…

Yeah, privacy is that one that kind of sits quietly underneath everything. And you have to be aware of it. I feel like it's always whispering in the back, right? Like you always hear GDPR, CCPA, right, anytime anything happens there. So that's how I kind of see that relationship between the two teams. I hate to use the word synergy, but there is kind of that synergistic relationship between the two, where you're driving towards the same goals
Raj Krishnamurthy (27:32.534)
Nyeh.
Adam Brennick (27:50.922)
at the end of the day, but the pillars that you operate in are unique and have their own kind of North Star, right, in many instances. And it doesn't mean that they're completely separate from each other, but it does mean they are unique in and of themselves.
Raj Krishnamurthy (28:07.95)
No, it makes absolute sense. So they each have to do their function, but they have to come together, right? And, sort of like the word that you used, synergy, they have to be able to synergize amongst themselves. Makes sense. When we spoke last time, you said something like security is the steward of GRC. In fact, you used the word GCR, and I don't know why. So maybe a two-part question: describe why GCR and not GRC, and describe
Adam Brennick (28:18.209)
Yeah, 100%.
Raj Krishnamurthy (28:36.184)
to our audience why you said security is the steward of that implementation.
Adam Brennick (28:40.193)
Sure, absolutely. So, who knows, this might be a hot take too. But when I think about GRC, we'll call it, I really feel like the workflow is G, C, and R. And what I mean by that is, governance is first. So that's your policies, right? Those are the documents that mandate the way you do work in your organization, and those are the first things you start with. You can't just go into, you know,

compliance if you have no policy, right? Compliance is adherence to those policies. That's the foundational meat of what compliance is. So you write your governance documents, you build out controls and testing procedures for adherence, which is that compliance, right? That C. And those are all in support of reducing risk for your organization. So I feel like what you're doing practically,

G, C, and R, is the actual workflow there. And I know I'm being pedantic, because it doesn't matter, right? GRC, GCR, who the hell cares? But I don't know. I look at it and I feel it should be GCR, because that's how it goes, right? Those are the stages. But yeah, that's how I look at it, and that's how I see it. I think, right, the core tenets of

governance, risk, and compliance are your policies, your adherence, in the support of risk. Then, on part two of the question, as for security being the steward of GRC implementation, it's a couple different areas there. I think, first, security is the one that generally implements, right, the controls

that you hand off. It could be other engineering teams, but generally that's under the oversight or the support of your security team. They tend to be a little more involved directly with the engineering teams. GRC teams, I think, are getting more technical, but have traditionally been, I think, a little less technical, maybe, in a lot of instances there. So…
Adam Brennick (31:03.745)
Having someone who might understand like software development practices down to like software composition analysis, right, to that level. Having the security team kind of bring the controls to the control owners in a lot of instances and helping develop that being that like that shepherd, right, of these policies to the business and to the organization is something that I think is, you know, a critical role in developing that and then implementing.
all of these controls in the organization. So acting as shepherds of these processes, and being either directly responsible for them or developing data pipelines to bring them to the GRC team, so it has oversight of the work that's being done. That ultimately falls to the GCR team to validate, and to feel like we have all our assurances in place. We're audit ready, whatever
your company deems as the thing that needs to be done inside of your GRC, GCR program.
Raj Krishnamurthy (32:11.768)
Totally. You remind me of the Amazon logo, where there is an arrow that goes from A to Z in Amazon, and I love that logo. And the way you're describing GCR is almost like, as I was mentally picturing it, the G and C point towards the R. So essentially, that's the workflow you're talking about. That's beautiful. But on the second part of the question, are you making the claim that security is the steward, or the shepherd, like you called it?
Is it because they are more technical and they have more knowledge on these controls and the operations and the procedures? Is that what you're saying?
Adam Brennick (32:49.103)
In a lot of instances, I think so, right? I'm not trying to make a blanket statement here necessarily. I think just a lot of times in my experience, right? And it doesn't mean that, like I said, compliance teams are non-technical. They vary, and people inside the team vary in levels of technical capability and what they do. So there are instances, right, where you're on the compliance team working with your HR team,
and you're doing something non-technical, right? A lot of times your compliance team may work directly with your PeopleOps team to say, like, background checks, right? How do we process that there? But I feel like security is in large part involved in the practice in that they provide that oversight into whatever mechanism is occurring there. Because you can do something as simple as, like,
background checks. We need background checks. There's another layer of complexity in there: those background checks contain personal information about someone, right? And you can say, yeah, compliance can get the whole background check, and someone on the compliance team can be like, I don't want this whole background check, right? You have that kind of foundational knowledge, but I do think oftentimes there's a layer in between where you may need security teams to come in, and you say, like, hey security, we wanna get background checks from…
our people ops team, there's private information in there. Can we work together? Can you be that bridge for us to figure out how we can get this process signed off and built out, where we can get the information to us, but we're not breaking anything or, you know, having any sensitive information where it shouldn't be in the organization? So I think oftentimes we see security being that,
that bridge and that engagement between a lot of areas and they’re involved. And oftentimes, right, like depending on how your teams and people in your organization interact with each other, security can often be that first line, right? Like I’m gonna send a request to security. I’m gonna open a security ticket. A lot of times things don’t come to the GRC team directly unless someone’s like, I know our control set. I feel pretty confident this is a control violation. Compliance.
Adam Brennick (35:18.347)
Can you validate if this is a control violation or not? I think we often find, right, that those questions come to security, and flow into security oftentimes, for those day-to-day operational things. And they can be kind of that layer, right, between GRC and the day-to-day ops, to bring information between the two teams together.
Raj Krishnamurthy (35:40.056)
Got it. And so you come from the GCR ranks into security. How technical should the GCR teams be?
Adam Brennick (35:53.815)
That’s a wonderful question.
I think the GCR teams should start with being technical in the domain of GCR. And what I mean by that is being able to interpret a control, like the letter of the law versus the spirit of the law. Because I think oftentimes what I’ve seen in the past is that someone will read a requirement from, like, say, PCI and it says, you need…
whatever, like you need to do firewall rule checks every six months, right? So they read it and they say, every six months we recertify these firewall rules, and I'm going to schedule a meeting, and we're going to get together, and we're all going to go, yeah, it's fine, right? And then move on with our life. So what are they really trying to say there? Are all your connections approved? If someone's making a change to your firewall, is one person allowed to do that? Is there security around that? So,
reading that, understanding the spirit, and then being able to speak to people, whether it's security, control owners, or an amalgamation of those teams, and explain the spirit of the control, interpret a technical response to it, or a technical outcome of it, and determine if that's enough or not. So I think it's not necessarily technical. And this is the low rung, I would say, of technicality in a GCR team:
to be able to make that interpretation, and to be able to technically look at the control and the testing procedure and say, yes, this is sufficient, and that's defensible for me to take to an auditor and say, it says every six months, but listen, every single time someone makes a change to an AWS security group, it goes through infrastructure as code, it's scanned, it's approved by a second person. So we don't need to meet every six months, because look, we audit these checks
Adam Brennick (37:54.594)
to make sure they happen every single time. So what we have is more robust than everyone scheduling a meeting every six months and us getting together and going, all good on that side. So that's that kind of low-level technicality that I see. And from there, I feel like it's evolving, right? The GRC engineering role is becoming more prevalent. You're seeing podcasts like this, right? And ComplianceCow, right, I think has
become really involved in that GRC engineering space, where it's not, hey, I'm gonna ask someone to build this for me. It's, hey, can I build that myself? Can I have low-level access to some AWS APIs, and am I able to just bring the data to myself from these systems? And that takes, I'd say, that next level of
technical chops, where it's not just being able to technically evaluate work that's occurring and controls to get information. It's building those testing procedures myself and delivering the data to myself. So that's, I feel, emerging. It's been there for a while, right, this GRC engineering, but I've been seeing more buzz around it and more push for, how do we be more proactive? How do we build these things? And then,
one of my favorite things to do is vibe code, right? So I think with AI being so prevalent now, you don't need to be like, I've got seven years of CS under my belt, and I understand Golang top to bottom, and I can write Python scripts with my eyes closed. That's not a requirement so much anymore, because you can start leaning on AI. And obviously that's dangerous; it brings its own set of challenges, because nothing's perfect there, but
Raj Krishnamurthy (39:24.078)
Mm.
Adam Brennick (39:49.376)
I do feel like that ability to open an IDE, if you have Cursor or, I'm blanking on GitHub's Copilot, yeah, Copilot, right, in your IDE, you can do that. Or you can just have ChatGPT open, right? How does this look? Looking good. That vibe code back and forth. And it allows teams to get that more technical level of GCR in their team without that,
Raj Krishnamurthy (39:59.448)
Copilot.
Adam Brennick (40:19.307)
you know, years and years of, we have to go hire someone who understands the language we want to build these automations in. So I feel like it's going in that direction. I feel like AI is really opening the door now for teams and folks that aren't just native engineers building these API calls and these large data transformation sets all the time:
you can start doing some of that yourself, and you become less reliant on some of these other teams to do it. So, like I said, I think the low bar is that technical assessment, where you're not just going by the letter of the law every single time; you can actually extrapolate what you need, and be able to speak technically to it and technically assess what a testing procedure is. And then that next bar up is developing yourself, right? Writing some things yourself, doing some compliance automation yourself
through engineering. And then I don’t know what the next tier is. Who knows? GRC architecture, GCR architecture, right? Like it could go to there.
Raj Krishnamurthy (41:26.7)
No, I think you are making an interesting point because I absolutely agree with you because we believe fundamentally that GRC, GCR is as much an engineering discipline as security is. So you’re absolutely right. You have to talk about the engineering principles. You have to talk about the architecture principles. And then I think GRC engineering is gaining steam. So you’re absolutely right. Who would you hire today into this GRC role?
What background do you expect them to have?
Adam Brennick (41:59.096)
I've tried to build the GRC team as a balanced team. A frustration that I had when I was coming up throughout my career is that oftentimes, I think, in organizations (and it depends on size, right? Bigger organizations and smaller organizations can take different approaches), it does feel like at times GRC can be very siloed, right? It's like, this is my SOC 2 person, this is my risk register person, this is my privacy person,
this is my assurances person, right? And they all have their roles, and they're there, and it becomes a toil on them: I feel like I can't take any time off, because we've got the PCI audit coming, and I'm the responsible person for PCI, and my manager is putting pressure on me because I gotta be here for it. So I can't. And then beyond that, you can't really grow. You don't learn. It's like, hey, you know, I'd love to be more involved in our risk register and
learn more about FAIR and all these types of things, and it's like, no, we can't have you doing that, you're too busy with this work, and you're stuck there. So from my general frustration throughout the course of my career, kind of being stuck in that, right, starting in PCI and just being there and only there, I've always wanted to build teams that are balanced. You need to come in, you need to be willing to learn everything we do, and be responsible for it. Well, I'm responsible and accountable,
like a RACI matrix, right? At the end of the day the buck stops with me, but you're accountable to be able to know everything, right? Not saying you have to be an expert in everything, but I want you to have the willingness to want to learn everything and the aptitude to be able to wear multiple hats, right? And I think it can sometimes create some frustration with the context switching, right? Cause it's like, I'm looking at this, now I gotta go look at that, now I gotta go look at this. But
it also, I think, leads to a really balanced work profile, a history for someone, a resume builder, where I'm not just writing on my resume that my organization did all these audits; I can actually, practically share experience that I have in these domains. And it helps to really just grow this nice, balanced team where everyone can work together. And when someone wants to…
Adam Brennick (44:23.029)
someone wants to take a month off, go for it, we got it, right? Everyone knows everything that's going on here amongst the team. Depending on the level of the role in the organization, when I'm hiring a GRC team member, I am looking for, you know, folks that have an understanding, some background, in software development. And it doesn't have to be high-end; it can be scripting, it can be, you know, low-level pieces, but…
Have you interacted with a cloud provider before? Vibe coding is number one on the list. I say, what's vibe coding? And if I don't get a good answer, I'm sorry, I'm going to have to stop there. Yeah. And I think the use of AI depends on where someone's at in their career. If you're hiring for a more senior role, someone that might have been around a while, they're like, I don't…
Raj Krishnamurthy (44:55.138)
Does vibe coding count?
No, no, no.
I don’t know, you’re not hired.
Adam Brennick (45:20.105)
I don't use any of that stuff, I don't do it. That's not a barrier, but it's interesting to see. There was a discussion recently (I wasn't a part of it), but we have tech leads at Cockroach, and there was a tech lead summit at Cockroach. And the debate came up: would you let someone use AI in an interview? And my immediate reaction was like, no, of course not. But then I was like, wait, hold on, it's kind of like an open-book test, right? At the end of the day. And I think that's going to
change over time the way we approach these interviews, because the answer is there in the AI. And if someone using an AI can get the right answer faster, and they know how to do proper prompt engineering and things like that, that's a valuable asset. I don't want to say more so, but it can be equal to someone who just knows the answer, or can figure out the answer by using manual tactics and questioning and tab completion right there. So it's a unique way to think about it, where it's like,
Raj Krishnamurthy (46:12.536)
Totally.
Adam Brennick (46:17.623)
you hear it, and it's like, AI in your interview process? It's like, no, immediately. But then I'm like, maybe, yeah. It's not a bad idea. But I'm also not talking about someone having ChatGPT open on their phone, and they go, tell me about, so you want me to tell you about a time where this happened? And then they read off the answer. Not that kind of AI use. But I think prompt engineering will become almost like table stakes for a lot of roles in GRC and security and beyond.
Raj Krishnamurthy (46:30.094)
Ha ha ha!
Adam Brennick (46:45.727)
Right? Like that's part of it. But I see it as, I want someone balanced. I want someone who can solve problems, right? And I want someone who likes the idea of not being micromanaged. Because I feel like in, you know, GRC and security and a lot of these roles, you want people who want to go find problems and solve them, and not be directed to problem-solve, right? Like that happens.
Raj Krishnamurthy (47:12.376)
So in this world of vibe coding, hyper-automation, GRC engineering, and all the things that we talked about, what is the nature of your relationship dealing with auditors, as a GRC professional or a security professional? Does it change, and how do you cope with that change?
Adam Brennick (47:32.44)
Yeah, I mean, I think on the AI piece of it, it's not necessarily emerging, right? AI has been around for a while. But the hypergrowth in the space is emerging. And it's creating these new, I don't know if challenges is the right word, but perspectives that we need to maybe shift about how we operate and how we do things.
I think one thing is like the ISO 42001 standard is new and
I'm seeing a lot of variance in the outcomes of those: how auditors are auditing it, how people are reporting on it. You get your certificate, but then a statement of applicability, right? What does that look like? What are we committing to? How are we putting that forward to our customers? And how do they interpret it? Right now, that is all emerging. And it's something where, like, I know folks want to get in on the ground floor of it
early, by being an early certifier, because you kind of get that leniency maybe from an auditor, right? Because they're not fully sure what they're auditing,
and you're not a hundred percent sure on what you're supposed to be meeting here. So it's finding that sweet spot, from a partnership standpoint, for some of these new and emerging frameworks and technologies. And I'm really excited to see where the GRC space goes with AI. Cause, you know, there is that 42001 standard, but I feel like there's more coming here. I feel like there's going to be more, deeper maybe into the LLM level, right, and how that's managed and how your organization works with those. And then, you know, generative AI, right?
Adam Brennick (49:12.493)
Like, how are we assessing generative AI? Should we be looking at that differently? If in my organization 25% of code is generated through AI, should we be assessing that differently? Can it go through our normal process, where we have an approval, we have a human approver?
We have scans on it, whether that's through PR gating or testing. Is that enough? It's tough to say right now. You don't know. You feel like it makes sense, but maybe it shouldn't be. I feel like there's going to be movement in that space. And then with auditors, like,
how much of the engagement with auditors will move to AI for validation? I feel like there'll always be a human element to the audit practice, right? And there's always that relationship you need to build with your auditor. They need to be a partner, not an adversary, or someone who's forcing you to do things. But
yeah, with the emergence of AI, it's, how can we streamline the process between each other? Will we come to an agreement on how we want to leverage AI to validate evidence, but maintain that relationship where we're not just pointing two AIs at each other and saying, fight, whoever wins; either I get a qualified opinion or not on my audit report there. So I think there's always that balance between auditor and auditee, where it's a partnership
coming to the same goal to make Cockroach Labs or whatever organization more mature from a security perspective and promote trust. And then layering in these new elements of all this emerging tech.
Adam Brennick (51:06.417)
How do we integrate it into our business practice? How do we properly assess it? And then how do we use it to our own advantage, so we're not spending four months to do an audit? Maybe we can get this down to two months, or a month, by doing that. So I think there are going to be unique partnerships occurring between organizations and audit firms to really come together and do that. Cause I think if it happens in a vacuum, where the Big Four go, here, this is how you
Raj Krishnamurthy (51:18.774)
Absolutely.
Adam Brennick (51:36.414)
audit with AI, and the outside organizations, the auditees, aren't involved in that process, it seems fraught with disaster. So I hope as this technology builds, there's a healthy relationship and bond
between audit firms and companies, and there can be a coherent strategy to it that makes sense and promotes value for everyone, and really gets that juicy AI benefit that we all want: where the, you know, menial tasks that are wasting time, that we feel could just be pushed away, are off our plates, and we can focus on more productive work that we feel is going to add value.
Raj Krishnamurthy (52:16.642)
No, that's actually brilliant. In fact, that's a brilliant call to action as well, I would say. Brilliantly put. But one of the fundamental things that I struggle with is that traditionally, audit, and even security for that matter, has been a very deterministic practice. We write code, we pull data, or you take a screenshot, which I don't know why anybody does anymore, but you take a screenshot.
And it's sort of very deterministic in terms of how we apply and how we test, right, whether it is test of design, test of effectiveness, or whatever that is. But that is changing, right? Because generative AI is inherently probabilistic. So these two things need to come together. And it is no longer, you know, like when we used to write models earlier, where you could write very clear explainability models that could determine what caused that outcome. It's actually more difficult to do that with LLMs. So how do you see
these philosophies reconciling: the deterministic principles with which we have operated GRC, security, and audit before, and all these fantastic productivity tools that are coming about but are still inherently probabilistic in nature?
Adam Brennick (53:27.135)
Yeah, I mean, it’s a wild challenge to think about, right? One thing that we’ve seen recently, and this is not in the GRC space, this is more in the AppSec space, that’s been helpful is…
We've been exercising some application security tools that scan code for dependency vulnerabilities or static code analysis vulnerabilities, and use some AI on the backend to try to triage this initially. I think those tools were previously more deterministic. It would be like, you have a critical, go fix it. And it's like, you don't have the context. Yep, yeah, exactly. Go do it. Layering AI in there as an assessment tool…
Raj Krishnamurthy (54:01.516)
Yeah, exactly. Yeah. Look at the CVSS score, right? Exactly. Yep.
Adam Brennick (54:13.655)
One, it can give you some good details on probability. I mean, you have EPSS scoring, which is its own thing; it uses parts of that. But beyond that, it's almost like an AI agent or an AI assistant that's going to go through and look at the code. And specifically on the static code analysis side, candidly, we're a SQL database. We handle a lot of SQL inputs. So you run a SAST scanner through our code base, and it's like, you have a trillion SQL injections. It's like,
Raj Krishnamurthy (54:19.15)
Totally.
Adam Brennick (54:43.679)
it's designed to handle SQL inputs, come on. So something that we've seen in some of the tools is an AI assist that gives a recommended fix, but then also has a scoring on how confident it is that this will actually fix the problem there. So it's almost like the AI watching the AI: I have a probabilistic outcome of what's gonna happen,
but I'm also not high-confidence that this is right. So having some of those gates built in. And you can even apply that to the GRC side. I want a quarterly access review; I want it done through an AI agent that runs and builds that model. And one, does everything look right? Can I say with high confidence, or high probability, yes, that everything is right? And then from there,
trying to leverage that AI for a probabilistic outcome of, how much risk is this generating? And I know it's tough, without the business context, to necessarily say, hey, you've got 40 people with full admin access to your cloud environment, is that good or bad? But I think at the end of the day, if you have quality data in to help support those AIs to make those probabilistic outcomes for you,
then you can see that improve over time, and you can get those windows shrunk down, right, where you feel like that probabilistic outcome is good and valuable, and you can build that trust there. Cause with anything AI-related, right, it's like the first time you try to use an AI phone call, where it's press one, press two, and you go through that phone tree, or it's like, tell me what you want. It doesn't work. The next time you call that place, you're like, operator, operator. I don't want that.
It's very much the same with these things that are looking for probable outcomes. The first time it's wrong, everyone's going to be like, I don't know, I remember the last time I used that, it said I needed to do this to fix it, and I made the whole code base go belly up because it missed this dependency call somewhere else in the software. And it immediately tanks that trust. So I think it's building in another layer of confidence with the outcomes from AI that you can use to help, one, tune,
Adam Brennick (57:04.629)
right, and get better data, so you're not getting garbage in, garbage out at the end of the day. And having that ability to help say, this is a good answer, this was right. Use that to build those probability outcomes, and then continue to train that to a level where you have high confidence that you're getting quality answers, whether it's a control assessment, whether it's a vulnerability assessment, anything. So you're building that confidence level, and you don't feel like, well, I have this AI assistant, but I have to babysit it, because I feel like
I don't trust every answer, so why even have it? Why even look at this? I'll just do that more deterministic approach, where I'm spending more time rooting through and running my own crafted tests, but I feel confident, because I know that I did everything alone.
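The confidence-gating idea Adam describes, where the AI scores its own suggested fix and only confident calls are automated, might be sketched like this. The `Finding` shape and the 0.85 threshold are illustrative assumptions, not any real scanner's API.

```python
"""Sketch: route AI-triaged findings by the AI's own fix confidence."""
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str              # e.g. "sql-injection"
    ai_verdict: str        # "true_positive" or "false_positive"
    fix_confidence: float  # AI's confidence that its suggested fix actually works

def triage(findings: list[Finding], threshold: float = 0.85) -> dict[str, list[Finding]]:
    """Split findings into auto-handled vs. needs-human-review buckets."""
    buckets: dict[str, list[Finding]] = {"auto": [], "human_review": []}
    for f in findings:
        # Only high-confidence calls are automated; everything else stays
        # with a human. That gate is what builds trust in the assistant.
        if f.fix_confidence >= threshold:
            buckets["auto"].append(f)
        else:
            buckets["human_review"].append(f)
    return buckets

findings = [
    Finding("sql-injection", "false_positive", 0.97),   # a SQL engine handling SQL input
    Finding("hardcoded-secret", "true_positive", 0.40),  # low confidence: human looks
]
result = triage(findings)
print(len(result["auto"]), len(result["human_review"]))  # prints: 1 1
```

The same gate generalizes to Adam's quarterly-access-review example: an agent's high-confidence "everything looks right" passes through, while low-confidence conclusions land in a reviewer's queue.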
Raj Krishnamurthy (57:47.438)
Absolutely. And I think, going back to your earlier statement about the auditor-auditee collaboration that you beautifully spoke about, this could be an interesting set of interfaces, right? The auditors can come up with these validators, these evaluators, right? And everybody can understand how these evaluators are built. You can maybe even pretest them before you engage with your auditors. So I think it will continue to get better. I think what I'm hearing you say is that you're very optimistic that it will continue to get better.
You don't have to call the operator anymore. Hopefully, it'll get to a point where you can solve your problem. But I think we'll continue to see improvements on the validators, is what I'm hearing you say.
Adam Brennick (58:26.461)
Yeah, I think so. It's, like I said, somewhat incumbent upon that relationship, right? And it's like, what data sets are we using to build these? Because you can do a lot with test data, but what our data looks like is going to be different from what, you know, ComplianceCow's SOC 2 data looks like, right? The testing procedures that you're doing are going to look different. But yeah, I feel like there's a huge opportunity for everyone in this space to
come together, work together, and build a really, really healthy approach that will optimize business for everyone, right? So we're not feeling like, my SOC 2 audit's gonna start and it's gonna be three months of toil here, or something like that.
Raj Krishnamurthy (59:11.869)
What is your assessment of the current state of GRC tools and platforms, Adam? Where do you think they shine? Where do you think they suck?
Adam Brennick (59:22.751)
I think it's really interesting, because there's…
so much, there's a lot of variance between these tools, right? I feel like different tools can carve out different niches and operate in that space and want to thrive in that space. And then another tool emerges, and it's like, ooh, I'm using this tool now, but I also want what's in that tool. And it's very tough for product teams to make such a hard pivot often,
right? To say, we've built the product to do this one thing, our backend doesn't support this pivot or this other piece that this other tool offers there. I think over time there's starting to be a convergence towards more flexibility in the automation, right? Cause I really think a lot of these tools started as, you think like Archer, right, some of the more legacy monolith technologies.
I'm sorry, I know RSA is coming up. Forgive me. But there are those big programs that were designed to be, we are going to be a one-stop shop for everything security and compliance in your platform. And because they're so big, and there's so much complexity in the backend, doing a pivot to other areas is really tough. And then you maybe have to hire engineers internally that understand the software to help move it and shift it to your
organization. But there's folks that focus a bit more. I'm not trying to pigeonhole their names or anything, but like the Dratas and Vantas that are very helpful to get SOC 2 to market quickly. They can really help to plug in. And now it seems like there's a pivot to be more flexible, and not just say, bolt us into your GCP environment, and when you see everything turn green, you're good to go.
Adam Brennick (01:01:26.965)
Right? It's like, you can do that if you want, or you can have some flexibility; you can build your own kind of custom environments in those tools that are there. And then the flexible pieces: I have some experience with Hyperproof, right? They're more of a, you know, white box. Here's a box to work in, but you can largely have a fair amount of flexibility inside that tool to do what you want there and connect how you want. So,
I think what I want to see, right, from a tool perspective, is a more agnostic tool that's focusing on almost being a data broker, right, between the system I want to collect data from and the system I want the data to go to at the end of the day. So that can be built inside of a system, on top of it, underneath it, anywhere.
But I'm loving the idea, right, that the voices have been heard a lot in the GRC space that we want more flexibility. We want to have something where, ideally, I think a beautiful end state would be that level of AI, like vibe coding, I'll call it, where you have a tool. You can sync up your AWS environment, GCP, whatever environments you have,
and you can say, hey AI, go check if all of my disks are encrypted, all my buckets are encrypted in AWS. And it's, sure, Adam, and then, you know, it spits me out results there. So that way, I don't have to spend a lot of time figuring out and troubleshooting problems with connectors and things like that. I can just have a tool that integrates into everything; I can ask it questions, it can give me responses. And I can also say,
that's a perfect response, run that query for me every day and tell me if I get a problem here. And then I can kind of set it and forget it and walk away. And I feel like a lot of the space is moving towards that, right, from the compliance automation piece. Then there's that next layer of risk, where things are interesting, because every tool, I think, has a very different approach to risk management inside the platform, right?
Adam Brennick (01:03:53.09)
There’s some very primitive solutions from a risk perspective where you can, yeah, we’ve got a risk register and you can attach it to a control, but it’s very manual, right? The process of like, there’s not a lot of intuitiveness to say like, if the business changes in some meaningful way, like someone has to go in there and like turn a bunch of knobs and change a bunch of settings and do it there as opposed to.
being able to be really flexible from a risk perspective, build out risk scoring and capabilities for risk management that can read my control data and also influence the risk register as much as possible. I’m not quite so ready to get into AI risk generation and treatment plans because that’s, I don’t know, there’s a lot of business context around that, right?
that an AI would need to understand the goals, the objectives, like who are the people, like size of the business. It’s so much to factor in. And then it’s a spot where I think it’s emerging and it makes sense if you’re kind of starting out your risk program. And it’s like, look, I don’t have the time to do a whole risk assessment myself, right? I’m a two-person organization here. We can do our best, but.
We don’t have the capabilities to do a real, formal, robust risk assessment. So we can leverage maybe a tool that will pull data in and use AI to generate, here are the risks we found in your organization. We’re going to go for that. So once again, that’s interesting. It’s cool. I’m not ready to go down that road just yet. On my side, I much prefer a little more hands-on on the risk register pieces on our side, because I think we understand the business really well.
Like we’ve written the risk management procedures in the organization. And we can also stay flexible as the goals of the organization change quickly; we can adapt quickly there. Whereas if you’ve got this stuff auto-generated based off of a platform that has your data in it, and a business goal changed, how do you feed that back, and how do you refactor all these risks that you have? It might happen quickly, but do you have strong confidence that it’s still aligned, that it will
Adam Brennick (01:06:11.583)
actually give you that good data that you’re going to approach business stakeholders with? Let’s say I said that we’ve got a risk here, you need to take action on it, right? It’s, once again, that same problem with the policy generator, right? The first time you have an AI-generated risk that doesn’t make any sense, that’s where I think a lot of stakeholders are going to be like, Hey, no, I don’t feel strongly that this is an actual risk for us. So I love that. You know, I think a lot of the GRC tools are
giving a lot of benefit in the automation piece and allowing folks to have a little more flexibility in what they do. I still think the risk management piece is something that’s new and emerging and I’m keen to see where it goes. I’m personally not ready to go full AI risk register deployment and oversight.
Raj Krishnamurthy (01:07:00.014)
That was good.
No, totally. In fact, it’s very interesting that you say that, because last week we actually created a pilot for an MCP plugin on Claude Desktop, where you can simply go create your own dashboards and charts. You can ask questions, we generate answers. But one of the fundamental reasons you can make that happen is that, while the consumption layer is changing, the underlying engineering and the design principles of GRC platforms need to be strong.
They have to be API-first and API-driven. Unless you have those primitives in place, it’s actually very difficult to bolt on anything like Claude Desktop or any of those sorts of LLM, generative AI layers on top. So do you see the existing landscape of GRC tools having very strong engineering and architecture principles to support that?
Adam Brennick (01:07:56.376)
I think it varies, right? Like one thing that we really like out of a platform we use, our SIEM, is Panther. We like that we have access to that data ourselves, right? It’s not guarded black-box data. Like we have access to the backend of that data and then we can plug it in ourselves. We can do whatever we want with it. So in the GRC space, I’ve seen some of that, right? Where, hey, it’s like,
here’s your data, you can do whatever you want, you can connect with it directly. So it takes away some of that layer of engineering, right? And the GRC tool, like clearly, you know, ComplianceCow, like you’ve thought about that, right? Like this needs to be AI-driven, API-driven, right? Like we need to be able to surface this data easily via programmatic endpoints, right? So that you can get what you need immediately, and we don’t need to work inside the platform exclusively to get what we want.
I love the idea that, hey, I know we have this application layer above it, but here also, here’s the access to the underlying data for you. And it can be tough because schemas are different, right, in trying to work inside one schema versus another. But you can, a lot of times, use, I think these two terms are interchangeable, I always get them messed up, but ETL, ELT, basically extract the data into a new system on your end, too. So if you have that data,
Raj Krishnamurthy (01:09:02.317)
Exactly.
Adam Brennick (01:09:22.747)
I think ultimately, for me, if I have access to that data on the backend, I don’t need to have that application layer on top of it that I interact with. Like y’all have prepped for it; I don’t, I’m not sure everyone else has, right? To be honest with you. And if they can just give you access to that, the full data set on the backend, and you can pull it to yourself and do what you need to from there, I think that’s a real boon for everyone. And I feel like everyone should want that, right? Like I,
I wanna have full access to that data, the underlying data that supports everything that’s happening. And API, obviously, right? If it’s there and it’s robust and it meets all the needs that I have, then cool, I love that. If you don’t have that, then can you give me a mechanism to just be able to get to my data myself and access it, one that doesn’t require me to say, all right, well, let me talk to your product team and, it’s on the roadmap, right? Because we all know what that means. Everything’s been on the roadmap for…
you know, any given time there. And in that time where that development work happens, I’m stuck, right? Like I don’t have a lot of options to do there. So I’m a big proponent of right, like the application layer on top and the API programmatically, if that’s robust and that hits our needs, then amazing. If not, just give me my data. Like let me get to the data directly.
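The extract-and-load pattern being described here (the ETL/ELT aside above) can be sketched in a few lines. This is an illustrative Python sketch, assuming the GRC tool can hand you a raw JSON export; the record shape and field names are hypothetical, not any vendor’s actual schema. In ELT style, records are landed first and transformed afterwards with plain SQL on your side.

```python
# Illustrative ELT sketch of the "just give me my data" pattern:
# land raw records from a (hypothetical) GRC-platform export into a local
# SQLite store, then transform with SQL on your own side.
import json
import sqlite3

def load_controls(raw_json, conn):
    """Extract-Load step: land records as-is; keep the raw payload too."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS controls (id TEXT PRIMARY KEY, status TEXT, raw TEXT)"
    )
    for rec in json.loads(raw_json):
        conn.execute(
            "INSERT OR REPLACE INTO controls VALUES (?, ?, ?)",
            (rec["id"], rec.get("status"), json.dumps(rec)),
        )
    conn.commit()

def failing_controls(conn):
    """Transform step: plain SQL over the landed data."""
    rows = conn.execute("SELECT id FROM controls WHERE status = 'failing'")
    return [r[0] for r in rows]

if __name__ == "__main__":
    export = '[{"id": "CC6.1", "status": "passing"}, {"id": "CC6.7", "status": "failing"}]'
    conn = sqlite3.connect(":memory:")
    load_controls(export, conn)
    print(failing_controls(conn))  # -> ['CC6.7']
```

The point of keeping the raw payload alongside the extracted columns is exactly the flexibility argued for above: when the vendor’s schema changes or a new question comes up, you re-transform locally instead of waiting on a roadmap.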
Raj Krishnamurthy (01:10:40.146)
Absolutely, 100% agree with you. You are a big proponent of automation, you’ve always been, Adam. As a leader, how do you go about building a business case to your leadership teams? What advice would you give to our listeners?
Adam Brennick (01:10:57.909)
Yeah, I think at the end of the day, quantitative analysis moves the needle more than qualitative analysis. Because I can approach our head of security in IT and say, hey, I need $80,000 for compliance automation. Why? It’s going to make us more efficient. It’s going to make our lives easier. Why?
And I think you can make a case qualitatively, right? Like, hey, we don’t want to do screenshots. And it’s like, yeah, I don’t want you to do screenshots either. Great. Like, OK, how does that quantify into $80,000 that we have to spend on something? So how we’ve approached it, to help identify the need for it and help get funding for it, is to actually, tangibly, get the data on how many hours are spent doing manual tasks.
Raj Krishnamurthy (01:11:37.441)
Correct, exactly.
Adam Brennick (01:11:54.934)
So how we’ve done that is, during all of our audits, we use JIRA, we create tickets, and we specifically create tickets for any manual task that exists. So anytime someone has to, hey, Team X, you need to go do something. Even if that thing is programmatic, like run on their end, it’s not necessarily a screenshot. I mean, even when another team is doing it, they kind of have to take a screenshot, right? Like, here’s a script I ran, here’s the time that I ran it. So in JIRA,
we create tickets, we track those tickets, we see the amount, the LOE dedicated to that ticket, right? At the end of the audit, we can decompress that, pull all the data in and say, okay, we used 75 hours during our audit to do this. And then we do manual control reviews. So that’s 12x those hours, because we’re not just doing it during the audit, we’re doing it every month throughout the year. So taking that
ticket data that’s supportive, we can show this took us 60 hours. So factoring that across the year, if I’m doing the math right, that’s 720 hours, right? Throughout the year, dedicated to manual compliance operations.
We can POC a tool, we can come back to say like, look, we had these 25 tickets that we did for our SOC 2 audit. With this, we feel like we can get this down to seven. And now we can actually show that we will save this many hours through the year and during the audit by automating these tasks that are automatable, I guess. So when you approach business leaders or budget owners,
with that data, that strong data that shows a hard line, right? It’s soft dollars, you know, because it’s tough to say these are hard-dollar savings, but when you show that operational efficiency, that folks are wasting time doing something that can be automated, it’s such a strong business case. I find reluctance is few and far between, right? I don’t think you’re gonna find a lot of reluctance when you can actually show
Adam Brennick (01:14:08.363)
the dollar values associated with this automation, and not just, screenshots suck, right? Because yeah, that’s cool, but it’s like you’re asking me to spend a nice chunk of change here. Does it suck so bad that we have to dedicate $80,000 to $90,000 to it? So showing that dollar amount that you can save by implementing these automations is such a strong business case that it
becomes easy and it becomes easy for folks that aren’t doing GRC for a day-to-day basis, right? Like they don’t understand like what goes into this, but when you just show them like, listen, these are the hours that are saved. And then you can reflect it back, you know, in future audits and assessments where it’s like, here, we made five tickets. That was it.
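The back-of-the-envelope math walked through above (60 hours a month of manual work, 12x across the year, 25 audit tickets reduced to 7) can be turned into a small model. This is a hedged sketch in Python; the $150/hour loaded rate and the tool cost are illustrative assumptions, not figures from the episode.

```python
# Illustrative model of the business case: annualize manual-compliance
# hours from ticket data, then estimate soft-dollar savings from automation.
# The loaded rate and tool cost below are assumed placeholders.

def annual_hours(hours_per_cycle, cycles_per_year=12):
    """Monthly manual-review hours scaled to a full year (the 12x step)."""
    return hours_per_cycle * cycles_per_year

def savings_case(manual_hours_year, automatable_fraction, loaded_rate_usd, tool_cost_usd):
    """Return (hours saved, soft-dollar savings, net vs. tool cost)."""
    hours_saved = manual_hours_year * automatable_fraction
    soft_dollars = hours_saved * loaded_rate_usd
    return hours_saved, soft_dollars, soft_dollars - tool_cost_usd

if __name__ == "__main__":
    yearly = annual_hours(60)      # 60 hours/month of manual evidence work
    print(yearly)                  # -> 720, matching the math above
    # 25 audit tickets reduced to 7 implies roughly 18/25 of tasks automatable.
    hours, soft, net = savings_case(yearly, 18 / 25, 150.0, 80_000)
    print(round(hours), round(soft), round(net))
```

Even when the first-year net is negative, the model makes the conversation concrete: the budget owner sees hours and dollars rather than “screenshots suck,” which is the whole argument being made here.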
Raj Krishnamurthy (01:14:49.102)
And these folks that you’re talking about, are these the GRC folks or are these also the engineers that are sitting in product development and other teams?
Adam Brennick (01:14:57.781)
Yeah, this is outside of our team. But it can also be our team, our internal team, to say like, hey, compliance has to run this in Confluence to pull all the policy in and we have to do X, Y, Z, right? But having a compliance tool, we can pull policy in. Yeah.
Raj Krishnamurthy (01:15:00.974)
Thank
Raj Krishnamurthy (01:15:12.076)
Reduce. So basically the argument you are making is that engineering time, right, is actually a powerful factor, because they are working on launching the next release. I mean, quantitatively you’re trying to make this justification, but what you’re also saying is the opportunity cost of them focusing on their core things, and trying to reduce the number of hours that they spend on compliance and security.
Adam Brennick (01:15:35.64)
100%, yeah. I mean, like at our organization, a lot of the on-call engineers take this on when we’re in an audit, right? So that on-call engineer should be focusing on on-call activities, not taking screenshots. Yeah, 100%, and that’s toil enough. And then on top of that, having me show up and be like, can I get a screenshot from Opsgenie, right? Like nobody wants that. I don’t wanna do it, they don’t wanna do it, nobody does. So yeah, it’s that opportunity cost.
Raj Krishnamurthy (01:15:47.182)
It should be on call.
Raj Krishnamurthy (01:15:57.464)
you
Adam Brennick (01:16:05.151)
Right, where you get folks doing what they’re supposed to be doing and not doing things that are trivial and not necessarily trivial, but trivial to them, right? Like they’re not hyper concerned about that. Hmm? Yep. That context, yeah, context switching.
Raj Krishnamurthy (01:16:17.773)
It’s a switching cost for them as well, right? Because they have to switch from something to something else. Yeah. And we are approaching the end of the segment, Adam. Are there any podcasts, books, people you follow, anything that you want to do a shout-out for, for our listeners?
Adam Brennick (01:16:37.885)
Yeah, so.
Adam Brennick (01:16:43.307)
Sorry, I’m gonna pull up the podcast. I forgot the name of the people that I did a podcast with. Give me a second.
Raj Krishnamurthy (01:16:47.07)
I can give you one: Security and GRC Decoded is a podcast. I’m just, I’m just joking. I mean, you’re on this podcast.
Adam Brennick (01:16:52.235)
Yes, 100%. This one, absolutely. This is the one I listen to.
Adam Brennick (01:17:05.897)
It’s Cyber Sidekicks is the name of the podcast. And I feel bad because I was on it and I totally forgot. Sorry, bear with me.
Adam Brennick (01:17:32.472)
Okay, yeah, I would highly recommend the Cyber Sidekicks podcast. I was on it, shameless plug, that’s just me. It’s from the Richmond Advisory Group. So it’s really cool. They have like a more broad podcast where they kind of talk about new issues, like emerging threats, things like that. And then they pull in folks that might be interesting to have a chat with. And it’s smaller, like bite-sized episodes with people to connect with and chat with.
So I highly recommend listening to that podcast, subscribing to it. Like I said, it’s a new podcast, so I would definitely recommend that. Obviously, Security and GRC Decoded, best around, 100% subscribe to this. And then there’s a Discord channel I recently joined for GRC Engineering. Highly recommend, folks,
Join that, there’s the GRC Manifesto, right? I believe that’s what it’s called.
Raj Krishnamurthy (01:18:35.096)
Correct, grc.engineering. Yeah.
Adam Brennick (01:18:38.667)
Yeah, and those are all great resources, right, for folks to subscribe to, join, chat with like-minded folks in the GRC space and get to know each other better. So yeah, those are a few things that I would recommend folks take a look at, join up and get involved with, because I think it’s the best way for us as a community to really shine a light on things that we think are important, that we think will make our lives better, and everyone else’s lives better, in this space.
Raj Krishnamurthy (01:19:10.658)
Adam, you’re a fantastic leader, and I think your story is nothing short of inspirational. And I’m hoping our listeners take away a lot from this conversation today. Thank you for being with us. Thank you.
Adam Brennick (01:19:22.357)
I’m honored. Thank you so much. Appreciate it.