What trade-offs are you willing to make in cybersecurity?
In this episode of Security & GRC Decoded, host Raj Krishnamurthy is joined by Trupti Shiralkar, a seasoned cybersecurity leader and Advisory Board Member at Backslash Security, to explore how risk, ROI, and real-world constraints shape modern security programs. With decades of experience across AppSec, security architecture, and risk governance, Trupti brings a rare blend of deep technical insight and strategic thinking.
They dive into cyber economics, AI-driven tooling, and why security storytelling may soon matter more than fear-based metrics. Whether you’re a security veteran or just entering the space, this is a must-listen on staying relevant and effective in the age of automation.
This podcast is brought to you by ComplianceCow — the smarter way to manage compliance. Automate evidence collection, eliminate screenshots, and scale your program with confidence.
Learn more: compliancecow.com
Connect With Our Guest:
Trupti Shiralkar | Advisory Board Member, Backslash Security
Connect on LinkedIn
Rate, review, and share if you enjoyed the show!
Subscribe to Security & GRC Decoded wherever you get your podcasts:
Timestamps (Approx)
[00:00] Intro
[02:47] Why cyber economics goes beyond traditional budgeting
[06:10] Introduction of grey swan events and the need for proactive innovation
[10:10] Aligning compliance and security using LLMs
[16:56] Reducing cognitive load in cybersecurity decision-making
[20:00] Budgeting for innovation: Lessons from Trupti’s past security leadership
[23:00] Difference between cyber economics and cyber risk quantification
[33:50] The misunderstood strategic role of GRC
[54:30] How meditation and mindfulness help navigate the security world
[57:15] Trupti’s final shout-outs to historic and modern tech inspirations
Raj Krishnamurthy (00:01.15)
Hey, hey, hey, welcome to another episode of Security and GRC Decoded. I am your favorite host, Raj Krishnamurthy. And today we have a fantastic guest, right? An awesome, talented, and super cool guest, Trupti Shiralkar. And Trupti is a veteran in the cybersecurity space, 20 plus years. And she started with mobile gaming, went on to penetration testing, went on to application security.
And she's a leader in the space of security. Now she's onto something very cool called cyber economics, which we're going to talk all about today. Trupti, welcome to the show.
Trupti Shiralkar (00:37.134)
Hey Raj, Good Morning and Happy Friday!
Raj Krishnamurthy (00:41.422)
Happy Friday, happy Friday to you. And I can feel it when you say happy Friday. Trupti, let's jump straight into it. What is cyber economics? And maybe double click on it for us. And what do you see as the value of cyber economics in today's security industry?
Trupti Shiralkar (00:56.982)
Absolutely, Raj. So Raj, as you all can see, today's CISOs are in a tough spot. When they join a new role, they have to manage the security debt that comes with legacy infrastructure and software, stuff that was accumulated and built before their tenure. Then they have to also focus on existing challenges, existing commitments that enable the business, that support the business growth.
And at the same time, they are challenged with keeping track of all the emerging technologies like cloud computing, generative AI, agentic AI, as well as DeFi or edge computing. Zero Trust was emerging tech a few years back. Is it mainstream now? So how do you protect your organization and the organization's assets
from this increased attack surface as well as novel emerging threats, right? How do you actually allocate your budget? How do you make a case for your budget to fulfill these duties? That's what cyber economics is. And Raj, unfortunately, there is no one-size-fits-all formula that can fulfill everybody's requirements. Depending on the industry, depending on the growth phase of the company, it's gonna vary.
But that's what we look for through a cyber economics framework.
Raj Krishnamurthy (02:27.54)
Got it. So the problem that you're describing, the challenges, the growing assets, systems, services, the growing threat landscape, threat actors, all of these CISOs have been facing for a very long time. So why this new term called cyber economics, and what is it gonna do differently?
Trupti Shiralkar (02:47.32)
Correct. So basically, financing or budgeting can simply focus on optimization aspects. But when we use the term economics, there is a newer angle. We apply a lot of theory from the world of economics that has worked well in other industries, and there is a very important aspect: innovation. So it's not just about enabling the business,
driving operational excellence or cost optimization, but also considering the role of innovation. And I'm happy to talk about Gen AI, right? With the current progress the security industry and other industries have made with the help of Gen AI, we have these very powerful LLMs to help us. How does this change things? We no longer have those big security teams doing a lot of manual operational work, right?
So when we bring innovation into our business strategy, we get to do something more of a breakthrough. For example, I may no longer have 10 product security engineers in my product security org. I may have six product security engineers and four non-human engineers. So the very fact that we are using current innovation to revise our budgeting and financing needs to meet organizational growth,
I think that comes under cyber economics and not so much under the finance or budgeting that we are used to.
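To make the staffing shift she describes concrete, here is a minimal back-of-the-envelope sketch in Python. Every number below (salaries, agent tooling cost, headcounts) is a purely hypothetical assumption for illustration, not a figure from the episode.

```python
# Hypothetical comparison of two product security staffing models.
# All figures are illustrative assumptions, not data from the episode.

def annual_cost(human_engineers: int, agent_seats: int,
                human_cost: float = 200_000.0,   # assumed fully loaded cost per engineer
                agent_cost: float = 30_000.0):   # assumed cost per "non-human engineer" (LLM/agent tooling)
    return human_engineers * human_cost + agent_seats * agent_cost

classic = annual_cost(human_engineers=10, agent_seats=0)
blended = annual_cost(human_engineers=6, agent_seats=4)

print(f"Classic team (10 humans): ${classic:,.0f}")
print(f"Blended team (6 humans + 4 agents): ${blended:,.0f}")
print(f"Budget freed for innovation: ${classic - blended:,.0f}")
```

The point of the sketch is simply that the delta between the two models is what a cyber economics view would redirect toward innovation, rather than treating it as a pure cost cut.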
Raj Krishnamurthy (04:19.828)
Got it. Now, I want to come back to this. You have been in security, product security, application security for a long time, Trupti. What led you, how did that experience lead you to this new area that you're trying to explore, which is cyber economics?
Trupti Shiralkar (04:39.336)
Well, it's just the times we are living in, right? When I was a developer, I used to think all about, you know, shipping those shiny new features that generate revenue, and I didn't even think about security because I didn't know anything about security. But when I got my master's and got more into software security, that's when I realized the importance of shipping secure software, because the quality issues, the security bugs that we don't fix, they come right back
and bite us in a big way. And as a result of breaches and security incidents, we not only lose revenue, but a lot of customer trust, right? Fast forward 15, 20 years into that journey, now we are seeing these amazing innovative trends in the industry. Now let me tell you, AI is not new, right, Raj? AI has been around since the 1950s, when Alan Turing wrote his paper
and asked all of us a bold question: can machines think? What we see today is very systematic, incremental innovation turned into something more powerful. So as a CISO, as a security executive, it is very important to keep an eye on this innovation and see how we can leverage that. And if we don't, now this comes back to your question, right? What has changed now? So as a CISO, as a security executive,
if I don't leverage this innovation for my team's day-to-day operations, then you know what is going to happen? Grey swan events. Ask me, what are grey swan events? So, you know, Nassim Taleb coined black swan events, and for those who don't know, black swan events are those rare and highly unpredictable events, like you can't predict them.
Raj Krishnamurthy (06:20.65)
You asked yourself, please go ahead.
Trupti Shiralkar (06:37.954)
But grey swan events are somewhat predictable. So when the COVID-19 pandemic happened, everybody thought it was a black swan event, because in our lifetime that was the first time we had seen a pandemic and the impact of a pandemic on humanity, the economy, the tech world, right? That's where we experienced the boom and bust. But if you ask a biotechnician or somebody who is in the pharmaceutical
area, for them a pandemic is no new thing. Exactly 100 years back there was the Spanish flu, and 100 years before that there was something else. That means COVID-19 is a grey swan event: it's somewhat predictable, and its impact on humanity is also predictable. So similarly, in the world of security, these types of technological disruptions and their impact,
if we fail to adopt them, are somewhat predictable, and we call those events grey swan events. And if we don't prepare against them, or prepare with the help of them, then the impact is very devastating. Then you're still going to practice your classic application security. We are still going to rely on very rudimentary automation techniques without using the LLM power. But when we use the LLM,
suddenly your automation becomes a lot more powerful. It has the context. It can serve your compliance engineering folks as well as security engineering folks very well through self-service. Is that making sense?
Raj Krishnamurthy (08:18.292)
Makes sense, but double click on that. I think you’re making an interesting point. So your point is that we can predict some of these known unknowns, right? I think what you’re basically saying is go deeper. But you also said the LLMs can help you. What do you mean by that? How do LLMs help in predicting Grey Swan events and building a better security posture?
Trupti Shiralkar (08:44.472)
So I feel like LLMs are designed to bring in the context, right? I’m going to take a classic example of security engineering teams and compliance team. If there is alignment, most probably the relationships are going to be frictionless or less friction. But if there is no alignment, security engineering orgs often get
Raj Krishnamurthy (09:05.598)
Mmm. Yep.
Trupti Shiralkar (09:13.612)
very randomized by compliance people's requests, because audits are happening year-round. Oftentimes compliance departments are under a lot of pressure to enter newer territories, and as a pre-sales operation, they're supposed to acquire the certification. And if these activities are not planned hand in hand with the security engineering leader, this may come as a surprise, which gives rise to friction.
So how can we use LLMs here, right? Let's say the compliance function has existed within an organization for more than four or five years. We can feed all the common compliance requirements, as well as the evidence generated over those years, into your LLM. And we can leverage the LLMs to map against the existing engineering evidence that is already present. Next step:
let's talk about agentic AI and not just rely on chatbot-style self-service inquiry. If we further leverage agentic AI, it can use the context as well as the past history gathered from both departments to create a self-service portal where compliance engineers or compliance analysts can simply query to get the
evidence needed for my next PCI audit or SOC 2 audit or some Australian data privacy framework. So basically we create a huge knowledge base of existing security engineering evidence and pick and choose, with the help of an agentic AI-powered application, based on the need. Is that making sense?
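As one way to picture the evidence-mapping idea Trupti describes, here is a toy Python sketch. In a real system an LLM or agent would do the semantic matching; simple keyword overlap stands in for it here, and every evidence item, tag, and framework mapping is a hypothetical example.

```python
# Toy self-service compliance-evidence lookup over an existing evidence store.
# An LLM/agent would normally do the matching; tag overlap stands in for it.
# All evidence items and framework mappings below are hypothetical examples.

evidence_store = [
    {"id": "EV-101", "title": "Quarterly penetration test report", "tags": {"pentest", "appsec"}},
    {"id": "EV-102", "title": "IAM access review export",          "tags": {"access-control", "iam"}},
    {"id": "EV-103", "title": "Encryption-at-rest configuration",  "tags": {"crypto", "data-protection"}},
]

framework_needs = {
    "PCI DSS": {"pentest", "access-control"},
    "SOC 2":   {"access-control", "crypto"},
}

def evidence_for(framework: str):
    """Return evidence items whose tags overlap the framework's needs."""
    needs = framework_needs[framework]
    return [item for item in evidence_store if item["tags"] & needs]

for fw in framework_needs:
    matches = ", ".join(item["id"] for item in evidence_for(fw))
    print(f"{fw}: {matches}")
```

The self-service portal she mentions is essentially this lookup exposed to compliance analysts, with the agent assembling and packaging the matched evidence instead of a security engineer doing it by hand.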
Raj Krishnamurthy (10:59.988)
Now let me make sure, I think it is making sense, but let me make sure I understand this. So what you're basically saying is, number one, security and compliance in many cases operate in silos, and your call for action is that there has to be greater alignment between them, right? And if they were to align, your argument is that it need not be a straight line, starting from security and ending with compliance.
And all those signals that we create in compliance can feed back into security. And your argument is that through large language models, large reasoning models, and essentially an agentic approach, it democratizes this process where you can now consume a large corpus of data in order to serve both sides, right? Both from a security perspective and also from a compliance perspective. But potentially, I mean, because it has a large corpus of data and can remember a lot more, you can potentially also predict
some of these grey swan events. Is that the position that you're taking?
Trupti Shiralkar (11:59.566)
Absolutely. So, Grey Swan events were more related to the impact security organizations or compliance organization would face if they don’t adopt the latest, greatest technology. And it could be zero trust. Five, 10 years back, zero trust started. It was very big. But now we all are hyper-focused on agentic AI and LLMs.
But are we actually leveraging those LLMs to make your zero trust journey much better? That's the question I'm asking. Because if you don't use these latest technological superpowers, then we are gonna fall behind our competition.
Right? And let me go back to the classic compliance and security engineering example and break it down, Raj. So let's say 15, 20 years back, a typical startup starts Series A, Series B, and the first compliance certification they need is SOC 2, right? Most of the compliance analysts may not have the necessary security engineering background.
So they reach out to their product security folks and ask for a certain set of things: hey, give me the pen test report, give me this, give me the access control list, right? And it was a pretty painful manual process 15, 20 years back. Fast forward 10 years, there was a beautiful rise of compliance engineering, where the security engineering org and compliance org both decided, hey, we've got to automate some of these recurring things, the engineering cost is low, let's go ahead and do it.
Guess what? That was great, but things don't stop there. Now, if you look at GDPR, which arrived in 2018, Raj, we have so many flavors of GDPR. Pretty much every state in America has its own data privacy framework. Pretty much every country has its own flavor of data privacy framework. Now, do you think security engineering teams have time
Trupti Shiralkar (14:05.368)
to do all those custom automations? Do you think compliance teams have time to go chase every country's unique data privacy requirements? No. This is exactly where the LLM comes into the picture. Any organization that has existed for a while has a huge amount of data: compliance reports, engineering evidence, auditors' reports, checklists, NIST frameworks, whatnot. Throw all of that in to build your own LLM.
Have your security engineers and compliance engineers seek out information in a self-service fashion, and don't just stop at a chatbot. Go ahead and leverage today's agentic AI, because that's precisely what agentic AIs are built for, right? You know, if somebody wants to make an airline reservation, now with the help of agentic AI, you just have to feed in your requirements and it will do the airline reservation for you.
So yeah, it takes away all the manual steps, all the back and forth that human engineers are facing, and it reduces friction while allowing us to operate at a greater speed. And if you fail to see all of this or fail to do this, then yeah, you may suffer a grey swan event, yes.
Raj Krishnamurthy (15:26.73)
Got it, got it. I was actually watching a fantastic speech from Andrej Karpathy yesterday. I think he was presenting it at Y Combinator. And one of his, I mean, one of the big claims that he's basically making is that you can take today's agentic world and break it into two parts, right? There is a generative aspect of it and there is a verification aspect of it. And AIs are extremely good at generation, and we still need humans in the loop for the verification,
Trupti Shiralkar (15:36.163)
Okay.
Raj Krishnamurthy (15:55.555)
because of the probabilistic nature and the hallucinatory effects of AI, so on and so forth. And his main argument and thesis is that in order to make this much more productive, you need to figure out ways in which you can shorten those cycles, especially the verification cycles, because human becomes the bottleneck here, not the AI, right? Because AI, in theory, is infinitely scalable. So…
So I think the excellent point he was making is that how do you reduce the cognitive load or the cognitive toil on humans? And what are the new AI type of systems that you can build to reduce the cognitive load? So my question to you is that, I think you talked about this beautifully in terms of the idea of using agents and the idea of using large language models and large reasoning models from a cybersecurity perspective. And given your experience,
What do you envision as this new system, and what should be done to reduce the cognitive toil that Andrej is talking about?
Trupti Shiralkar (16:56.952)
Yeah, you know, this is a very thought-provoking question, Raj. I think first, we as humans, the way we admit our human limitations, we also have to admit the limitations of these LLM models, chatbots, as well as the agentic AI applications. And we also have to admit the very fact that all of this is still in its first generation. What that means is…
hallucination and inaccuracy problems. We as humans have to pay more attention, because when we realize the hallucination problem or where the AI has limitations, that's where human creativity kicks in. And I do agree with having the layer of verification through humans who are knowledgeable enough, right? Who can spot those inaccuracies to shorten the cycle. So yeah, I do agree with all of this.
Raj Krishnamurthy (17:54.667)
From a cybersecurity perspective, given your long experience at companies like DocuSign, Datadog, Illumio, Amazon, how do you see the next generation of cybersecurity products, especially in this verification space?
Trupti Shiralkar (18:10.35)
Yeah, absolutely. So I'm going to take a classic example of static code analysis tools. For those who don't know, static code analysis tools are experts at identifying the presence of insecure coding flaws, insecure methods, insecure design patterns, right, in code. So 15, 20 years back,
we had first-generation static code analysis tools which were very hard to configure, and security engineers would spend months configuring them. Have we become better? Yes, definitely. Fast forward five to 10 years into the static code analysis journey, we now have better rules that we can start using out of the box. Does that mean we have zero noise, zero false positives? Unfortunately, no.
Security engineers still have to spend a lot of time doing that further fine-tuning. It has reduced, right? It has reduced, but it has not completely disappeared. Now with the emergence of LLMs and copilots, has our life become easier? Somewhat, but we have newer problems. And let me talk more about that. So with the help of copilots, we can now see
that by training our developers on some of the shift-left transformation of security, they can see and understand the basic vulnerabilities. They can even leverage a copilot to issue the fix. But the problem is these LLMs, these copilots, are still in their first generation. They can easily catch the low-hanging fruit, but they still do not have context about layered security.
So let's say I have 100 security vulnerabilities, Raj, and out of 100, 50 or 60 are actually accurate, and the auto-fix generated by the copilot will create a nice fix without regression. But for the remaining vulnerabilities, I may actually have another control at the framework level or maybe at the web application firewall level. Now, my LLM is not yet smart,
Trupti Shiralkar (20:31.966)
nor is my agentic AI that smart, to understand the context of a multi-layered security approach. And this is where I see the human security engineer's role being augmented. Now, instead of spending a lot of time doing false positive analysis, I'm actually going to have three non-human machine team members doing the grunt work. And I'm going to leverage my human security engineers to do this deeper, more creative work where their
years of expertise will get utilized. Does that make sense?
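A rough sketch of the layered-security context Trupti says today's copilots miss: triage that down-ranks a finding already covered by a compensating control (a WAF rule, a framework-level mitigation) instead of blindly auto-fixing it. The findings and controls below are hypothetical placeholders.

```python
# Toy triage that accounts for layered security: a finding already covered by a
# compensating control is deprioritized rather than auto-fixed.
# All findings and controls here are hypothetical examples.

findings = [
    {"id": "VULN-1", "type": "sql_injection", "exploitable": True},
    {"id": "VULN-2", "type": "xss",           "exploitable": True},
    {"id": "VULN-3", "type": "open_redirect", "exploitable": False},
]

compensating_controls = {
    "xss": "WAF rule blocking reflected payloads",  # assumed existing control
}

def triage(finding):
    if not finding["exploitable"]:
        return "backlog (not currently exploitable)"
    control = compensating_controls.get(finding["type"])
    if control:
        return f"deprioritize: covered by compensating control ({control})"
    return "fix now: no compensating control in place"

for f in findings:
    print(f["id"], "->", triage(f))
```

In her framing, the machine teammates run this kind of filtering at scale, and the human engineers spend their time on the findings that survive it.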
Raj Krishnamurthy (21:05.066)
That’s a beautiful way to put it. That’s a beautiful way to put it. And I think you’re absolutely right. And I wanted to come back sort of your central focus nowadays, Tripti, which is the cyber economics. How does this experience that you had with DataDog, Illumio, and all these other great companies, right, great brands, from a security perspective, fit into your current view of cyber economics? And what would you do differently if you were to go back to those roles again?
Trupti Shiralkar (21:34.998)
Yeah, so you know, I have realized the importance of allocating the right funds to the right innovative projects. Unfortunately, in the security industry, a lot of security tooling sales still happen on a relationship basis. The classic security model is followed, you know:
we need a SAST, static code analysis. We need a dynamic application security testing tool, or we need this blah, blah tool, because that's what compliance requires. As security executives, what we fail to see, Raj, is a lot of innovation that these small startups and vendors are doing, and that can actually solve a lot of our classic problems right away. So as a cyber economist, you know, I'm always going to allocate budget.
Raj Krishnamurthy (22:28.222)
That’s a term now, cyber economist is a term. Okay.
Trupti Shiralkar (22:30.19)
I’m always going to allocate budget for newer innovation. How this newer innovation, which could be cost effective, can help me scale my security programs and take care of that classic friction, right? So yeah, I would suggest like, do not just worry about, you know, classic finance and budgeting and cost optimization.
But think about how you can bring in this innovation that can energize your security compliance team. They’re excited to use this innovation to solve the classic problems, right? So yeah.
Raj Krishnamurthy (23:10.942)
Let me make sure I want to double click on this and I want to make sure that… So the idea of cyber risk quantification, CRQ, in fact there is a Gartner quadrant for CRQ, right, has been there for quite some time. How are you viewing the cyber economics as a discipline? Is it different than cyber risk quantification? Does it add on to cyber risk quantification? What is it? What is its relationship?
Trupti Shiralkar (23:39.191)
There are a lot of flavors to how we quantify and calculate risk. Do you want to pick a particular example, Raj, for our audience so that we can dive deep?
Raj Krishnamurthy (23:49.427)
No, for example, let’s say that the ability for you to have outdated services running, meaning unpatched services running that are exposing critical vulnerabilities is a good example. The other example could be that you have a bunch of users that have been terminated and you have not taken an active posture while you might have eliminated them from your gateway systems. They continue to linger around in the underlying system for identity authentication and authorization.
that could create some insider sort of attacks. Those are two risk examples. And the way that you typically look at them is you look at what is the likelihood of an event materializing. And if that event were to materialize, what would the impact be? And then it gets into the more details about how you quantify and how you present. But that’s how we look at traditionally, I’ve looked at cyber risk quantification.
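For readers new to the likelihood-and-impact framing Raj describes, here is a minimal annualized-loss-expectancy style calculation in Python. The probabilities and dollar figures are made-up illustrations of the two risk examples he gives, not real estimates.

```python
# Classic likelihood-times-impact risk quantification (ALE-style).
# Probabilities and loss figures are illustrative assumptions only.

risks = [
    {"name": "Unpatched internet-facing service exploited", "annual_likelihood": 0.30, "impact_usd": 2_000_000},
    {"name": "Terminated user retains backend access",      "annual_likelihood": 0.10, "impact_usd":   500_000},
]

for r in risks:
    expected_annual_loss = r["annual_likelihood"] * r["impact_usd"]
    print(f"{r['name']}: expected annual loss ~ ${expected_annual_loss:,.0f}")
```

This is the traditional CRQ baseline that Trupti goes on to challenge from a cyber economics angle.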
Trupti Shiralkar (24:42.072)
Yes. So let me tell you the bare-bones fundamental pillars of cyber risk quantification, right? Security engineers mostly focus on the impact of risk exploitation and the overall likelihood of that risk exploitation. And then we have the beautiful risk matrices, risk metrics, and whatnot, right? And…
that's the foundation all other models are built on. Now, you know, with cyber economics, I'm going to challenge this approach. Every single organization has legacy software. And you specifically picked unpatched vulnerabilities, right? Unpatched software packages. 95% of enterprise applications
are actually built using open source libraries, open source frameworks, and the open source ecosystem. And to stay on top of patching and vulnerability mitigation, we have software composition analysis and SBOM-type products in the market. Now, the easiest thing to do is get a software composition analysis tool, deploy it,
and start patching these vulnerabilities in a very classic fashion, right? Now, as a cyber economist, I'm gonna look at the industry and look at the vendors who are solving this classic problem in an innovative way with minimal financial burden on my team, minimal operational burden on my team. Let me…
take a moment and educate our audience about why this problem exists. So even though we have had organizations like OWASP since 2005, 2006, we still haven't fully mitigated these basic software security problems, right? And the fundamental problem is, it's not that we don't have enough tools in the market, but in our universities,
Trupti Shiralkar (26:55.05)
when, or even in high school, somebody takes their first programming class or algorithms class or database class, the professors and the teaching community are not teaching them about secure ways of programming, secure ways of deploying. That is something they learn much later, when they join the industry, when they face vulnerabilities, or when they attend the new-hire orientation security awareness training. If somebody takes an elective security class,
they learn a little bit about software security. The reason I have shared this root cause analysis is because unless we educate all our university programs to start teaching security as part of every computer science class, this problem is not going away. The new generations of developers are getting educated out of universities. They are joining our workforce and they are repeating the same mistakes.
So do you agree this problem is going to last until we fix our education system? Okay. Now, what are newer vendors in this space doing? They are taking a look at the patterns. They are actually trying to see which of these software vulnerabilities are reachable, and not just reachable, but actually exploitable given the architecture of the enterprise application,
Raj Krishnamurthy (27:57.546)
Absolutely.
Trupti Shiralkar (28:20.514)
because there is a lot of legacy code which is not reachable. Nobody is using it, and it exists in your binary package because some developer wrote a module 15 years back that we forgot to take away. So as a cyber economist, I'm going to specifically look at the economic aspects, like how this new vendor is solving the existing classic problems in a different way. Are they giving me a prioritized list of
only exploitable, reachable open source vulnerabilities? And are they giving me actionable remediation guidance to mitigate these at scale? Like, for example, hey, Trupti, if you upgrade your OpenSSL to this latest stable version, you will be mitigating 35% of your cryptographic CVEs, right? Now I'm excited
as a security executive, as a security leader, because my third-party tool is not only giving me insights, but it's also telling me how I can make life easy for my developers as well as for my security engineers. They don't have to do these things manually, the way we used to 10, 15 years back, right? Another thing, Raj:
Trupti Shiralkar (29:45.106)
all this auto-remediation, leveraging LLMs or, you know, today's fancy terms, comes with a certain type of risk. So if somebody tells me what the percentage of regression is, I will be able to tell my engineering teams to plan it well. You see, by predicting risk here and the overall impact, rather than just the classic security view, we are actually helping engineering with efficiency.
Did I lose you with that last question?
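The reachability-first prioritization and the "one upgrade mitigates many CVEs" remediation view she describes could look roughly like the sketch below. All CVE IDs, package names, versions, and percentages are hypothetical examples.

```python
# Toy SCA prioritization: keep only reachable *and* exploitable findings, then
# group them by the single dependency upgrade that would remediate them.
# All CVEs, packages, and versions below are hypothetical examples.

from collections import defaultdict

findings = [
    {"cve": "CVE-0001", "package": "openssl", "fix_version": "3.0.14", "reachable": True,  "exploitable": True},
    {"cve": "CVE-0002", "package": "openssl", "fix_version": "3.0.14", "reachable": True,  "exploitable": True},
    {"cve": "CVE-0003", "package": "log-lib", "fix_version": "2.20.0", "reachable": False, "exploitable": True},
    {"cve": "CVE-0004", "package": "img-lib", "fix_version": "1.9.2",  "reachable": True,  "exploitable": False},
]

actionable = [f for f in findings if f["reachable"] and f["exploitable"]]

by_upgrade = defaultdict(list)
for f in actionable:
    by_upgrade[(f["package"], f["fix_version"])].append(f["cve"])

total = len(findings)
for (package, version), cves in by_upgrade.items():
    share = 100 * len(cves) / total
    print(f"Upgrade {package} to {version}: mitigates {len(cves)} CVEs ({share:.0f}% of all findings)")
```

The economic argument is in the output format: one planned upgrade, a known share of risk retired, and a regression risk the engineering team can schedule around.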
Raj Krishnamurthy (30:16.488)
No, I'm not lost. In fact, I think I'm understanding it even more, even better now. Please go on.
Trupti Shiralkar (30:22.75)
Okay. So yeah, you know, gone are the days where, you know, the compliance team just shoves over, hey, these are your compliance or security requirements. Gone are the days where security engineers simply share, hey, these are the findings, go fix it. We have to be a lot more intelligent. We have to come up with, you know, hey, leverage these vetted safe libraries, or leverage these tools, integrate them in your pipeline, so that proactively
you can find stuff. I'm gonna give one more example, since we were talking about SBOMs. So SBOMs and staying on top of third-party open source risk is a very reactive problem, right? But we can quickly turn that into a proactive one, and let me tell you how. In any organization there are these Artifactory servers where all the third-party software first gets downloaded, and then developers
you know, make copies into their respective programs. So that's the right place to run your software composition analysis tool. And that's the right place to influence the practice of golden images for operating systems, nicely vetted open source libraries, and whatnot. And now just think from an economics perspective. If we harden the original libraries and images even before developers
or engineering leaders start using them, how much pain do we save them in terms of time, cost, efficiency, right?
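One way the "shift the SCA scan to the artifact server" idea could be expressed is as a simple ingestion gate. The repository layout, scan results, and policy threshold below are assumptions purely for illustration, not a description of any specific product.

```python
# Toy ingestion gate for an internal artifact repository: a third-party package
# is only admitted to the "golden" repo if its SCA scan shows no reachable
# critical vulnerabilities. Package data and thresholds are hypothetical.

def scan_package(name: str, version: str) -> dict:
    # Stand-in for a real software composition analysis scan.
    fake_results = {
        ("openssl", "1.0.2"):  {"critical_reachable": 4},
        ("openssl", "3.0.14"): {"critical_reachable": 0},
    }
    return fake_results.get((name, version), {"critical_reachable": 0})

def admit_to_golden_repo(name: str, version: str) -> bool:
    result = scan_package(name, version)
    if result["critical_reachable"] > 0:
        print(f"REJECT {name}-{version}: {result['critical_reachable']} reachable critical vulns")
        return False
    print(f"ADMIT  {name}-{version}: clean, published to golden repo")
    return True

admit_to_golden_repo("openssl", "1.0.2")
admit_to_golden_repo("openssl", "3.0.14")
```

Hardening at the point of ingestion is what turns the reactive patching problem she describes into a proactive one: every downstream copy inherits the vetted version.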
Raj Krishnamurthy (31:58.761)
I think, let me replay this and I think I see your point. So what you’re basically pointing out is that security has been extremely transactional and tactical. And we talk about CVEs, we talk about the CVSS scores, exploit probability scores. But what is missing in this entire conversation is that, so what? What is the impact? And I think your argument is that we need to be able to uplevel these conversations.
So the leadership team and the management team can understand the impact, that’s one. And it also flows back because now we can make, there has to be, there is a method to this madness in terms of how we prioritize and why we prioritize and what we prioritize that comes back to the tactical set of things that we want to do because we are always constrained for resources, we being the security teams. And your argument is that the cyber economics is a new way of thinking about sort of a symbiotic relationship.
right, or the symbiosis of different disciplines, whether it is security engineering, whether it is risk management, and whether it is business strategy. That’s the way you’re presenting this. Is that a correct statement?
Trupti Shiralkar (33:05.28)
You are absolutely right. The whole industry is so hyper-focused on detection of vulnerabilities and accurate detection of vulnerabilities. But we as security leaders need to shift focus a little bit more to how we can remediate things at scale in a way that will make our engineering leaders' lives a little bit easier. Instead of reaching out for, you know, let's say, compliance evidence, how can we auto-generate that evidence
so that we don't even have to bother our people, right? And all of this has economics behind it, the return on security investment. As a CISO, those are the investments we should be making to scale things, to drive efficiency.
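The "return on security investment" framing can be made concrete with the commonly cited ROSI formula: (risk reduction minus cost of the investment) divided by cost. The numbers below are invented purely to show the arithmetic.

```python
# Return on Security Investment (ROSI) = (risk reduction - cost) / cost.
# Every figure below is an illustrative assumption, not data from the episode.

expected_annual_loss_before = 1_200_000  # e.g., manual evidence gathering + breach exposure
expected_annual_loss_after  =   700_000  # after a compliance automation investment
investment_cost             =   150_000  # annual cost of the automation

risk_reduction = expected_annual_loss_before - expected_annual_loss_after
rosi = (risk_reduction - investment_cost) / investment_cost

print(f"Risk reduction: ${risk_reduction:,.0f}")
print(f"ROSI: {rosi:.0%}")  # a positive value means the investment more than pays for itself
```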
Raj Krishnamurthy (33:49.586)
Is this a formal study? Is cyber economics a formal study? I have not heard of it and that’s why I’m asking.
Trupti Shiralkar (33:55.886)
Well, I have taken a pledge to formally study it. Currently, I'm doing the Stanford LEAD executive education program, and this is my area of research.
Raj Krishnamurthy (34:06.036)
Got it, beautiful. Good luck, and I think we need to hear more from you on this, Trupti, so good luck with that. What do you see as the role of GRC, particularly the risk in G, R, and C, or maybe even add governance to it, in this discipline or the study that you're talking about? How do they fit?
Trupti Shiralkar (34:29.1)
I love it. So first of all, you know, unfortunately, GRC folks are not playing the role of educator. Everybody has misunderstood GRC as some compliance analyst who only reaches out to us when they need some sort of evidence to fulfill some audit requirement, right? That's the common misconception.
It's a very tactical representation of the facts. But if you look at it, GRC is an absolutely essential function for any organization. The G in GRC is governance. If we do not govern our own risk, our own security investment, either against a compliance standard or the company's homegrown standard, then how will we find out how we are doing, right? If we don't measure,
if we don't transparently communicate, how do we know our cyber investments are giving us the bang for the buck we desire? So governance is non-negotiable, absolutely important.
Now, what about risk? I told you, risk is multi-dimensional. Every CISO, every security department simply inherits existing risk from legacy. Companies exist 20, 30, 40 years and we carry all that legacy risk. Then we have existing goals. Then we have future risk. So risk is absolutely important to stay on top of, not just with the quarterly risk review, but we have to go more granular.
And compliance, again, very, very misunderstood. The technical folks do not value compliance. They think compliance is a checklist. Yes, it could be a checklist, but look at the value and impact. Compliance is not a security function. Compliance is a pre-sales function. It allows…
Trupti Shiralkar (36:31.724)
the organization to enter newer territories. It opens up newer market segments for that organization. It's a business enabler, and it is absolutely important for security engineering orgs to fulfill the compliance due diligence needed under the umbrella of GRC. So when GRC folks invest time into talking about why the GRC function exists beyond the tactical things, I think…
the Dunning-Kruger effect will get removed and everybody will be on the same page. And this is where automation will turn more into sophisticated compliance engineering, where it's not a random request but a more systematic flow of information.
Raj Krishnamurthy (37:16.414)
What is the Dunning-Kruger effect?
Trupti Shiralkar (37:18.958)
The Dunning-Kruger effect is exactly the opposite of imposter syndrome. So what happens in imposter syndrome is that people who are high achievers have high standards and a high bar to evaluate their own functions, right? Like how good or bad we are. So we have extremely high standards, we are never happy, and we think things are happening by fluke.
Unfortunately, in the Dunning-Kruger effect, which is exactly the opposite, people who are less skilled or people who were never exposed to that particular area have no ability to evaluate that.
So I'm going to give you an example. Let's say there is a blue-collar painter in a small village in Connecticut, and he paints all the homes in that village, and he thinks he's the best painter in the village. Of course, yes, he's good, he's hardworking, he knows the techniques to paint the homes in that village. But if I take him and place him on a project in Manhattan to paint a high-rise,
you know, 180 floors with 30 more painters, then, you know, it's a different skill set, it's a different problem, it's a different scale. He may not have the ability to evaluate himself with the same standards he used to, right? So that's the Dunning-Kruger effect.
Raj Krishnamurthy (38:44.82)
Got it, got it, thank you. So if I am a listener and I'm listening to this podcast and I am in GRC, are there tactical steps that I can take to learn more about cyber economics, to see if I align with your worldview on the intersection of cyber economics and security? What should I read about, what should I learn about, and what steps can I take?
Trupti Shiralkar (39:08.302)
So I don't want to say, Raj, that this is like a brand new term or field, but cyber economics simply enables GRC folks to think about the economic aspect of whatever they are doing, right, whether it is justifying the increase in sales because they were able to achieve this particular certification, or it could be…
because we collaborated with security engineering and we have these 10 compliance engineering projects in place, and instead of taking 10 days, we are generating evidence every second. So whether it is efficiency, that's what they need to understand. Like, look at cybersecurity more through an economic lens: not only how, for everybody who is involved, whether a peer, a stakeholder, or your own team members, you're removing the pain points,
but also how you’re making sense economically.
Raj Krishnamurthy (40:10.166)
I love what you're saying. In fact, I think you're spot on in terms of the problem that we face, particularly in GRC, but I would say cybersecurity as a whole, which is: how do we effectively analyze options? In fact, how do we even create these options? And how do we make decisions on those options? I think this is almost a structured field of study. And I'm hoping that, well, maybe we should do a follow-up podcast
where we can talk about specific things that people can go do, right? And anything that you would suggest to make this happen, right? But I think you’re pointing to a very important part of the problem in terms of combining those different disciplines, right? Especially in terms of quantifying, understanding, bringing innovation into this and making the right decisions for a much more sustainable and longer term security posture. I think that’s what I’m hearing you say.
Trupti Shiralkar (41:04.238)
Absolutely. Like, do I leverage my compliance analysts to do the same work that they were doing for the last four or five years, or do I give them this exciting innovative tool that can make their life easy so that they can go work on those aspects that machines can't do, right? Yeah.
Raj Krishnamurthy (41:25.076)
Totally. And who do you think should, who should the cyber economist report to?
Trupti Shiralkar (41:30.348)
I think the Chief Security Officer, for sure. Yes.
Raj Krishnamurthy (41:32.074)
Okay, got it. I wanted to, when I was asking you very early on during our briefing and we were going to talk about the controversial or hot-take questions, you talked about sort of the false equivalence of high-performance teams and psychological safety. Am I saying this right? Can you double click on that?
Trupti Shiralkar (41:41.815)
Yeah.
Trupti Shiralkar (41:50.506)
Yeah, yeah. Oftentimes, you know, as a people leader, I get asked, hey, Trupti, tell me whether people are important or security is important. And I'm like, this is absolutely the wrong question to ask, because security does comprise people, processes, tools, innovation, everything, right?
So if you remove the people element out of security, security may not be effective. It could be just a bunch of papers, right? Or a bunch of machines running programs. So people are a very important aspect of security, and no leader should be in a position to choose between security and their people. We as security people not only have a responsibility to serve our own team members,
peers, and stakeholders in the organization, but we also carry a responsibility towards our shareholders, because when a breach happens, they suffer, right? Yeah, so security executives, security teams have a lot more responsibility. Now, with current trends and economic uncertainty, there are a lot of layoffs going on, right?
Executives are considering leveraging agentic AIs to replace some tier-one, tier-two support or some rudimentary operational work that human engineers used to do. So there will be more layoffs. So instead of worrying about psychological safety, I think we need to invest time into learning these newer trends and technologies, and see how we can provide more value
to our customers, to our peers, to other departments by leveraging them, right? That's where your psychological safety is going to be enhanced. But if you keep worrying that layoffs are happening, managers are asking for metrics, no, it's not gonna help you. Managers will always ask for metrics. Layoffs are unfortunately part of today's job life cycle.
Trupti Shiralkar (44:13.304)
So yeah, we as engineers, we as innovators, we need to focus more on how we can leverage the technology so that the technology doesn't replace an outdated version of us. We need to reinvent ourselves as the industry is reinventing itself through these emerging trends.
Raj Krishnamurthy (44:33.588)
Got it, got it. So I think your call for action is we need to be much more proactive in embracing these new innovations, right? We can create a shell, but it will be a false sense of safety unless you're willing to go along with the program, right? Or you're able to get on board with the program of innovation. Okay, got it, makes sense. You've been in product security, application security for a long time. What do you see as the current state of the product security tools
and the application security tools that exist.
Trupti Shiralkar (45:05.71)
I see that, you know, I mentioned this 10, 15 minutes back also, like a lot of vendor sales are happening due to relationships with, you know, the CISO or higher-up executives. And product security organizations are not given enough time to look at innovation. It should be part of their function.
Because not every vendor or organization is able to innovate with the speed the world is moving, right? They are still offering solutions to a different set of problems, ones that were real five, 10 years back. And I'm going to give you an example, Raj. 20 years back, we had these giant monoliths which were tightly coupled.
We still have Amazon or Google, where some of the products are still giant monoliths. Then futuristic companies like Netflix went through these decade-long migrations where they broke down their monoliths into more of a microservices architecture so that the application could serve their growing customer base; they are loosely coupled, they are more scalable, right,
more cost-effective. And then we saw low code, no code. And then just two, three years back, we were struck with LLMs. And we saw a rise of, yes, now we have vibe coding, where teenagers are creating software. Older people are using ChatGPT for therapy or whatever, soul searching. But the younger generation,
Raj Krishnamurthy (46:41.876)
Now we have vibe coding.
Trupti Shiralkar (46:59.768)
they are using vibe coding to spin up those web applications or their small businesses or software, right? Now, security engineering orgs really need to act fast. They really need to study the innovation, the adoption. They cannot get stuck in just generating guidance. They actually need to use LLMs to serve the…
LLM-based growth. And what I mean by this is, a diamond can cut a diamond, not metal or other things, right? So if everybody is adopting LLMs in their enterprise software, we need to secure those foundational models. Do we have tools today that can do that? Or are you still using a five-year, 10-year-old tool because you have a long-term contract? No. So security teams need to be more agile, looking at
not only their organization's current technology but also the future technologies it's adopting, and rapidly increasing their own powers by adopting more innovative tools.
Raj Krishnamurthy (48:13.418)
Got it, got it. You talked about compliance engineering and its intersection with security engineering some time ago. From a GRC perspective, right, we see GRC more with the back office corporate security than with product security. What do you see as the intersection of product security and GRC? Do you come across them at all or do you not?
Trupti Shiralkar (48:38.03)
I have always come across it, right? Because product security has the engineering chops to produce the evidence that GRC can take and present to auditors, right? GRC people are very busy scheduling those audit meetings. They're busy dealing with auditors. They're busy working with salespeople, answering customer questions, right,
if they're entering into newer territories. So GRC people's plate is already full. Now, their proactive collaboration, the good symbiosis that you and I talked about, with not only product security but also infrastructure security, can really make things smooth. So what I have seen is, whether in infrastructure security, cloud ops, or product security,
when they collaborate with GRC proactively, understand the requirements, and build that software, or leverage vendors who can provide that software for automatic data collection, that's where the win-win situation is.
Raj Krishnamurthy (49:43.967)
Makes sense, makes sense. The reason I'm asking that question is that we all see the security harness as part of your CI/CD pipeline. I've rarely come across, not that nobody has done it, but I've rarely come across the need for a compliance harness as part of your CI/CD pipeline. And that's why I'm asking that question.
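A hedged sketch of what the "compliance harness" stage Raj is describing might look like in a CI/CD pipeline, mirroring how a security harness runs tests on every build. The control IDs, checks, and config keys below are hypothetical placeholders, not any specific framework's requirements.

```python
# Toy "compliance harness" step for a CI/CD pipeline: each check maps to a
# control and emits pass/fail evidence alongside the usual security tests.
# Control IDs, checks, and config keys are hypothetical placeholders.

import json
import datetime

def check_branch_protection(config: dict) -> bool:
    return config.get("required_reviews", 0) >= 1

def check_encryption_at_rest(config: dict) -> bool:
    return config.get("storage_encrypted", False)

CHECKS = {
    "CC-ACCESS-01": check_branch_protection,
    "CC-DATA-02":   check_encryption_at_rest,
}

def run_harness(config: dict) -> dict:
    """Run every control check and return a timestamped evidence record."""
    return {
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "results": {cid: check(config) for cid, check in CHECKS.items()},
    }

if __name__ == "__main__":
    build_config = {"required_reviews": 2, "storage_encrypted": True}  # assumed pipeline inputs
    print(json.dumps(run_harness(build_config), indent=2))
```

The evidence record it emits on every build is exactly the kind of artifact Trupti's common compliance framework answer below would map onto multiple audits.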
Trupti Shiralkar (50:00.718)
So if you look at it, you know, there is something called a common compliance framework, right? Eventually, it all boils down to, you know, authentication, authorization, confidentiality, privacy, auditing, and I can go on and on. It's the same requirements, but with different flavors, different customizations. You know, India may require the data to reside within India. Some other countries are more liberal.
Raj Krishnamurthy (50:06.516)
Yep.
Trupti Shiralkar (50:29.71)
I'm going to use the Netflix example. So for Netflix, they have a different classification system for the Middle East, what they can telecast or make available through Netflix streaming, versus the United States versus India. The rating, the classification, is different. But the underlying requirements are exactly the same. It's the wrapper that is different, right?
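Her "same requirement, different wrapper" point is essentially a common-controls mapping. A tiny illustration follows; the framework references are indicative only and should not be read as authoritative citations of those standards.

```python
# Toy common-control mapping: one underlying requirement satisfies multiple
# framework-specific "wrappers". Framework references are illustrative only.

common_controls = {
    "encrypt-data-at-rest": {
        "SOC 2":   "logical access / encryption criteria",
        "PCI DSS": "protect stored account data",
        "GDPR":    "security of processing",
    },
    "data-residency": {
        "India":   "store covered data in-country",
        "GDPR":    "restrict transfers outside the EEA",
    },
}

def frameworks_satisfied_by(control: str):
    """List the frameworks whose wrapper maps onto this common control."""
    return sorted(common_controls.get(control, {}))

print(frameworks_satisfied_by("encrypt-data-at-rest"))
```

Evidence collected once against the common control can then be presented under whichever wrapper the auditor in front of you cares about.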
Raj Krishnamurthy (50:59.678)
Got it. So I think, to your point, you are basically saying there is a need for proactive GRC and there is a need for integrating GRC into product security as part of your releases. That's something that you advocate.
Trupti Shiralkar (51:15.05)
Yes, the common compliance requirements are pretty much the same as security requirements. GRC doesn't care whether your architecture is a pure agentic AI architecture or microservice-oriented or a monolith. They care about the security posture. They care about evidence that supports that security posture. So yeah, this should be taken care of. Like the way a product manager works with, you know, customers, business, and engineering leaders, it's the same:
GRC must be the liaison between all these different streams. Now, a product security leader may not have depth in infrastructure security. Or let's pick some other stream. Let's say data security, right? Because GRC is the combined bridge between all of them. Because they have to govern irrespective of the security sub-domain. They have to produce risk,
and it should be consumable by the CISO, the executive team, as well as by these regulatory bodies. So they can't divide themselves. They are the glue, they are the bridge, and they proactively work with different subgroups, whether it is part of three-year strategy roadmap building or quarterly OKRs. The sooner, the better it is.
Raj Krishnamurthy (52:24.468)
Makes sense.
Raj Krishnamurthy (52:39.156)
Got it, got it, no, it makes sense. We are approaching the end of our segment, Trupti. I know that you are a certified meditation practitioner. And my question to you is, and that is super cool by the way, how has that helped in your cybersecurity career?
Trupti Shiralkar (53:00.8)
It has helped a lot. The reason I'm standing in front of you today smiling is all because of my meditation practice. Security can be very, very reactive. I told you in the beginning of the podcast that the problem of software security will continue to exist no matter whether we have smart LLMs, agentic AIs, or quantum computers.
The software security issues will exist unless we fix the education problem, right? And because it will take a lot of time to perform that educational reform, there will always be a tsunami of vulnerabilities, security issues, findings. They randomize your strategy, they randomize your day. There is not a single day or week where you're not hit with some vulnerabilities, right?
So in this journey of software security professionals, because of that reactiveness, we see a lot of friction, we see a lot of stress, and everything that we plan proactively gets shaken up. So how do you really build resiliency? We build resiliency through mindfulness. We build resiliency by incorporating those ancient meditation techniques into your daily routine so that no matter what happens,
We know how to go about tackling it. We can keep cool, keep grounded. So yeah, for me, meditation was a game changer.
Raj Krishnamurthy (54:36.062)
Okay, and if somebody is interested in sort of pursuing this, how do they, can they reach out to you, ask for advice? What are the ways in which they can reach out to you?
Trupti Shiralkar (54:46.196)
Absolutely. I have spent 21 years, pretty much the same amount of time as in cybersecurity, researching different meditation techniques, because there is no one-size-fits-all. Depending on the phase of life as well as, you know, your neurological structure, like one could be neurodivergent, one could be neurotypical, depending on your unique needs, I think everybody should spend some time exploring and finding what's going to work for you,
Raj Krishnamurthy (54:54.078)
Okay.
Trupti Shiralkar (55:15.982)
and then follow the path. And I'm always available to talk about either cyber economics or meditation; both are part of human productivity for me.
Raj Krishnamurthy (55:25.182)
Got it. I’m gonna give you maybe the last minute to maybe do a shout out to mentors, books that you read, or whatever you wanna do, right? So this is your last 60 seconds. Anything that you wanna say in closing?
Trupti Shiralkar (55:40.142)
I want to give a shout-out to the father of computing, Alan Turing. Thank you, Alan, not only for breaking Enigma and teaching us the importance of computing and cybersecurity, but also for showing us that machines can think. And 75 years later, here we are with LLMs and all the agentic AIs. I'm also going to give a shout-out to Hedy Lamarr, who used to be a daytime
actress and a nighttime inventor. She's the one who patented technology for securing communications in the World War II era. So my life has been influenced by a lot of World War II heroes. And lately, I would say Shahar Man, the founder and CEO of Backslash Security. He's a true futuristic leader. He's already gearing up
to help developers and engineering leaders protect against the risks of vibe coding and the tsunami of MCP servers and agentic AIs and whatnot. So yeah, many thanks to all these inventors and futuristic leaders for inspiring us to do better and better and become our best versions by thinking outside the box.
Raj Krishnamurthy (57:01.354)
That's fantastic. And thanks. Thank you very much, Trupti, for your fantastic insights. Appreciate it.
Trupti Shiralkar (57:10.584)
Thank you, Raj, I really enjoyed this conversation. And I'm looking forward to maybe next year, when I can talk more about the cyber economics frameworks and whatnot.
Raj Krishnamurthy (57:14.388)
Perfect.
Raj Krishnamurthy (57:22.933)
I think we should definitely do that. Thank you very much, Trupti. I appreciate it. Take care. I think we can stay on the…
Trupti Shiralkar (57:25.304)
Thank you. Bye.