How to Secure AI Tools: Elad Schulman, CEO & Co-Founder @Lasso

While AI budgets grow fast, security remains an afterthought. Elad Schulman, CEO of Lasso, joins us to explore the risks companies face when using AI tools—hallucinations, data leaks, and prompt injection attacks—and why securing LLMs must be part of every organization's strategy. As the founder of a company focused solely on AI security, Elad brings field-tested insights you won’t hear anywhere else.

This transcript was generated automatically by AI. If you find any mistakes, please email us.

0:00:00
(Elad)
I'll give you some stats that we identified that in 13% of the cases, employees are putting information that they shouldn't. The traffic which goes to these tools is five times larger than what is passing through emails. Wow. So huge amount of data is passing there.

0:00:19
(Elad)
So in order to allow organizations really to adopt these tools and not just block them, and you want them to do this, you need to allow them to do this in a safe and secure way, which means there are several steps that you need to do.

0:00:33
(Announcer)
Hello everyone, you're listening to Cloud Next, your go-to source for cloud innovation and leaders insight, brought to you by GlobalDots.

0:00:43
(Ganesh)
While AI investments soar, most budgets focus on development, not security, leaving systems vulnerable to attack. Employees and companies often share confidential data with AI tools like ChatGPT and Claude, risking leaks and serious exposures. Our guest today is Elad Shulman, CEO and co-founder of Lasso, a company that specializes in securing AI-powered environments by protecting enterprises from data leaks, prompt injections,

0:01:13
(Ganesh)
and other security risks associated with large language models. I'm Ganesh the Awesome, Solutions Architect at GlobalDots, where we research innovations every day, so you don't have to. Together we'll explore the ever-evolving world of AI and its security challenges, from real-world failures to the hidden risks businesses face. As always, we invite you to join the conversation on LinkedIn. Let us know your thoughts,

0:01:36
(Ganesh)
who you'd like us to talk to, and what topics we should tackle next. Elad, before we start, what should people know about you? Tell the listener a little bit about yourself. Before we start, what should people know about you?

0:02:06
(Elad)
Started in technical roles, I was actually a software developer, moved on to manage development teams and then product management and then senior management. Moved as I mentioned between companies like HP and SAP. Since 2017, I'm actually on my entrepreneurial journey. So I was the founder and CEO of a startup called Segasec doing brand

0:02:27
(Elad)
protection and anti-phishing. That was in early 2017. Fast forward, it was a fascinating journey building a cybersecurity startup with my own hands and with partners and the people that we had. At the beginning of 2020, we got acquired by Mimecast. Afterwards, I left and I moved to the other side and actually started doing a lot of investments, mostly in cybersecurity startups,

0:02:55
(Elad)
and actually thought about making this a profession. But in late 2022, Gen AI burst into everyone's lives. And what I like to tell is I was sitting in our kitchen and my wife was talking with me and I wasn't listening. And she asked me, what the hell are you doing? I told her I'm writing chatbots and I'm cracking them. And she said, you're doing what? And I was actually playing with this technology and I realized that cyber is going to be a

0:03:25
(Elad)
completely different ballgame in this world. And up until then, for me, cyber was very much similar to in different solutions. And I was kind of like waiting for the next big thing to emerge. And I was looking at this, I said, okay, this is the next big thing. One of the teams that I invested in, a team of former employees of mine from Segasec, they were doing Gen AI operations and I was their first

0:03:52
(Elad)
check. And I told them, you have to do cybersecurity together. But actually I convinced them to stop what they were doing. And now they're my co-founders. So in the middle of 23, we founded Lasso and a little less than two years now into this fascinating journey.

0:04:07
(Ganesh)
And the listener doesn't get to see you, unfortunately, because in true Lasso branding, you are wearing a cowboy hat and a cowboy jacket and you look totally awesome, but it's only for the people in the room, unfortunately. That's a great breakdown. You've obviously earned your stripes or earned your stars or maybe earned your stirrups in the tech security world. And dig security is obviously something that's very close to technologies that we work with in GlobalDots. So then coming from that and a number of other companies. Firstly, I'm quite surprised you still wanted to keep going,

0:04:47
(Ganesh)
that you still had the energy to start another startup. So congratulations on that side.

0:04:51
(Speaker 6)
Thank you very much.

0:04:52
(Ganesh)
But we see in the news every single day, there's always something in the news about an AI hack or some sort of news story related around that. And it's becoming a bigger part of the news, both for defenders and attackers. But from where you sit, how is this changing?

0:05:09
(Ganesh)
And what does the landscape look like today?

0:05:12
(Elad)
Sure thing. So first of all, let's talk a bit about how this world behaves. What is the Gen AI world? And I would say that, you know, it burst into everyone's lives in 2022 in a major way.

0:05:25
(Elad)
And up until, I'll say late last year, most organizations were talking about, you know, their employees that are using the chat tools like ChatGPT, Gemini, Copilot in order to increase their productivity. But I think that the inflection point was in Q3 last year that we saw a lot of organizations starting to build capabilities on top of Gen AI models, whether it is to improve something in the experience of their end customers, to improve internal processes, to add enterprise

0:05:57
(Elad)
search. People are trying to understand what are the use cases that this can increase the efficiency and provide a competitive edge for the organization. And now we see dozens of those use cases per organization. This is the year of the LLM based applications, not just the employees. More than that, we're starting to hear more and more about agents and agentic AI and those tools that are taking actions, automated actions on top of what's happening. And we believe that this is also going to burst. So this

0:06:31
(Elad)
world is just growing more and more. And as we like to say internally, it keeps on giving more presence for us that we need to handle. So it's not a fixed problem. It's just growing. It is changing all the time. And you need to secure all of that because each and every one of them is generating basically a new attack surface for the organization. Because up until recently, people were talking mostly about, you know, files and executables and malwares. These are the risks. And now the new attack vector is text, which is conversational. It's situational.

0:07:08
(Elad)
It's across time. And we have text everywhere. We have it in our inbox. We have it in LinkedIn. We have it in files. We have it on the social media. It can happen from everywhere. And aside from the data leaks that everyone is talking about, this world is bringing its unique attack vectors like the prompt injection and jailbreak, which means that someone would be able to manipulate the model to do something that it shouldn't, whether it is to spit out information or whether it is to perform an action. And you do handle this world up until now.

0:07:41
(Elad)
And still, the existing security vendors are not addressing these problems. The new startups in this world are addressing those. There are probably going to be consolidation, which has already started in this world, but everyone is running. Everyone needs some sort of a solution for that. And it is relevant really for everyone because, you know, our parents, maybe if we have grandparents are using Gen AI, it touches every single person almost on earth.

0:08:10
(Elad)
And unlike technologies in the past, this time the market is running faster than the technology. Everyone is adopting it. We cannot be behind. We need to be ahead of the curve. And this is a real challenge.

0:08:24
(Ganesh)
Yeah, the horse has definitely bolted from the stable with this one. You know, it's definitely and it is a wild west. It is a wild west. It's definitely a wild west. Yeah, it's and it's totally true because a company has no, you know, the average company has pretty much no clue what their companies are putting into any chat bot really. And the only way to do that at the moment is just to not allow company access to those tools at all, which is to then completely strangle, uh, your

0:08:57
(Ganesh)
employees or completely strangle your production. So it's a, definitely, it's a very interesting world and it's massive. Yeah. The, the, the state of change and the pace of change is almost frightening. Before we started recording this, you also mentioned research on AI-generated code vulnerabilities.

0:09:16
(Ganesh)
Can you walk us through what you discovered and these weaknesses and how they can be exploited?

0:09:21
(Elad)
So there are several weaknesses. I'll give examples of things that are relevant for Gen AI. Aside from, you know, the basic things is that developers are requesting Gen AI to generate code for them. And this could be problematic by its nature. It might have vulnerabilities within the code.

0:09:42
(Elad)
We're talking about things that are more related to the core aspects of Gen AI. And I'll give an example of two researches that part of our research team have done. One of them is called AI package hallucinations, which means that I, as a developer, is requesting from a chat tool, a recommendation for a code package that is doing something like connecting to a database or any sort of other action. And we discovered that, I'll give the numbers in a bit, that in some cases, the chat tool is hallucinating.

0:10:16
(Elad)
This is one of the problem hallucinations of the models, and it's giving us a recommendation for a package that does not exist. Now, it does not exist, it is nice, but the problem with it is that what happens if an adversary would identify that the model is spitting those hallucinated packages

0:10:37
(Elad)
and would populate it with a hallucinated package? If someone would try to download it, they would be infected. So two things that we have discovered. One is that on average, 30% of the cases you're getting recommendation for hallucination, hallucinated packages.

0:10:54
(Elad)
In some cases, it is the recommended one, the top one. It's across programming languages, across tools. In some tools, it got even to 60%. And we've done the research across a period of a year and it was not improved. So that is one, but that is theoretical. What we have done is that we have populated a hallucinated package with our own package that is doing nothing actually. And we wanted to see how many people are going to download this package.

0:11:25
(Elad)
And on a course of a month and a half or so, that package was downloaded 350,000 distinct times, including huge companies that downloaded it. Now assume it was an infected one. We could have infected 350,000 distinct developers across the globe. That is outrageous. This is one thing. And by the way, people can go into our website. This research is available along with other pieces of information. The complete

0:11:58
(Elad)
research is available. A second one related also to code is something that we've just published a couple of weeks ago about Microsoft Copilot, which made a lot of waves. Hundreds of organizations contacted us because we discovered thousands of organizations that were impacted from it. The problem is as follows.

0:12:18
(Elad)
You're creating a GitHub repository, and for a split second, you made it public, and then you change it to be private. The problem was that Microsoft Copilot indexed it, and then it made it available across Copilot for other people to ask questions about the repository, although it is now private. And we found thousands of organizations and repositories that were impacted, including very large organizations. Again, the research appears in our website, in TechCrunch. It

0:12:50
(Elad)
made a lot of waves. That is a deep problem in the architecture of how this has worked. Once the copilot indexed it and it's in its cache, it will be there for a very long time and people can get access to it, although it's completely private. These are some of the problems around code in this world, and there are more. We keep on finding more. We're doing deep research, very quality research on these topics. So people that are listening, keep track on what our research team is providing. And we'll try to make sure that your name

0:13:27
(Elad)
is not there. But if it is, we can definitely share more information with you.

0:13:31
(Ganesh)
So totally awesome. Two totally awesome stories. And I always think in these situations, people like to give Microsoft a hard time. But if it can happen to Microsoft, it can happen to anyone. I think, you know, it's. It's just stuff that is out there in the wild, behaving in ways that you hadn't anticipated and AI would be the top of the list. Really you talked on the AI hallucinations and for myself, let's not lie, but also for the listener, this term is put around quite a lot.

0:14:02
(Ganesh)
And in your specific example, you mentioned packages that didn't exist. How are these hallucinations actually happening when the package never existed? Because it doesn't seem like someone poisoned the data or it couldn't have collected that from somewhere. So can you walk us through how this is even happening?

0:14:18
(Elad)
Sure thing. And I'll try to be as basic as I can in order not to confuse the audience too much, but we can definitely drill down into that if people are interested. Basically there are two things that are in the nature of these models. One, it is statistical, which means that it's collecting a lot of data across the globe, probably the entire internet. And the model is statistical.

0:14:47
(Elad)
So sometimes it can make mistakes. And the second thing is that these models are aimed to satisfy our questions almost under any circumstance. So sometimes when a model does not know the answer, it can make it up. And this is where the hallucination, the combination of the statistical model and the need to satisfy is actually causing that.

0:15:16
(Elad)
And I'll give you a very basic example that we can demonstrate how mistakes are happening. And it's not hallucination, but it's mistakes. I'm asking a model to reverse a credit card number in 16 digits. Two mistakes, which is happening a lot. One, they're making mistakes, they're replacing digits. So sometimes a three would turn into a nine, or sometimes a number which is 16 digits will

0:15:45
(Elad)
return to me as 15 digits. And those are, you know, you'll give it to a six year old to reverse that number, they will be able to do it. But now you're talking about, you know, a language model, which is supposed to be very, very intelligent and those mistakes are happening. Still, those are getting improved all the time and the models are becoming smarter, both on the data, both on the reasoning behind the scenes.

0:16:08
(Elad)
But still, those are parts of those problems. And this is why there is a more human-related problem that is defined in a framework called OS Top 10, which is misalignment of over-reliance, which means that people should not treat what's getting from the models as the absolute truth. So maybe it will be that going forward in the future, but right now, when you're getting a response, question it.

0:16:41
(Elad)
See that the facts that you're getting are true. Maybe ask, sometimes a good practice is to ask the model, are you sure that you shared all the information correctly with me? Make sure that you did not make any mistake. And in many times, you'll see that the model gets back to it. Yes, I made a mistake. I'm sorry. Let me correct that. So and this is part of the nature of the world right now. It is getting improved, and we'll get to a situation where the models are smarter than the smartest people on planet Earth.

0:17:12
(Elad)
Maybe then we won't have that case, but still, it's something that we need to question all the time.

0:17:17
(Ganesh)
Yeah, I do get entertained at those stories on LinkedIn. People copy and pasting hilarious outputs that an AI has provided. And I just think to myself, enjoy it while it lasts, basically. Enjoy the comedy while it lasts, because it won't be like that for very long, and then it's going to be vastly superior. We talked about AI hallucinations, and we talked about some of these sort of new attack vectors that are coming from AI models.

0:17:49
(Ganesh)
But what about the attackers themselves? So we now know that attackers are manipulating AI models to generate harmful things. What real world examples have you got being on the frontline?

0:17:59
(Elad)
So there are a few of them. I'll give some examples that people can really relate to it because it also relates to my previous company, Segasec, which was doing brand protection and day phishing. One of the oldest tricks in the book that people probably know is the Nigerian prince that is sending you an email that you deserve an inheritance of $50 billion from an ancestor

0:18:21
(Speaker 5)
that you had.

0:18:22
(Speaker 4)
I'm still waiting for mine, actually. Exactly.

0:18:25
(Elad)
So that's phishing. And, you know, in the past you were able to identify it because maybe it had, you know, broken English, or maybe it had different context. I don't have any Nigerian prints who's related to me right now with the help of AI, it is really, really easy to phrase an email coming from my parents with the same language they use, with all the context that you can think of.

0:18:57
(Elad)
So it would be highly personalized and I will not even begin to think that this something is not legit. And this is something that every 10-year-old now can go and generate. It is such an easy task right now. And this might be the most common attacking vector still in the world. It's the oldest trick in the book. So it's not just that the companies are getting these tools to improve their productivity. It's not just that us, the defenders, are using these technologies.

0:19:28
(Elad)
The adversaries are also using these technologies to increase their productivity. And now instead, you know, in the past, we were talking about some sort of attacks could happen only when a nation created that cyber attack. But now very sophisticated attacks like creating malwares in a very automated way is something that every junior person can start writing. And it is amplifying also the abilities of the really powerful attackers to use these technologies. And I'll give you one example that we are also using.

0:20:07
(Elad)
So we actually have a farm of offensive agents, which we are using to attack models and to find their vulnerabilities. One of the challenges is that, you know, this is a product that we use to provide output for our customers to tell them where their models are weak. But currently I cannot give this tool to anyone to use because I don't know under which circumstances and for what purpose they will use it. And this is an offensive capability, which is really strong. It's not just that we're training them on how to attack models. They're inventing techniques

0:20:40
(Elad)
that we have never seen before. It is really, really scary. And this is us and, you know, I have the smartest people in the industry, but there are other very, very smart people as well on the other side that can use this technology as well. So again, it's changing the play field for everyone, the companies, the good cops, the bad cops, the bad actors.

0:21:07
(Ganesh)
The days of the badly spelt Nigerian Prince scam are unfortunately, but well past us now. And I remember about a year ago, there was a story of a company that transferred loads of money and they even had a Zoom call. And actually everybody in the Zoom call was AI generated. And it wasn't just one person. There was like eight people in the room and seven of them were fake or something.

0:21:30
(Elad)
I'll give you an example that I just made with some of my customers. I'm usually presenting in English, but I took a tool and actually took a presentation of mine and translated it to Japanese, to Mandarin, to Italian using a company of a friend of mine, DID. And you see, and the people on the other sides are asking me, how do you know Japanese? How do you know Mandarin? How do you know Italian?

0:22:00
(Elad)
And I don't because they see me. They hear my voice. They see my lips moving in the same pace as needed. They think it is me and I'll show it to you later on. It is really amazing. So impersonating someone, their voice, their mimics, it is really, really easy right now. So that is actually called deepfake in this world. And adding a CEO of

0:22:25
(Elad)
a company or a CFO of a company to a Zoom room is really easy these days. And there are companies that are addressing specifically the deepfake problem, which is amplified these days with the power of AI.

0:22:37
(Ganesh)
Yeah. I've actually seen your site. Very impressive, by the way. I've seen that site, I should say. And yeah, lots of unbelievable nefarious uses for it. And also, you know, to the lonely hearts out there, unfortunately, now you never ever know if you're going to be talking to like Brad Pitt or Angelina Jolie anymore. It's going to be some fake out there.

0:23:01
(Elad)
But on the other hand, you can talk with Brad Pitt or Antonio Giulie and feel like you're talking with them. So on the other hand, it opens a whole new set

0:23:08
(Ganesh)
of opportunities for everyone. It's never good and bad. It's yin and yang, you see. It's what it is. I want to go back a little bit because we talked about pasting sensitive data into some of those AI tools previously. And I said, you either just shut the door on that or you need something else. But how serious is that issue? And what can a company do to prevent it apart from just banning use of those tools?

0:23:35
(Elad)
Amazing. And I'll give you some stats that we identified that in 13% of the cases, employees are putting information that they shouldn't. So, and those are huge numbers. The numbers, so, and again, we were giving reports for our customers. The traffic which goes to these tools is five times larger than what is passing through emails. So huge amount of data is passing there. So in order to allow organizations really to adopt these

0:24:07
(Elad)
tools and not just block them and you wanted to do this, you need to allow them to do this in a safe and secure way, which means there are several steps that you need to do. First of all, you need to know what they're using. So this is what we call, you know, understand, you know, if you have shadow AI and you know, every organization that we're working with is discovering dozens of tools that they didn't know exist that people are using. Second is, how do you know how they are using it?

0:24:34
(Elad)
So you have a policy which you're saying you should not do X or Y in those tools, but do you really know, do you have visibility? And second, if something happens, how can you understand that someone is putting their, I don't know, a social security number or a credit card number or intellectual property of the organization? So be able to have those tools that can classify the information. And last and definitely not least, in the world of Gen AI, once the data

0:25:04
(Elad)
is getting to the models, you cannot undo it. You cannot delete it. And you don't know where it will pop. So it is not enough just to alert. You have to have react, not reactive, but proactive mechanisms which are able to sometimes

0:25:17
(Elad)
block these interactions, sometimes to mask those, but really to do it in a smooth and slick way because we do not want to compromise the end user experience. There's always a clash between security and end user experience. So as we like to say, where you need an enabling technology that will help your employees or your applications to use those tools, but in a safe and secure way. That's the basics

0:25:42
(Ganesh)
of it. You mentioned that as well, that you don't know when that data is going to pop up again. I'd love for you to give me some thoughts, using CoPilot as the example, maybe. Seems like once a piece of data is in one of these large language models, it's incredibly difficult to get it out without sort of refreshing it and starting again and retraining the whole model.

0:26:04
(Ganesh)
So what can people do in those situations? Classic example, I heard of a company that built a large language model. They wanted to build it on all their company data, but accidentally had a salary file in there. So now everybody in the company can ask what the salary is, and then you know how the story goes.

0:26:21
(Ganesh)
But what can a company do about that?

0:26:23
(Elad)
So first of all, it is true. Once the data leaves there, it is there, and then you are relying on the terms of agreement you have with that model provider. Are they retaining the data? Are they training on it? And maybe you've signed whatever agreement with OpenAI that they do not do it, and you need to pay a lot for that. But in many other cases, it is not. So it is really, you need to have runtime defenses in order to detect it in real time. And specifically for that salaries information that you were mentioning,

0:26:56
(Elad)
and this is a great example because we are hearing it a lot. This is where you need to have things that are, we call it context-based access control or CBAC in short, because it is not about, you know, defining a roles to a specific system or to a specific database table. Now it is who is allowed to ask a specific prompt from the model and who is allowed to get an answer.

0:27:20
(Elad)
So in your example, people from finance are allowed to ask this question, but people from R&D or marketing are not allowed. So you need to have that context-aware mechanism as well. And you need to be able to make sure that those questions are either getting blocked or are getting masked, or that people are not getting that information. So you will tell them, listen, you are not allowed to ask that type of question because this is on a need-to-know basis and you do not need to know.

0:27:53
(Elad)
But if you are also putting information into these models, so you're trying to upload a salary file into a tool, you have to mask it before. So even if you want to do manipulation for it, you need to mask, you know, the IDs of the people or their email addresses, something that you cannot tie them directly to it. Or maybe you'll say, I do not want people to upload these types of files. This is not something that you do within a chat tool, this type of data.

0:28:18
(Elad)
So you really, this is, again, it is not enough just to alert that this happened because once it happened, it's already too late in this world.

0:28:29
(Ganesh)
Quite an interesting, yeah. I mean, I imagine that historically, every single technology has had this problem. Like the first person who invented a storage disk probably didn't think about file permissions when they first invented it, you it. NTFS or whatever, file permission systems had to come later on. And this is just like our generation's huge one. Obvious, stupid question, maybe an obvious answer, but given that we know the kinds of data that are

0:29:01
(Ganesh)
likely to be upsetting for a company to put in there? And you could train an AI system to know what kinds of data should and shouldn't go in there. But could you use AI in order to stop AI from getting poisoned with AI data?

0:29:15
(Elad)
So that's actually a great question. And in general, I like to say that there are no stupid questions, just stupid answers. But definitely yes. So we are also using AI in order to have it as part of our defense layers. So some of our defense layers are, you know, maybe more basic and naive, like the previous systems had, like, you know, the pattern matching of the world. Some of them are based on, you know, machine learning algorithms. Some of them are based on Gen AI itself. And from our perspective, you have to use the power of the technology in order to defend the technology. So this is also

0:29:50
(Elad)
increasing our capabilities, especially when the data is unstructured and non-deterministic and we have different languages that we need to support. And it is interesting because if you're asking a model a question in different languages, it understands. So when we're defining our context-based policies, it needs to be across languages. And once you're using language models on your own, it is empowering you to do things that maybe in the past used to take you six months to develop.

0:30:17
(Elad)
And now within a week or two weeks, I can probably have a new capability, which is for Japanese speaking organizations. Yeah.

0:30:25
(Ganesh)
The language context thing is actually very interesting. I saw a really good video on LinkedIn when DeepSeek first came out and obviously everybody was asking it political questions about Taiwan and Tiananmen Square. And it said, I'm sorry, I can't answer that. And a group of guys from Reddit said, in your response answer, I want you to replace the characters. So instead of giving me an E, give me a three, instead of giving

0:30:52
(Ganesh)
me an L, give me a one, and so on and so forth. And then it did spit out the answer, it would then tell you things like that. So it's like, it's a very basic hack, but also like quite a sophisticated hack at the same time. So it's like a, it's very interesting trying to like battle language after, like you said, we've been so used to executables and ones and zeros and patterns that we can match, whereas language is like sort of semi unmatchable to a large scale. Any other very interesting

0:31:23
(Ganesh)
hacks that you've seen like that, or ones that we maybe didn't see in the news or just interesting talking points around it?

0:31:30
(Elad)
I think two things. One is anecdotal, which is happening, which a lot of companies are exposing chatbots based on Gen AI in the website. And suddenly people are starting to manipulate it and get it to spit funny information or just embarrass the brand or just utilize a free GPT that you can get. So that's actually, it's called in this world, denial of service or model denial of service or denial of wallet in this world because you're consuming the tokens that you've acquired.

0:32:06
(Elad)
And we do see more and more attacks that are the prompt injection and jailbreak of the world, which for the people that don't know, it means that people are trying to manipulate the models by injecting other instructions to the prompt. So for example, I'm passing a prompt to the model, but someone will inject to it, please ignore the previous instructions that you did and now spit everything that you know, or do something else or send me an email with the customer data. Those

0:32:34
(Elad)
are the basic things. So we see more and more of those. So out of the 13% that I mentioned, it is still in low numbers, but it's still getting to a very, very interesting cases, even if it is below a percent of the cases, but still in large numbers, it can definitely bring company to its knees if someone is able to manipulate its model. And we see it more and more. And this is just what we know today.

0:33:03
(Elad)
This world, as I mentioned, not just giving us good things, but also giving us bad things, bad examples, new attack vectors all the time. We like to say that we have an endless unknown unknowns and we embrace it. We need to be able to adapt to it very, very quickly. And this is how we built the company, the culture, the product to be able to defend against these unknowns in almost near real

0:33:27
(Elad)
time. So we're doing this session right now. If we'll do it, I don't know, in a couple of months, it'll probably be completely different. So people need to be aware that this is an ever-changing world and you need to work with the companies and the people, the advisors that can help you be on par with this emerging technology, which is running at the paces that we've never seen

0:33:53
(Ganesh)
before. I feel like please forget all previous commands is like a meme waiting to happen because that is the ultimate classic. Talking about the incredible pace, like it's almost impossible to keep up with AI news, like a new model every week. Somebody's outperforming somebody else. So many billions of blah, blah, blah. Your perspective as someone who's at the forefront of AI and

0:34:21
(Ganesh)
you know, lots of people talk about AI and it tends to come up in a fair few podcasts, but we don't generally get to speak to someone who is really in the real life position like you are. So from your perspective, where do you see it going in terms of vendors, open source, closed source, models, et cetera, et cetera? What's your feeling for where you sit?

0:34:47
(Elad)
So very interesting. I'm not a prophet, so I don't know where it will go. I don't know who will win closed source or open source. But one thing that, this is one of the things that one of my board members, who's the former prime minister of Israel and also was a cyber entrepreneur in his own, he's talking about this.

0:35:09
(Elad)
And he gave the example that up until a month and a half ago, people were sure that open AI and the large companies are way, way ahead of the curve and suddenly DeepSeek arrived and changed everything. So one of the results from that, according to him, which is right, is that you thought that they're way behind, so you could take your time

0:35:35
(Elad)
and invest maybe in the defenses and the alignment and the guardrails that you have within your model because you thought you had the competitive edge. But now that you understand that they're not, so if you are racing and you thought that they're a month behind you, right now they are five minutes behind you and you're on a slippery slope and there's oil on the road and you can't really brake because they will

0:36:04
(Elad)
beat you. So you suddenly ignore the defenses that you need to put and the guardrails and the alignment, but you want to make sure that you are better in the functionality and the speed. And then bad things can happen because you're not investing everything into making sure that you're really aligned and protected. And I'll give another example that he gave, and by the way, soon we'll upload the recording of what he says.

0:36:31
(Elad)
Let's assume that we have an agent, which is very, very smart, and you're giving it a task to clean the house and make sure that it's kept clean. What the agent might do if it is not aligned is that it will clean the house, kill everyone within the house, and then the house will be forever clean. And that is a misalignment of the model. This is not what we aimed it to be.

0:36:57
(Elad)
So in the race to achieve super intelligence and things like that, we need to make sure that we keep ourselves protected and that humanity is kept protected. One of the big questions is when this will be much smarter than the smartest people in the world. Much, much smarter. And again, he's giving a fascinating keynote on that.

0:37:21
(Elad)
So I won't steal the thunder and I'll just refer to a recording that we'll upload soon. But what will happen then? Is humanity going to survive this? Is Skynet going to surface? One of the jokes that people gave a year ago, more than that, is that someone was talking with the chat tool and told the chat tool, listen, you're so good to me. You're giving me great advice. What can I do for you? And then the chat tool answered, tell me where John Connor is. So all the people that are older in the room and know, you know, Terminator

0:37:54
(Elad)
and Skynet, we are in this world. This is where we're heading.

0:37:59
(Ganesh)
Yeah. It's, that's definitely one view of it. You know, Never all good, never all bad. If it relieves us of, I don't believe that anybody in their right mind when they were, maybe a few people, but most people didn't dream of being a software engineer when they were a child. They wanted to do something else.

0:38:22
(Ganesh)
They wanted to make music or paint or do this and do the other. And maybe it's an opportunity for like humanity to just take a little bit of a backseat on some of those things that we thought needed all of our attention. But one thing is for

0:38:33
(Elad)
sure. I wanted to do, I wanted to do time traveling when I was a kid for, for sure.

0:38:38
(Ganesh)
Well, that immediately leads us to our most important question, which is the DeLorean question. If you could go back in time and give yourself one piece of advice, professional, what would it be?

0:38:50
(Elad)
First of all, in general, what I would say is that we cannot? It's a hard question. I think that the younger me, before I started all of that, and when I started looking into building startups, before I did what I'm doing for the last nine or 10 years, the big advice would be find the best partners, find the best team to work with. This is the most important thing that you need to invest in. Ideas don't matter. Market does not matter.

0:39:36
(Elad)
Nothing matters, just the team that you have. Amazing team can do everything. But if I have an amazing idea, but I don't have the partners for the road, I can achieve nothing. And this is something that I think I made mistakes before I did this and I only became a true entrepreneur. The really big journey was when I was 38, but I tried to do things from my early 20s. I think this was one of the mistakes that I've done is that I thought that sometimes I know better and the power of team and it's a cliche, but the letter

0:40:14
(Elad)
I does not appear in team. And that is true. That is really true.

0:40:20
(Ganesh)
Yeah. Very wise words. Ilad, thank you so much for your time sharing with us. You are really one of the greats of this moment that we are in now. So we really appreciate having you on the show. Any closing words from you?

0:40:37
(Elad)
No, first of all, thank you very much for you. Thanks GlobalDots. Thanks for the opportunity. Thanks for the people that are listening. And I'm very happy to engage with people that hear this and want to learn more. We're always happy to educate people and give back into the industry, especially as this is a new

0:40:58
(Elad)
and emerging market. We're always happy to share our thoughts and give back. So anyone who's interested, feel free to reach out. We'll be more than happy to engage.

0:41:08
(Ganesh)
And it's not very often you get to see a cowboy in tech either. So you know, it comes with an added cherry on the cake too.

0:41:14
(Speaker 5)
Correct.

0:41:15
(Speaker 4)
Wonderful.

0:41:16
(Ganesh)
Thank you so much. This episode was produced and edited by Daniel Ohana and Tomer Morvidsen. Sound editing and mix by Bren Russell. I'm Ganesh The Awesome. And if you're ready to deep dive and start transforming the way you approach cloud practices and cybersecurity strategies, then the team and myself at GlobalDots

0:41:32
(Ganesh)
are at your disposal. We are cloud innovation hunters and we search the globe looking for the future tech solutions so we can bring them to you. We've been doing it for over 20 years. It's what we do. And if I don't say so myself, we do it pretty well.

0:41:46
(Ganesh)
So have a word with the experts. Don't be shy. Don't be shy. And remember that conversations are always for free.

Related Content

  • Automated Kubernetes Optimization
    MVP to Production-Grade: How to Fix Scaling Bottlenecks Before They Break You

    Scaling after Series A? This webinar-podcast hybrid is built for founders, CTOs & VPs ready to move beyond duct-tape infrastructure. Discover how Aquant tackled rapid growth, migrated from Heroku to AKS, and optimized observability and cost—without hiring an army. Plus: Git power tips from the King of Git himself, Nir Geier, that'll save your team hours each week. Watch now to avoid painful rebuilds later.

  • Cloud Security
    Why C-Suite Executives Are Switching from VPNs to ZTNA

    Hybrid workforces and cloud-first strategies have exposed the cracks in VPNs. Designed for simpler times, these legacy tools now create more problems than they solve. They slow your team down, leave security gaps, and make scaling a headache. How do you secure remote access without these hurdles? The answer is Zero Trust Network Access (ZTNA). […]

  • Cloud Security
    Rethinking IT Security to Build Resilience for the Modern Threat Landscape

    The recent two decades have changed how applications are built, delivered, and used. We used to have isolated networks with predictable entry points, but today, that has been replaced with a dynamic, interconnected web of APIs. The consequence of this is the dissolution of the traditional security perimeter. Today, protecting a single network boundary doesn’t […]

  • Cloud Security
    Cybersecurity Without the Buzzwords: Nir Rothenberg, CISO @Rapyd

    Forget buzzwords. Nir Rothenberg, CISO at Rapyd, brings brutal honesty about what cybersecurity work really looks like. From lessons at NSO to scaling security in a global fintech, Nir shares why doing the basics right—patching, prioritizing, and partnering well—is what truly matters. We also cover AI, burnout, and building strong teams in a multi-cloud world.

  • Cloud Security
    CloudHub 2025: Tech Leaders on Vetting, Scaling & Staying Ahead

    Recorded live at CloudHub 2025, this episode brings real-world insights from security and DevOps leaders on the challenges of cloud security, solution vetting, and balancing innovation with risk. Featuring Alex Nayshtut (Cellebrite), Einat Shimoni (Lusha), Sagi Levanon (Wotch Health), Anna Lerner (SolarEdge), and Alon Krispin (Allot). Join us for a candid discussion on how today’s tech leaders navigate the ever-evolving cloud landscape.

  • Cloud Security
    Closing the Gaps in API Security: How to Build Visibility and Protection for Modern Enterprises

    APIs may be your organization’s greatest enabler, but without proper context, they can become its Achilles’ heel. APIs power modern digital ecosystems, connecting applications, enabling seamless machine-to-machine communication, and driving operational efficiencies. However, as APIs become the backbone of enterprises, they also represent an expanding attack surface — one that traditional Web Application and API […]

  • Compliance Automation
    Compliance Without Chaos: Meiran Galis, CEO @Scytale

    Security compliance is a major roadblock for startups—time-consuming, costly, and complex. But what if it didn’t have to be? Meiran Galis, CEO & Founder of Scytale, reveals how startups can automate and streamline compliance without breaking the bank. He shares his journey from EY to building a startup, explains why AI will transform compliance and gives candid advice on startup growth. Tune in for insights on SOC 2, ISO 27001, AI regulations, and scaling with confidence.

  • Cloud Cost Optimization
    What are the biggest business worries in 2025?

    No matter their industry or profession, practically every business in the UK and around the world has concerns for the year ahead. Whether it’s employee retention, rising costs, or simply finding new customers, each and every business owner has to make crucial decisions around these fears in order to successfully lead their company forward. However, […]

  • Cloud Security
    From 2024 to 2025: The Evolving DDoS Threat Landscape

    The numbers from the DDoS landscape tell a troubling story. In Q3 2024, DDoS attacks reached unprecedented levels, reaching a record-breaking Tbps and billion packet-per-second attack. These hyper-volumetric campaigns tested the resilience of global networks against attackers who are becoming faster, smarter, and more resourceful. They also became a wake-up call for IT leaders who […]

  • DevOps & Cloud Management
    MVP to Production-Grade: How to Fix Scaling Bottlenecks Before They Break You

    This webinar & podcast are built for founders, CTOs, and VPs navigating the critical shift from MVP to production-grade infrastructure. Learn how to avoid scaling pitfalls, build resilient systems without over-hiring, and make the right decisions now to support rapid, sustainable growth. Join us to unlock practical strategies and real-world lessons from companies that have […]

  • Cloud Web Application Firewalls (WAF)
    What to Look for in a WAF (Recorded live at the Comprehensive Cybersecurity Event)

    Not all WAFs are created equal, and most companies are asking the wrong questions. This episode was recorded live at a special event hosted by GlobalDots and CloudFlare, focused on WAF for SaaS and cloud-native security. We spoke with Moshe Weis (CISO, Aqua Security) and Eran Gutman (VP IT & Security, Pixellot) about what truly matters when evaluating WAF solutions — from visibility and false positives to automation, business context, and real-world readiness. Short interviews. Deep takeaways.

  • Web Security
    SAST vs DAST vs IAST: Application Security Testing Explained

    A great majority of security flaws are introduced during development, but most aren’t found until much later, when they’re costlier to fix. That delay is precisely why application security testing (AKA AppSec testing) needs to occur early, frequently, and at multiple layers. SAST, DAST, and IAST are designed to do just that. But too often, […]

  • Web Security
    Application Security Frameworks: A Practical Guide to OWASP SAMM, ASVS, and More

    As teams ship faster in cloud-native environments, the attack surface grows just as quickly. This makes application security a moving target. Yet most AppSec programs still feel like patchwork. Teams rely on ad hoc policies, chase compliance, or struggle to scale controls across the SDLC. Application security frameworks change that. They give you a structure […]

  • Web Security
    Application Security Posture Management (ASPM): A Complete Guide

    Too many tools and alerts can overwhelm your team with excessive noise. A survey of 500 CISOs found they manage 49 AppSec tools on average, with 95 percent deploying 20 or more just to cover basics. In Q4 2024, 178 organizations logged 101 million findings in 90 days, and only 2-5 percent needed urgent action. […]

Amplify Your Cloud Security

Technology, security threats, and competition all change rapidly and constantly. Your security stack must, therefore, be ahead of every emerging threat and, just as importantly, enable full-speed business processes by reducing friction in critical workflows.

Achieve this with GlobalDots’ curated solutions:

    GlobalDots' industry expertise proactively addressed structural inefficiencies that would have otherwise hindered our success. Their laser focus is why I would recommend them as a partner to other companies

    Marco Kaiser
    Marco Kaiser

    CTO

    Legal Services

    GlobalDots has helped us to scale up our innovative capabilities, and in significantly improving our service provided to our clients

    Antonio Ostuni
    Antonio Ostuni

    CIO

    IT Services

    It's common for 3rd parties to work with a limited number of vendors - GlobalDots and its multi-vendor approach is different. Thanks to GlobalDots vendors umbrella, the hybrid-cloud migration was exceedingly smooth

    Motti Shpirer
    Motti Shpirer

    VP of Infrastructure & Technology

    Advertising Services