Episode 129: AI Agents for Risk & Compliance with Dror Asaf, Kavant

Co-Host

Aytekin Tank

Founder & CEO, Jotform

Co-Host

Demetri Panici

Founder, Rise Productive

About the Episode

In this episode of the AI Agents Podcast, host Demetri Panici sits down with Dror Asaf, co-founder and CTO of Kavant, to talk about what it actually takes to bring agentic AI into enterprise operations. They get into why supply chain and operations are still surprisingly archaic, why trust and security are some of the biggest blockers to adoption, and how enterprise teams can use AI to remove bottlenecks without removing human decision-making. Dror also shares how Kavant is approaching enterprise AI differently, from on-prem deployments and Microsoft Teams integrations to strong governance and compliance layers. They also dig into the future of jobs, why AI will likely reshape work more than eliminate it, and why the people who know how to leverage these tools will have a massive advantage going forward.

It's similar to the industrial revolution. Some people lost their jobs during that period, but far more jobs were created, and people with different skill sets came in. A factory worker did not necessarily need the same tool set as a professional carpenter, because he needed to do one thing and one thing alone, whether that was screwing something in or repairing something. It's just a different skill set.

This is what we see here. People that are very talented and know how to leverage AI tools are going to be the ones that dominate the market.

Hi, my name is Demetri Panici and I'm a content creator, agency owner, and AI enthusiast. You're listening to the AI Agents Podcast, brought to you by Jotform and featuring our very own CEO and founder, Aytekin Tank. This is the show where artificial intelligence meets innovation, productivity, and the tools shaping the future of work. Enjoy the show.

Hello and welcome back to another episode of the AI Agents Podcast. In this episode, we have Dror Asaf. He is the co-founder and CTO of Kavant. How are you doing today, Dror?

I'm great. How are you doing?

I'm doing awesome. Very excited to have you on the show. I appreciate you for making the time. So, just to kind of kick things off, what drew you to building AI solutions and specifically what made you decide to focus on agentic AI for enterprise operations?

First, thank you for having me. I think it's great to share information and knowledge and this entire field is changing by the minute as we are talking. There's probably 10 new tools that have been published that we need to check out.

My journey into AI started before it was called AI and it was known as ML, the less sexy brother of AI. I've been in the industry for over 15 years, mostly working with data processing and machine learning. The next leap is to AI. I've seen how crazy it can actually be, meaning even with machine learning we were able to save people's lives and improve people's lives in my previous jobs. Agentic AI is just the next leap.

For me, I definitely see it as something that is going to continuously evolve like any other technology. What led me is that I believe the future is here and the future is very interesting and I want to be part of it.

I don't know how much you've seen of the agentic web, or what's called Project NANDA, but there is a whole group at MIT focusing specifically on how the web and agents are going to work and collaborate together.

I really love hearing the different opinions and I agree that exchanging information is really important. I'm really excited to have you on. You started this company in the beginning of last year, is that correct?

Correct. Yes.

Okay, cool. Around March-ish, I see. What originally drove you to it?

Well, first of all, you got pre-seed funding which is awesome, if I'm not wrong.

Yeah. We just announced it recently. It actually happened over the summer, but we were so busy that we almost forgot to announce it. So we announced it only very recently.

Could you tell us a little bit more about the story of how you got founded? You and your co-founder, did you know each other for a while or what was that relationship like and how you went about acquiring that funding and getting yourself kicked off strong with what you're doing?

My co-founders and I go back a while. Our CEO actually worked with me during my time at Spotify; that's the connection. Then we parted ways. I went into medtech, and he went into his own company, where he met the third co-founder. That's how we all got introduced.

While I was building medtech, they were helping enterprises with data problems, because that was our previous domain. During this time, they encountered many issues specifically in the supply chain domain. They found that operations in the supply chain are archaic compared to how much engineering is applied to every other part of an enterprise; that part is usually engineering-free, meaning nobody is trying to automate it, which leaves it in a very archaic state.

That's a little bit on the domain that we picked and how we ended up with this domain and how I met my co-founders. I met Ali during my time at Spotify. We kept in touch. It's kind of funny because the ecosystem is so small that I also have friends that used to work with him.

Very cool. What was it like to go about getting your pre-seed funding?

Since we already had a few customers, it was easy to show that there was more to it than just a pre-seed idea and a great team.

It was a journey of knocking on many people's doors and trying to persuade them that our approach is the right one, and why us and not others. It helped that together we have over 50 years of experience, but even that did not persuade everyone. Some people in meetings said they didn't know what AI is or didn't believe in AI at all, while others were jumping up and down saying this is right, this is how things should be done. There is an entire spectrum of VCs out there, and who you talk to first really matters for whether you get through the next hoop.

It's always important to find the right partner, not only as a founder but also in whoever provides the pre-seed funding. Was that your first experience doing something like that?

As a founder, yes. But I've been twice before a founding engineer, so I've been part of this earlier. I've been in two seed fundings and two A rounds before.

It's super critical who you pick as a founder and who you pick to be the VC. It's a bidirectional lane, meaning yes, they need to pick you but you also need to pick them because usually there's more than one who is interested.

That makes sense. As the year has progressed, you have that first year, you start things up. How has your company evolved in its short period of time? How has the product improved? What are some key milestones that you remember?

A key milestone is always the first time that you deploy in production, especially an agent that runs on its own and people are using it. That's definitely a milestone to celebrate. The biggest change I've seen is actually building the team, the interaction, and the fact that there is a team that believes and has alignment with the founders. Finding people is hard, finding the select few who actually believe and can contribute so early on is even harder.

I spend most of my time building the team. Some milestones are the first deployment, the first bug or crash that happened with it. Another milestone is obviously the funding, but I'm ignoring that one for a second. From a product perspective, the moment I realized we are already at version five compared to when we started is insane to see it becoming a reality.

It's like a plant; you water it, give it nutrients, but you don't know what's going to become of it. You give it the foundation it needs, but especially with agents, you give them the territory. What's going to happen? How is it going to do it? That's a whole different discussion.

Early hires are similar to the concerns people have with agents. You need to make sure they really align with what you're doing and that it's the right group of people.

Pivoting to what you are doing with agents, I want to learn a little bit more about that in depth. Tell us a little bit about how Kavant works. You position yourselves as orchestrating an intelligent workforce of agentic workers that transform operations from constraint-bound to infinitely scalable. Can you break down what that means in practical terms?

Let's break down the jargon and buzzwords a bit. In the modern enterprise, the bottleneck or constraint is often the human workforce, because across different enterprises and suppliers in the supply chain as a whole, there is a human in the loop. It's not just one human; in some cases, it's dozens or hundreds of humans whose entire job is chasing down their counterpart at a different company.

Big manufacturing companies have ERP systems that work for their internal inventory, but in many cases, it doesn't work for external suppliers. Sometimes yes, but mostly no because they don't integrate. Suppliers are often too small or midsize and don't use it or there's incompatibility.

The bottom line is there are many suppliers and many humans in the loop. Wherever there is a human in the loop, we can automate the work up to the point where an actual decision is needed. The decision stays within the boundaries of humans. We're removing the burden from humans so they can focus on decision making and the things that matter rather than the boring stuff.

For instance, many people chase Excel spreadsheets moving them from one place to another. There's no real need to do it. We can automate this, but the decision on whether to trust a supplier is something an agent cannot do; we need a human for that.

That's a little bit about the capacity because that's unlocking or removing the bottlenecks currently with the supply chain.

What have you learned through how the market has changed, how tech has changed, and how you've applied that to your current product?

The main learning I had was to unlearn most of what I knew. I came in assuming the technology was already well established, in its initial generations but established enough to do far more than it actually can. In practice, it's not ready for prime time. I thought only smaller things would be feasible, but the more I challenge it, the more I see it's capable of doing a lot and is very sophisticated; the rough edges are where the pain points are.

The happy path is very good and works well, but the guardrails are where it fails miserably, especially when you architect things incorrectly and place the guardrails inside the boundaries of the agent. It's like giving the cat the cream: the cat will eat the cream whether you like it or not, as long as it can reach it.

If the agent can abuse an API, they will even if you tell them not to. We've seen it with Replit and the deleted database. It's a very interesting world where we believe in things and on a daily basis get our hypotheses nullified or proved right. It's moving so fast that things I believed I was able to do earlier are no longer true and vice versa.

For instance, I believe every engineer or engineering team in the world right now is using coding agents because they are so useful, right? They're useful until they're not, until they miss some context or the scope is too big or something is not working right. It's similar with other contexts.

The coding part is very bound and they've done a great job training the LLMs specifically on data from Stack Overflow and others, which makes it very suitable for this solution. The more we retrain the data on the relevant domain, the better it gets. The hard part is to find the real data to train on that is relevant for the task at hand.

How have things improved on finding real data to train? Is that something proprietary? How has the world of guardrails changed?

There are ways to reduce hallucinations but no way to remove them completely. It's similar to machine learning. If someone tells you their model is 100% accurate, they overfitted it. Nothing is perfect in life, especially when handling real data with noise. You would never have 100% accuracy. Similarly, hallucinations always exist. The question is how bad it is and whether you care about it or create a system to validate or cross-reference.

We've seen the LLM-as-a-judge approach, where people use multiple LLMs to evaluate an output. It has downsides in token usage, but it gives stronger assurance. It's majority voting, a democratic way across LLMs to tell whether an output is a hallucination or not. This is very popular now, but there are other methods to detect hallucinations.
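The majority-voting idea described above can be sketched in a few lines. This is a minimal illustration, not Kavant's implementation: the judge functions here are hypothetical stubs standing in for calls to independent LLM judges, each of which would return True if the output looks grounded in the source context.

```python
# Minimal sketch of majority voting across independent "judges".
# In a real system each judge would be a separate LLM call; here we use
# simple heuristic stubs so the example is self-contained and runnable.

from collections import Counter
from typing import Callable, List

# A judge maps (source_context, model_output) -> "looks grounded?"
Judge = Callable[[str, str], bool]

def majority_vote(context: str, output: str, judges: List[Judge]) -> bool:
    """Accept the output only when a strict majority of judges accepts it."""
    votes = Counter(judge(context, output) for judge in judges)
    return votes[True] > len(judges) / 2

# Hypothetical stub judges, each checking one weak signal:
def judge_overlap(context: str, output: str) -> bool:
    # Accept if at least half of the output's words appear in the context.
    ctx = set(context.lower().split())
    words = output.lower().split()
    return sum(w in ctx for w in words) >= len(words) / 2

def judge_length(context: str, output: str) -> bool:
    # Reject empty or runaway outputs relative to the context size.
    return 0 < len(output) < 2 * max(len(context), 1)

def judge_numbers(context: str, output: str) -> bool:
    # Reject numbers that never appear in the source (a common hallucination).
    ctx_tokens = set(context.split())
    return all(tok in ctx_tokens for tok in output.split() if tok.isdigit())

judges = [judge_overlap, judge_length, judge_numbers]
print(majority_vote("supplier shipped 40 units", "supplier shipped 40 units", judges))   # True
print(majority_vote("supplier shipped 40 units", "shipped 900 units 77 late", judges))   # False
```

The design point is the one from the conversation: the vote happens outside the agent, so a single model cannot overrule its own guardrail, at the cost of extra tokens per check.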

You should always have an external system as a guardrail, a fail-safe, especially for compliance. You want it bulletproof and not take chances that something might happen by accident.

Security is key. We are ISO 27001 certified, as it's crucial for our customers that we uphold the relevant certifications and security posture, because they are a main target for attacks and campaigns daily. Vendors must be vigilant.

It's hard to make people comfortable with AI because of the massive amount of data that runs through systems like yours. That's why certifications exist, and security like that is great. Some backend model providers have arrangements, for example in healthcare, where they don't train on interaction data. It's more of an offline, transactional setup that can't link back to the original servers.

Our solution deploys specifically on the perimeter of our customer. Other providers offer complete SaaS models, but we provide a way to deploy completely on the perimeter without data leaving. We work with customers who whitelist their own LLM providers based on relationships and use those.

What is the key differentiator between you and others attempting similar things? What's your secret sauce?

The top three distinguished things: first, data security completely on the customer's perimeter. Second, our customers lean heavily into the Microsoft stack, especially Microsoft Teams. They don't want another interface; they want to communicate with the agent via Teams. We have native integration with Teams, which is rare. Third, we have a super strong team from Google, AWS, OpenAI, Stern, and others who have felt the pain points themselves and are solving them for others.

Microsoft has always been hard to integrate with, but many enterprises really use and love the product. Many solutions have not been able to integrate with such a widely used tool suite.

You can do many things with the Microsoft suite. Historically, my first job was working for a company selling USB drivers and Microsoft Windows drivers, which feels like a different world now.

Hopefully, in the future, Kavant grows and continues to improve. What's the ultimate vision for Kavant? How do you want to reshape enterprise long term?

We want to shift enterprise from focusing on processes to focusing on the end-to-end goal they want to achieve. This is a mind shift requiring education. People currently think about moving from A to B to C, not realizing they created this process artificially to ensure their supply chain works. Instead, they should think about optimizing the supply chain rather than the process rigidity. We want to change enterprise to avoid rigidity and optimize end-to-end solutions to achieve KPIs.

What is holding you back from doing this immediately? Is it tech or time?

Change management is always painful. Educating and building trust in new technology, especially in fragmented enterprises, is hard. Integrations are key but difficult inside enterprises because you have many people to talk to for approvals. Adoption is difficult not only from tech-savvy people but at organizational levels. You can make inroads with individuals but if the organization never adopts it, trust issues remain. There are many blockers.

Education is key to help people understand. If people can't create or create half-baked agents due to permissions, there's low trust in agents. Enterprises have low trust in external solutions. This creates trust issues more significant than outside perceptions. Many people use agents as helpers interactively, not understanding they can perform background tasks. This co-worker model is a game changer.

People are concerned about security but haven't seen deployments rejected if proper risk analysis is done. As long as people understand risks, they're willing to take the bet.

Once people understand risks, capabilities, and possible success, they are willing to adopt. It's new technology and takes time to find unknown unknowns. It's similar to any technology cycle. People bet on their horse and are willing to adopt.

You've worked with companies already. Can you share an example of how you've helped integrate your systems? How do you think jobs in enterprise will adjust? Will there be fewer jobs or will jobs be more effective?

Jobs are going to change. Instead of focusing on doing, people will focus on decision making. Using the industrial revolution analogy, before it, a carpenter produced one sandal a day; after, a factory worker produced ten. Here, the focus is on quality, not quantity. Instead of doing actual work, people will focus on organizing or automating the workforce to reach goals and focus on tasks they can't do or are unqualified for, such as compliance or lack of context.

I think demand will increase for people who know how to use the tools. Like the industrial revolution, some jobs were lost but many more were created with different skill sets. Factory workers didn't have the same tools as carpenters because they did one thing alone. People talented in leveraging AI tools will dominate the market moving forward.

A good example is the ATM. After it came out, bank tellers increased by 20-30% because ATMs decentralized cash flow, allowing more banks and thus more tellers. We don't know how this will shake out. I think jobs will be deeper and more solo entrepreneurs or small teams will do the work of large companies. I don't think AI Armageddon will happen because people will find ways to work. I'm optimistic.

I'm with you. Talented people will find creative ways to express themselves regardless of the situation. News often gives excuses for layoffs, but historically, especially in software, layoffs happen regularly. It's a wave that comes and goes with different reasons.

To close, what's your favorite AI tool you use every day to save time that's not your own company?

I use Claude and Claude Code every day. I have multiple repositories and run different agents on each repository and branch in parallel. I have my own crew of small agents running and improving all the time.

Claude Code is amazing. Did you see Co-work got released? It's a new tool for recurring tasks or large projects. You give it a directive and it works on it for a long time until done. We've gone from chatting with AI to working with AI that interacts and manages tasks. I'm downloading many desktop MCPs for managing tools like Google Ads and analytics to optimize content and workflows. Claude, Claude Code, and Claude Co-work are incredible.

Sometimes I use ChatGPT and Nano Banana. ChatGPT gives better results for me on deep research, and Nano Banana gives better results for images. I pick different tools for specific tasks, but day-to-day I'm with you on Claude.

There is an obvious difference in training that impacts how LLMs help us.

I appreciate you making the time. It's been a pleasure. Thank you for coming. Where can people find you?

It's been a pleasure. You can find us at kavant.com. Feel free to reach out via info@kavant.com.

That's kavant.com. Make sure to check it out. Thank you everyone for watching and listening. Please hit like, subscribe, and leave a review on the podcast, and check out all the cool stuff Dror is doing at Kavant. We appreciate all the founders we interview. Thanks for watching and we'll see you in the next one. Peace.

Stay Ahead with the AI Agents Podcast

Get the latest insights on AI agents, their future, and developments in the AI form industry.