This AI Company Catches Fraud at Scale
Karine announces that Variance is coming out of stealth with a $21 million Series A and discusses what the company is building and why the new funding matters.
Variance launches from stealth with a $21M Series A, proving AI agents can scale risk, fraud, and KYB/KYC automation for Fortune 500s and GoFundMe alike.
Summary
Karine, co-founder of Variance, shares the path from stealth to a thriving, data-driven fraud-fighting platform. The interview outlines how Variance builds purpose-built AI agents to automate risk and compliance tasks at scale for large marketplaces and payment platforms, including GoFundMe and Fortune 50 companies. Karine explains the core building blocks of the system: compliance documents, standard operating procedures, and AI agents that reason over both internal data and hundreds of external registries. A standout detail is the ability to automate complex KYB/KYC workflows and to pull data from unstructured sources and even legacy UIs via browser-based access. The conversation also dives into multi-domain challenges: data integration, open-web signals, and the shift from rule-based and human-in-the-loop systems to fully autonomous agents that can triage and resolve cases. The founders recount a pivotal customer win with IAC (a public company) and a brutal, almost cinematic, personal obstacle: Karine's serious accident and the near-term company upheaval it caused, which they navigated with a relentless focus on scale and ownership. The result is a lean team of five engineers delivering production-level automation across billions of data points, with employees acting like small AI-led squads. Overall, Variance is presented as a high-ownership, product-focused environment that aims to redefine how fraud and compliance systems evolve in real time through agentic systems.
Key Takeaways
- Variance is publicly launching with a $21 million Series A as it comes out of stealth, signaling strong market demand for AI-powered risk and compliance automation.
- AI agents for risk, compliance, and content review handle complex KYB/KYC and fraud investigations at scale using documented SOPs, internal data, and hundreds of external registries.
- The platform can pull and reason over data from scattered systems, even scraping data from legacy UIs when needed to form a complete risk graph.
- GoFundMe uses Variance to verify fundraisers and detect misuse, such as crisis-related fraud or fundraisers tied to sanctioned entities, before funds are released.
- Variance emphasizes a self-healing, end-to-end decisioning loop where AI agents replace multiple traditional nodes (rules, classifiers, humans) for faster, more consistent decisions.
- The team maintains a lean structure (five engineers) but outputs at the level of a much larger team by leveraging AI agents as autonomous managers of micro-workflows.
- Customer-facing success is streamlined by agent-driven feature deployment, with a non-technical success manager able to ship features directly via Cursor-like agents.
Who Is This For?
Founders and engineers in fintech, fraud, and compliance who want to understand how AI agents can automate complex risk workflows at scale. This is essential viewing for teams evaluating AI-driven risk platforms and for YC alumni tracking real-world growth curves.
Notable Quotes
""Variance is building purpose-built AI agents for risk and compliance. We automate content review, fraud reviews, identity reviews at scale.""
—Core value proposition introduced by Karine.
""The AI agents are going to be using all of that context and use the GoFundMe terms of services to basically decide whether or not that should be allowed on the platform.""
—Illustrates how agents apply policy with multi-source signals.
""We were able to detect really complex fraud rings of especially state sponsored actors that were pushing one narrative over the…""
—Shows impact of agentic systems on sophisticated fraud networks.
""We’re hiring across the board. We’re hiring for back end. We’re hiring as well for front end… because 1% of cases still need a human, and you need a great dashboard to handle them.""
—Hiring and product focus reveal the last-mile needs.
""This strong sense of duty … kept Michael and I going throughout the years and resonates deeply with customers.""
—Founders’ motivation and trust signals with customers.
Questions This Video Answers
- How do AI agents replace traditional fraud detection pipelines built on rules and human review?
- What makes KYB/KYC automation scalable across global registries and open web data?
- What was the first major enterprise win for Variance and why did it work?
- How can you implement an end-to-end automated risk workflow without losing human oversight?
- What challenges come with scraping data from legacy UIs for fraud analytics?
Variance AI, GoFundMe, KYB, KYC, Fraud Detection, Risk and Compliance Automation, AI Agents, Open Web Data, Data Unification, MLOps in Security
Full Transcript
So, I'm particularly excited to do this Founder Fireside, because we have a first for the Founder Fireside series: we are doing a big announcement on this podcast. We are announcing that Variance is coming out of stealth and announcing their $21 million Series A. And I'm really excited to be here with Karine, the co-founder of Variance. Thank you so much for joining us, Karine. Thank you. So, this is a big day. You're coming out of stealth after building basically in stealth for the past three years, and you've built Variance quietly into this big company that powers a lot of the products that everybody knows and uses in their regular life.
Can you tell us about what Variance is and about the $21 million Series A you're announcing? So, Variance is building purpose-built AI agents for risk and compliance. We automate content review, fraud reviews, identity reviews at scale. We're powering some of the largest companies in the world: Fortune 500s, marketplaces. We've been working with GoFundMe, for instance, to review all of their fundraisers at scale, and a Fortune 50 for verifying all of their sellers and complex UBO verifications. And today you're coming out of stealth. You guys built in stealth for a long time, like three years, and you have a lot of customers that you can't even name on this podcast.
Why all the secrecy around this? Variance usually deals with really sensitive data and sensitive issues. And the phrase I like to use is that we're building the systems that are often used by the bad guys, but we're building them for the good guys. So oftentimes it's really hard to market the use cases that customers are using Variance for, because those issues are so sensitive. And if we were to market those, then it may create more fraud. It may create more abuse. It may create more bad… Like, your customers are stuck in this constant cat-and-mouse game with the bad guys, and you're their secret weapon.
They don't want anyone to know what their secret weapon is. Exactly. We're in the shadows. I think it's good. It's really impactful for our employees working at the company. You get to see that when you work at Variance, but I think even far beyond the Series A, we'll always be a company that's a little bit more in the shadows. And I think that's okay for us. What's cool about Variance is that even though probably no one who's watching this video has ever actually used the product themselves, they have used many products that under the hood are actually powered by what you've built.
And so indirectly they're actually using stuff built on top of Variance, like a lot of software-infrastructure kinds of products. Could you maybe tell people about a specific product that they've used, or have probably used, that's powered by Variance, and how it works? One of our customers, GoFundMe, is a platform that you can use to build your own fundraisers. And GoFundMe is effectively a payments platform. GoFundMe actually has some very strict compliance requirements, because they are liable if they facilitate payments to, let's say, an organization they shouldn't have.
So Variance is used to verify, for instance, if a fundraiser is being built for a military operation, or if you're building a fundraiser for a crisis that you were not part of, which would be fraudulent. GoFundMe has the responsibility to be able to detect this. They're using AI agents, Variance AI agents, to conduct those investigations: to make sure that the money is going to the person they say they are, and also to make sure that the money is not going to be funneled to sanctioned countries, for instance, or to anything that could be a compliance risk for them.
You were telling me earlier about one specific, very concrete form of fraud. What does typical fraud on GoFundMe look like? Yes. So GoFundMe is actually very crisis-driven. If there is any sort of large event or natural disaster, there's usually going to be a spike in fundraisers on GoFundMe. It's really hard for the team to keep up with how many of these fundraisers are actually real fundraisers for that event versus possibly fraudulent. One example that we saw recently was the murder of Charlie Kirk.
There was a spike of fundraisers for the family of Charlie Kirk. How do you know who's actually related to Charlie Kirk and who's just trying to raise money for themselves? Okay, so Charlie Kirk dies. Yes. And then what happens is a hundred people host a fundraiser on GoFundMe claiming to be a family member of Charlie Kirk. Exactly. And it's mostly just total fraudsters who are hoping that some people donate to their Charlie Kirk fundraiser instead of the real Charlie Kirk family fundraiser. And you're in charge of figuring out which one is the real Charlie Kirk family and who are the fraudsters.
Exactly. And there's a lot of behavioral signals you can use to do so. You have information on the identity, you have information on what that account has done in the past. You also have information at the fundraiser level: the image, the bio, and such. So, the Variance AI agents are going to be using all of that context and use the GoFundMe terms of service to basically decide whether or not that should be allowed on the platform. That work used to be done by human analysts and now can be fully automated in a much more consistent manner.
Every person who signs up to do a GoFundMe fundraiser doesn't realize it, but their request is actually being validated by Variance's software before it's allowed to go live. Yes, exactly. What are other products that people use that are also powered by Variance without them even realizing it? So, the scale has been impressive, as has the number of use cases our agents can be used for. We have been running complex identity reviews for marketplaces and gig economy platforms. So when you sign up to be, for instance, a delivery driver, your identity needs to be verified based on selfies and based on your driver's license.
All of that data is then going to be reasoned on by our AI agents, and it's also going to be validated against the company's standard operating procedures. That's a good example of how Variance is used. We're also used for complex what we call KYB verifications. So if you sign up to do any sort of business online, whether it's with a marketplace or with a financial institution, they have the compliance requirement to verify that you are actually linked to the business you say you own. So a good example: I sign up and say I'm going to be doing business with Variance.
Well, the legal name of my company is Decoy Technologies, and it is tied to Karine Mellata. That's a simple example, but building that graph at scale is really hard. And oftentimes you're going to see a company have multiple shell companies tied to multiple different agents and different other identities. And within that really large graph, you expand the area of risk for the company, where one of these nodes could be in a sanctioned country, or one of these nodes could have adverse media on them and possibly have been to court for money laundering. Companies are required to conduct these investigations at scale, and at the moment it's entirely manual.
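The KYB graph expansion described here boils down to a reachability problem: walk outward from the onboarding entity and flag any connected node carrying a risk signal. A minimal sketch, assuming a toy in-memory graph (`edges`, `risky`) rather than real registry data:

```python
from collections import deque

def expand_risk_graph(edges, start, risky):
    """Walk the ownership graph outward from a business entity and
    collect every connected node that carries a risk signal
    (e.g. sanctioned jurisdiction, adverse media)."""
    seen, hits = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node in risky:
            hits.append(node)
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return hits

# Toy graph: a company tied to a shell company tied to a sanctioned UBO.
edges = {
    "Decoy Technologies": ["Shell Co A", "Owner Y"],
    "Shell Co A": ["UBO X"],
}
risky = {"UBO X": "sanctioned jurisdiction"}
print(expand_risk_graph(edges, "Decoy Technologies", risky))  # ['UBO X']
```

The hard part at production scale is of course building `edges` and `risky` in the first place, from registries, open-web signals, and adverse media; the traversal itself stays this simple.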
Do you license data from a bunch of different sources? Do you scrape all these government websites? Yes, but to take a step back, I think it's really elegant how AI agents are built at the moment. There are really only three building blocks that you need. You have the compliance documents, the standard operating procedures: what the company deems necessary to verify at the onboarding level or at any other part of the life cycle of that entity. Once we have those compliance documents, then the AI agent can do its work using tools that we built and data, internal or external.
Those are the only building blocks you need to automate complex KYC, complex KYB, complex content review. The question you're asking about data is interesting. Usually, it's going to be a split between internal customer data, so Variance is really good at connecting to all data sources and pulling unstructured data into our own data stores, and external data. We have access to hundreds of business registries across the world, which makes us international, and then our agents also have access to the open web. A lot of unstructured data is found directly on the web.
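Those three building blocks (compliance documents/SOPs, tools, and internal plus external data) can be pictured structurally like this. The sketch below uses a deterministic stub in place of the LLM reasoning step, and the tool names (`registry_match`, `sanctions_screen`) are made up for illustration:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentCase:
    sop: list[str]        # standard operating procedure steps to verify
    internal: dict        # customer-provided entity data
    tools: dict[str, Callable] = field(default_factory=dict)  # external lookups

def run_checks(case: AgentCase) -> dict:
    """Stub for the agent loop: walk the SOP, calling one tool per step.
    A real system would have an LLM choose tools and reason over results."""
    findings = {}
    for step in case.sop:
        tool = case.tools.get(step)
        if tool:
            findings[step] = tool(case.internal)
    return findings

# Hypothetical tools standing in for registry and sanctions lookups.
tools = {
    "registry_match": lambda e: e["legal_name"] == e["claimed_name"],
    "sanctions_screen": lambda e: e["country"] not in {"XX"},
}
case = AgentCase(
    sop=["registry_match", "sanctions_screen"],
    internal={"legal_name": "Decoy Technologies",
              "claimed_name": "Decoy Technologies", "country": "US"},
    tools=tools,
)
print(run_checks(case))  # {'registry_match': True, 'sanctions_screen': True}
```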
I actually think, interestingly enough, that access to the web was one of the final nodes that made this whole problem really hard to automate. Yes. Because a big part of what the human analysts would do is Google for names, look at what comes up, and apply some judgment: does this seem fishy or not? Yeah. When you think of a large graph of abuse, if one of these nodes, one of these intelligence signals, is found on the open web, then without having a human agent Google on the web, you can't even trace back the whole graph of abuse.
What are some other interesting technical challenges that you guys have encountered and had to solve in the course of making this work at scale? The data problem was really the core, hardest technical challenge. Usually with customers, we'll pull in petabytes of data coming from vastly different sources, and all of that data is unstructured. It doesn't have a schema. What do you mean, unstructured? Don't you just pull it right out of their relational database? I wish it was that simple, but I'll give you a very concrete example.
So for instance, if you need to verify a fundraiser, I'll pick the fundraiser example. Okay, usually the data is going to be scattered, starting with the user identity data. You need to have information on the user. You need to have information on all of their login behaviors, the devices that they've had, the PII that they've onboarded at the beginning. You need to also be able to pull in information about the business. And then you need to have all of the information on the fundraiser itself and all the history of that fundraiser. What has been hard is that, whether you're a financial institution or a marketplace, that data is often going to be scattered across five to ten different systems.
It's going to be in different data stores. And one thing that's been really interesting is that sometimes that data is going to be hidden behind a UI. So the only way that the Variance AI agents are able to scoop up that data and reason over it is to directly scrape it from a UI that was built for a human. So the data piece, being able to scoop up all of that data and bring it to Variance, was one of the hardest technical challenges. It's really an onboarding one. And so do you do that?
Do you have agents that interact with the customers' internal dashboards and pull data out of them? Yes. At the moment, the ways to integrate with Variance can be reverse ETL or API, but very recently we've added this third way, which is spinning up a browser, opening up a really old review tool that was built for a human, pulling that data, and then reasoning over it. I guess it makes sense, because the work was being done by a human before, and so the human can use the dashboard while the agent can use the same dashboard.
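The last step of that browser-based path, turning a page built for a human back into records, might look like this toy example. It assumes the legacy review tool renders cases as a plain HTML table; the actual browser automation, navigation, and auth are omitted:

```python
from html.parser import HTMLParser

class CaseTableParser(HTMLParser):
    """Collect the text of each <td> cell, grouped by <tr> row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False
        elif tag == "tr" and self._row:
            self.rows.append(self._row)

    def handle_data(self, data):
        if self._in_td and data.strip():
            self._row.append(data.strip())

# Hypothetical markup from a legacy review tool.
legacy_page = """
<table>
  <tr><td>case-101</td><td>fundraiser</td><td>pending</td></tr>
  <tr><td>case-102</td><td>seller KYB</td><td>escalated</td></tr>
</table>
"""
parser = CaseTableParser()
parser.feed(legacy_page)
print(parser.rows)
# [['case-101', 'fundraiser', 'pending'], ['case-102', 'seller KYB', 'escalated']]
```

Once the rows are structured records, they can flow into the same data stores and agent reasoning as the API- and ETL-sourced data.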
Yes, exactly. Right. You worked on related systems at Apple, pre-LLMs, and now you're building this deeply agentic system. Can you talk about how the technology has evolved and why it's possible to solve this problem now? Yeah, definitely. So, I think it's important to understand how these problems were solved at scale before. Usually, companies are going to have a patchwork of different, what I call deterministic, systems. They're going to have rules that say, "Well, if this transaction is over $1,000, then do this." They're going to have classifiers that are really good at detecting one specific flavor of abuse.
Then they're going to have humans. And humans are really good at understanding, in context, all of these different nodes and signals and then making a final decision. Mhm. And I think when you think of a fraud system, the most important feature is that it needs to evolve really rapidly, and you need to have a really tight feedback loop. But when you think of the three nodes I've mentioned, rules engines, classifiers, and humans, with humans being really slow oftentimes and a little bit inconsistent, that feedback loop can only be so fast.
And we never really felt like you could achieve a self-healing system that could thrive in a dynamic environment. And I think fraud is the most dynamic environment, because you always have adversaries. So Michael and I were always really, really stubborn about making the system have no nodes that were inefficient, if that makes sense. And now you have AI agents that are able to close the loop from a resilience and self-healing standpoint. They're able to materialize any features that a rules engine would be able to materialize. You don't need a classifier anymore, because AI agents are able to read a set of standard operating procedures and reason over an image, or reason over unstructured data, and know that this is possibly chargeback fraud.
You don't need a specialized classifier for it. And you don't need human reasoning anymore. So you have this fully self-healing system, which at scale is really transformative and allows companies to ship faster and open new product lines, because they don't have this bottleneck. Do you have an example of something like that with one of your customers, where the system was able to do something that no human team of fraud analysts could ever have been able to do? Yeah, one customer specifically, and I think that fraud pattern came to be during the elections.
We had one customer that is processing a lot of content. They're a Fortune 500, they're hosting large communities, and they're also fairly politically exposed. And throughout the elections, because our AI agents had access to the context of entities in relation to other entities, so, how does this user fit in with all of the other users that we're looking at, we were able to detect really complex fraud rings, especially of state-sponsored actors that were pushing one narrative. And I don't think this would have been possible if you had one classifier in isolation that was looking at one piece of content after the other.
But because AI agents are able to directly query our data stores, they're able to materialize features on the fly, and they're also able to use one step to reason over what should be the next step and the next tool call that they make. We were able to detect much more sophisticated fraud rings than you would have been able to before. There's also a pretty interesting implication of Variance: you're actually able to detect misinformation online and improve the political discourse. What a cool impact to have on the world.
Yeah, it's been high responsibility. But it's been really interesting. I think some of the abuse vectors we've been able to detect have had really serious physical implications. So, people that are making threats online of physical harm and have a plan to do these things, at scale. Wow. So you might have actually prevented physical violence from happening in the world by detecting it early. Yes, exactly. At scale, some of these investigations can lead to finding things that are really scary, and at the end of the day, once it's detected and investigated by Variance, it's usually going to be in the hands of law enforcement.
But seeing that impact at scale is really interesting. This sounds like a lot of software to build. How big is the team now? So, we're 12. We have five software engineers, so we've remained very, very lean. We have five software engineers building all of this. Yes. Are you hiring more? We're hiring more. We're hiring across the board: back end, and front end as well, which I think Michael and I hadn't realized was such an important part of the product. If I can give you a little bit more context, when Michael and I first started the company, we were really stubborn about building a full end-to-end decisioning layer.
So, we thought we needed to get really good at making really precise decisions, and that at the end of the day we could be treated like an API call. We were not right about that. What we found out is that because AI agents are able to take on the simplest parts of the workflow, so they're able to triage 99% of cases, that remaining 1% is usually going to be the most complex cases that need to be reviewed by a human. And you need a really good dashboard. You need a really good investigative visual tool to be able to make sense of those super complex use cases.
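The 99%/1% split described here is essentially confidence-based routing. A minimal sketch, where the 0.9 threshold and the case shapes are made-up illustrative values:

```python
def triage(cases, threshold=0.9):
    """Split agent decisions by confidence: auto-resolve cases at or
    above the threshold, escalate the rest to a human review dashboard.
    The threshold is an illustrative, made-up value."""
    auto = [c for c in cases if c["confidence"] >= threshold]
    human = [c for c in cases if c["confidence"] < threshold]
    return auto, human

cases = [
    {"id": "f-1", "confidence": 0.99},
    {"id": "f-2", "confidence": 0.97},
    {"id": "f-3", "confidence": 0.42},  # complex case: needs a human
]
auto, human = triage(cases)
print([c["id"] for c in auto], [c["id"] for c in human])
# ['f-1', 'f-2'] ['f-3']
```

The escalated slice is exactly what the investigative dashboard mentioned above exists to serve.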
What's it like working at Variance? What's cool about working there? It's a very strong ownership culture. With five engineers, we're processing petabytes of data and making decisions in a fully automated manner for some of the largest companies in the world. For context, we have 2x founders on the team. Everyone has a really strong ownership culture and really feels like they have agency over every single part of the company. On a day-to-day basis, we're in person in San Francisco every day. We're very, very product focused. Both Michael and I are engineers, of course.
So it's a very high-ownership but also high-collaboration workplace, and there's never really a time where Michael and I dictate to the team what to build. It's really more that we give a problem to an engineer and they just take it and run with it. And at the end of the day, it's been really fascinating, because I can proudly say that there are people on the team who understand certain areas of the business better than I do. For instance, Luke on the team was one of the first engineers at METR, and he understands evals for large language models better than anyone else on the team, and I think even better than a lot of people in the industry.
So we have this culture where we get to learn a lot from our engineers, and because the team is so small, they get to be a really large part of the success of Variance. And I think that's something that's been possible now that we have coding agents. Yeah. Are you guys AI coding maximalists? I would say we pretty much are at this point. Everyone on the team is usually going to be almost like a manager of their own small team, which is really interesting. I would say we're five, but I think in terms of software output, we're probably closer to a 25-person team.
So, every engineer is going to have three monitors with their coding agents running. Okay. We still have good oversight, and we still review all of the PRs, but in terms of output, everyone is a manager of a small team of AI agents, which is really interesting. And one other really interesting anecdote is that our customer success manager, who is entirely non-technical but interfaces with enterprise customers on a day-to-day basis, now gets to take on feature requests, especially the simple ones, give them directly to a Cursor agent, ship features in a fully autonomous manner, and get back to the customer a few hours later to say, "Oh, it's shipped." And she didn't even need to speak to the engineering team.
Wow, that's awesome. Maybe let's change gears here and talk a little bit about the origin story. How did you and Michael end up starting this company? How did you end up working on this problem? So, Michael and I met as co-workers at Apple. We were both engineers on the fraud engineering team: I was a data engineer, Michael was a machine learning engineer. And it was really interesting because the team in and of itself was sort of the fraud-engineering-as-a-service team of Apple. We were providing our services to the iMessage team, the iCloud team; we were this centralized fraud team.
And Michael's machine learning decisions were then dispatched to the rest of the organization through my own streaming jobs. So, we had a very symbiotic relationship from the get-go. We knew that we worked really well together. I remember telling Michael, "Well, what if the right vehicle for this product was a company?" And I told Michael, "We should apply to Y Combinator." That was the origin story. I think it really started from the product. We really, really wanted to see this product exist, and we wanted to see the problem that we were solving at Apple be solved in a much more efficient way, in a much more self-healing and resilient way.
How did you crack it? How did you convince the first customer to take a big bet on you? I think your first customer really believes in the founders first. They believe in the founders' ability to solve their problem, because at the end of the day, when you start enterprise, you do have a version of your product, but it's going to evolve so much based on your first customer's requirements. So there was a belief and a trust in Michael and me, that we understood their problem well enough to then translate that into a software platform.
It really started with that. And then I think the second thing that was really important is that we needed to land on a problem space, which has evolved a lot, that was on fire. It needed to be on fire, because what we found is that if it wasn't on fire, then there was no reason to go and trust this really small startup that had no real proof points behind them. Who was the first customer, and what was the burning pain point that got them to be willing to do this? Yeah. So, our first customer was a company called IAC.
They're a publicly traded company. Folks might not have known of IAC, because it's sort of a parent holding company, but I'll bet they know a bunch of the IAC brands. Yes, the IAC brands that everybody knows. IAC has care.com, and we were working with Ask Media Group, which had a very large amount of marketing content. And because IAC is a large publicly traded company, there were a lot of compliance requirements around what could go into their marketing content. So we basically used the platform that we had built to review content at scale to then review their own marketing content.
And I think what was really interesting is that, one, that problem was previously solved entirely using human agents, because those compliance guidelines are really hard to map to a traditional classifier. You can't give advice for legal defense, for instance. It's really hard to, like, try writing a regular expression for that. Exactly. So it was semi-impossible. There was also a very large team of human agents, part of an outsourced BPO, that was doing this work, and that was basically hurting their growth. There was less marketing content that they could put out in the world, because it's really hard to scale a team of human agents and human moderators.
So they had an intuition that that could be done with large language models. But we were the first company to say, hey, we can actually do that with large language models. We were a small startup. I think that happened maybe at the very beginning, because you guys actually started Variance pre-ChatGPT, like just a little bit before ChatGPT, right? And so you were right at the very earliest wave of companies that were figuring out how to use LLMs to automate these previously unautomatable tasks. Yes, exactly. I think GPT-4 came out during the batch.
Yeah, which was really interesting. And as we were running this pilot for the first customer, OpenAI was coming out with new models in the middle of the pilot, which was changing our cost structure by a 10x factor, but also changing our performance quite a lot. So, it was really interesting to build in a world that was super dynamic. Okay. So the first big battle was getting the first customer. It took eight months to land IAC. You really did it the hard way, because you went enterprise from the very beginning.
Have there been any other hard challenges of building this company? I think starting a company tests you in a lot of different ways, and we have a really interesting story. You know, after IAC, we got to onboard a lot of great customers in the trust and safety space: Medium, of course, GoFundMe, Redbubble. And around July 2024 was one of the times where the company was growing rapidly. We were onboarding more and more enterprise use cases. I think during that month, our revenue was doubling within the month and then doubling the month after.
It was really exciting, very step-function, because it's enterprises. And we had just wrapped up one of the largest trust and safety conferences, called TrustCon, in San Francisco. I want to say the day after, we had worked so hard for this conference, 12 hours a day, and we were super tired. I was going back to the office on a Sunday afternoon, and I was in the bike lane, and a truck hit me. That was a really, really crazy experience to go through as a founder. I mean, the company was doing well, but at the end of the day, we were a 10-person team, and the CEO just gets hit by a truck.
How badly were you injured? So, I broke my spine, broke my leg, and I was hospitalized for about 10 days. I couldn't walk for about 10 days. Whoa. As a founder, you're moving so fast. You're working every day. I don't think we ever took days off, and even if you do, you're still pretty much on. So for a year and a half you're working super hard, and then all of a sudden, as a founder, you're in a bed and you can't move. Three or four days after all of that happened, Michael came to visit me in the hospital.
And of course, Michael was super anxious. One, for my health, but also trying to understand what that even meant, right? The CEO has a bus factor of one. Yeah, we only have engineers, and we have me running all the sales and the customer relationships. It was all founder-led sales; you were the only person doing sales. Yes. And the rest of the team was only engineers. And Michael came to visit me, and I vividly remember he brought this Norman Foster book (Foster was the architect of Apple Park) as a gift, and he was sitting next to my hospital bed holding the book.
And we were both in silence, because we didn't even know what to say, right? It was silence for a couple of minutes. Then he laughed and said, "Well, this is going to make a really good scene in our IPO movie." And I was like, "Yeah, it's a good way to view it." We laughed about it, but I was probably out of commission for about a week, and then recovering for two or three weeks after that. It put a lot of stress on the company. There were a few moments where Michael was wondering if this was the end.
He would tell me and repeat the story of Steve Wozniak, who went through a plane crash and then left Apple and went back to Berkeley. So I think there was a feeling that maybe this was going to be the end of the company, and maybe we just needed to part ways. But there was a really deep feeling that it was not the end. It was an interesting challenge, a hurdle we needed to get over, but it just didn't feel like the end at all.
It felt like there was so much more to come. I definitely felt that, and I know Michael felt that. And I got to walk again. Now I can walk again. We also learned that we definitely need to scale me, so that hopefully this doesn't happen again. One thing that strikes me about your story is the extent to which you guys have had a very strong opinion from the very beginning about what to build and who to build it for.
A lot of founders come into YC with some initial idea, but it's just a hypothesis, and then they pivot through many different ideas. New models drop, AI changes, and they might change what they're working on several times to fit whatever the cool thing is. You guys have been the opposite of that. You came in on day zero with a very strong opinion of what to build, because you'd seen the problem firsthand, and the whole company has just been that initial hypothesis playing out.
Is that perhaps part of why it didn't seem like it could be over yet, because you really had to see it through? Yes. If we go back to why Michael and I started the company, we had a very specific pair of skill sets in fraud, and we understood what the industry looked like. We had a lot of issues with how these problems were solved at scale. But from the beginning, we always felt a really strong sense of duty to put our very specific and quite rare pair of skill sets to the good of the industry.
Mhm. It was almost like a sense of duty. To us, it was never really about starting a company around just any problem set, which I think some founders are really great at doing, don't get me wrong, but we didn't want to start a company for any problem. We wanted to solve this problem, and we knew the technology was going to evolve. And we were so lucky, right? LLMs got so good. Now we have agentic systems and agent harnesses that are able to fully solve this problem end to end.
But we really wanted to solve this problem, and this strong sense of duty, I think, is what kept Michael and me going throughout the years. I also think it's something that resonates deeply with customers. When they meet us, they see founders who are deeply trying to solve the problem they're facing on a day-to-day basis: one, because we've seen it before, and two, because it's something that is doable if you put enough care into building the right engineering system in the right ways. All right, I feel like that might be a great note to end on.
Thank you so much, Karine. Thank you.