What we've optimised in the very early days is building a product that makes it very easy to engage with us… We've invested a ton of engineering effort in making onboarding extremely easy. Security is well thought out... And that helped us remove as many of the barriers as possible for data organisations to work with us from day one.
An interview with Barr Moses, a co-founder and the CEO at Monte Carlo, a data observability platform. Monte Carlo raised $41M from GGV Capital, Redpoint, Accel, and others.
Peter Zhegin:
Hello, and welcome, everyone. My name is Peter Zhegin, and I am hosting the datafounders series of interviews with entrepreneurs and investors who work on data science startups. This is interview number 10, and I'm talking to Barr Moses, a co-founder and the CEO of Monte Carlo, a data reliability platform. It's a pleasure to have you here.
Barr Moses:
Thanks. Great to be here. Lucky number 10.
Peter Zhegin:
Yes, yes, I hope it will become our lucky number. So probably, let's start from the early days of your career. I know that you served in the Israeli Air Force as the commander of an intelligence data analysis unit, before you worked at Bain, a consulting company. So I'm sure everyone's curious how those experiences have influenced you as a founder, and where you are today.
Barr Moses:
Yeah, so I was born and raised in Israel. I was drafted to the Israeli Air Force. I actually wanted to be a pilot, but I didn't get accepted, which is a fun fact, and I sort of found myself in the data space, actually. And so at a very young age, I was about 18 and a half years old, and I was responsible for a group of 18-year-olds, responsible for their training and wellbeing and all that good stuff.

At such a young age, I had no training, I didn't have a degree, no professional experience, let alone management training. And I learned at a very young age that if you actually give folks a tremendous amount of responsibility, people actually rally around that, and can really have outstanding impact. We worked in a unit that provided data that actually ended up helping save lives in the 2006 period, and we actually got commended with an award for outstanding performance.

What I personally learned from that is that if you give people the opportunity to make an impact with this responsibility, and if they have the right motivation, then they can do great things, even if they've never done it before, or didn't have the official training to do it. Taking that lesson into the startup world: hiring folks who are very bright and talented and capable can make a huge impact on a company, even if they don't have the typical background, or the specific professional experience that you'd expect. Actually, a diversity of experiences can oftentimes breed a stronger culture, or even stronger impact and results. So that was certainly one of the impacts from that time.

Later on, I moved to the Bay Area. And actually, several years after, I joined Bain & Company as a consultant, and worked with companies ranging from tech companies to private equity, supporting M&A deals. And the timelines there were very, very short, you know, it could be two to three weeks on average. And you could work in a variety of different industries. So one day could be helping a semiconductor company that wanted to enter the IoT market with a new acquisition of wearable technologies. This was like the early days of Fitbit and such, I don't know if you remember that back then... So this was back when we didn't even know if this was going to be a market or not. And we basically had two weeks to get up to speed on the market and decide whether an acquisition was the right acquisition or not, just as an example.
In a very short amount of time, you learn a lot about a new industry, you actually collect a lot of data, and then need to make very strong business decisions based on partial data. And as a founder, that helped me get up to speed on new topics in a short period of time. For example, you know, I need to learn the best of every function of our business, right, even a function that I haven't done before.

The second thing that I learned from that time was how to really make decisions with limited information. Someone told me that strategy is actually making decisions with very limited data. And as a founder, especially an early stage founder, you're really making most of your decisions with very limited information. Questions like what product to build, what customers to go after, what market, what part of the market you're going to tackle. Those are decisions that you really don't have data for, and yet you have to act with conviction, and get people to follow you to do that. So that's a little bit of how I think about those different experiences.
Ideation – testing ideas and recognizing excitement from potential customers
Peter Zhegin:
My understanding is that after Bain, there was also an important moment in your career. You were working with a customer success software company, right? There you led teams responsible for data and for analytics, and correct me if I'm wrong, exactly there you spotted the idea that became Monte Carlo, right?
Barr Moses:
Shortly after that, I joined Gainsight. Gainsight is a customer success platform, which works with companies to help them use their customer base to propel growth in the organisation. At Gainsight, I had the fortune of working with wonderful people, helping create the customer success category, and also building the internal team that was responsible for customer data, which we called 'Gainsight on Gainsight', because we were basically using our product internally for ourselves. For short, we called it 'GonG', for 'Gainsight on Gainsight', so it was like the 'GonG team'.

Throughout that experience, as a company, we became very data driven. We were collecting way more data, there were many more data sources that we were working with, our pipelines became more complex, our transformations became more complex. And the consumers that we served grew in size and breadth as well. So we had way more people in the organisation actually relying on data.

My personal experience there was that bad data was just a thing that we had to live with. I would just wake up every morning, and I'm sure data scientists across the world can resonate with that, but you kind of wake up to the daily fire drill, and you're like, 'Okay, what's broken today, right, which model is not working right now, which consumer is asking about this dataset that seems to be outdated, etc.'

At some point we just started to manually vet these reports and models to make sure that the numbers check out. And that just seemed insane to me. We were like a small and mighty team of a few folks, but we didn't have time to manually vet this. And I realised that if we wanted to get data driven as an organisation, and as an industry more broadly, we needed to change our approach. It's just not going to work for us to continue doing this. And I sort of asked myself, you know, sometimes in these situations you're like, 'Am I crazy? Is the world crazy? Like, what exactly is going on?'

So I actually wanted to take a step back and, you know, understand what folks are doing out there. And so I spoke to hundreds of data leaders, asking them about their biggest pain points, ranging from small startups to large organisations. I literally just cold-called people. And I was like, 'Hey, what's keeping you up at night?'. And this thing just came up again and again, and I learned that people are just surprised by bad data... The interesting thing is that oftentimes, data teams are the last to know when data breaks, right, they hear about it from someone else who's consuming the data. So oftentimes, there are silent failures that happen. Maybe, you know, a job was completed, but not everything was in the data that was passed. If you're not proactively checking for those things, proactively making sure that you have a good understanding of the health of your data, oftentimes you're surprised by that.
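The "silent failure" described here — a job that completes but leaves the data stale or under-populated — can be caught with even a very simple proactive check. Below is a minimal Python sketch; the thresholds, timestamps, and the idea of reading table metadata are illustrative assumptions, not Monte Carlo's actual implementation:

```python
from datetime import datetime, timedelta

def check_table_health(last_updated: datetime, row_count: int,
                       max_staleness: timedelta, min_rows: int,
                       now: datetime) -> list:
    """Return a list of data-downtime warnings for one table."""
    warnings = []
    if now - last_updated > max_staleness:
        warnings.append("stale: last update " + last_updated.isoformat())
    if row_count < min_rows:
        warnings.append("low volume: %d rows (expected >= %d)" % (row_count, min_rows))
    return warnings

now = datetime(2021, 1, 2, 9, 0)
issues = check_table_health(
    last_updated=datetime(2021, 1, 1, 2, 0),  # the job "completed" yesterday...
    row_count=12,                             # ...but barely any rows landed
    max_staleness=timedelta(hours=24),
    min_rows=1000,
    now=now,
)
print(issues)
```

In practice, a monitoring tool would pull the update time and row count from the warehouse's information schema and learn sensible thresholds automatically, rather than hard-coding them as this sketch does.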

And so I experienced that personally, and also saw that so many folks are suffering from this, and also that it's detrimental for their companies specifically. And so that got me really curious about why in the world we were reacting this way, right? It just seemed like something was off.
Peter Zhegin:
And was there a particular 'a-ha' moment, or was it more like an accumulation of different talks, conversations, that led you to the feeling that you were on to something?
Barr Moses:
Yeah, I think with every conversation that I had, my conviction grew stronger and stronger. And actually, interestingly, before I started the company, I actually experimented with a number of different ideas for starting a company. I wanted to convince myself that it's a worthwhile idea, and I knew that I was going to want to convince customers, and employees, and investors, and other folks in it. And so I needed to get that conviction first.

These conversations... every conversation that I had continued to solidify that. You know, with other ideas that I was working on, I would call people up to talk about them, and nobody would pick up the phone, nobody would call me back. I was like, 'Oh, my God, nobody cares about this idea, how could I create a company around it', right?

But this particular idea... customers, sort of people, would call me before I would even call them. And there was this strong pull to continue the discussion on this. And you could tell that people cared about this. As data people... it's sort of ironic, but our profession relies on our ability to deliver reliable data. That's the whole point of data, that it's accurate and you can rely on it. And yet reality is so far from that. And that's very personal, right? And I think through those conversations is how I got that conviction that, one, it's something that is an unsolved problem that's very painful to people. And two, that the market is very big, because literally every company runs into it. And if you think about our world, every company is becoming a data company today, and those that aren't are being left behind. If they haven't yet, they will. And I just couldn't imagine a world where there wasn't a solution to this problem.
What Monte Carlo does – data reliability – building a new category
Peter Zhegin:
I guess now it's a great time to switch to the solution. We talked about the problem, about the pain that you have seen, about the conviction. Tell us a bit more about Monte Carlo?
Barr Moses:
Yeah, definitely. Our mission is to accelerate the adoption of data in the world by minimising data downtime. I'll start by explaining what data downtime actually is. Data downtime is a term that we coined to describe those times in which data breaks, if you will. The proper definition is periods of time when data is missing, inaccurate or otherwise erroneous. As I mentioned, I've seen this, regardless of industry, as a costly problem, where literally companies lose millions of dollars firefighting because of bad data. And why do we actually call it data downtime? Where does that corollary come from? I think that's important, because that's what is at the root of what Monte Carlo is.

Data downtime draws on the corollary of application downtime. If you think about the last couple of decades, the software industry has made tremendous progress in developing monitoring and diligence around application downtime. That's something that didn't exist 20 years ago, and does exist today, because we rely on applications so much, right, software is driving all areas of our business. I think data is the new software. And as a result, data is now powering all of our applications and our operations, you know, ranging from digital products to daily decision making, to just the operations of the company. And if you believe that trend is going to continue, that means that we need to start treating data downtime with the same diligence that we have been treating application downtime.

Monte Carlo is really focused on helping organisations restore trust in their data. So you know, we believe that data teams should be the first to know about data downtime, not the last, and not after a month, but rather be the first to know, and in real time. We also believe that it's incredibly hard to know about and understand data downtime today. We help organisations identify the root cause in minutes, not in months. Oftentimes, what happens is that data organisations learn about a problem months later, and then it takes them a couple of weeks just to understand what the problem was and why. And the company has already lost millions of dollars in the process. We help reverse that. Then the third thing that we help organisations do is actually prevent data downtime to begin with. With the right information at your fingertips, you can oftentimes actually prevent these things from happening. And I'll give an example.

A specific example of how data downtime happens is when someone on some team, let's say an engineer, makes a change to your website. That change on your website has unintended consequences downstream on a particular report or dataset that your marketing team is using for a campaign. And that change has now impacted the ROI on that marketing campaign. And in today's world, this entire process happens without the engineering team and the marketing team having this communication. What we help do is make it very easy for everyone to know this change was made upstream. We also help engineers know the impact of what is going to happen if they make that change on the website, so they can understand what are the reports, or what are all the data assets, that are going to be impacted as a result. Once you have that information, we find that people can actually move fast, build more things, improve their velocity as an organisation, and not break their data, meaning still have trust in their data, if that makes sense.
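The upstream-change scenario above is essentially a lineage problem: given a graph of which data assets feed which, an engineer can compute the "blast radius" of a change before making it. Here is a minimal Python sketch with an invented lineage graph; the asset names are hypothetical and this is only an illustration of the idea, not Monte Carlo's product:

```python
from collections import deque

# Hypothetical lineage graph: each asset maps to the assets that consume it.
lineage = {
    "website_events": ["sessions_table"],
    "sessions_table": ["attribution_model", "weekly_traffic_report"],
    "attribution_model": ["marketing_roi_dashboard"],
    "weekly_traffic_report": [],
    "marketing_roi_dashboard": [],
}

def downstream_impact(asset: str) -> set:
    """Breadth-first walk of the lineage graph to find every asset
    affected by a change to `asset`."""
    seen, queue = set(), deque([asset])
    while queue:
        for child in lineage.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

# An engineer about to change website_events can see the blast radius,
# including the marketing dashboard two hops downstream:
print(sorted(downstream_impact("website_events")))
```

In a real deployment, the graph itself would be inferred automatically (from query logs, warehouse metadata, and so on) rather than written by hand.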
Identifying a buyer persona, customer development questions
Peter Zhegin:
Even in this simple example, there were multiple personas mentioned, right? You have marketing people, you mentioned engineers; off the top of my head, I would mention compliance people, cybersecurity people, maybe machine learning research people. So for startups, usually there is a challenge to define who the primary personas are, right? With whom to talk first? Who are these people for you? And how did you arrive at those people? What was your framework to identify them?
Barr Moses:
Great question. And I think some of the challenges in the early days really are around problem definition. And one of the interesting things about the problem of data downtime, you're right, is that it touches everyone in the organisation, right? I'll add to that, you know, engineering, data analytics, product management, data governance... really, the list of people that touch data is so long. And in the early days of a startup, you really can't solve for all of those people, right?

What we actually did is, we started with trying to understand, what are the people in the organisation who are impacted most by data downtime, and why? There are a few different personas; the main personas, or main folks that we work with, are data engineers and data analysts. And they have different titles within that. But data engineers are often folks who are responsible for the infrastructure, and responsible for the jobs and building the pipelines. And data analysts are oftentimes those who are either building or consuming the data, and using that to power digital products or to make decisions.

I'd say that the way that we've identified these folks for us has been, from the early days, speaking to organisations where we can identify a particular pain point of data downtime, and then working very closely with them on a solution. So our approach was: find customers as soon as you can, as early as you can, and actually build the product with them. So we were very fortunate to have awesome design partners from day one, who opened up to us about their challenges. And we could actually see, like, 'Okay, you know, when was the last time they had data downtime? Why did that happen? Oh, someone made a change upstream. Okay. How did they catch that? Oh, you know, they saw that because they had an unusual number of [unclear]. Okay, what are they doing now to troubleshoot that? What are the steps that they're taking?'. We literally became like these detectives, right, to really understand how people are working and how they're thinking, and then understand how we can add value and impact in solving their problems. So just through close work with customers from the very early days.
Peter Zhegin:
You just described, I guess, an amazing picture of a customer development process, asking a lot of the right questions that definitely pass The Mom Test, right.
Barr Moses:
It's a good book.
Peter Zhegin:
It's an amazing book, I will say. And I'm glad to hear that you as a founder who actually builds something can confirm that.
Barr Moses:
Yeah, actually at the beginning, I did the complete opposite. And then I read the book and I was like, 'Oh, shit, I've been doing everything wrong'. And then I totally changed my approach and adopted what was in that book. And so yeah, I totally agree and highly recommend it as well.
Peter Zhegin:
It's really a great book, and maybe the one part that is less covered in the book is how to actually get hold of these people. What was your approach to finding these people? Through your previous companies where you worked, through your friends, etc.? How did you actually identify these wonderful design partners?
Barr Moses:
For me, it was very important from the beginning to do what the book recommends, which is to not go to my network and my family. Just for folks listening, what this book talks about is that there's a certain group of people in your life, it could be your parents, it could be your friends, it could be your colleagues, who are not very objective when it comes to evaluating your ideas. And so if you give them a prompt around a particular idea that you're working on, even if the idea is complete shit, they might still tell you, 'Oh, it's the best idea ever, you're amazing, you know, go forth and do it', just because it's human nature to do that. And so the book proposes that in order to truly trust an idea, you need to find people who don't have that bias.

I really took that to heart and found people who owed me nothing, didn't know me at all, and who, to the best of my ability, didn't have that bias. Now, I would say, as I mentioned, with other ideas that I worked on, when I did that, nothing clicked. So just the fact that I talked to people that I didn't know didn't help me. For me, what worked was finding people who had nothing to do with me personally and could give me an objective view. And I also needed to make sure that we were working on something that matters, that's worth solving for them, right? Because if it wasn't something that mattered, then they wouldn't spend time with me.

There were some ideas where I would try to get on a call with someone who didn't know me, and they didn't have time for me, they were like, 'I'll talk to you later'. But, you know, when it came to this thing, this thing mattered to them; data downtime was a problem for them. And so they wanted to talk about it. So that was one aspect.

And I think the second is, you know, I had to build trust and credibility in the space, right, through getting to know people, through the community. That was a very important part.
From questions to demos and sales
Peter Zhegin:
Then maybe the last sort of thought here is, once you've passed the interview part, or the research questions, the hardest part is to get people to actually try out your product, right? Especially with folks who don't even know you.
Barr Moses:
What we've optimised in the very early days is building a product that makes it very easy to engage with us, like incredibly easy. We've invested a ton of engineering effort in making onboarding extremely easy. Security is well thought out, our architecture is security first. And that helped us remove as many of the barriers as possible for data organisations to work with us from day one.

We basically [asked], what do we need to do from the very beginning to make it incredibly easy for people whom we don't know, who actually have this problem, for whom it matters and is worth solving? Now, let's see if we can get a foot in the door and try to help solve it. And you earn that by building a product that's easy to use.

Peter Zhegin:
After these discovery talks with potential customers took place, what happened next? What would be your framework around moving leads further down the pipeline? Like trials, demos, and then maybe contracts? How do you think about this? And what would be your advice for someone who builds a data science startup regarding actually converting customers into paying customers?
Barr Moses:
Yeah, building on this previous example that I gave... in the early days, you have very limited resources. You can probably only solve one of those problems, right? You can't solve onboarding, and deployment, and getting them to pay, and getting them to renew all at once; you can't solve the whole chain, right. And I remember, in the early days of the company, I used to have a slide that actually showed the different parts of the chain, and explained how we're going to tackle them one by one. And we're gonna go all in on the first and then go all in on the second, and just start chipping away at it.

The very first thing that we did was, how do we get people to talk to us, right? And that came down to, you know, having something that they care about, and having something interesting to say about it.

And then the second was, how do we get them to try out the product. And the only thing that mattered there was to make the onboarding extremely easy. So make the product the easiest to use, right? Like, our implementation time is 19 minutes or something like that, on average.

And to your point, what's the next step after that, right? The most important thing in that next step is actually seeing value. Gone are the days when people implement a product and then wait six to nine months to actually see value. It was clear to us that if people had gone through the first few steps with us, we probably had maybe a few days to show them value, right? After a few days, that's it. You know, I was the person who led a data team, I had zero attention span, all these vendors were trying to get at me, right? And if someone was talking to me, you probably had a day or two, and then that's it, right.

It was clear to me that we needed to find ways to show value extremely quickly. So we went all in on that. And actually, what we did was we developed a way to generate value through our product without actually getting input from the organisation. So particularly, what we do is, we have a machine learning based module in our product that helps you glean insights from your data, specifically identify data downtime, without any input.
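One common way to "glean insights without any input" is unsupervised anomaly detection on metadata a tool can observe on its own, such as daily row counts. Here is a minimal z-score sketch, purely as an illustration of the general idea; the threshold and the metrics are assumptions, not Monte Carlo's actual model:

```python
import statistics

def looks_anomalous(history: list, today: float, z_threshold: float = 3.0) -> bool:
    """Flag today's metric if it sits more than z_threshold standard
    deviations away from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > z_threshold

# A week of daily row counts for some table, learned from observation alone.
daily_rows = [10_120, 9_980, 10_240, 10_050, 9_910, 10_180, 10_075]

print(looks_anomalous(daily_rows, 10_130))  # a normal day
print(looks_anomalous(daily_rows, 310))     # a pipeline silently dropped data
```

The point of the design is that no one has to hand-write a rule like "expect at least 1,000 rows"; the baseline comes from the data's own history, which is what makes value-without-configuration possible.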

You can get started immediately with just a very quick onboarding, and then you start seeing some value. It's incredibly fast. And that's something that we hear from our customers today: after a couple of days of onboarding, they already have visibility into things that they just didn't have before. And they're really wowed by that.

We invested a lot in making sure that they get value very, very quickly. And that actually brought us to a point where customers were offering; they were like, 'Hey, this is giving us great value, you know, we want to pay for this service'. And so the conversation around payment and value and all that stuff becomes way easier if you're actually focused on delivering value, and solving the specific problem that's at hand with them. So we were just laser focused on how do we make our customers' lives incredibly easy, and make them as happy as we can as fast as possible.
Founder and the sales process
Peter Zhegin:
It seems like it became very organic then, right? To move with customers to the next stages. How deeply should a founder actually be involved in all of that? Because some people say, 'Oh, let's outsource sales, let's give that to somebody who is much more experienced than we are. We are data scientists, we don't do that'. So I'm curious to learn what was your level of involvement, and what do you think about structuring this activity? Do you need somebody else to do that? Or maybe the founder should do that? These kinds of things.
Barr Moses:
It's very hard to build a great business on your own, right? That's not how great businesses are being built. The great things are built by organisations, right, not by individual people; I very much believe in that. I'm incredibly grateful for the amazing team that we have at Monte Carlo, that is building the data reliability category as we speak, right. In general, I would say nothing is done alone on your own. That being said, in the very, very early days, you know, for founders who perhaps have not yet raised money, or are just in their early, early days of figuring out what they want to work on, I think you have to push the limits of what you feel comfortable doing.

There is value, you know, in founders doing a little bit of what you'd call the zero-to-one stage, right? Like getting the first customer, or even the first 10 customers. Those are things that impact all the decisions that you make later. Like, for example, who are you going to hire to run your sales team? What kind of go-to-market motion will you have? What kind of marketing do you need? What kind of product leadership do you need? All of those decisions are really determined by what you're doing in the very early days. And so gleaning those learnings, I personally think, is very, very important and helpful.

I'm probably extreme on this side. But I've invested a tonne in customer development, even before we wrote a single line of code. We actually flipped the model. So instead of starting with a product and then finding customers that fit with it, we started with the customers and then built the product that would help serve them. If that makes sense.
Focus and specialization on verticals/industries, ideal customer profile
Peter Zhegin:
And if we fast forward to the current stage of Monte Carlo, the company is quite mature, and what I've noticed is that on the website you have different solutions for different, let's call them verticals, maybe marketing, financial services, retail, etc. I guess there is always a discussion about whether you should aim for these verticals, or build something very horizontal, generic, and not mention any specific verticals/markets. And then there is a discussion about priorities between these verticals. So, what's your framework about that? And when does it make sense to go for specific verticals, and when does it not, in your opinion?
Barr Moses:
Yeah, in general, you know, I think, for a founder or CEO, your biggest role is focus, actually. It's something that I try to do even more every day, and my team knows about it; I'm always the person who's trying to push us to be more focused and more narrow. And I think, you know, we need to do an even better job at that.

I would say, you know, by default, narrower is better; in the early days, it's incredibly hard to do otherwise. On the other hand, for us, because we are creating a new category, we aren't replacing a product that already exists out there. So it's not like there's a set of customers that's already using a product, and we're offering a solution that's replacing it, right. So we needed to identify who is the set of customers for which this problem is significant, at least in today's world.

We started, actually, quite frankly, by shooting in all directions. So we just tested out the market, you know, engaged with everyone from 10-person companies to Fortune 50s. And that actually worked surprisingly well for us; we learned a lot, and we are lucky to have inroads with a very broad spectrum of customers.

We also realised that we can't do everything. Across these industries, there are things that are very, very common. So for example, we work mostly with cloud-first companies, mostly with cloud technology, and that can be across industries. We work with a very specific persona, as I mentioned, the persona that cares about data downtime. So, you know, for example, if they have 'data' in their title, if there's a senior person in the organisation that's responsible for data, that probably means that data matters to their organisation, and they're more likely to experience downtime. And so, even though it may seem like there's this wide range, there's a very narrow set of commonalities that allows us to serve this broad base of customers.
Peter Zhegin:
Do you think there is a sense in trying to quantify an ideal customer profile? For instance, let's say, based on the size of the budget: if they spend less than X on Facebook, we just don't work with them. Something like this? Do you think founders need to move towards this quantification? Or may it be less quantifiable and more qualitative?
Barr Moses:
I'd say it depends. I think you do need to convince yourself that the market that you're working on is big and is growing. So for us, for example, you know, Snowflake just IPO'd recently, and it was the largest software IPO of all time, right; that's a strong indication that the data space is becoming more and more important, and is probably, you know, one that I'm excited to bet on. You do need to have strong conviction, I think, in the size of the market overall.

That being said, you know, there are two catches. One, you often see companies actually increasing their TAM over time, as they move into adjacent areas, as they release new products. So choosing something narrower at the beginning doesn't necessarily mean that you can't expand later. For us, in particular, we actually are creating a new category. And that means that it's very, very hard to quantify in the early days, right?

The way that we think about this is, the market that we're creating, we call it data observability. And what does this mean? I talked a little bit about data downtime, and how that was a corollary to application downtime. Data observability is the corollary to observability in DevOps. So in the same way that any engineering organisation has a solution like New Relic, AppDynamics, or Datadog that they use in order to create observability into the health of their applications, and they use those solutions in order to make sure that their apps are up and running...

Data organisations, in my opinion, should have the exact same thing. And that's what we've built: the New Relic, but for data, which gives data organisations the power to know when their data is down, when they have data downtime, when their data is not healthy. In my mind, for any engineering team, having a solution like that is a no-brainer. And in the same way, it's a no-brainer that any data organisation should have something like this. I don't understand how we don't already have something like this.

But in the same sense, there also isn't a category out there right now that you can quantify. Instead, as a founder, there's a set of hypotheses you can list out that you would believe in. For example: I believe that data will become more important three to five years from today. I believe that every organisation will be relying on more and more data sources, and more and more data consumers. The third one could be: I believe that manual checking of data quality and data lineage is not going to scale as these first two hypotheses play out, and therefore we will need an automated way to think about the health of data. If you believe these three hypotheses, then you believe in this market. So I really think about it in terms of: what do I need to believe to get excited about this? That's sort of my favourite way to frame it.
Peter Zhegin:
So, you feel that if a potential customer believes in the similar things that you believe in - that's a good potential customer, right?
Barr Moses:
I mean, they have to have the pain, they have to have the budget, there's a long list of things they have to have, right. If you look at some numbers in particular, the data industry, I think, is projected to be like 224 billion by 2022, or something like that. So there's a lot to work with. And then the question is who within that market is your target, and what their budget is.
Content, community, women in entrepreneurship
Peter Zhegin:
And I assume that when you build a new category, or a new type of product, communicating that to the market might be a tough thing to do. What I've noticed about Monte Carlo, and about you, is that you publish a lot of content, and it seems like it clicks. And at the same time, I see that lots of data science startups struggle with communicating what they actually do. So what's your advice? What's your view on the role of content?
Barr Moses:
We've invested in content very early on, because we recognised that we're bringing new ideas to market. And we took a two-pronged approach to that. One is, we really focus on customer pain points, and we write from that vantage point. We're very, very targeted about that.

For example, if we've had conversations with a couple of customers in the last month, and we realise, oh, they're all struggling with the same pain point, that's probably a shared pain point, right. So we have a particular persona, a particular pain point, a particular customer in mind when we write, and we choose topics based on what we hear is a high priority and top of mind for folks. For example, folks attended a specific conference and got really excited about a particular talk. Why were they excited about that talk? What about that talk got them excited? Or, for example, if we talk to them and ask, what's your priority for 2021? And they'll describe a very, very particular thing that's on their roadmap that they're working towards. That is something that gives us food for thought on what these people care about.

And then we add our own perspective and views on that, and incorporate what we learn from the community. So that's really the first prong: a very customer-specific perspective for the content.

Then the second tenet of this is that we try to write in a very approachable manner. To your point, we could make it extremely technical, we could make it very complex, but we actually focus on trying to make it easy to understand, easy to consume. I think in today's world, people are inundated with information. We want to make it thoughtful and interesting, but something that can click with people.

And so, for example, we wrote about data mesh, without going into all the details of that. It's basically a novel architecture that some of our customers are very excited about, and we got excited about that too, about that vision and where it's going. We did a lot of research on it and wrote to clarify the idea and explain how data observability actually connects to the data mesh. The second example is that we write a lot about data downtime, right. And again, we could have made it very technical, about specific schema changes, different anomaly detection methods. But instead, we focus on the stories, how it impacts people, and our methodology. We try to do something that's more approachable, something you can just start using tomorrow.

Peter Zhegin:
In the early stages, when you were just building the readership, were you mostly focused on specific technical events and a technical audience, and built from there? Or did you start from, and maybe remain in, the business domain, rather than data scientists or engineers? What was the original turf for you to build the community out from?

Barr Moses:
So the very original [community] actually followed the path that we took to develop the company and the product, which is starting not with the technical, but rather with the pain point and the impact it had. The very first blog post we wrote, I wrote about my personal experiences with data downtime, and how I felt plagued by it. It's called 'The Rise of Data Downtime', you can look it up on Medium. And it really spoke about a very painful time in my career, when I thought I was going to get fired because the data was wrong all the time. When I shared that with people, they responded by saying, 'I feel heard'. That's the response that we wanted: we wanted to create empathy and a shared connection around this problem. So that's what we started out with. And moving along the chain of developing the company and building each part, we're actually moving to creating more and more content that's focused on the technical aspects of the product. And stay tuned, there's way more to come.
Peter Zhegin:
I guess it's really important not to increase the gap between the data science community and less technical people. You're bridging this gap, and that's really cool. Another community thing I wanted to discuss with you is that there really aren't too many female founders in data, in data science startups especially. Do you think we can make it better, and how can we all make it better?
Barr Moses:
Yeah, definitely. It's a very big part of my decision to start Monte Carlo, it's tied to that and to my deep desire to make it better. For me, every day I wake up thinking about how we do that. And I think the best way to do that is for founders to create more positive examples, so that more and more people recognise that it's possible, and would want to do it too, right. For me, when I got started, there weren't a tonne of examples. There still aren't a tonne today. But I think the best thing that I can do for female founders is to build a massive business as soon as possible, and so that's what I'm laser focused on. And, of course, there are many opportunities along the journey to build a diverse community. I think the data industry is actually very well poised for diversity. There's an amazing diversity of candidates with non-homogenous backgrounds. I think we actually have an opportunity to create a new normal for the data industry. And I definitely wake up every day and think about this: how can I build the best company possible? How can I set the best example in my arena? And the way to create this world, I think, is just to create more and more of these examples.
Opportunities in DataOps
Peter Zhegin:
If we look at DataOps, let's define it widely, where in the DataOps space are, or will be, opportunities for new startups? What are interesting spaces that potential founders, female founders, can consider?
Barr Moses:
Yeah, so I'm definitely very bullish on the data observability market, for sure. Another area that I think is interesting, or ripe for disruption, is data discovery in particular. I actually just wrote a blog post about this, released yesterday. It's called 'Data catalogues are dead, long live data discovery', and it's obviously a controversial headline. But I think the way in which we've been thinking about documentation and cataloguing of data hasn't evolved with the way that we consume, use, and build data. So data catalogues need to catch up with that. And I think we need to basically blow up that paradigm and rethink how to do what we call data discovery.

If you think about what's powering all of these movements in DataOps, it's actually metadata, right? Metadata can be lineage, or information like data quality, all these different kinds of metadata, as you call it. The thing about metadata is that it's actually useless on its own. Take lineage, for example: nobody cares about just a map of your assets. It's a great map, you look at it once, and you're done. But lineage, or any kind of metadata, is incredibly powerful when it's tied to a particular use case. For example, when it's tied to trying to make sure that your data can be trusted, knowing what's impacted downstream when you make a particular change to a data asset in your lineage. That's when the power of lineage really comes to life. And so I very much believe in the next generation of products that are aligned around use cases, and not around particular things like metadata.
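The downstream-impact use case Barr describes can be sketched as a simple graph traversal. This is a minimal illustration, not Monte Carlo's implementation; the asset names and the `lineage` graph are invented for the example, and the idea is just that lineage becomes useful once you ask a concrete question of it, such as "if this table changes, what breaks downstream?":

```python
from collections import deque

# Hypothetical lineage graph: each data asset maps to the assets
# directly derived from it (upstream -> downstream edges).
lineage = {
    "raw.events": ["staging.events_clean"],
    "staging.events_clean": ["analytics.daily_sessions", "analytics.funnels"],
    "analytics.daily_sessions": ["dashboards.exec_kpis"],
    "analytics.funnels": [],
    "dashboards.exec_kpis": [],
}

def downstream_impact(graph, changed_asset):
    """Return every asset reachable downstream of a changed asset,
    via breadth-first traversal of the lineage graph."""
    impacted, queue = set(), deque([changed_asset])
    while queue:
        node = queue.popleft()
        for child in graph.get(node, []):
            if child not in impacted:
                impacted.add(child)
                queue.append(child)
    return impacted

print(sorted(downstream_impact(lineage, "staging.events_clean")))
```

Here a schema change to `staging.events_clean` would flag both analytics tables and the executive dashboard, which is the kind of use-case-driven answer that makes a lineage map worth maintaining.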
Closing remarks – don't settle for bad data
Peter Zhegin:
Yes, it's a very interesting space. And someone I talked with recently mentioned another interesting category, process mining, for instance. So, things that allow us to discover new things. A lot of low-hanging fruit has been picked, but there is more, and we need to discover it, in a data sense, in a product sense, et cetera.

Just to be mindful of your time, probably the last question: what are the key takeaways that you want people to remember from this podcast, from this conversation? And if there is anything we didn't discuss that you believe deserves to be discussed, now is the right time.
Barr Moses:
Yeah, so I would say, when it comes to founding a company, I think nothing can replace grit and determination. But at the end of the day, the most important thing is making customers happy. From day one, that's what we obsess about, and it's still the most important thing for us as a company. So maybe that's one thought on building, our North Star at least. And then when it comes to data: don't settle for bad data. Stop being okay with bad data. I think data reliability, data that you can trust, is actually within your reach with the right approach. By focusing on automation and observability, it is actually possible to have data that you trust, and I think as an industry, we need to push ourselves to get there.
Peter Zhegin:
Fantastic. Thank you very much, Barr, thank you for the talk. And I'm sure we will hear a lot of good news from you and from Monte Carlo as well.
Barr Moses:
Thank you. It's great being here.