Why Loops with Casey Winters

In this session, we asked Casey Winters, the former CPO at Eventbrite, what opportunity he sees in Loops for product and data teams. Casey shares a lot of valuable tips on how to approach experimentation and how to align product managers and data teams, and draws on his experience at companies like Eventbrite and Grubhub.

Enjoy.

Transcript

Tom: Hi Casey, why Loops? Why did you decide to invest in Loops?

Casey: Yeah. So I’ve been working with analytics tools for a very long time. And when founders and product managers from those companies have asked me what I want those tools to look like, or how I want to use them in the future, my answer has basically always been the same.

I don’t want to use them at all. I want them to understand my data, understand my business, and notify me when they detect something that can help me grow faster, or determine why a metric drop is happening. Proactive alerting when there’s something to look at, versus making me refresh dashboards and spelunk through the data all the time.

And people have said that they were going to build that in the past, but, to paraphrase Stargate, I guess: many have said they would do this; you were the first I believed could actually do it. You had the technical experience and the vision to make that a reality of proactively understanding what my business does and what could really have an impact, and giving me the hypothesis, instead of just saying, well, here’s a bunch of data, go see what you can find; which, unfortunately, is still what most analytics tools do today.

Tom: Yeah, I can totally relate to the difference between dashboards and the kind of insights and opportunities that we offer.

Speaking of that, eventually those insights are supposed to be translated into experiments, right? Into clear hypotheses that are tested in real life. What are, in your opinion, the biggest misconceptions that product managers have around experimentation?

Casey: I think there are two that come to mind for me.

One is that experimentation is a tool to help test your conviction and make sure you’re on the right track, that you’re not missing something. It is not a tool to outsource decision making for your company. You should be making decisions, and you should have conviction behind them.

And then you use experiments to track: are you getting closer and closer to the truth? Are you getting less wrong over time? I think a lot of companies have used it as a tool to mitigate personal career risk, the risk of saying something that ends up being wrong, and I think that’s a really bad way to think of experimentation. You should be making bets and using experiments to validate them, but you shouldn’t be outsourcing your brain, you shouldn’t be outsourcing your product strategy, to whatever the A/B test says. So I think that’s one thing you see very commonly be an issue.

Another thing that I see pretty commonly be an issue, especially with growth teams, is that a lot of people see experiments as just an optimization game.

You want to isolate variables and make iterative changes that hopefully compound. And that is one type of experimentation for improving on a strategy that is working. And if you only do that type of experimentation, this strategy will have diminishing returns. So another type of experimentation that I see a lot of people completely miss is when you need to diverge in strategy.

And when you need to diverge in strategy, you want to be building a more holistic vision of the change you want to create in the product to hopefully get to kind of a new frontier of growth. And you want to be changing a lot of things at once to test out this broad new vision of where you can go with your product.

And then, if it’s successful, you can go and isolate the variables later to understand which parts of it made it more successful than what you’d done in the past, and how to actually tune and optimize those to get better and better over time, because now you have a new strategy to optimize. But I see a lot of teams that only know how to play the optimization game; they don’t know how to play the diverging-strategy game. And both, I think, are critical ways to use experimentation.

Tom: You know, you gave an example of a very frequent question that we get asked: should we isolate every factor that we’re changing, or should we go with one version that includes all the changes together? I think sometimes companies don’t find the right balance between impact and accuracy. Maybe later on you’ll be able to isolate and attribute the impact of each element that you changed, but eventually you want to test a new approach, a new strategy. Go and do it. Who cares about the impact of each individual component in the change you just made.

Yes. Cool. So we talked about leveraging the data to run the right experiments. Eventually there is another function in the organization that is in charge of driving those insights, which is basically the data analysts, and we deal a lot with the collaboration between the analysts and the product teams. And obviously you managed data teams throughout your career.

So my question to you is: what’s the best tip you can give about creating healthy collaboration between product and growth teams and the data teams?

Casey: Yeah, absolutely. Because I’ve managed both analysts and product managers in my career, I think too much of the time PMs and analysts don’t align on what the success criteria of an analyst role look like, and on the collaboration between them.

So the way I define it is: let’s say I’m the PM. At first, yes, the analyst should help me answer some questions that my team already has. But then you should sequence to helping me automate the answers to the questions I’m commonly going to have, through dashboards, alerts, et cetera, so that the PM can self-serve instead of taking the analyst’s time every time a question comes up. That’s really phase two of onboarding an analyst.

But if you do that correctly, it should allow for phase three, which lets the analyst spend more of their time on what I believe is the true leverage of the analyst role: using their knowledge of the strategy, the user problems, and the business problems, which typically comes from the PM, together with their knowledge of the data and how to mine it and deep-dive it, to find insights that help teams understand their problems better, prioritize better, and come up with solutions that have bigger impact.

So many analysts and PMs never get past that first or second stage of answering direct questions from teams or preparing executive updates. Sorry, but if that’s all you’re doing, you are not an analyst, you are a reporter, and that is a much lower-leverage job than an analyst.

Tom: Yeah. I also managed analysts throughout my career, and I like that distinction, which obviously most teams would not admit: the difference between a reporter and someone who actually needs to move the needle. I think we discussed this in the past; they have a quota, they need to deliver insights, like a sales team. It’s more like sales than you think.

Casey: If you’re not coming up with one to five solid ideas that help the team build more value for users and the business in your first three months, you’re probably not on track to be a successful analyst. That can be a scary thing for new analysts to hear, but it’s also an incredible opportunity to have an impact on the business.

Tom: Yeah. Speaking of that, eventually the data is supposed to bring impact to the business. Can you talk about one or two examples where you actually saw something in the data, translated it into a clear insight, tested it, and saw massive impact on your KPIs?

Casey: Yeah, I can give a couple of examples from Grubhub and one from Eventbrite. So at Grubhub, one of the analyses we did was to look at the conversion rate of first-time users based on how many online ordering search results they saw when they searched their address. The way Grubhub works is you type in your address and then we show you a bunch of restaurants that will deliver food to your door, right?

And what we noticed is that in each market we did this in, there was this S-curve in conversion rate based on how many results you had. Once you hit this tipping point of showing more results than people thought they had around them to order food from, you saw this doubling effect in conversion rate.

And we saw this repeat in every market. And if you sliced it by cuisine, or by other elements like neighborhood, you would also see it. So what it helped us do is build a really clear conversation between growth and sales about where we didn’t have enough restaurants to provide the kind of conversion rate or retention we wanted, and it really allowed us to be targeted about where we were simply supply constrained and needed to go get more restaurants to drive a market to new heights.
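For readers who want to try this kind of analysis on their own data, here is a minimal sketch of the bucketing in pandas. It is only an illustration of the idea Casey describes, not Grubhub’s actual pipeline, and the file and column names (first_sessions.csv with market, num_results, converted) are hypothetical.

```python
import pandas as pd

# Hypothetical session-level data: one row per first-time visitor, with the
# market, how many search results they saw, and whether they placed an order.
sessions = pd.read_csv("first_sessions.csv")  # columns: market, num_results, converted

# Bucket result counts so each point has enough volume to be meaningful.
sessions["results_bucket"] = pd.cut(
    sessions["num_results"],
    bins=[0, 5, 10, 20, 40, 80, 1_000],
    labels=["1-5", "6-10", "11-20", "21-40", "41-80", "81+"],
)

# Conversion rate by market and bucket; plotting this per market is where
# the S-curve and its tipping point become visible.
curve = (
    sessions.groupby(["market", "results_bucket"], observed=True)["converted"]
    .agg(conversion_rate="mean", sessions="size")
    .reset_index()
)
print(curve)
```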

So that was extremely valuable. Something on the diagnostic side from Grubhub: what’s really interesting about network-effect businesses is that normally, as you grow, a lot of metrics get worse over time. Your payback period goes up, your retention rate goes down, your ordering frequency goes down, because you’re targeting users who are less of a bullseye for the business.

And what happens in network-effect businesses like Grubhub is that the opposite is true. Because you keep getting more supply, the product, which is how many restaurants you can order from, gets better faster than the users you target get worse. So that’s what was happening at Grubhub, and then it stopped happening.

We started seeing our year-over-year retention cohorts decline in some of our key markets for the first time, which is naturally concerning. So, as we dug into the data, we realized that when people placed their first order, Grubhub would give them the option to create an account or to order as a guest.

And that ability to order as a guest meant you had to fill in fewer fields: lower friction, which should lead to a higher conversion rate, right? A classic friction-reduction conversion play. But we started seeing the number of people using the guest option rise over time, and correspondingly, their retention rate into frequent Grubhub users was about half that of people who created accounts.

So we did this work in the ordering process, somewhere we had traditionally reduced friction, to sell the value of creating an account and to make continuing as a guest less prominent. And what we were able to do is migrate half of the people who had been checking out as guests to creating accounts during that process, without impacting conversion rates, and get the retention benefits of them having accounts: a stored credit card, personalized recommendations, things like that. So that was a really big win off a pretty concerning metric trend.
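As a rough illustration of the cohort comparison Casey describes, here is a small pandas sketch. The table and columns (first_orders.csv with user_id, checkout_type, became_frequent) are assumptions for the example, not Grubhub’s real data model.

```python
import pandas as pd

# Hypothetical first-order data: one row per new customer, with how they
# checked out ("guest" or "account") and whether they later became a
# frequent orderer within some window (e.g. 3+ orders in 90 days).
first_orders = pd.read_csv("first_orders.csv")  # columns: user_id, checkout_type, became_frequent

retention = (
    first_orders.groupby("checkout_type")["became_frequent"]
    .agg(retention_rate="mean", customers="size")
)
print(retention)

# The concerning signal in Casey's story: the retention rate for "guest"
# was roughly half the rate for "account", while the share of guest
# checkouts kept growing over time.
```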

The Eventbrite example is a pretty interesting one. When I joined, most of our insights about what to build came from the sales team, because we were going upmarket and trying to build things for larger creators. So we did a joint project between the analytics team and the research team to understand our self-serve customers better.

And what we found in the data was this entirely new segment of frequent users who were producing smaller events, but very often, and they were both growing faster than the rest of the business and more profitable than the sales-defined segments we had been building for. So we rebuilt our entire strategy over the next three years around this segment. And that has been key in going from a very unprofitable business when Eventbrite IPO’d to being cash-flow positive for the last two years. So it’s been a really big win for us.

Tom: These are really great examples of what I see across our current customers. You talked about, in the second example, activation, basically sign-up as part of the activation journey.

And by the way, so many companies ask: should I get users to sign up? How do I increase sign-ups? I actually dealt with this during my Google days, so it’s a solid insight that is very relevant for other companies. And then there’s the segmentation part that you saw, which is hard: when you grow, there are so many segmentations that you could be doing. From that perspective, I’d say this is an advantage of a platform over, in some cases, a human being: the ability to scan through all the permutations of segmentations to identify segments that overperform or underperform, give you the root cause, and then help you execute on them, so you can actually improve those KPIs accordingly.
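To make that scanning idea concrete, here is a minimal, brute-force sketch of walking segment combinations and comparing each one’s KPI to the overall baseline. It is only an illustration of the concept, not how Loops actually implements it, and the dataset and columns (users.csv with country, plan, device, converted) are hypothetical.

```python
from itertools import combinations

import pandas as pd

# Hypothetical user-level data with a few categorical dimensions and a KPI.
users = pd.read_csv("users.csv")  # columns: country, plan, device, converted

DIMENSIONS = ["country", "plan", "device"]
KPI = "converted"
MIN_SIZE = 200  # ignore segments too small to trust

baseline = users[KPI].mean()
findings = []

# Scan every combination of dimensions and compare each segment's KPI
# against the overall baseline.
for depth in range(1, len(DIMENSIONS) + 1):
    for dims in combinations(DIMENSIONS, depth):
        stats = users.groupby(list(dims))[KPI].agg(["mean", "size"])
        stats = stats[stats["size"] >= MIN_SIZE]
        for segment, row in stats.iterrows():
            values = segment if isinstance(segment, tuple) else (segment,)
            findings.append({
                "segment": dict(zip(dims, values)),
                "kpi": row["mean"],
                "lift": row["mean"] - baseline,
                "users": int(row["size"]),
            })

# Surface the biggest over- and under-performers.
findings.sort(key=lambda f: abs(f["lift"]), reverse=True)
for finding in findings[:10]:
    print(finding)
```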

So these are really great examples of the things we actually touch on in Loops these days. Casey, thank you for that.

Casey: Yeah, no problem.