Speaker 1: If it's like my family, I am definitely.
Speaker 1: I definitely have no subjects.
Speaker 2: Hey, Daniel.
Speaker 2: Hey, Eric.
Speaker 2: Hello, welcome.
Speaker 2: Welcome back, Eric.
Speaker 3: Thanks so much.
Speaker 3: Yeah, it's great to be here.
Speaker 2: Hi, Virginia.
Speaker 2: Hello.
Speaker 2: All right, let's kick off.
Speaker 2: I wanted to start out with some reminders. First, we have a book club coming up on Inspired in four weeks, on August 7th.
Speaker 2: I just reread it myself.
Speaker 2: It's a good read.
Speaker 2: It's highly aligned with how I think about product management, and it does a good job of explaining why some of these things that I have also believed to be important are important.
Speaker 2: So it's nice to have another voice explaining all of that.
Speaker 2: So please do read that.
Speaker 2: I think I'm going to update the new hire onboarding doc and ask all the new hires to read this as well, so that everybody on the team is on the same page with respect to this book.
Speaker 2: Let's see, reminder B: remember, there's this interview spreadsheet.
Speaker 2: CS and sales have populated that with a number of customer contacts for meetings.
Speaker 2: Please do follow up on that.
Speaker 2: I want to ensure goodwill with that team and follow up promptly with meetings with these customers so that that team can see that we're taking advantage of it.
Speaker 2: 3rd reminder, we've got a little engagement survey.
Speaker 2: I'm going to run this once a month in Q3 just to take a pulse given all the change going on.
Speaker 2: Please do take a minute to fill it out.
Speaker 2: It's five, you know, quick questions, and then one free-form where you can share whatever feedback you have.
Speaker 2: Fabian didn't receive it.
Speaker 4: I'm pretty sure I went through my emails.
Speaker 4: I think there was some... maybe it's on my end, but I'm happy to fill it out.
Speaker 4: But I...
Speaker 2: You didn't get it.
Speaker 2: We need to get it to you.
Speaker 2: All right, I will.
Speaker 2: I'll ask Jessica to resend that to you.
Speaker 2: Is anybody else in the same situation, where you did not receive it?
Speaker 5: I don't recall, but is there a way to put the link to the survey in the agenda?
Speaker 2: Well, it's, it's personal.
Speaker 2: It's tied back to your user ID, so we can track which team you're on and that kind of thing.
Speaker 2: I do believe it's anonymous, but nevertheless, everyone has their own custom ID.
Speaker 2: So I'll ask Jessica to send it to Fabian and Karina.
Speaker 2: Anybody else?
Speaker 3: I just, I hadn't seen it, Scott, but I searched my e-mail real quick and it looks like that's the title of the e-mail.
Speaker 3: So if you just search for that in your Gmail, you should be able to find it if you got it.
Speaker 2: Pulse Survey.
Speaker 2: OK, if anybody else didn't get it, please ping me.
Speaker 2: All right, next reminder: we have a goal of at least three customer interviews per PM.
Speaker 2: There's an OKR issue out there.
Speaker 2: If you haven't updated it lately, please do so.
Speaker 2: And remember, we have three weeks left in Q2 to hit our goals.
Speaker 2: So please, please do invest the time to get those set up and get at least three done if you haven't already.
Speaker 2: Next one, category maturity page.
Speaker 2: Last week we talked about this.
Speaker 2: Josh did a great job of creating some new views, one of which is sort of this flow chart showing how mature we're going to be at a given point in time, which raised questions about whether we were forecasting that accurately.
Speaker 2: If you haven't already, please go in and either confirm that it's accurate or update it.
Speaker 2: Thanks to Kenny for creating that issue.
Speaker 2: Somebody had the direction maturity page on the agenda.
Speaker 2: You want to talk about that?
Speaker 4: That was just me, just as you referenced it.
Speaker 4: I was adding a link there.
Speaker 4: That's all. Folks, see the updates there for the charts.
Speaker 4: Just check it out.
Speaker 2: Got it.
Speaker 4: So it's a good way to get a sense for it.
Speaker 4: It's hard when it's in tabular form, but when it's charted, it's much easier to see if it's achievable or not based on some of the trends.
Speaker 4: And there's also, if you scroll down, stage level trends as well, so you can see how your stage in particular is trending, or is said to be trending.
Speaker 2: Great.
Speaker 2: Thanks, Josh.
Speaker 2: All right, some team updates.
Speaker 2: We hired a couple more PMs.
Speaker 2: We got a good rhythm going on hiring.
Speaker 2: We hired Gabe Weaver.
Speaker 2: He originally came through the growth funnel, but we have a really strong candidate for that fourth slot.
Speaker 2: So we're going to target Gabe for a third Manage PM.
Speaker 2: The charter of that team is to be defined.
Speaker 2: But bottom line, we're going to have a third group in the Manage area and Gabe will lead that.
Speaker 2: And then Dov Hershkovitz, we just hired him as the APM for Monitoring.
Speaker 2: He's got a great background in monitoring and has most recently been at Elastic.
Speaker 2: So thank you to everyone who's been involved in the hiring loop.
Speaker 2: I know it's taking a lot of energy from everybody, but I think our hiring process continues to pick up speed. 2B: I worked with Christy and David Sakamoto to change some language around customer results.
Speaker 2: Just wanted to make sure you all saw that.
Speaker 2: So there's the MR.
Speaker 6: Hey, Scott, on that one: there's a diff that highlights what is new content, I believe, and there's one section that is great.
Speaker 6: I can totally understand why we would add that about prioritize ruthlessly.
Speaker 6: But then the rest is, I guess, a bunch of formatting changes, and I don't know if there's new content in any of the dogfooding parts.
Speaker 6: I guess, is the TLDR the addition of that "prioritize ruthlessly", or is there some other point we were trying to make in this change?
Speaker 2: It's been a little while.
Speaker 2: I think there were a number of changes, but before the handbook basically read that internal feedback is worth 10 times more than external feedback.
Speaker 2: And I understand why we want internal feedback because of dog fooding and using our own product.
Speaker 2: It's a great channel for feedback, but I think it was sending the message that customers weren't nearly as important as internal opinion.
Speaker 2: And both Christy and I want to move off of that position.
Speaker 2: Like, we should be customer first and treat our own teams as a customer.
Speaker 2: But I don't want people to interpret that our own internal opinion is worth 10 times more than a customer's opinion, if that makes sense.
Speaker 2: So it was mostly language wherever that showed up in the handbook.
Speaker 6: Gotcha.
Speaker 6: OK.
Speaker 4: The one comment I had on this is that some of the text seems like we should focus on core competencies as opposed to new scope, as in we should focus first on what we're best at.
Speaker 4: And I'm not sure that was the intent.
Speaker 4: Anyways, that's one thought I had on this.
Speaker 2: I don't remember that being the point of it.
Speaker 2: Maybe it reads that way, I don't know.
Speaker 2: Feel free to continue to suggest tweaks.
Speaker 2: The point was let's prioritize and do what matters most first.
Speaker 2: It's kind of what I've been preaching the whole time: whatever's in your area, wherever that is, do what matters first.
Speaker 2: Don't try to do it all at once.
Speaker 2: We're going to have to work our way through.
Speaker 2: That was the point.
Speaker 6: Yeah.
Speaker 6: And I don't know if this needs a follow-up issue; in the way you described it, it doesn't seem controversial.
Speaker 6: But I will say there was a big discussion in the recent initiative, you know, from Sid and other leaders, that we should heavily prioritize dogfooding, because there are teams within the company that weren't utilizing our features, and we wanted to make sure that the product team was responsive to requests from them.
Speaker 2: Yep.
Speaker 6: It's a little bit different than saying it's about our internal opinion.
Speaker 6: We had always said we should validate it.
Speaker 6: So that clarification is good: it's about us saying this is in line with where we want to take the product and where we're hearing customers.
Speaker 6: But if an internal customer wants it, the original thinking was that we should emphasize it.
Speaker 6: I just want to check if the intent was to clarify that same position, or if we're saying we should actually pull back from the push for more dogfooding.
Speaker 2: Please don't conflate the two, OK?
Speaker 2: We very much still want to dogfood.
Speaker 2: I think the point is when you're thinking of customers for your thing, think of our internal teams early, like you can get great feedback from them.
Speaker 2: They have an incentive to work with you.
Speaker 2: There's very little risk in rolling out things early to them.
Speaker 2: So treat them like a customer and think of our internal teams early as you're rolling something out.
Speaker 2: That's still very much the message.
Speaker 2: But let's not over rotate on internal feedback or internal opinion.
Speaker 2: Let's still seek external feedback too, because that's just one customer of many.
Speaker 4: Cool.
Speaker 4: OK.
Speaker 2: Makes sense?
Speaker 6: It does.
Speaker 2: OK, yeah. All right, 2C: customer discovery training coming soon.
Speaker 2: Sarah O'Donnell and her team are going to do a bunch of sort of quick videos on a variety of customer discovery topics.
Speaker 2: So super excited for that.
Speaker 2: They should start dropping any day now, I think starting this week.
Speaker 2: And so we'll release those to you as they come out.
Speaker 2: We'll embed them in the How We Work description on our team page as well.
Speaker 2: All right, #3: 12.2 kickoff feedback.
Speaker 2: Josh, thanks for leading the charge.
Speaker 2: I thought you did a good job of emceeing and sort of adding color commentary in between.
Speaker 2: I thought the screenshots definitely helped.
Speaker 2: There were a bunch that did not have them.
Speaker 2: I was wondering why.
Speaker 2: Is it just because we're not there yet on many of these?
Speaker 2: Yeah, OK.
Speaker 6: Yeah, I mean, some of the commentary, I don't know if Nicole added that, but many of the issues are saying we're going to do UX, front end, and back end in the same iteration.
Speaker 6: So it hasn't started.
Speaker 6: And in some cases...
Speaker 6: I can think of a number where there just aren't appropriate screenshots.
Speaker 6: Or at least there weren't screenshots or mock ups created in advance for front end to work from, because front end was going to work on that without a mock up.
Speaker 2: OK.
Speaker 2: I'd love to get to where we're a bit ahead so that we'll have more of these earlier and hopefully the customer discovery flow will get us further ahead on that.
Speaker 4: In my case, some of the features also just have no UX component, as in no UI component that could be screenshotted.
Speaker 2: Understood.
Speaker 2: Yeah, I don't expect everyone.
Speaker 2: I mean, use your judgement.
Speaker 2: If it doesn't need it, fine.
Speaker 2: But where we do need design, it'd be great to get at least a month ahead.
Speaker 2: So as we roll into dev, we have that to offer them.
Speaker 3: Scott, just a quick question to you.
Speaker 3: How do you feel about presenting, like, Balsamiq or super lo-fi mock ups on the kickoff call?
Speaker 2: I'm fine with that.
Speaker 2: OK.
Speaker 3: 'Cause that could be an option too for PMs that are waiting for UX to work in the same sprint.
Speaker 3: And I know that Plan's done a pretty good job, at least in the past, of kind of running ahead of UX and saying, like, hey, this is kind of what I think I want this to look like, before spinning UX cycles on making more hi-fi mock ups.
Speaker 3: So.
Speaker 2: Just if you think it does a better job of describing it than the issue itself, then use it.
Speaker 3: I think, I think in some cases, like a picture can be worth 1000 words.
Speaker 3: I mean, no matter how many words you throw at something... for example, one of the things that I reported on for the release kickoff meeting was expanding the epic view in the road map.
Speaker 3: And like those are basically just a bunch of buzzwords put together that you're like, OK, what does that mean?
Speaker 3: Expand epic?
Speaker 3: And I literally thought on that one for like 20 minutes, saying, how do I make this epic...
Speaker 3: How do I make this issue title like more descriptive for customer value?
Speaker 3: And it just came down to like, that is the functionality we're adding.
Speaker 3: What does that mean?
Speaker 3: Oh, here's the screenshot.
Speaker 3: You can see that we're going to add a drop down.
Speaker 3: You can see the issues and children epics that are attached to that epic.
Speaker 3: And in that case, like, I was like, I'm so thankful I have a screenshot, even though that one is actually not a hi-fi mock up.
Speaker 3: It's more lo-fi, a little bit pieced together.
Speaker 3: So yeah, I, I think like in general, there's a lot more value if we can show something like that.
Speaker 3: So, you know, product managers, you can consider that; you should feel free, you're empowered, to take a tool that you're comfortable with, even if it might be just, like, Google Slides, and make something that gets you at least part of the way there in terms of what you want the experience to look like.
Speaker 2: Yep, perfect. 3C: I thought the talk track shifted.
Speaker 2: It was definitely more problem focused.
Speaker 2: I noticed a number of speakers really trying to zero in on that, which is perfect.
Speaker 2: Some of them could have been more problem focused, I thought.
Speaker 2: So just keep considering that. It's important to be able to pitch these things in ways that people who aren't close to it can understand.
Speaker 2: And so just think about that.
Speaker 2: How do I explain this to someone who's cold, who doesn't know a darn thing about this?
Speaker 2: Why should they care? Getting that crystal clear in your thinking is going to be important no matter what.
Speaker 2: So it's time well spent.
Speaker 5: Hey, Scott, this is Karina.
Speaker 5: Just to add to that, if you don't mind: I think this has always been a challenge in product, even before I joined GitLab, for many people, how to get there on some of this terminology when those of us have deep technical backgrounds.
Speaker 5: So my thought would be, is there a way that you can start sharing, you know, or applauding good examples of this, so that the product team can start to kind of ruminate on this and develop that skill if we're not there yet?
Speaker 2: Yeah, I thought Lucas's were very well framed up. Those two popped out at me as, yeah, that's the problem we're trying to solve.
Speaker 2: Check those out.
Speaker 2: I'll look through for some other examples.
Speaker 2: Thank you for the suggestion.
Speaker 2: All right, 3D: we went long.
Speaker 2: We just have a ton of speakers, which I love, that lots of people get a chance to speak.
Speaker 2: So I'm good with that.
Speaker 2: But we're going to have to limit the number of items, probably.
Speaker 2: So it looks like there's some other ideas in here, perhaps themes.
Speaker 2: Yeah.
Speaker 2: I mean, if there are some that relate to each other, you could tell a story.
Speaker 2: Hey, we're trying to prove this.
Speaker 2: And then A, B, and C tie back to it.
Speaker 2: I think it's OK to be pretty brief in your description as long as you're hitting what it is.
Speaker 2: And if somebody's really interested, they can dive deep.
Speaker 2: Thematic is a good idea.
Speaker 2: Recorded video, if you really want to go deep, maybe it's technically complex, that's a great idea.
Speaker 2: And then you can just cover the customer value at a high level and leave the detail to the video.
Speaker 2: Watch statistics.
Speaker 2: I think Josh looked this up last time.
Speaker 2: I think he said there were 1000.
Speaker 2: Oh, there we go.
Speaker 2: Kenny's putting them in.
Speaker 2: So somewhere between 500 and 1000.
Speaker 5: To kind of add to the time point, just as feedback: I was timing myself this time, I had two features listed, and I hit 3 minutes and 14 seconds, so obviously we need to shorten that.
Speaker 5: So when we talk about, you know, I think somebody mentioned doing two or cutting it down.
Speaker 5: It's interesting that I landed there with the two that I chose.
Speaker 2: Yeah, that feels about average.
Speaker 2: But we've had how many speakers?
Speaker 2: It'll probably have to be a couple minutes max per person.
Speaker 6: I mean, Eric pointed this out in the next line item.
Speaker 6: I do think we are due for a rethink of how we're doing the kickoff, because next month we're going to have 25 people trying to give content, and even at 2 minutes you're already over.
Speaker 6: So.
Speaker 2: Yeah, maybe we expand it.
Speaker 6: I will give a shout out to Jason, who, because he's on paternity leave, created a video.
Speaker 6: But I think the original intent of the kickoff was actually just that, as a company, we had a retrospective and a kickoff, a retrospective immediately followed by a kickoff, and we just decided to post that on YouTube.
Speaker 6: We now post a whole bunch of content on YouTube.
Speaker 6: So just having what you would normally do for your kind of grooming or kickoff within your individual group posted to YouTube, and us maybe having a specific channel for people who wanted to follow it. Anyway, we should discuss it in an issue and come up with something, I do think, prior to the next release kickoff.
Speaker 2: Just to evaluate alternatives to the format.
Speaker 6: Yeah.
Speaker 6: I mean, even if we said every person has one minute, I feel like we're doing a disservice, because we're now highlighting much less because we feel like we have a time constraint and need to keep it in one synchronous 30 minute block.
Speaker 3: And there's...
Speaker 6: We don't need to do that.
Speaker 2: OK.
Speaker 4: Plus one to revamping it. I think it's got so many jobs right now that we're not doing a good job at any particular one of them.
Speaker 4: But my gut feeling is the most important customers are internal, and it's about communicating internally, because people attend that thing. Man, we had like 50 people on the Zoom call alone, not even considering YouTube.
Speaker 4: People are asking about, you know, what happened to the YouTube link and things like that.
Speaker 4: So it's well attended internally.
Speaker 4: I think it's there just for alignment.
Speaker 4: That's let alone the, you know, marketing value of, like, a sort of release event.
Speaker 4: I mean, for customers, it kind of feels like we're better off having, like, a webinar or live stream on the release day or something like that.
Speaker 2: Right.
Speaker 2: Yeah, maybe the externally focused one would be more about what we just shipped.
Speaker 7: There was a webinar that used to happen called Release Radar.
Speaker 7: I think I participated in a couple of those, like two or three of them back-to-back, and they were pretty poorly attended, from what my experience was.
Speaker 7: And I think that actually got ended by the product marketing team for that reason.
Speaker 7: I'm sure someone from that team could actually give feedback, but.
Speaker 7: I think one thing about the time limit is it's really hard to motivate problems, particularly like in a short amount of time, particularly when they're very technical.
Speaker 7: Like as product categories grow in maturity and sophistication, like the problems become more and more specific that we're solving and so motivating those specific reasons and why we're going after like this specific tiny piece of a very mature category.
Speaker 7: It's hard to do in 30 seconds in a way that lands.
Speaker 7: So if we want to do that better, that's going to put more and more pressure on like communicating a reasonable number of items, I think.
Speaker 2: OK, thank you all for the feedback.
Speaker 2: I like the idea of creating an issue and perhaps tweaking the format before next month.
Speaker 2: I also like the idea of asking internal and external constituents what they like or don't like about the format.
Speaker 1: Yeah, just one final thought on that.
Speaker 1: Like, I love that it's half an hour.
Speaker 1: I'd almost even take picking particular categories over lengthening the time, as an example, just because I have a feeling that if you want people to watch it consistently, it's going to be in that block.
Speaker 1: But that's just me.
Speaker 1: So like, if, you know, other customers are saying they would like the larger block, then that's the right way to go.
Speaker 1: That's where I'd love to get feedback in some fashion, to say, OK, you know, here's how we should change it.
Speaker 1: But breadth wise, we clearly have gone so much broader that it's going to be hard to cover all the topics in a quick amount of time.
Speaker 2: OK.
Speaker 2: Thanks Kenny for starting the issue.
Speaker 2: James, over to you for #4.
Speaker 7: Yeah.
Speaker 7: I just thought I'd share this for many.
Speaker 7: I think many on this call haven't heard Mark speak about product discovery sprints, but he advocated for this quite a number of times previously, from his experience running these at a prior company.
Speaker 7: So the idea is kind of different to, I guess, a UX discovery sprint.
Speaker 7: I think Fabian linked one of the books about that, where it's really focused on UX iteration and research.
Speaker 7: The product discovery Sprint is more focused on kind of like actually building something, iterating on something that's built and trying to get to some sort of MVC really quickly by trying to make the process more synchronous.
Speaker 7: So the source code group is going to try and do that around file by file diff navigation to solve performance and usability problems in 12.3.
Speaker 7: And I thought it'd be interesting to share that because internally we've been wrestling with like how to make this work well in an async slash remote environment.
Speaker 7: So we're looking at trying to confine the participants to a specific time zone so that we can all be available with a significant amount of overlap.
Speaker 7: But that's also difficult, because it kind of automatically excludes 50% of the team who are just geographically remote from any of their peers.
Speaker 7: We only have one UX designer, and they're only available in the European time zone, so some interesting challenges there.
Speaker 7: If it goes well, we're going to try and replicate it a release or two later on a different problem that is also really complicated and hard and that we want to make progress on quickly.
Speaker 7: But I'll share any findings we have, and if anyone's interested in discussing...
Speaker 4: That we'll need more.
Speaker 7: Put a meeting in my calendar or drop me a message.
Speaker 2: This is great, James. By the way, I think the UX team is going to run one as well.
Speaker 2: Let me just say, we have the option to run one with Google Ventures, who's one of our investors. That Sprint book that Fabian linked to was written by a guy from GV.
Speaker 2: They did hundreds of these things for their clients.
Speaker 2: They know what they're doing.
Speaker 2: So if we get a chance to do one with them, we should.
Speaker 2: We're going to have to figure out how to do it within our async model though, so whatever you learn from yours, James, please feed that back.
Speaker 2: Super interesting topic.
Speaker 2: I think if we could get good at this asynchronously, that would be a breakthrough.
Speaker 7: Yeah.
Speaker 7: I think one other interesting challenge is that the sprint terminology is kind of challenging; like, it's not sustainable to be doing design sprints or discovery sprints on a daily basis, whether we're in person or not.
Speaker 7: Like, it's not scalable to actually sprint all the time.
Speaker 7: So choosing the right tasks, choosing the right time is, I think, one of the other challenges.
Speaker 2: I agree.
Speaker 2: Yeah, you don't want to do this for everything, 'cause, well, if you follow it to the letter, it takes a whole week and you're totally dedicated to it, which is amazing for focus's sake, but you can't get anything else done.
Speaker 2: So depending on how we structure this, it would need to be done for things that are really big unknowns, where dedicating a big chunk of time like that is worth it and not everything clears that bar.
Speaker 3: I think it's also most relevant for stages that are very...
Speaker 2: ...in the very beginning. Kind of like Flatiron, which was one of the biggest examples for Google Ventures: obviously, solving clinical trials for the world is a super complex problem.
Speaker 2: So they just figured out what is this thing that we can do so that we can start getting there.
Speaker 2: And I think these are the problems where the design sprint is useful.
Speaker 2: We used it pretty successfully at my last company around pricing and packaging stuff and ran a bunch of interviews with customers on that.
Speaker 2: So I've seen it work.
Speaker 2: All right.
Speaker 2: OK, Christopher #5.
Speaker 1: Yeah.
Speaker 1: Just want to call out: over the past month, we've had a significant number of outages related to GitLab.com, and that affected at least one customer's revenue potential.
Speaker 1: And because of that, you know, we've had some focus from an exec leadership perspective.
Speaker 1: So I encourage everybody to look at that document and kind of look through it.
Speaker 1: And particularly, there's a couple things from an engineering perspective to make you aware of.
Speaker 1: One is, we started an infrastructure-to-development board where we're going to start matching issues up and trying to make sure that those get prioritized highly where appropriate, particularly for anything that, you know, affects performance around these issues.
Speaker 1: The other issue that I put in there, which is listed specifically, is around prioritizing performance and availability work.
Speaker 1: So one significant aspect of this particular recent outage last week was that the Redis server apparently can't handle the load anymore.
Speaker 1: And we started digging into it.
Speaker 1: We found a bunch of stuff that we hadn't checked.
Speaker 1: Like, for instance, as an example, our JUnit tests were basically going and getting cached, and there was no limit on the number of unit tests that could actually be cached.
Speaker 1: So they're getting these blocks of, like, several megabytes of data that had to basically be transferred around in Redis.
Speaker 1: And that's really what's affecting its performance overall from a caching service perspective.
Speaker 1: So consequently, Scott, I sent that to you.
Speaker 1: I hope that's OK.
Speaker 1: Yeah, because it feels like you need to help out with, you know, how we best make sure that we get this going systematically.
Speaker 1: And I just want to make sure that everybody was aware, and to open it up for discussion if there were any questions or any early feedback on it from that perspective.
Speaker 2: I added some comments to it, Christopher.
Speaker 1: OK.
Speaker 1: I haven't had a chance to look.
Speaker 1: I apologize about that.
Speaker 2: No problem.
Speaker 6: Can I ask, and maybe Mac, this is a question for you:
Speaker 6: do we categorize performance issues as bugs?
Speaker 2: We do have a performance label.
Speaker 4: But they should be under bugs.
Speaker 6: OK.
Speaker 2: Yeah, look at this.
Speaker 1: This is an example where oftentimes the way we would treat performance is reactionary.
Speaker 1: This is trying to think about it more in a proactive way.
Speaker 1: So, like, as an example, and I'll give a horrible example, but I worked at Amazon: tags.
Speaker 1: Originally, when they were created, tags were expected just to label, you know, certain instances, and that was it.
Speaker 1: And it turns out that customers started using like 20 and 30 or 50 tags, and they're like, what the heck's going on?
Speaker 1: And they realized tags were being used to basically share environment information.
Speaker 1: So they could put the same drop of code on two different VMs and they would behave differently based on the tag, which was a totally novel way for customers to use it.
Speaker 1: So then they had to basically limit the number of tags customers could use, because it wasn't scaling with the system effectively.
Speaker 1: So this is kind of another example where I think we've got to start thinking in terms of, you know, when we create something new, a new feature or piece of functionality, what's the cost associated with that, right?
Speaker 1: Because, like, it does cost us something internally.
Speaker 1: And I'm not asking product managers to necessarily think in terms of the exact bytes, but I am starting to think in terms of, like, you know, what are the expectations around it?
Speaker 1: Because, as an example, if we went back and looked at JUnit tests and reporting, you know, if we said unlimited, that's a tough engineering call, right?
Speaker 1: Particularly since, I guess, it's free right now for customers.
Speaker 1: That is my understanding.
Speaker 1: We also don't have a limit on the number of repos you can mirror.
Speaker 1: And that seems dangerous.
Speaker 6: Yeah.
Speaker 6: So I guess I would comment, you know, I think the product team is expected to prioritize all things and to understand them deeply, whether they're a security issue or a performance concern.
Speaker 6: I think what you're highlighting is how to be proactive.
Speaker 6: I don't know if the product team would immediately know the impact of a proposed change, but maybe that's an opportunity for our infrastructure SRE stable counterparts to be involved in vetting and looking at issues early in the pipeline to decide whether or not they would have an impact.
Speaker 1: Yeah, or let's say we're implementing a feature, like, let's say we were implementing mirroring from scratch.
Speaker 1: Like, the first question we should be asking is, how many mirrors is a customer expected to be able to have, and when do we want to start charging if they get above a certain limit?
Speaker 1: And, you know, right now we don't.
Speaker 1: And you could argue that scaling is just as much a reason for customers to start paying us as feature sets are.
Speaker 1: That's kind of the argument I would be making, because those things cost money, whether we like to admit it or not.
Speaker 5: Yeah, Christopher, I would agree with you on what you're trying to sort of shape up and call out here, in the sense of, you know, going through pages, for example, where the performance of getting the page loaded is not great.
Speaker 5: And I don't know if we set out originally to track some of those performance things, but to your point, Kenny, I think performance should be incorporated somewhere as we move forward, and it's something we should be thinking about for scalability across the board, because just as important as bringing forth that really cool thing is that the really cool thing works, so people will stay there to use it.
Speaker 6: I.
Speaker 4: I think, just as a side note, we have something in the product handbook that I read, like, a couple days ago on performance, something like: fast applications are always more usable.
Speaker 4: And I think that's, that's definitely important.
Speaker 4: And I also think that gitlab.com is massive and I think we have 4 million users.
Speaker 4: And for example, for Geo, I know that only by actually interacting with the infrastructure are we getting feedback on some of the performance bottlenecks that we are just not seeing otherwise, right?
Speaker 4: And so I think that's actually also really valuable.
Speaker 4: And in that regard, maybe also like again, you know, dogfooding these things helps.
Speaker 4: And I think with the combination of CD, we may hit a lot of those things at the moment.
Speaker 5: Yeah, and the dogfooding thing on that front is a little confusing to me.
Speaker 5: I met with Marin to talk about that.
Speaker 5: And, you know, there's sort of this mentality of looking at .com first, or leading with .com, for scalability.
Speaker 5: And it's just not really clear to me how we approach making sure that we attack scalability for .com: are we starting with .com, or are we starting somewhere else, from a dogfooding perspective?
Speaker 7: I'm pretty sure the handbook says that we're meant to.
Speaker 7: Well, at least the guidelines used to be that new features were meant to be available on gitlab.com and self hosted at the same time, and there used to be a production ready checklist that I think the engineering team was responsible for.
Speaker 7: I know that when we launched Geo, there was a production readiness process that we had to go through.
Speaker 7: And certainly with Gitaly, we consider these things: on the source code front, we're regularly considering scale, like moving terabytes of data from the database into object storage.
Speaker 7: And considering all these sorts of things, performance is very much a feature and should be treated as one.
Speaker 7: And I think particularly in categories where adoption is still growing and in early stages of maturity, performance like understandably is less of a concern because there's lower usage.
Speaker 7: So, like, solving scale at an enormous level doesn't necessarily make sense commercially when usage is small.
Speaker 7: So there is a bit of a juggling act here because we don't want to build a product for billions of users if there's only, I don't know, 20,000 users experimenting with our newest feature.
Speaker 7: So there's an iterative approach that needs to be taken.
Speaker 7: But I would agree that, particularly coming from a team that's digging out a lot of technical debt and solving a lot of performance problems all the time, we've probably historically not been very good at picking the right moment to pay off technical debt and address performance problems until they've become fires.
Speaker 5: Yeah.
Speaker 5: So, to that point, just real quick, James. Sorry, Scott.
Speaker 5: I think some things are obvious. Like, when we look at our progressive delivery strategy, if you look at something like feature flags, I wouldn't imagine that that's not going to be a key feature that we're going to bring forward.
Speaker 5: So I feel like that should be a gimme, whether adoption has struck yet or not.
Speaker 5: But the second thing that is not clear to me, again from when I was interviewing Marin about dogfooding, is that I noticed Marin's reaction was: they didn't come to us first.
Speaker 5: And so this is not scalable, or this is not usable for us internally.
Speaker 5: And so the approach and process moving forward, to dogfood in the right spots, is not clear to me, or, you know, what the best practice has been, or if anybody's cracked that.
Speaker 7: Yeah, I can give a concrete example because I did a call with Marin a few months back around confidential merge requests.
Speaker 7: So we knew that customers wanted to resolve them.
Speaker 7: We knew that we wanted to do that, and that we're trying to get rid of dev.gitlab.org.
Speaker 7: So I had a video call with him and a bunch of async conversations, like: I've got these ideas for what a first iteration looks like.
Speaker 7: And then we did a few calls and worked through them and worked out which were the things that needed to happen.
Speaker 7: And so we're shipping the first iteration of that in 12.1.
Speaker 7: But we coordinated with them and I spoke with Marin quite a lot to make sure whatever we were building was useful and would solve the security problems that they had as well as our own ones.
Speaker 7: So yeah, I agree, it needs to be proactive.
Speaker 7: We're not going to ship something that's useful, or that the infrastructure team is going to want to opt into, unless we've had a conversation with them in advance.
Speaker 4: Right.
Speaker 7: We're at 1:32. Can I add one last tiny point?
Speaker 7: It's sometimes really important for customers as well that we're running it on gitlab.com before they adopt it.
Speaker 7: So one example is we built SSL/TLS support in Gitaly, but it's not turned on at gitlab.com.
Speaker 7: And so the customer that we built it for isn't using it because they're waiting for our production team to turn it on because they want to see before they turn it on for their enormous instance.
Speaker 7: Have we actually proven it at the world's largest GitLab instance scale?
Speaker 7: So I think that's one important reason why we always need to make sure that features are on and are getting used on gitlab.com.
Speaker 4: Just sorry, I've got a couple of things.
Speaker 4: I think we definitely need to have a stronger definition of done as part of our progressive delivery, right?
Speaker 4: And so part of the definition of done is: it needs to run at scale on gitlab.com successfully, and not blow up the cost model or performance.
Speaker 4: And if it does, it's just going to be reverted frankly.
Speaker 4: And that should be the bar for getting features across the line.
Speaker 4: That doesn't mean the same bar for new features that have low usage, where obviously their impact could be quite small, but it still needs to be within reason.
Speaker 4: I totally agree that you don't want to overbuild on the first iteration, planning for millions of users; that doesn't make any sense.
Speaker 4: But yeah, I think that's one aspect.
Speaker 4: I think the other aspect, on your comment, Christopher, on pricing, and we can maybe have a follow up here on, like, a handbook update, is that it's interesting that customers will absorb the compute cost on self managed.
Speaker 4: And so for them, if they want to have a ridiculous number of, you know, mirrors, then it's fine, because they're paying for it.
Speaker 4: It's a use case, and it's all on their dime.
Speaker 4: And so maybe a way to think about this is to have some level of controls you can set, if you want to, at the instance level, some way to control that amount of behavior for when we're covering the cost of those things.
Speaker 4: But yeah, anyways.
Speaker 2: Thank you all.
Speaker 2: Great topic.
Speaker 2: Christopher, please pile on that issue with thoughts on how to handle this.
Speaker 2: I like your suggestion on definition of done, Josh.
Speaker 2: All right, Karina, #6 and #7.
Speaker 5: Yes.
Speaker 5: So I submitted an MR for the product handbook yesterday, and we're going through this process of getting more self organized in the release area and with our engineering and user design partners.
Speaker 5: And, you know, one of the things that we recognized, and it's documented in the issue below in #7, is that our delivery percentage has not been great, which you've heard me talk about, and that we need to self organize around some method.
Speaker 5: And what we found in sort of the last prioritization for release scope is that we have a lot of oversized issues and features that, you know, honestly need a beat for a release: to go through user research, maybe look at the code.
Speaker 5: If they've never seen or, you know, reviewed that piece of code before, to make some recommendations on the best way to solve it.
Speaker 5: So I put some thinking around that dual track mindset, dual track agile, kind of launching off of what user experience has recently updated for dual track agile.
Speaker 5: I'd love feedback on that.
Speaker 5: And the second piece is this experiment we're running: we're leveraging a semi dual track agile approach just to organize our conversation, how we open issues for areas where we need a discovery beat versus presenting an issue that is actually ready for delivery.
Speaker 5: One thing that was interesting, Scott, was what we were talking about with the kickoff call and having some, you know, images and more to share.
Speaker 5: That's definitely where I think we'd like to be with release: getting ahead of that curve and really having some concrete understanding and prototypes of what we're trying to present and deliver.
Speaker 5: But when we looked at going through that process, you know, this is really for complex things or heavy lifting, because it is about a 20 to 30 day lead time to commit to a release.
Speaker 5: So just that.
Speaker 5: And so we have some targets to improve, and our hypothesis on leveraging this; you can follow it there if you have input, but they kind of tie together.
Speaker 5: But I'd love input on the handbook piece.
Speaker 2: Thank you, Karina, for creating these and sharing these.
Speaker 2: I think you're on the right track.
Speaker 2: In parallel, I've been working with Christopher and Eric and Christy to outline a high level description of our software development life cycle, which will have two tracks.
Speaker 2: This is sort of competing content there, or maybe they could be merged.
Speaker 2: So thank you for doing this.
Speaker 2: I may slow roll it a little bit to make sure that we have one way of describing the flow we'd like to go through.
Speaker 2: But thank you very much for getting it kicked off.
Speaker 2: Any questions for Karina?
Speaker 2: If not, Josh, over to you.
Speaker 4: Yeah, just a basic announcement: I just went through and renamed the promise label to planning priority.
Speaker 4: General meaning is largely the same, although we shouldn't be promising features.
Speaker 4: And so this is just a way to flag it.
Speaker 4: And that way it's a reminder for PMs that this issue had some importance, like contractual dependencies.
Speaker 4: And so just be aware of it so you can feel free to use it.
Speaker 4: I did note in the label text that it should only be applied by product managers and in particular the responsible product manager for that section.
Speaker 4: So it shouldn't get applied by a TAM or anyone else.
Speaker 2: Awesome.
Speaker 2: I like that terminology a lot better.
Speaker 2: Thank you, Josh.
Speaker 2: All right, 5 minutes to spare.
Speaker 2: Anything else?
Speaker 2: If not, have a great Tuesday.
Speaker 2: Adios.
Speaker 6: Thanks.
Comprehensive Summary of Meeting Discussion:
Welcome and Reminders:
- The meeting began with a warm welcome and several reminders. An upcoming book club meeting scheduled for August 7th will focus on discussing "Inspired," emphasizing its importance for product management alignment among team members.
- There was a reminder to engage with the interview spreadsheet for customer meetings to maintain goodwill. Additionally, participation in an engagement survey was encouraged, with a concern raised about missing survey emails. The need to resolve survey access issues was acknowledged with plans for further actions to ensure all team members receive the survey link.
Speaker Contributions:
- Speaker 2: Reminded the team of the goal of at least three customer interviews per PM and the OKR issue tracking it, urging updates before the end of Q2. Josh developed new chart views for category maturity that require accuracy verification, with contributions noted from Kenny. Mentioned hiring more PMs, including Gabe Weaver; with a strong candidate in play for the fourth Growth slot, Gabe will instead lead a third Manage group whose charter is still to be defined.
- Speaker 4: Noted that some of the revised handbook text could read as favoring core competencies over new scope; Speaker 2 clarified the intent was simply to prioritize what matters most first.
- Speaker 3: Emphasized the value of visuals like screenshots for clarifying complex concepts and stressed problem-oriented communication for better engagement. Suggested sharing good examples to help teams unfamiliar with technical terminology.
- Speaker 7: Recalled the poorly attended Release Radar webinars, which product marketing eventually ended, and noted that motivating very technical problems in a short time slot is hard. Speaker 2 and Speaker 1 proposed gathering feedback from internal and external constituents to refine the format.
Team Developments:
- Management Group Formation: A new group in the Manage area is being formed, led by Gabe Weaver. Dov Hershkovitz has been hired as the APM for Monitoring, leveraging his experience from Elastic.
- Language Change Regarding Customer Results: There was an emphasis on prioritizing customer feedback over internal opinions.
- Focus on Core Competencies: Discussions focused on prioritizing tasks and improving hiring processes, while emphasizing dogfooding to align with customer needs.
Technical Challenges and Solutions:
- Product Discovery Sprint: The Source Code group will run a sprint focused on building and iterating quickly toward an MVC for file-by-file diff navigation, addressing performance and usability problems while coordinating participants across time zones with limited UX resources.
- Infrastructure and Performance Issues: Redis performance problems caused by unlimited caching of JUnit test reports were discussed, emphasizing systematic solutions and proactive management of performance issues (see the sketch after this list).
- Resource Management: Discussions on implementing controls at the instance level to manage behaviors and costs, ensuring scalability and usability in line with user feedback; the sketch below illustrates the idea.
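As a minimal sketch of the guard rails discussed in the two items above: all names and limits here are hypothetical, GitLab's actual implementation is in Ruby, and this illustrative Python only shows the shape of the idea.

import zlib

class InstanceLimits:
    # Hypothetical per-instance knobs of the kind discussed in the meeting.
    def __init__(self, max_mirrors=10, max_cached_report_bytes=1_000_000):
        self.max_mirrors = max_mirrors  # 0 means no cap
        self.max_cached_report_bytes = max_cached_report_bytes

def can_add_mirror(limits, current_count):
    # Self-managed instances absorb their own compute cost, so they may run
    # uncapped (0); a SaaS instance caps mirrors because GitLab pays the bill.
    return limits.max_mirrors == 0 or current_count < limits.max_mirrors

def cache_test_report(redis_client, key, report_bytes, limits):
    # Cache a JUnit report blob only if it fits the size budget.
    payload = zlib.compress(report_bytes)
    if len(payload) > limits.max_cached_report_bytes:
        # Skip caching oversized reports rather than pushing multi-megabyte
        # blobs through Redis; callers fall back to regenerating the report.
        return False
    redis_client.setex(key, 3600, payload)  # expire after one hour
    return True

The substance is simply that every "unlimited" default becomes an explicit, reviewable number, which is the proactive stance argued for in the discussion.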
Progressive Delivery and Scalability:
- Progressive Delivery Strategies: Feature flags were identified as a key capability, with emphasis on proving features at scale on GitLab.com before customers adopt them (illustrated in the sketch after this list).
- Coordination with Infrastructure: Importance of ensuring new features are proactively integrated to avoid impacting costs or performance.
- Performance Evaluation: Performance is understandably less of a concern in early maturity stages with limited usage, so technical debt and performance issues are managed iteratively rather than over-built up front.
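A toy sketch of that staged rollout idea: the names are invented, GitLab's real feature-flag system is in Ruby, and this Python only illustrates the gating logic of dogfooding on GitLab.com before widening availability.

ROLLOUT_STAGES = ["gitlab-com", "beta-customers", "everyone"]

class FeatureFlag:
    # Illustrative flag that widens availability one stage at a time.
    def __init__(self, name, stage_index=0):
        self.name = name
        self.stage_index = stage_index  # 0 = dogfooding on gitlab.com only

    def enabled_for(self, actor_group):
        # A group sees the feature once the rollout has reached its stage.
        return ROLLOUT_STAGES.index(actor_group) <= self.stage_index

    def advance(self):
        # Widen the rollout only after the feature has held up (cost,
        # performance) at the current stage, per the definition-of-done
        # discussion in the meeting.
        if self.stage_index < len(ROLLOUT_STAGES) - 1:
            self.stage_index += 1

flag = FeatureFlag("gitaly_tls")  # hypothetical flag name
assert flag.enabled_for("gitlab-com")          # proven on .com first
assert not flag.enabled_for("beta-customers")  # not yet rolled out wider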
Organizational and Process Improvements:
- Dual Track Agile Approach: Efforts to enhance self-organization in the release area for oversized issues through structured preparations, including prototypes.
- Handbook and Label Renaming: Speaker 5 seeks input on improving processes, particularly the handbook. Speaker 4 announced label renaming from "promise" to "planning priority" for clarity in usage by product managers only.
The meeting concluded with acknowledgments for contributions made to align with software development life cycles and efforts to improve processes and structured preparation for upcoming releases.