Ladybug Podcast
9/22/2025

Project Management As An Engineering Manager

As an engineering manager, project management isn’t just a skill—it’s part of the job. In this episode, we unpack what effective project management looks like from the EM seat.

Transcript
Speaker A:

And I'm going to ask what's the business case for each of these six things that's requiring us to push out the launch deadline in order to do this? Because it's going to mean that the engineer who's working on this cannot work on something else. We're adding to the scope. Welcome to the Ladybug podcast. I'm Kelly.

Speaker B:

And I'm Emma. And we're debugging the tech industry.

Speaker A:

Hi.

Speaker B:

Hey.

Speaker A:

How's it going?

Speaker B:

It's going.

Speaker A:

What are we talking about today?

Speaker B:

Today we're talking about how to manage projects as an engineering manager, which is frankly not something you think about needing to do when you think about managing engineers. At least I didn't, because you think.

Speaker A:

About, like, maybe you're.

Speaker B:

You're going like.

Speaker A:

You know, people often say moving from IC to EM is like moving from managing projects to managing people, but the reality is you're still managing projects in EM and often a whole lot more than you think you should, because a lot of companies don't have dedicated project managers or people who are leading on the scrum side or JIRA admins. And so, yeah, you're going to have to learn how to do a good job managing that project and whatever that means. Also, one thing that we should clarify at the very beginning here is there is a difference between project management and product management.

Speaker B:

I was just going to say the same thing.

Speaker A:

I wanted to define the difference between those. People forget that. Yeah.

Speaker B:

And stay tuned for next week's episode, because we're going to dive a little bit more into what a product manager does. But at least in my interpretation, product managers are responsible for. Oh, my gosh. Well, now I'm blanking. But it's more about strategy and working with stakeholders to define company initiatives and bringing features and things to the product, as opposed to managing the individual projects that a team has signed on to do, like the status of them, if there are any blockers or dependencies and things of that nature. But I don't actually know the formal definitions, so I could be wrong.

Speaker A:

I mean, I don't think the formal definitions really matter in this space. But yeah, product managers sit between sales and engineering and marketing, so they're kind of the presence with customers. They're understanding what feature requests are coming in from customers and from prospects, and they're establishing the roadmap, working closely with their engineering manager counterpart and anybody else in product where there's overlap in what needs to be done. They're establishing basically the business case for why something needs to be built, and then leading the execution of that in a way. So the way I like to think about the difference, and again we're going to talk about this more next week, is that product managers define the what and the why, and engineering managers define the who and the how and the when, because they're managing a lot more projects and, you know, tech debt and all that good stuff. They're defining which engineers should be working on this and why, and breaking down the milestones and so on. Project managers, by contrast, are purely focused on execution. They're focused on how do we actually get from point A to point B, how do we make sure we're staying on top of our schedule and not falling behind, and they're keeping the necessary stakeholders informed. So now you can see why, if you don't have a project manager, the engineering manager really has to be that person, because they're already doing the who and the how, so they're already having to define the milestones and what that actually looks like from an execution standpoint.

Speaker B:

Yeah, this is the part of the job that I struggle with solely because remembering the status of the 30 to 40 items on our six month roadmap at a given time is not easy for me.

Speaker A:

Yeah, that's why I don't like doing long term roadmaps. I mean, I think it's necessary: as you move up in an organization, your scope becomes larger and you look a lot further into the future. But the reality is, you can say this is what we would like to work on over the course of the next six months, but something could come up. You know, six months ago AI agents weren't a thing, and suddenly a lot of companies dropped what they were doing to build an AI agent, because that became the hot thing, air quotes there, to keep up with competition and be relevant on the cutting edge of product development. Six month roadmaps can't really predict those kinds of things. And so that's why I like breaking things down into: okay, what can we feasibly do in a quarter? Here's the six month or one year product vision. What can we do in the first half? What can we do in the first quarter? And then how can we actually look at the first six weeks of that, and kind of execute from there.

Speaker B:

Yeah, I'm not a fan of the six month roadmaps, not that I have a big say in that, but we used to be on three month like quarterly planning, which don't get me wrong, the planning cadence or like the process of planning is time consuming. However, you're right, like things change so frequently in six months to the point where like the roadmap that we start out the launch cycle with is definitely not even remotely close to the roadmap we end with. Exactly.

Speaker A:

Yeah.

Speaker B:

But yeah, let's talk a little bit about the project life cycle.

Speaker A:

Okay. Yeah. Why don't we start with, why don't you walk through the stages of the project life cycle.

Speaker B:

So at least in general there are probably around five stages, although it's going to change company to company. The first phase is probably the initiation process, where you're defining what needs to be done, who the stakeholders are, what it means to be successful, or the most minimum viable product, minimum lovable product. What are the goals? The second phase, once you've accepted the work, is to create your roadmap, to create the individual trackable pieces of work within that. And we can talk about how to break down work in the next section here. But essentially you're putting these units of work into a tracking system. And this, oh my gosh, this is also something I didn't expect to have to do: estimating engineering capacity.

Speaker A:

Oh, I love this game.

Speaker B:

Yeah, this is fun. We use perfect engineering days. So I have this massive spreadsheet where I've mapped out six months, and for every week I have how many working days everybody on my team has. So if they have a vacation day, I'll go in and decrease it, or if there's a public holiday or whatever it is. And I sum all of those up and multiply it by their contribution factor. So for full time team members it's a one, because I expect they're contributing normally. We also do these things called embeds, where basically people from other teams can come sit with our team for extended periods of time, like three months or six months, and we give them half a contribution factor, so we're not counting on them to come in and do as much work as a full time team member. And we aggregate all of that, we pull out public holidays, potential sick days, keep the lights on time at like 50%, and we calculate our perfect engineering days. But holy crap, this spreadsheet is massive, and it's taken three years to get it to a point where I'm happy with it. But yeah, engineering capacity. The reason we have to do that is to match it up: once we estimate all the incoming work that we're being asked to do, based on priorities, how many days do we have? How many days are we looking at based on all this work? And then matching it up.
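The spreadsheet math described here can be sketched in a few lines of Python. This is a minimal illustration, not the actual spreadsheet; the roster, the day counts, and the 50% non-coding reserve are all made-up example numbers:

```python
# Rough sketch of a "perfect engineering days" calculation.
# All names and numbers below are illustrative assumptions.

def perfect_engineering_days(members, ktlo_fraction=0.5):
    """Sum each member's available days, scaled by their contribution
    factor, then reserve a fraction for non-coding (KTLO) time."""
    total = 0.0
    for m in members:
        available = m["working_days"] - m["vacation_days"] - m["sick_day_buffer"]
        total += available * m["contribution_factor"]
    return total * (1 - ktlo_fraction)

team = [
    # Full-time team member: contribution factor 1.0
    {"working_days": 65, "vacation_days": 5, "sick_day_buffer": 2,
     "contribution_factor": 1.0},
    # Embed from another team: half a contribution factor
    {"working_days": 65, "vacation_days": 0, "sick_day_buffer": 2,
     "contribution_factor": 0.5},
]

print(perfect_engineering_days(team))  # days available to plan against
```

The shape is the same as the spreadsheet: per-person available days, scaled by a contribution factor, with a reserve taken off the top before matching against estimated work.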

Speaker A:

Yeah, I like that. I like that you call it perfect engineering days. I have a similar spreadsheet that I created as well that looks at a lot of the same things. You know, how many weeks are in the quarter? Usually 12 or 13 weeks. How many engineers are fully ramped, how many are not fully ramped, or how many do we expect to bring on in that given quarter? I look at holidays. I plan for one week of vacation time per engineer. I also account for potential sick days, more in Q4 and Q1 than in Q2 and Q3, because winter, and people get sick around this time. And then I look at basically the engineering velocity of the team, and it's similar to, I think, your contribution factor. But as you say, engineers don't code for five days a week. They do not write code for five days a week. Typical engineering velocity, considering meetings and PR reviews and edits and administrative work and writing ERDs, might be 3 or 3.5 days per week. So it subtracts a portion of what would typically be considered a full week, like six tenths of a week or whatever. And from there you get your engineering days, your estimated engineering capacity. I also account for the level that they're coming in at. A junior engineer, I expect to take three months to ramp, like one quarter. So if they're starting at the midway point of the quarter, they might count as one third of an engineer for that purpose, like your contribution factor. Same thing if we're bringing on more people: it's going to take more time to onboard, and that's going to decrease the other engineers' velocity. And then more senior engineers tend to very quickly start contributing early on.

Speaker B:

Yeah, yeah, totally. There's an actual science behind it. And every planning cycle we change something. The keep the lights on figure is the biggest one for us: what percentage of time do we want to reserve for anything related to the job that's not coding against an epic? So we're constantly tweaking that figure. I call it non coding time, because keep the lights on is just not indicative of how you're spending that time anyway. But the balance becomes: if we put it at 40%, there's a chance that we take on too much work and we're stressed out. If we put it at 50, there's a chance we don't have enough work to do. I know we're diving into estimating at this point. However, I think perfect engineering days work pretty well for our team. One thing that I think really helped a lot was when we started estimating different incoming asks, not only identifying dependent teams, like do we need data from a certain team that will require work, but also putting a confidence rating on these things. Because oftentimes we would scope all of these, like, 40 incoming asks. Or actually, I think this time we got closer to 80. It was nuts. And it's easy to be like, I think it could take us 20 days, right? But how many engineers are we looking at working on it? Because the more engineers you add, I tack on additional time, because there's, you know, collaboration that needs to happen. Also this confidence rating: hey, I'm estimating this, I'm not confident in it, we might want to add some additional buffer time onto it. That's been helpful, because for things that are straightforward, fair enough. Okay, two days, that's fine.
For things that are really complex and have really volatile designs, or, you know, a lot of dependencies, if we're not confident, I'm going to rate it higher estimation-wise so we have the buffer. But yeah, if those get descoped, it's like, okay, now we have a bunch of time left. But there are a couple other ways you can estimate work. Maybe we just jump into that real quick since we're here. I've used T shirt sizing before, and frankly, at a previous company this gave me so much anxiety, because we would sit in a room and count down from three, and everyone would have to raise a card that said the T shirt size you thought the amount of work was. And being the most junior team member who had just joined, in a foreign country no less, because I was in Germany, it was so intimidating to be like, I don't like that.
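The confidence-rating-plus-buffer idea described here could be sketched like this. The multipliers and the per-engineer coordination overhead are invented for illustration, not the team's real figures:

```python
# Illustrative sketch: pad a raw estimate by a confidence rating and
# a per-extra-engineer coordination overhead. All multipliers are made up.

CONFIDENCE_BUFFER = {"high": 1.0, "medium": 1.25, "low": 1.5}

def buffered_estimate(raw_days, confidence, engineers=1, coordination_overhead=0.1):
    # Each engineer beyond the first adds collaboration/coordination time.
    coord = 1 + coordination_overhead * (engineers - 1)
    return raw_days * CONFIDENCE_BUFFER[confidence] * coord

# A 20-day ask with low confidence and three engineers on it:
print(buffered_estimate(20, "low", engineers=3))  # 20 * 1.5 * 1.2 = 36.0
```

If a low-confidence item later gets descoped, the buffer comes back as budget, which is exactly the "now we have a bunch of time left" effect mentioned above.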

Speaker A:

I don't.

Speaker B:

Oh, I hated it. That's why I refuse to. Well, the T shirt sizing is a different problem, but forcing people to give real time estimates? Not cool. But yeah, T shirt sizing, I don't know, I just find it to be a bit arbitrary.

Speaker A:

They're very arbitrary. And I think I, you know, what is the difference between small and medium? Yeah, you know, yeah. Being a woman, I, you know, I wear both size small and medium. Sometimes even large. Like, I don't know what these mean.

Speaker B:

I know, yeah.

Speaker A:

I don't like it. It's too vague. So I've used, I'd have to find the resource, but Asana has a time estimate chart that they break down. It's more just like story pointing, basically. And the way that I use story pointing is, let's just say I'm throwing out some hypothetical numbers here. Let's say something is three story points; that takes maybe two days to build. I don't remember exactly which maps to which off the top of my head. But there's a certain point where the story points become so high, let's say eight, where as soon as something takes over a week or two weeks to build, it needs to be broken down into more tasks, which then need to be estimated. Because if you can't fully complete a task in one sprint, like a one or two week sprint, your tasks are too big.
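The rule being described, points map to rough time, and anything above a threshold has to be split, could look like this sketch, with an entirely hypothetical point-to-day mapping:

```python
# Hypothetical story-point-to-days mapping plus a "break it down" rule:
# anything above the threshold can't fit in one sprint and should be split.

POINTS_TO_DAYS = {1: 0.5, 2: 1, 3: 2, 5: 4, 8: 7}  # illustrative mapping
MAX_POINTS_PER_TASK = 5  # above this, split into smaller estimated tasks

def review_task(points):
    if points > MAX_POINTS_PER_TASK:
        return "break down into smaller tasks"
    return f"estimated at ~{POINTS_TO_DAYS[points]} day(s)"

print(review_task(3))  # a three-point task gets a rough day estimate
print(review_task(8))  # an eight-point task is flagged for splitting
```

The exact mapping matters less than the cutoff: the point is that no single task should outlive a sprint.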

Speaker B:

Mm, okay. Interesting. Yeah, we're not really strict on our team with this stuff, frankly. And this gets into the third phase here, which is the execution phase, once you actually start working on these things. My team doesn't break down these big epics, which are like, if you're building a new feature, it would perhaps come in the form of an epic, right? Like, oh, add a feature to a Spotify playlist to allow users to, let's say the download feature, which obviously already exists. Oh, add a button that will allow users to download a playlist and listen offline. Okay, well, that's an epic, right? We don't do the breaking down of that into a story or a task until the execution phase. And we do a project kickoff. And I'll be honest, I let my team handle it. I don't micromanage how things get broken down. It's up to, we call it a tech lead, it could be whatever you want the term to be, but the person who's the directly responsible engineer. Because, I mean, I guess technically I'm the directly responsible individual for the overall project from an engineering perspective, but the one leading the work from a tech perspective, it's up to them how they want to break down the tasks with their work stream partners. And I'm not micromanaging how granular they are or anything like that. So maybe I'm not the best example of how it should be done.

Speaker A:

No, I mean it also depends on your team. If you have a very self managed team, for example, and they have a history of delivering on schedule, you can be more hands off with that. Sometimes, when you're working on a very complex project, you have to break it down. For example, a project I recently led. By recently, I mean a year and a half ago. It was something that I'm very deeply familiar with, but nobody else on the team really was. And so I was a little bit more hands on in breaking down the milestones and estimating those out, because from the project perspective I know how long things should take. The actual execution piece of it, though: I'm not going to say, okay, we need to do the REST API and the GraphQL API functions 18 times for these 18 different things, give me estimates for each of those. Bucket them; I'm okay with that. Because basically the way that I do it is, if we're saying we need to launch this, we need to get this into beta, let's say three weeks before the end of the quarter, then we know generally how many weeks we have to work with. What do we need to build? What is the MVP to ship for beta, which in my opinion is the minimum shippable product, not the minimum viable product? And in that regard, what can we feasibly get done, so we can cut things if we absolutely need to? And so we have a general roadmap that we're working on, kind of working backwards from that beta date. And we're checking in every single week: how are we progressing against this timeline? Are we worried? Did one of my engineers get pulled to work on some other project and we slowed down, or did somebody get sick, or was I out for a week?
Whatever it happens to be, just so we're always keeping up with that schedule. And I find that piece of the project management actually really, really useful, because we as engineers in general are notoriously bad at estimating. It is not something that you can just predict. You can use historical data, but as soon as you're building something brand new, you're in a greenfield space. How are you supposed to know how much time something's going to take you?

Speaker B:

Yeah, we have that discussion a lot, where we started trying to identify: is this similar to things that we've done in the past, in terms of the teams we're interfacing with? Do we need to refactor things before we can get started? Does the design system have everything we need? And based on that, we kind of aggregate estimations on projects, and they were pretty accurate. But it's a lot of data. And, well, we could circle back to this when we talk about how you measure progress, or we could just talk about it now. In terms of progress, we actually just did a planning retrospective halfway through the launch cycle to say: here are the projects, here's what we estimated them at. How accurate was this estimation? Was it too low, too high, just right? What impacted that? And I can tell you, like 80% of them were underestimated, whether that was due to scope creep or dependencies not being ready or missing requirements. And so I guess that's really good to know. I actually liked the planning retro. And then what we did was look forward at the stuff coming through: this is what we estimated this stuff at, are we still confident here? Because when you think about it, it's all a budget, right? So if you come in and you've overestimated a lot of the work that you have, meaning that you have a lot of days left that you can take back, okay, well, we can afford to spend more time on some of these other projects. Totally fine. But yeah, we're trying to keep more real time track of the progress of these things. It's just very time consuming.
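The mid-cycle planning retro described here boils down to comparing estimates against actuals. A tiny sketch with made-up project data:

```python
# Sketch of a planning retrospective: compare estimated vs. actual days
# and report what share of projects were underestimated.
# Project names and numbers are illustrative, not real data.

projects = [
    {"name": "feature-a", "estimated": 10, "actual": 16},
    {"name": "feature-b", "estimated": 8, "actual": 7},
    {"name": "refactor-c", "estimated": 15, "actual": 22},
]

underestimated = [p for p in projects if p["actual"] > p["estimated"]]
share = len(underestimated) / len(projects)
print(f"{share:.0%} of projects were underestimated")
```

Feeding the over- and under-runs back into the remaining estimates is the "it's all a budget" point: days clawed back from overestimates can be spent elsewhere.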

Speaker A:

It is, but it's time well spent.

Speaker B:

Yeah.

Speaker A:

In my opinion. And I also like these midpoint retros, because they're meant to be blameless. They're meant to be: let's just look at what's happening. If things are taking longer than you expected, or you're struggling with something, okay. I don't expect you to be operating at 100, knowing exactly what to do every single time. Otherwise AI would be a whole lot better at writing code for us. That's just not realistic. And so by giving those touch points just to check in, like, this is what I said I would do, I'm trying to do this, I'm blocked for one reason or another, and making sure you can have a conversation about it without it coming across as just giving excuses or complaining, it becomes an easier conversation to have with the team, and they'll be able to self reflect without feeling judgment.

Speaker B:

Yeah, yeah. Although, I mean, this is maybe something that we can talk about in a later episode on the hard parts of the job, but something hard that I find about this job is how do you confront a situation where a project is taking an absurdly long time to complete, and the person leading that project is perhaps unaware, or disagrees that it's taking a long time? But we'll talk about that in a later episode. I think it's the last one of the season, which is the tough parts of this job. So yeah, once you have gotten through the execution phase, then it's on to monitoring or controlling. Sometimes we do A/B tests with different features to see which version is hitting better in terms of what we want the impact to be. We monitor rollouts for both employees as well as the public, to make sure we're not breaking things and it's being well received. Is there anything else within monitoring and controlling? Just making sure that the rollout is going well.

Speaker A:

I mean, setting up alerts especially, because the risk fundamentally changes from project to project. You should already know the level of risk that comes with shipping something. When you're building a net new feature that people aren't using yet, the risk is a lot lower. When you're building something like, say, you're refactoring your entire authorization system, it's a high risk item, because people could lose access to things that they actually need to run their business. For example, we're B2B, you're B2C. There's a big difference between losing access to something versus, oh, something's broken or something's not working very well. So having very detailed alerts set up, so you can see a downward trend as something's happening, versus everything's suddenly on fire because a customer reported an issue, that's really, really helpful. Also, feature flagging is so incredibly useful here. You should do controlled rollouts all the time, in my opinion. We rarely full send anything from 0 to 100. At an earlier stage startup, let's say series A, pre seed, seed stage, you can take a lot more risk in shipping something directly into production, getting feedback live, and iterating on it when you have a smaller subset of customers. Because the product is so new, it might not be so deeply ingrained in their business systems quite yet, where if something breaks, it's not the end of the world, or they're willing to take on that risk. As you scale, you can't do that anymore and you have to slow down. You have to have tests written. This is also part of the risk question: how much time do we need to spend writing tests, and what kind of test coverage do we actually need?
And it's an interesting question, because I think there's a lot of pressure to make sure you have full test coverage for anything that you write. But that's also part of a minimum viable product: it's okay to be buggy, because it's not a minimum shippable product.
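A controlled rollout behind a feature flag, as described above, is often implemented by hashing the user id into a stable percentage bucket. This is a generic sketch, not any particular feature-flag product; the flag name and percentages are illustrative:

```python
# Minimal sketch of a percentage-based controlled rollout: hash the
# flag name plus user id into a stable bucket, so each user keeps a
# consistent experience as the rollout percentage is ramped up.

import hashlib

def is_enabled(flag, user_id, rollout_percent):
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < rollout_percent

# Ramp from 1% -> 10% -> 50% -> 100%, watching alerts between each step.
print(is_enabled("new-dashboard", "user-42", 100))  # everyone at full rollout
```

Because the bucket is derived from a hash rather than a random draw, ramping from 10% to 50% only adds users; nobody who already had the feature loses it mid-rollout.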

Speaker B:

Yeah, this is something we're actually looking at. This is my project before I go on parental leave: our testing strategy and evaluation, and whether we have sufficient test coverage. The question becomes, what metric are we using to evaluate whether we have sufficient coverage? If we're looking at line coverage with unit tests, you don't want 100% coverage, right? That's not going to catch all your bugs, and in fact it's probably going to cause more harm than it's worth. But if we're looking at customer support tickets, is that really indicative of how resilient your strategy is? I think the biggest point is it should be something that's actively considered, in tandem with monitoring and alerting and all of those things.

Speaker A:

Yeah, yeah. And one thing before we move into talking through scope creep and how things change, I mean, I guess it's a pretty good segue here. We talk about this project life cycle going initiation, planning, execution, monitoring, and closure. But realistically you're probably going to spend time iterating on stages two, three, and four: planning, execution, and monitoring. You can do everything in your power to capture every possible risk, every possible edge case and corner case when you're planning, but then you've spent too much time in the planning phase and you're not actually getting started on execution. Again, it's a risk scenario, and it's finding that appropriate balance, to be able to say: we have enough to get started. We know there are some risks, we are accepting some risks, and we might need to go back and revisit some of these things, but we're willing to take that risk on.

Speaker B:

Yeah, yeah, totally. And then something we deal with, because we have a lot of stakeholders, which again we'll talk about in the next episode: one of the difficulties of being a UI team is that we have dependencies on our data teams to provide APIs and all these things. So something we started doing is, hey, can you give us a mock API, or just tell us what you're thinking, and then we can get started on something. But it is definitely a delicate balance, and as an EM, if your team members need support, I think you should be prepared to support them. I'm more of the hands off approach, where if you bring to me that you need support, I will be happy to help interface with those stakeholders for you.

Speaker A:

Yeah. It often comes back to how self managed your team is. With a senior team, sometimes you still need to jump in, but it's a good skill for every engineer to learn. That stakeholder management, and the negotiation that comes with it, is so incredibly important to learn.

Speaker B:

Absolutely.

Speaker A:

Should we talk about scope creep?

Speaker B:

Let's do it.

Speaker A:

So it's kind of unavoidable. Scope creep almost always happens to some degree. And the best thing you can do with scope creep is plan as much as you can up front, but again, identify where those risks currently exist. As you're building a new feature, for example, or you're refactoring something, and I'm not just saying refactoring from an engineering perspective, you could be changing the user experience of something, especially working with product, you're likely to uncover things along the way that you didn't think about at the outset. And you need to decide: are these things that we are willing to take on, knowing that they might push out the launch date? Handling scope creep is just a game of negotiation. The whole thing is just negotiation. I assume you probably deal with scope creep a fair bit as well.

Speaker B:

Yeah, we do. I'm trying to think: does it come more from the folks leading the overarching work, the stakeholders at the high level who are running the project for all the platforms, or does it come more from design, wanting to do more? I think for us it comes more from the stakeholders leading the initiatives at a high level, where they want us to do more or different functionality. And then of course it's a waterfall effect, where it's like, okay, we're changing the requirements or the user flow, now we have to update the visual designs. But normally what we would do is just say, hey, we're shipping this. I don't like MVP either as a term, by the way; I like to say minimum lovable product, because viable and lovable are not the same thing. So normally, okay, one thing that we do struggle with is defining what the minimum version or the first iteration will be. And then it's hard to identify scope creep, because it just kind of creeps up on you. So I think we're pushing this time to have a more defined set of requirements for the first iteration. And then, should we need to expand the scope, we'll deliver the first version if we're able to. It doesn't have to be rolled out or anything, but at least we finish what we agreed to. And then if there's additional stuff that we need to add on, let's do.

Speaker A:

That after. Let's fast follow with it. Exactly, yeah. And I will negotiate with my product counterpart on this as well. Every now and then they'll say, well, you know, we really need to get this feature out, but I believe these six things are deal breakers. We have to build these additional six things in order to launch. And I'm going to ask, what's the business case for each of these six things that's requiring us to push out the launch deadline in order to do this? Because it's going to mean that the engineer who's working on this cannot work on something else, or we're adding to the scope. Is that a trade off that I am willing to make, that I'll agree to? And occasionally it'll become: okay, I know that these six things are really important to you, and I also see 12 more things below this that you think are less important, but would like to get done. I am drawing a very defined line in the sand here: I will give you your six things. I'm taking a screenshot of this and saying these 12 things are post launch work to be done later, and they need to stay there, because I will now know if you've pulled something else in. It sounds harsh, but it's such a necessary thing occasionally, to have a very firm wall that cannot be passed, because this is having ripple effects on everything else that we need to execute on the roadmap.

Speaker B:

Yeah. I don't often have to interface with stakeholders to set boundaries, and oftentimes if I am looped into a conversation, like, hey, can your engineer build this, because we want to do it and it wasn't planned work, I'll bring the engineer in and say, hey, what do you think? They'll give their opinion, and I say, cool, and I back them up. If they say we can't do it, we can't do it. So I am fortunate in my position that I don't have to be the one to evaluate the opportunity cost of those decisions, for the most part. So I think: be flexible, but be flexible with boundaries, if that makes sense.

Speaker A:

Absolutely. Yeah. If you're never setting boundaries, you're just going to get pushed over all the time, and that's just not a healthy way of working. And then one thing that I meant to mention earlier, on the time estimate side of things: there's completely unexpected work that you know has to get done. Something breaks, or you have a sudden customer commit. We'll have customer commits occasionally that suddenly get pulled in, whatever the task is. I allocate one week of one engineer's time every six weeks for what I call oh-shit tasks. These are the things where you're like, I did not plan to work on this whatsoever, but at least I know I have time allocated for it, for one engineer, in case something comes up, so we don't have to push things around too, too much.

Speaker B:

Yeah, that's good.

Speaker A:

We have the concept of pesky bugs, which is different from actual issues. Pesky bugs are: there's a random zero showing up on this page that shouldn't be there, or this button is kind of getting covered a little bit by this other element. They're not deal breakers. People are still able to do their work. They're annoying or just mildly unsightly, but nothing is fundamentally broken. And so we have a pesky bugs channel. And there are many times when people will report actual issues there, and I'm like, this is not pesky. Let me define pesky for you.

Speaker B:

Yeah, yeah. Like the system is down. You might want to like raise an incident.

Speaker A:

But can anyone load the dashboard?

Speaker B:

Yeah.

Speaker A:

That's fine. Let's talk communication. So how do you handle a long-term project, let's say one that's at least six weeks long? How are you handling the communication of that project during that time period?

Speaker B:

It really depends. We have one team we interface with for a lot of our data needs, and so we'll set up biweekly check-ins with their EM, and it's quite nice because we get face time with her as well. But just to kind of hear: hey, what's the status of your work? Here's what our roadmap looks like. Can you prioritize your work on this epic as well? So that's been good. But for the stakeholders that are more leadership-based, for a product area or even higher, every two weeks we have a roadmap review with them where we'll show our roadmap: here's the status of things, the projects that we're working on, and we'll also raise any risks of things not being delivered on time. That's probably the biggest way that we keep everybody informed. Otherwise, our product manager does weekly newsletters as well. He's really good at posting in more public Slack channels about rollouts or announcements or things like that. So yeah, that's how we do it. How do you do it?

Speaker A:

So on my direct team, for a specific project that's going to have multiple engineers working on it for multiple weeks, we will do a weekly project sync that's almost like a standup, but it's one time a week and max 15 minutes. There's a Notion doc we use where basically every time we meet, you click a button and it generates the section for that date with the engineers on that project. I will live-write it so they don't have to do the work ahead of time. So we're just meeting and I'm typing as they're going: here's what I've been working on, here's what's shipped, this is in review, this is what I'm working on next. And then two sections for questions and blockers: any questions, any blockers, we'll talk through those if they come up. And then I have one more section for other notes. Other notes can be: we are falling behind on this particular thing, or we forgot to add this to the scope but it isn't necessary, so this is going to push off the timeline, or we reviewed the timeline and everything looks good, or we discussed beta customers, like who's going to be testing. And this is engineering, product, and sometimes design in there as well. It gives everyone one space, a quick 15 minutes. You're obviously talking about the project all the time when you're working on it, but this is dedicated face-to-face time, a maximum of 15 minutes with product, engineering, and design, to make sure we're all on the same page. I've found that to be really helpful for these longer-term projects. And I like to have whoever is leading the project actually lead the discussion as well. Again, it's building that muscle. It's a little bit administrative, but I don't think it's a huge time suck.
And we put it at a time where it's not going to be a massive context-switching situation for our team. And then as a larger organization, because again, we're small, up until February we had a weekly Monday morning meeting: here are the top three things that we worked on last week, here are the top three things our team is working on this week. And it's all three engineering teams as well as customer success, sales, and engineering support. Everyone is kind of in one room, just the leaders, to talk through: are we all on the same page about what everybody's working on, in case there are any dependencies from one team to another?
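The weekly sync doc described above, with its per-engineer notes plus sections for questions, blockers, and other notes, could be generated by a small script. A hypothetical Python sketch (the section names and Markdown layout are assumptions, not the actual Notion template):

```python
from datetime import date

# Section headings loosely based on the sync structure described in the episode.
SECTIONS = ["Shipped", "In review", "Up next", "Questions", "Blockers", "Other notes"]

def sync_template(engineers: list[str], meeting_date: date) -> str:
    """Render a dated weekly project-sync entry with one block per engineer."""
    lines = [f"## Project sync: {meeting_date.isoformat()}"]
    for name in engineers:
        lines.append(f"### {name}")
        # Empty prompts the facilitator fills in live during the meeting.
        lines.extend(f"- {section}:" for section in SECTIONS)
    return "\n".join(lines)

print(sync_template(["Ada", "Grace"], date(2025, 9, 22)))
```

The "click a button, get today's section" workflow in Notion does the same job; the value is that nobody has to prepare the structure by hand before the meeting.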

Speaker B:

Yeah, that sounds really good. It actually does sound similar to what we have. We don't do project meetings like that. My leads team, like design, engineering, product, we're all in the standups every single day, so there's no need to communicate daily status outside of that. But we do have a private Slack channel where we can just ask each other questions. Every two weeks we have an EM, PM, design, and data science sync where we go through anything upcoming. And then for the people leading the projects, there are two things that we do. We have a project channel, like, oh, "desktop download button" for example, that the workstream lead will create and post updates into. And we also use Coda sometimes. We have Coda pages for any big piece of work, where we've put together a template for all the project information, and they can keep it up to date with stakeholders, relevant resources, timeline, blockers, et cetera. And that's been really helpful as well.

Speaker A:

I like that. Yeah, we don't do daily standups because our team spans from California to India. Yeah, it's a little hard.

Speaker B:

Yeah. I think the point here that we seem to have in common is like asynchronous is always good.

Speaker A:

Async is great. Yeah. In fact, my daily standup is a workflow on Slack. It's really cool. It posts at like 8 or 9am Eastern, and you react with an emoji. It sends you a DM with a button to click and asks you three questions: what's your number one priority for today? Did you do yesterday's number one priority? If not, why? And do you have any blockers or questions? It takes less than a minute or two to answer, and that's it. It automatically posts into the thread.
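The three-question async standup could be modeled as a tiny formatter that turns each person's answers into the message posted back to the thread. A hypothetical Python sketch (the question wording mirrors the episode; the function name and emoji are illustrative, and this doesn't touch the real Slack API):

```python
STANDUP_QUESTIONS = [
    "What's your number one priority for today?",
    "Did you do yesterday's number one priority? If not, why?",
    "Do you have any blockers or questions?",
]

def format_standup_post(name: str, answers: list[str]) -> str:
    """Pair each answer with its question and format the threaded update."""
    if len(answers) != len(STANDUP_QUESTIONS):
        raise ValueError("expected one answer per question")
    # *text* is Slack's markup for bold, so questions stand out in the thread.
    body = "\n".join(f"*{q}*\n{a}" for q, a in zip(STANDUP_QUESTIONS, answers))
    return f":wave: Standup update from {name}\n{body}"
```

In practice Slack's Workflow Builder handles the scheduling, the DM form, and the posting; the sketch just shows how little structure the daily update actually needs.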

Speaker B:

That's nice. Yeah, we thought about doing that because we have this idea of Focus Wednesdays. But my team really enjoys it. We always have standup at 10 in the morning, because we're all EMEA-based, in European and Middle Eastern time zones. The first 15 minutes are an optional fika, which is like a Swedish coffee break, and then we have 15 minutes of standup. But people really like it. They don't even want their Focus Wednesday standups to be asynchronous over Slack. I love that they want to hang out, which is really, really great.

Speaker A:

So.

Speaker B:

Yeah, and I think one of the other key takeaways here is that your role as a project manager while you're an engineering manager will change drastically depending on the size of the company and the tenure and seniority of your team. If you're taking on a new team, or you have a lot of junior people, or people who aren't used to leading cross-functional work streams, it's going to look different. So communication is probably your best tool.

Speaker A:

I like that. That feels like a good way to end this.

Speaker B:

Agreed.

Speaker A:

Agreed. All right, why don't we talk about resources?

Speaker B:

Yes. My resource for the week is a book I read years ago that's still something I think about often: Start with Why by Simon Sinek. It was the first book of his I read. I'm reading another one right now called Leaders Eat Last, which I'm finding interesting as well. But Start with Why was really intriguing to me because it really hit home the point that if you just ask people to do things without explaining the value or the reasoning, they're less likely to be motivated to do it. So often with planning or execution of projects, if my engineers understand why something is so valuable and how it fits into the business objectives, they are more likely to be motivated to do it and to do it well.

Speaker A:

So that's a great book. I really like the audiobook for that one.

Speaker B:

Oh, I haven't listened to it. Maybe I should.

Speaker A:

That's good. He has a slight accent, and it comes out in little bits. It sounds like somebody who, let's say, grew up in Ireland or the UK and then moved to the States and Americanized their accent, but every now and then it slips out.

Speaker B:

I didn't know he was English American. That's funny.

Speaker A:

Yeah. Okay. And so mine is Measure What Matters by John Doerr. I like this one if you're going to be doing project management, because I think it's really important to understand OKRs, objectives and key results. Even if you're not the one setting these for your team from a company perspective, it's good to understand the business case for writing them. And it's a good exercise to actually put yourself and your team through, because it's going to help you relate the work you're doing back to those top-level objectives coming down: what are the overall company objectives, what are the business objectives for each team, and how does that continue to roll down? When done correctly, you can map all of your work back to those broader objectives in some way.
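The roll-up described here, mapping team work back through key results to top-level objectives, can be sanity-checked mechanically. A hypothetical Python sketch (the objectives, key results, and tasks below are invented examples, not from any real plan):

```python
# Hypothetical company OKRs: each objective owns a list of key results.
company_okrs = {
    "Grow self-serve revenue": ["Increase trial-to-paid conversion to 8%"],
    "Improve reliability": ["Reach 99.9% uptime"],
}

# Hypothetical team backlog: each task is tagged with the key result it serves.
team_tasks = {
    "Redesign checkout flow": "Increase trial-to-paid conversion to 8%",
    "Add dashboard caching": "Reach 99.9% uptime",
}

def unmapped_tasks(tasks: dict[str, str], okrs: dict[str, list[str]]) -> list[str]:
    """Return tasks whose tagged key result doesn't appear under any objective."""
    all_key_results = {kr for krs in okrs.values() for kr in krs}
    return [task for task, kr in tasks.items() if kr not in all_key_results]
```

If `unmapped_tasks` comes back non-empty, that's the "when done correctly" test failing: either the task doesn't belong on the roadmap, or an objective is missing a key result.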

Speaker B:

I kid you not, it's the second book up on my bookshelf here. It was one of the next ones I was going to pick up as well.

Speaker A:

So it's so good.

Speaker B:

I will expedite that one now.

Speaker A:

Perfect, perfect, perfect.

Speaker B:

Well, with that, I think we can call this a wrap.

Speaker A:

Do you want to give the whole ending thing, since I did the last one. The whole end?

Speaker B:

Oh, my gosh. You're asking somebody who hates ending meetings. This is definitely not my strong suit, but yeah, let us know on Twitter or in the comments: what are you using to keep track of your projects? Because I think it's so interesting to hear how other companies are tracking their work. And with that, I guess.

Speaker A:

Yeah. Find us on all the pages. I was going to say social media, maybe. That too, but any kind of podcasting platform, YouTube. Tell your friends, subscribe, follow, smash that whatever button we're supposed to be smashing these days. I have no idea. But we'll see you next week.

Speaker B:

See you.

Episode Notes

As an engineering manager, project management isn’t just a skill—it’s part of the job. In this episode, we unpack what effective project management looks like from the EM seat. From setting realistic timelines and tracking progress to balancing technical depth with stakeholder expectations, we explore how to keep projects on track without becoming a bottleneck. Whether you're new to the role or looking to sharpen your execution game, this episode offers practical advice to help you lead projects with confidence and clarity.

  • 02:08 Fundamentals of project management
  • 05:28 The project lifecycle
  • 06:24 Planning and estimation
  • 17:01 Measuring progress
  • 23:09 Handling scope creep
  • 31:13 Communication and stakeholder management