If it's anything like what most managers do, you have the team come up with estimates, hold some meetings to see if you missed anything, discover that you did, then create a schedule, remembering to add slack for unforeseen interruptions: people getting sick, customer issues, etc.
Then you see that the schedule is too long. Slowly but surely browbeat the team into saying "yes" to the question "do you think this can be done faster" on every task. Do this some more when someone from sales asks if something can be delivered in a particular quarter.
Get very stressed out when the real project deliverable dates align much more closely with the original estimate than the unrealistic one. Micromanage your people and stress them out. Keep freaking out as you miss every deadline in the "pipe dream" plan.
Deliver the project with a mild slip compared to the original plan (because you missed something major in the original planning as it is impossible to foresee everything). Use the twice-too-short plan as your metric though. Still, the product is awesome, so give your people a pat on the back.
Hold a "lessons learned" presentation swearing to never do this again. Speak at length about how critical good estimation is. Go on to repeat the very same exercise for your next project.
Just had an idea: maybe keep the version of the timeline from before you haggled down the dates, and keep checking which one matches reality better? You don't have to tell the developers that you are doing it if you believe us programmers need some flogging to keep us coding.
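That comparison is cheap to run. A minimal sketch (task names and all numbers are made up for illustration) that scores both plans against what actually happened:

```python
# Hypothetical data: per task, the original estimate, the haggled-down
# estimate, and what it actually took, all in days.
tasks = [
    # (name, original_days, negotiated_days, actual_days)
    ("auth flow",   10, 6, 11),
    ("billing",     15, 9, 14),
    ("admin panel",  8, 5, 12),
]

def total_error(estimates, actuals):
    """Sum of absolute prediction errors, in days."""
    return sum(abs(e - a) for e, a in zip(estimates, actuals))

original   = [t[1] for t in tasks]
negotiated = [t[2] for t in tasks]
actual     = [t[3] for t in tasks]

print("original plan error:  ", total_error(original, actual))    # 6
print("negotiated plan error:", total_error(negotiated, actual))  # 17
```

Whichever plan has the smaller total error is the one worth trusting next time.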
If you're a company that plans far in advance, the same is true. You'll demand that the work you wanted be done by a given date, again regardless of how difficult it is.
At my company, we have a well-maintained backlog of work. We pick dates in the future to check in on what got done since the last check-in. The product manager can (re)prioritize work as needed. We pull from the top. If there's a surprise deadline, it gets raised with everyone and then put at the top of the queue, sometimes ahead of currently in-progress tasks.
You cannot bend the realities of time and complexity. If something is hard, an estimate doesn't make it easier. A hard delivery date also doesn't bring predictability. Ultimately, with or without estimates, you'll get what you get. If you want to get _more_, ask a team what's making them slow and then prioritize fixing the things they bring up. If every team is working optimally (they're not), hire.
I'll also add: the time it takes to prepare an estimate of any value is probably 10x what anybody asking for the estimate is willing to grant. You're being asked to spit out a number to fit an existing narrative. If you wanted to estimate a unit of work with any legitimacy, you'd need hours, days, or weeks (depending on the SOW). The companies scheduling hour-long weekly estimation meetings full of bullshit scrum cards don't care and aren't interested in being even close to correct.
Defense against the dark art of estimation bargaining (2014) - https://news.ycombinator.com/item?id=28208859 - Aug 2021 (71 comments)
Guide to Software Project Estimation - https://news.ycombinator.com/item?id=28047973 - Aug 2021 (40 comments)
Software estimation is hard – do it anyway - https://news.ycombinator.com/item?id=27687265 - June 2021 (230 comments)
How I started believing in Cycle Time over Estimation - https://news.ycombinator.com/item?id=26165779 - Feb 2021 (34 comments)
Software effort estimation is mostly fake research - https://news.ycombinator.com/item?id=25825244 - Jan 2021 (308 comments)
Back of the envelope estimation hacks - https://news.ycombinator.com/item?id=23278405 - May 2020 (77 comments)
(obviously there have been many more, going back further)
For level of urgency, I have another system, using Cold Stone Creamery sizes: Gotta Have It (must ship this sprint), Love It (should ship this sprint), Like It (stretch goal). Anything that doesn't get done gets bumped to the next level of urgency for the next sprint.
I use that method for all my consulting estimates, and it has been very reliable for me; no crunch time, and I often come in under time. I've also made a tool that does the math so my clients and other people can continue to use the same process without me.
Unfortunately, that argument never worked with my managers at project budgeting time.
As a result, because I was really bad at estimating, I spent a day teaching myself function point (FP) estimation. I also found a chart detailing how many hours of effort an “average” organization required to build projects of varying FP sizes. This meant that it was easy to make a simple spreadsheet using the chart data and Excel’s FORECAST function to generate estimates.
With that approach, my subsequent projects were always within 10% of the estimate, which is much better than my pre-function-point estimates. Combining that approach with the Agile idea of working on the most important features first meant that my projects were largely free of drama come deadline time.
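Excel's FORECAST is just a least-squares linear fit, so the spreadsheet approach is easy to reproduce anywhere. A sketch with invented chart numbers (substitute a real published FP-to-effort chart for actual work):

```python
# Made-up (function points -> effort hours) chart data for illustration only.
chart = [(100, 800), (250, 2300), (500, 5200), (1000, 12000)]

def forecast(x, points):
    """Least-squares linear fit, like Excel's FORECAST function."""
    n = len(points)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in points) / \
            sum((xi - mean_x) ** 2 for xi in xs)
    return mean_y + slope * (x - mean_x)

# Effort estimate for a hypothetical 400-FP project, ~4290 hours with this chart.
print(round(forecast(400, chart)), "hours")
```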
If we had to absolutely deliver by a fixed date - a rarity - we broke the task down in parts that could be estimated, identified the dependencies between tasks, and kept management informed if those dates could slip.
This lightweight process worked pretty well if we had to implement it. There was a lot of trust put in the engineers.
(1) Individual sprints more-or-less hit their goals. Maybe you do 80% of what you expected to do consistently, but there never seems to be a last sprint. (e.g. new requirements keep coming up, new problems get discovered, etc.)
(2) Each sprint is a disaster. You deliver 20 or 30% of what you expected in the sprint.
If you ask the people in the team and other stakeholders you might even find that some believe (1) is the case, others believe (2) is the case.
I would look at the following mismatch: the conventional sprint planning process assumes the work is a big bucket of punchclock time with no ordering dependencies, where one team member can do the work of another, etc.
In some cases this is close to the truth, in other cases it is nowhere near the truth.
For instance, if you plan to have work implemented and tested within the boundary of one sprint, there is a point at which the work is sent over the wall to the tester. I worked on one project in which each iteration contained a machine learning model that took two days to train (most of that time passed outside "punchclock time"). If everything went right you could start two days before the end of the sprint and have a model, but often things didn't go right, and if you really wanted the sprint to succeed you would want to start training the model as early as you could, maybe even over the first weekend.
If wallclock time and temporal dependencies are the real issue you have to address that.
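One way to address it is to plan against wall-clock durations and dependency order rather than a bucket of interchangeable hours. A toy forward pass (task names and durations invented):

```python
from functools import lru_cache

# Hypothetical tasks: (wall-clock days, dependencies). "train model" is
# mostly unattended machine time, which punchclock planning ignores.
TASKS = {
    "feature code": (3, []),
    "train model":  (2, ["feature code"]),
    "test model":   (2, ["train model"]),
}

@lru_cache(maxsize=None)
def earliest_finish(name):
    """Forward pass: a task can start only when all its dependencies finish."""
    duration, deps = TASKS[name]
    start = max((earliest_finish(d) for d in deps), default=0)
    return start + duration

print(earliest_finish("test model"), "days minimum, regardless of headcount")  # 7
```

The chain's length is a floor no amount of extra staffing can beat, which is exactly why starting the training early (or over a weekend) matters.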
We naturally learned to slice "too big" tickets, and we roughly define "too big" as "I'll probably struggle with this."
It's far from perfect: not really better, but absolutely no worse than when we did estimates. And we are able to achieve our goals, which is what matters.
Whilst I've found this can help weed out biases and gut feelings, I still add a multiplier that I've refined over the years: basically a chaos allowance that aims to account for the things you can't possibly take into account yet.
I think the most common cause of estimation issues is unclear acceptance criteria rather than calculation mistakes, so I've learned to work with my clients to capture a huge level of detail in the user story acceptance criteria (kind of anti-agile, I know), and only then do a fine-grained estimate for every story. This undoubtedly takes a lot of time, but in the end it's what works well on the majority of my projects (I generally do this in a paid scoping phase).
From my perspective it involves two things.
First, it is about applying the experience of similar situations to the future. I can't very well tell my coworkers "I don't know" because the work that they do depends on me getting my stuff done, especially in a timely manner. Thus, I have to draw on previous experience to say, "Well, this (or something like this) took me two weeks to get done last time so it will likely take me a similar amount of time this go round."
The second part of that is being open with expectations. Being up front with my coworkers involves me telling them that while I think this will take X amount of time, these are the complications I am facing that could have an impact on my ability to deliver in the provided time frame.
Ultimately, by applying a combination of those two things I have been able to build a good relationship with those I work with when it comes to providing estimates and expectations for delivery. Honestly, not sure how this will apply to software engineering or if it is even translatable at all, but that's my two cents.
There are many methods to get this assessment. These are the ones I have used:
* Basic level: Write down everything that needs to be done and assign a time estimate (in hours, otherwise break down further)
* Intermediate level: Write down everything that needs to be done and assign a range of time in days (from best to worst case scenario)
* Expert level: Use an abstract scale (like the fibonacci sequence, T-shirt sizes, etc)
* "We've been doing this together for 20 years" level: Can it be done in XY time? Yes or No answers only.
At each of these levels, the time a task will take is specified less and less precisely. The reasons to keep it specific are a) to learn how to estimate well and b) to keep people accountable. The reasons to keep it unspecific are that a) it's impossible to get a correct number for a task you've never done before, so b) it's a waste of time to try to make an estimate precise.
The value of all these estimation techniques is to find out where the most risk lies (usually wherever the most unknowns and complexity lie). If you then don't follow up on those risks by checking in with the team, the estimation becomes useless. If you manage these risks well, you should at the very least get consistent estimates. Even if their real-time equivalent is off, it should be off by a consistent amount.
Most important of all, though: estimation is a learned skill, not inherent to anybody (developers or managers). It takes time and practice to become accurate.
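For the intermediate, range-based level above, per-task ranges can be combined rather than just summing worst cases; one common compromise (my assumption here, not prescribed above) is a PERT-style weighted expectation. A sketch with invented numbers:

```python
# Hypothetical per-task estimates in days: (best, likely, worst).
tasks = {
    "api endpoints": (2, 3, 6),
    "migration":     (1, 2, 5),
    "frontend":      (3, 5, 10),
}

def pert(best, likely, worst):
    """PERT expected value: weights the likely case 4x over the extremes."""
    return (best + 4 * likely + worst) / 6

best_case  = sum(t[0] for t in tasks.values())
worst_case = sum(t[2] for t in tasks.values())
expected   = sum(pert(*t) for t in tasks.values())

print(f"range: {best_case}-{worst_case} days, expected ~{expected:.1f}")
```

Summing all the worst cases (21 days here) is overly pessimistic, since not every task hits its worst case at once; the expected total lands well inside the range.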
I ask because I've seen large orgs produce accurate estimates but lay them out incorrectly on the roadmap, because the product managers overestimate the percentage of time actually spent working on the products.
That is amazingly bad estimation! You must have some big systemic issues if you can repeat such a failure even twice and still do estimates. Why bother if you're that far off?
We planned out all the epics (create promotion flow, promotion landing page, redeem flow, etc) and sprints. I think it was six 2 week sprints, or about 3 months.
Then the scope creep happened. We agreed on just a "send now" button in the designs, but marketing decided that a scheduling feature was necessary. Somehow we got bogged down building the scheduler and doing timezone calculations (e.g. what if a business located in Pacific time has an owner in Eastern Time who is traveling to Central time, when should a promotion scheduled for 6pm actually get posted?).
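For what it's worth, the sanest answer to that timezone puzzle is usually to anchor the schedule to the business's home timezone and convert to UTC once, at save time, regardless of where the owner happens to be. A sketch using Python's zoneinfo (the date and timezone are illustrative):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def to_utc(local_date, hour, business_tz):
    """Turn '6pm business local time' into an unambiguous UTC instant."""
    local = datetime(*local_date, hour, tzinfo=ZoneInfo(business_tz))
    return local.astimezone(ZoneInfo("UTC"))

# A promotion scheduled for 6pm by a Pacific-time business.
when = to_utc((2024, 7, 1), 18, "America/Los_Angeles")
print(when.isoformat())  # 2024-07-02T01:00:00+00:00 (18:00 PDT is UTC-7)
```

Storing the UTC instant plus the business's zone name handles DST transitions and traveling owners alike; the hard product question, which zone to anchor to, is a one-line decision here instead of scattered arithmetic.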
In the end we ended up taking twice the estimated time for a feature that was only used by 0.2% of paying customers (it turns out that businesses either already have a coupon system or they don't want one at all).
https://writemoretests.com/2012/02/estimating-like-an-adult-...
https://fogbugz.com/Evidence-Based-Scheduling/
https://blog.fogbugz.com/evidence-based-scheduling
https://support.fogbugz.com/hc/en-us/articles/360011258994-E...
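The Evidence-Based Scheduling approach in those links boils down to a Monte Carlo simulation over a developer's historical estimate-to-actual ratios. A minimal sketch with made-up history:

```python
import random

# All numbers below are invented for illustration.
HISTORY_VELOCITIES = [1.0, 0.8, 0.5, 0.6, 1.2, 0.7]  # past estimate / actual ratios
REMAINING_ESTIMATES = [8, 16, 4, 12]                  # hours of work left

def simulate(n=10_000, seed=42):
    """Monte Carlo: divide each estimate by a randomly sampled past velocity."""
    rng = random.Random(seed)
    totals = sorted(
        sum(e / rng.choice(HISTORY_VELOCITIES) for e in REMAINING_ESTIMATES)
        for _ in range(n)
    )
    return totals[n // 2], totals[int(n * 0.95)]  # median and 95th percentile

median, p95 = simulate()
print(f"50% chance of finishing within {median:.0f}h, 95% within {p95:.0f}h")
```

The output is a ship-date distribution rather than a single number, which is the whole point: habitual underestimators get their history held against them automatically.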
Project planning that breaks tasks down into granular dates just begs everyone to disrespect it once dates inevitably start slipping. Don't do that, just keep an eye on the important dates and give your teams the headroom to work.
In your case I'd set strategic goals for the year to align with the business, then plan anchor dates (such as key releases) at a one- or two-month granularity for the entire year, then work the dependency and tech roadmap backward from there.
Also keep some capacity in reserve. Plan your yearly goals conservatively but your anchor dates a bit more aggressively.
Then once the plan is in place, keep track through weeklies or something unobtrusive like that.
https://github.com/SixArm/sixarm_project_management_rope_est...
Estimates as work-hours, made by the person most likely to do the work, tend to work best in my experience. Work-hour estimates are also clear and direct for upper management and for other stakeholder organizations.
If your estimates stay within one team or sprint, story points tend to feel fun and easy. If you can skip estimates altogether, that's superb; it helps to have a skilled team and a high-trust environment.
As a side project, I actually built a tool to help with this: "Calculeeto" (http://calculeeto.herokuapp.com/). Besides walking you through the process of breaking down your project and adding buffer, it also has features to assign dependencies and generate a schedule based on number of teammates -- it even adds time for communication overhead: http://calculeeto.herokuapp.com/?debug4
I never launched or finished it but it seemed useful.
Estimates for anything other than the next couple of tasks planned in detail are entirely useless 100% of the time, and you never benefit from underestimating.
For what it's worth, most places I've worked at pretend to use points, but at some point directly translate the points to days anyway.
Also in the last 10 years (which is roughly when I started to encounter agile in the workplace) I've not worked anywhere that didn't have its release schedule set by marketing or product or legal. Yet everywhere paid lip service to developer estimation.
Also, also, I've never encountered an Epic which got over-estimated, which kind of gives the game away doesn't it?
On some teams, we're trending towards "story-count" rather than points. These teams are typically executing extremely well and don't need the (small) overhead of points.
-----
John Cutler has some good thinking on this issue: https://medium.com/hackernoon/work-small-even-if-it-makes-no...
"Double the number and increment the unit."
This is a simple formula to calculate how long something takes to go from idea to production.
So, 3 hours becomes 6 days; 2 weeks is 4 months, etc.
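The rule is mechanical enough to write down directly:

```python
# "Double the number and increment the unit", taken literally.
UNITS = ["minutes", "hours", "days", "weeks", "months", "years"]

def real_duration(number, unit):
    """3 hours -> 6 days; 2 weeks -> 4 months."""
    return 2 * number, UNITS[UNITS.index(unit) + 1]

print(real_duration(3, "hours"))  # (6, 'days')
print(real_duration(2, "weeks"))  # (4, 'months')
```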
Enjoy!
In short, there is someone on your team who knows how to estimate. Find them. Find who is good at what and use their f-ing talent.
* Make risk assessments: Which tasks are most likely to cause a schedule problem or where an implementation approach could fail.
* Spend more time on retrospectives. Retrospectives are more valuable than estimates because the data is accurate.
https://erikbern.com/2019/04/15/why-software-projects-take-l...
"How Big Tech Runs Tech Projects and the Curious Absence of Scrum"
Estimates become less useful the further out they are. Getting useful information about how long a project will take requires two big things: careful definition of the problem and effective scoping of deliverables to validate your solution.
I push teams to understand the problem before attempting any implementations. Without this context people usually make something awesome that isn't useful. That's another thread on how.
The big hack is figuring out what can be delivered to test your assumption quickly. I shoot for about a month of work for this. Give or take.
Up front you get the team to agree, "this milestone should be easy to deliver, assuming we understand the problem and our assumptions are right". Then, if you miss that deliverable you stop work on the project and figure out why you were wrong.
This stop is meant to combat the Sunk Cost Fallacy. Then you can try a new approach, cancel the project, or keep going, having only "wasted" a month. These are sometimes called Kill Metrics.
In my experience, long-term estimates commonly fall to sunk cost issues. This is where a rush hits at the end and you get a low-quality product.
It takes a shift in how engineering communicates with other orgs to pull this off. You need to account for their needs in the milestones and keep them in the loop as a final delivery date comes into focus. It works to go from second half of the year -> Q4 -> Nov -> date. As long as you refine those with enough lead time.
"When we plotted the data, in all cases, the actual time was very accurately fit by a lognormal whose scale parameter was precisely the predicted completion time."
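That finding is easy to reproduce by simulation: if actuals are lognormal with the estimate as the median, half of all tasks come in "on time", yet the average task still overruns, because a lognormal's mean exceeds its median (the choice of sigma=1.0 below is arbitrary):

```python
import math
import random
import statistics

def simulate(estimate_days=10, sigma=1.0, n=100_000, seed=0):
    """Draw task actuals from a lognormal whose median equals the estimate."""
    rng = random.Random(seed)
    mu = math.log(estimate_days)  # lognormvariate's median is exp(mu)
    actuals = [rng.lognormvariate(mu, sigma) for _ in range(n)]
    return statistics.median(actuals), statistics.fmean(actuals)

median, mean = simulate()
print(f"median ~{median:.1f} days (the estimate itself), mean ~{mean:.1f} days")
```

With these parameters the median sits at the 10-day estimate while the mean lands around 16.5 days, which is why a project of many "accurately estimated" tasks still blows its total.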
Most of the time, expectations matter. Only sometimes does time to market really matter, and at that point nobody asks for an estimate.
The number of times people have tried to hold me accountable for guesstimates is just not funny.
- XS = +- 1 day of work
- S = +- a few days of work
- M = less than 1 sprint (= 2 weeks)
- L = a few sprints of work
- XL = months of work
1. Break the "project" down into small enough tasks, measured in days or weeks, not months. The scale and margin of error tend to go wrong once you are in the months category.
2. Ask your frontlines (engineers, sales, or whoever is doing the actual work) to estimate 80% of those tasks, not their managers. You should also learn to read their personalities in their numbers: some give numbers that are too conservative, some too aggressive, and they need to be adjusted accordingly.
The added-up estimate for the project will be the time it takes to reach 80% of your goal, and it tends to be accurate to within +/- 20%.
The remaining 20% will somehow take the same amount of time as the initial 80%. The reason, in tech speak, is that we are very good at measuring and estimating bandwidth, but absolutely appalling at measuring and understanding context switching and latency. All the small things are that remaining 20%.
So if the original estimate for the 80% is 10 months, the worst-case scenario is 10 months + 20% = 12 months, and doubling that gives 24 months. And if you are a top-of-the-line project manager, you should expect that 24 months to itself be accurate to within +/- 20%, i.e. you should report your number as ~28 months up your line of report to save your ass.
I find these rules reasonably good, ignoring catastrophic failures and other external factors; if your whole team quits midway, no original estimate could have factored that in. Remember, estimates are just guesses. There is absolutely no way to be certain. I think this rule is simple and straightforward enough to be used across any industry.
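The arithmetic in that rule of thumb, written out so it matches the 10-month example:

```python
def schedule(frontline_estimate_months):
    """Worst case = (estimate * 1.2) * 2, then pad the reported number ~20%."""
    first_80pct = frontline_estimate_months * 1.2  # worst edge of the +/-20% band
    worst_case  = first_80pct * 2                  # the last 20% takes as long again
    report      = worst_case * 1.2                 # buffer for your line of report
    return first_80pct, worst_case, report

first, worst, report = schedule(10)
print(f"plan {first:.0f}, worst case {worst:.0f}, report {report:.0f} months")
```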