Quality during Design
Quality during Design is the podcast for engineers and product developers navigating the messy front end of product development. Each episode gives you practical quality and reliability tools you can use during the design phase — so your team catches problems early, avoids costly rework, and ships products people can depend on.
You'll hear solo episodes on early-stage clarity, risk-based decision-making, and quality thinking, along with conversations with cross-functional experts in the series A Chat with Cross-Functional Experts.
If you want to design products people love, in less time, at lower cost, and with a whole lot fewer headaches, this is your place.
Hosted by Dianna Deeney, consultant, coach, and author of Pierce the Design Fog. Subscribe on Substack for monthly guides, templates, and Q&A.
Practice Makes Improvement in Subjective Probability Estimations
Does the phrase "Subjective Probability Estimation" make you feel uncomfortable? If you're a data-driven professional, you're likely wary of each of those terms on their own, let alone combining them into one thing.
But we sometimes need to do it. And we can practice to get better at it.
In this episode, we emphasize the importance of subjective probability estimations in decision-making, especially in situations where concrete data may be unavailable or impractical.
We talk about:
• Exploring the discomfort of subjective probability estimations
• Utilizing Monte Carlo simulations for complex systems analysis
• Addressing bias and improving estimation accuracy
Inspired by Douglas W. Hubbard's "The Failure of Risk Management," we uncover strategies to sharpen our estimation skills. Consistent practice leads to improvement, whether it's by imagining you're betting money, breaking down complex estimations, or engaging in true/false trivia. This episode emphasizes regular practice in refining these skills.
I invite you to participate in a collaborative endeavor tailored for engineers—creating a shared database of estimation questions—to foster a community of learning and improvement.
- Choose a true/false question (with the answer).
- Or ask for a value, e.g., the air-travel distance from LAX to PHL (with the answer).
Share it with Dianna to add to a shared database. Either leave a message through the link at the top of these show notes or reply by email to Dianna's newsletter.
Visiting the podcast blog? Leave a comment.
If your team is still catching problems too late — let's talk.
→ Schedule a free discovery call: Dianna's calendar
Want insights like this?
→ Subscribe to my newsletter: qualityduringdesign.substack.com
Get the full framework.
→ Pierce the Design Fog
ABOUT DIANNA
Dianna Deeney is a quality advocate for product development with over 25 years of experience in manufacturing. She is president of Deeney Enterprises, LLC, which helps organizations and people improve engineering design.
Probability and Monte Carlo Simulation
Speaker 1: Subjective probability estimations. Does that make you feel uncomfortable? Just all of those words together, or even considered on their own: probabilities, estimations, and then the subjectivity of it all. If you are a data-driven person, this probably does make you feel a little uncomfortable. I have a respected quality professional colleague who used to say, "In God we trust; all others bring data." But there are cases when we need these subjective probability estimations. There may be a case where it's too expensive for us to get the data, or we don't have the things to be able to test to get data, or there may just be too many unknowns. Think of the engineers that are working on applications for space: there are a lot of unknowns with that, and there still are today. So subjective probability estimations are something that we need to be able to get comfortable with and know how to use. So let's talk more about this topic after this brief introduction. Hello, and welcome to Quality During Design, the place to use quality thinking to create products others love, for less. I'm your host, Dianna Deeney. I'm a senior-level quality professional and engineer with over 20 years of experience in manufacturing and design. I consult with businesses and coach individuals on how to apply quality during design to their processes. Listen in, and then join us. Visit qualityduringdesign.com.
Speaker 1: When we're trying to figure out how likely something is to happen, we can break it down into parts and do a simple probability model. We have an event, and we have the likelihood of that event happening. We have an impact because of that event, and we can consider the likelihood of that impact happening if the event occurs. With some simple statistics (conditional probabilities), we can estimate the likelihood of the whole thing happening altogether, and from that we can calculate all sorts of things: reliability life, expected loss in terms of money. We can use these to decide: is this really worth it or not? But when we get into complex systems, where there are a lot of moving pieces and parts, it gets harder. And it gets even harder still when we just don't have the data. Sometimes we try to get data on things that don't really matter, and the only way we would know that is if we do some estimations ahead of time to figure out what is really a priority for this project, or for the questions that we're trying to answer. This is where another popular statistical method is used, called the Monte Carlo simulation.
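The simple event-and-impact model described above can be sketched in a few lines of Python. The probabilities and cost below are hypothetical, chosen only for illustration:

```python
# A minimal sketch of a simple probability model: an event, an impact
# conditional on that event, and an expected loss. Numbers are hypothetical.

p_event = 0.20               # likelihood the event occurs
p_impact_given_event = 0.50  # likelihood of the impact, given the event
cost_of_impact = 3000.0      # cost if the impact occurs ($)

# Conditional probability: P(impact) = P(event) * P(impact | event)
p_impact = p_event * p_impact_given_event

# Expected loss combines likelihood and consequence into one number
expected_loss = p_impact * cost_of_impact

print(f"P(impact) = {p_impact:.2f}")            # 0.10
print(f"Expected loss = ${expected_loss:.2f}")  # $300.00
```

An expected-loss number like this is what lets you compare options and ask "is this really worth it or not?" before any complex modeling.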
Speaker 1: The scenario with Monte Carlo is that you have an event and you put some structure around it, some bounds on it: you have a likelihood of something happening, and then what that's going to cost, and you create a model for it. Then you let the Monte Carlo simulation run. The simulation part is that you're inputting these characteristics into a computer model and asking it to randomly choose measures within the bounds that you give it and produce a result. But you don't just do it a couple of times; you do it thousands of times, tens of thousands of times. What this does is eventually give you a model of your whole complex system that you can better understand and make decisions with. I will link to a Monte Carlo simulation explanation in the show notes. So this Monte Carlo simulation seems like something that could be really useful for us. But we need an input, we need a scenario, and that's where the subjective probability estimations come into play. Now, when I say subjective, I'm not talking about qualitative things. If you're used to doing FMEA (failure mode and effects analysis), you know there's a rating scale from 1 to 10 or 1 to 5, and you assign whatever you're analyzing to a category. That's qualitative. The subjective that I'm talking about today is really about quantitative estimations of probabilities, which are likelihoods of things happening, and some sort of measure, whether it be money or reliability life data; those are the two that come to mind first. An example: we have an event that is 20% likely to happen, and when it happens, we have 90% confidence that the impact is going to be between $1,000 and $5,000. That is a subjective probability estimation based on what I know about the situation, and it's how we can begin to build our Monte Carlo model.
Speaker 1: In our Monte Carlo model, we're going to calculate a couple of things. One is: did the event occur? We chose the likelihood of this event occurring as 20%, so we're going to use a formula to make the event occur randomly 20% of the time. The other part of our Monte Carlo simulation is: if it does occur, what's the impact? We're going to use a confidence interval of 90%, so we're going to say that 90% of the time, the value that we get is going to be between those two endpoints. Our simulation chooses a random value within those guideposts, those limits. Then we do that over and over again, thousands of times, and that helps us create a model of our system. What we get as an output is the loss of money in our scenario versus the likelihood of it happening, or a loss-exceedance curve. If we're talking about reliability life data, it could be the loss of reliability and the likelihood of that happening. These types of simulations can be set up in Excel. Another application for this is with FMEA, failure mode and effects analysis.
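The two-step simulation described above (did the event occur, and if so, what did it cost?) can be sketched in plain Python. As a simplification, this sketch samples the impact uniformly between the two bounds; fitting a distribution (such as a lognormal) to the 90% interval is a common refinement. The numbers match the hypothetical example from the episode:

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

P_EVENT = 0.20            # event occurs 20% of the time
LOW, HIGH = 1_000, 5_000  # bounds on the impact ($)
N_TRIALS = 100_000

losses = []
for _ in range(N_TRIALS):
    if random.random() < P_EVENT:            # step 1: did the event occur?
        # step 2: if so, pick a random impact within the bounds
        # (simplification: uniform between the endpoints)
        losses.append(random.uniform(LOW, HIGH))
    else:
        losses.append(0.0)

# Loss-exceedance: probability that the loss exceeds a given threshold
def p_exceed(threshold):
    return sum(1 for x in losses if x > threshold) / len(losses)

print(f"P(loss > $0)     = {p_exceed(0):.2f}")     # about 0.20
print(f"P(loss > $3,000) = {p_exceed(3000):.2f}")  # about 0.10
```

Evaluating `p_exceed` over a range of thresholds and plotting the results gives the loss-exceedance curve mentioned above; the same loop structure can be reproduced with formulas in an Excel sheet.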
Speaker 1: I cover this subjective probability estimation method and the Monte Carlo method in my Udemy course, FMEA in Practice. I'll link to the course in the show notes below, but in it I describe how to use these subjective probability estimations with your team to better estimate likelihoods and probabilities for your FMEA or your hazard analysis. We ask our team questions, we use simple distributions, like a triangle distribution or a beta distribution, to define what those limits are, and then we run a Monte Carlo simulation in an Excel file. The Excel file is a downloadable asset for students, who have lifetime access to all the materials in the course. I will link to it in the show notes; check it out. So now we know what data we need, how we're going to handle it, and what kind of output we're going to get. This all helps with understanding what we want to collect and the decisions we want to make with it. But we still have the problem from the beginning, which is getting these estimations from people. We know that people are biased, and these inputs are subjective, and they are estimations, but that doesn't mean that they're useless.
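To illustrate the triangle-distribution idea mentioned above: a team's three-point estimate (lowest plausible, most likely, highest plausible) defines a triangular distribution that a simulation can sample from. This is a generic sketch with made-up numbers, not the course's Excel file, and it uses Python's standard-library `random.triangular`:

```python
import random

random.seed(0)  # reproducible for the example

# Hypothetical team estimates for one FMEA line item's probability of
# occurrence: lowest plausible, most likely, and highest plausible.
low, mode, high = 0.01, 0.05, 0.20

# Draw many samples from the triangular distribution the team defined
samples = [random.triangular(low, high, mode) for _ in range(50_000)]

mean_estimate = sum(samples) / len(samples)
# The mean of a triangular distribution is (low + mode + high) / 3
print(f"Simulated mean probability = {mean_estimate:.3f}")  # about 0.087
```

The same sampling can be done in Excel with a triangular inverse-transform formula; the point is that three easy-to-elicit numbers are enough to feed the Monte Carlo model.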
Speaker 1: I found a book a couple of years ago written by Douglas W. Hubbard, called The Failure of Risk Management. Hubbard is a management consultant who works a lot with the application of quantitative methods in decision-making. Since he's working with management making business decisions, his focus is on loss of money for the company, which is why he uses a lot of loss-exceedance curves. He also uses this Monte Carlo method to help gather the data, to filter people's subjective probability estimations into a model where they can better make decisions. I mention this because this method isn't just for engineering and technical people; it's also being used for business cases. It's applicable in both worlds. What Hubbard demonstrated in his book is that people can be de-biased, or calibrated, in making these sorts of estimations for a Monte Carlo model. What he did with his clients were exercises that involved repetition and then feedback, in order for them to improve their estimations.
Speaker 1: If you think about it, it's similar with sports. When you're learning a sport, you're practicing. You have a known answer of what would be best, if you could achieve it. You have a goal in mind, and you practice to try to achieve that goal. You also have a coach on the sidelines who is watching your form and working with you to develop your technique. That coach is giving you feedback so you can adjust how you play the game and how you think about it, so you can reach your goal. The line of thinking is similar with training ourselves and helping our team grow and do better at making these estimations.
Speaker 1: So what kind of exercises does Hubbard do with his clients? He uses two. One of them is a true/false trivia question. He gives you a statement, and you decide: is that true or false? Then you assign a confidence that you're correct, choosing one of 50, 60, 70, 80, 90, or 100 percent. That's your practice. The feedback is that you compare what you gave to the real-world answer. How did you do? Did you do well? Did you not do so well?
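The feedback step of this exercise can be tallied in a few lines: group your answers by the confidence you stated, then check how often you were actually right at each level. A well-calibrated estimator who says "90% sure" should be right about 90% of the time. The practice log below is hypothetical:

```python
from collections import defaultdict

# Hypothetical practice log: (stated confidence %, was the answer correct?)
practice_log = [
    (90, True), (90, True), (90, False), (90, True), (90, True),
    (70, True), (70, False), (70, True), (70, False), (70, True),
]

# Group results by the confidence level that was stated
buckets = defaultdict(list)
for confidence, correct in practice_log:
    buckets[confidence].append(correct)

# Compare stated confidence with the actual hit rate in each bucket
for confidence in sorted(buckets):
    results = buckets[confidence]
    hit_rate = 100 * sum(results) / len(results)
    print(f"Said {confidence}% sure -> actually right {hit_rate:.0f}% of the time")
```

In this made-up log, the "70%" answers were right 60% of the time and the "90%" answers 80% of the time, so both buckets show a touch of overconfidence; that gap is exactly the feedback the exercise is meant to produce.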
Speaker 1: The other practice scenario that Hubbard uses with his clients is to assign a confidence interval to a value. He asks you a question; this time it's not true/false, but "what's the value of x?" You then list a lower bound and an upper bound in which you think x will lie. 90% is common, and if you stick with a consistent confidence level when you're doing these estimations, it'll be easier when you're setting up your Excel file. An example from Hubbard's book: what is the air distance between New York and Los Angeles in miles? You list the lower bound and the upper bound in which you are 90% confident the answer lies.
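The feedback for the interval exercise is just as simple to score: count how many of your 90% intervals actually contained the true value. If far fewer than 90% of them do, your intervals are too narrow (overconfident). The bounds and values below are hypothetical practice data:

```python
# Hypothetical interval-practice log: (lower bound, upper bound, true value)
interval_log = [
    (2000, 3000, 2451),  # e.g. air miles from New York to Los Angeles
    (1500, 1600, 1564),  # e.g. year Shakespeare was born
    (50, 200, 120),
    (10, 30, 45),        # a miss: the true value fell outside the bounds
]

# A 90% interval "hits" when the true value lies within its bounds
hits = sum(1 for lo, hi, truth in interval_log if lo <= truth <= hi)
coverage = 100 * hits / len(interval_log)
print(f"{hits}/{len(interval_log)} intervals contained the truth ({coverage:.0f}%)")
```

Here 3 of 4 intervals hit (75% coverage), noticeably below the stated 90%; over many practice questions, that signal tells you to widen your intervals.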
Speaker 1: All of this sounds a little bit tedious, doesn't it? Can you imagine having to make a big business decision, and the day before, you're sitting in a meeting practicing making estimations about when William Shakespeare was born and how heavy oil is compared to water? But Hubbard has some evidence that doing these activities helps improve people's estimation abilities. Doing all of this right before a big decision could be problematic, because now you might have decision fatigue, which is a real thing. People talk about it: when you wake up in the morning you can make a great decision, but by the end of the day you don't even know what to make for dinner. So what can we do about this? How can we calibrate ourselves? How can we help practice calibrating with our team? We can do a little bit every day, like exercising. I used the sports analogy earlier; well, if you want to run farther or throw harder, sometimes you need to go to the gym and put in some reps on the weight machine. That's similar to using our mind and our experiences to make estimations.
Speaker 1: Hubbard recommends a few techniques that we can use to improve our estimations. One is to pretend that we're betting money: you're making this estimation, so how confident are you in it? Are you willing to put $1,000 on the line? This is not to say that you want to really gamble, but if you can pretend enough that it makes you question your own decision, that might be good practice. The second thing he suggested is to stop and consider ways that we could be wrong about our estimation. That may make us think a little bit more about how we're estimating and what we're using to estimate, and may lead us to give a more accurate estimation.
Speaker 1: The third thing was to just break it down. Instead of trying to assign a range to something, think of it as two separate questions. For the upper bound: I am 95% certain that the value is lower than this upper bound. For the lower bound: I am 95% certain that the value is above this lower bound. Rethinking one question by breaking it down into two could be helpful for us in making estimations.
Speaker 1: So, back to the beginning of our topic: subjective probability estimations. Do they make you a little less uncomfortable after listening to what we've been talking about today? Are you up for giving it some practice? You can look up Hubbard's book, The Failure of Risk Management; in the appendix he has some sample exercises with answers, but I can't reproduce those for you, for obvious reasons. However, this is something that I would like to practice and get better at, so I'm going to ask for your help with this.
Speaker 1: Can you contribute to a project? One or two questions about a topic that you think engineers should know when they're making risk estimations? The thing is, you have to know the answer. We need both the question and the answer to be able to create a database that we can all use to start practicing estimations. If we practice just one question a day, over time we're going to get better at making our estimations, and we'll also start to consider how it is that we're making them. Just by practicing intentionally, we're going to get better at it; we're adding some focus to it, and we want that feedback in order to make improvements. But the only way we're ever really going to start is if we just start practicing.
Speaker 1: If you are interested in contributing to this project, and in practicing these estimations yourself, and maybe providing a resource for your team to practice, too, then message me. If you're listening to the podcast, at the beginning of the show notes there's a link where you can leave some feedback and message the host. Go ahead and click that and send me your question with your answer. Once it's published, I'll come back to this episode and link it in the show notes. Or, if you'd like, you can sign up for the newsletter at qualityduringdesign.com; when you're part of the newsletter community, you can reply and email me. So what's today's insight to action? It's okay to make estimates, we can work with probabilities, and it's okay if it's subjective. It's still data. We recognize its limits, but we can practice to get better, and it's something we can use to help us make decisions. This has been a production of Deeney Enterprises. Thanks for listening.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
Speaking Of Reliability: Friends Discussing Reliability Engineering Topics | Warranty | Plant Maintenance
Reliability.FM: Accendo Reliability, focused on improving your reliability program and career
Reliability Hero
MAINSTREAM Community
Manufacturers Make Strides
Martin Griffiths
The Manufacturing Executive
Joe Sullivan
The Antifragility Reframe
Dr. Frank L. Douglas
The SAFE Leader with Mark McBride-Wright
Mark McBride-Wright
Coaching for Leaders
Dave Stachowiak