Quality during Design

Keven Wang’s 4-Step Journey to AI-Powered Quality Control (A Chat with Cross-Functional Experts)

Dianna Deeney Season 3 Episode 16


What happens when cutting-edge AI meets manufacturing quality control? The results are nothing short of revolutionary.

Keven Wang, co-founder and CEO of UnitX, takes us through the world of AI-powered visual inspection and how it is transforming the way factories detect defects and improve product quality. Drawing from his experience with over 160 manufacturers worldwide, Keven reveals how these systems consistently outperform both human inspectors and traditional rule-based vision systems, reducing escape rates by up to 10x while cutting scrap rates by approximately 50%.

We go beyond basic implementation to explore Keven's four-step roadmap for AI-powered manufacturing. Starting with end-of-line inspection, manufacturers progress to upstream checkpoints, then leverage collected data for powerful insights, ultimately working toward the "self-driving factory" where AI automatically implements process corrections. Keven shares a compelling success story of a motor manufacturer that reduced customer complaints by 90% and improved yield from 85% to 90% after adopting this technology across 90 production lines.

Perhaps most surprising is Keven's observation that the greatest challenge in AI adoption isn't technical but human: aligning teams, establishing shared goals, and helping personnel become comfortable with the technology. This insight underscores that successful implementation requires thoughtful change management alongside technical expertise.

Whether you're a design engineer, manufacturing engineer, quality professional, or technology leader, this episode provides both practical guidance for getting started with AI inspection and an inspiring vision of where these technologies are heading. Ready to explore how AI could transform quality in your production environment? This conversation is your perfect starting point.

Visit the UnitX website.

Visit the podcast blog for more. 

If your team is still catching problems too late — let's talk.
→ Schedule a free discovery call: Dianna's calendar

Want insights like this?
→ Subscribe to my newsletter: qualityduringdesign.substack.com

Get the full framework.
→ Pierce the Design Fog 

ABOUT DIANNA
Dianna Deeney is a quality advocate for product development with over 25 years of experience in manufacturing. She is president of Deeney Enterprises, LLC, which helps organizations and people improve engineering design.

Introduction to Quality and AI

Speaker 1

Hello listeners, welcome to Quality During Design. I'm your host, Dianna Deeney. This is a special interview episode of the Quality During Design podcast. Today I'm joined by Keven Wang, the co-founder and CEO of UnitX, a leading AI robotics company. AI technology for the manufacturing industry is here, and it's ready. Keven shares his stories and experiences from the front line of implementing this technology. But we don't just talk about the status quo. Keven has a four-step journey that is also aspirational for the future. I'll formally introduce Keven after this brief introduction.

Speaker 1

Hello and welcome to Quality During Design, the place to use quality thinking to create products others love, for less. I'm your host, Dianna Deeney. I'm a senior-level quality professional and engineer with over 20 years of experience in manufacturing and design. I consult with businesses and coach individuals on how to apply quality during design to their processes. Listen in and then join us. Visit qualityduringdesign.com. Welcome back.

Speaker 1

This show is a special interview episode and is part of our series A Chat with Cross-Functional Experts. If you're listening on a player that breaks up the podcast by seasons, you're listening to season three. Our focus with these interviews is speaking with people who are typically part of a cross-functional team on engineering projects. Manufacturing and quality control are a big part of design engineering. They are our internal customers. We're not only designing for manufacturability, we're also defining what's acceptable and what's not. At design, we're choosing what is an essential design output or what is a critical attribute, and all of this translates into the bottom line of the business. So understanding the underlying technology and capabilities is also an important thing to know.

Keven Wang's Background in AI

Speaker 1

Let me tell you more about Keven. Keven Wang is the co-founder and CEO of UnitX, a leading AI robotics company automating visual inspection in factories. Keven graduated from Stanford University with a degree in computer science concentrating on AI. He has visited more than 135 manufacturers worldwide, helping to improve yield and quality, and holds patents in AI technology for visual inspection. I enjoyed my conversation with Keven. It's not just the status quo, it's looking toward the future: what could be next, what's available now. And you're not going to just walk away knowing some more things. You're going to walk away being inspired to take some next steps, whatever they may be. So I hope you listen in and enjoy the interview. Hello, welcome to the Quality During Design podcast. We have a special guest today, Keven Wang. Keven, welcome to the show.

Speaker 2

Thank you, Dianna. Glad to be part of it.

Speaker 1

I'm really looking forward to talking with you, because you have particular expertise. Can you tell us a little bit about yourself, how you came into working with engineers?

Speaker 2

Sure. My mother is a mechanical engineer and she worked in a factory, and when I was little she always inspired me with the beautiful designs around us, in our home, for example: this chair is designed beautifully because of its rounded corner, and it's very ergonomically friendly, things like that. So that inspired me to appreciate the physical objects and their designs in our daily lives. And my mother also told me, don't study mechanical engineering.

Speaker 1

Oh, no, why.

Speaker 2

She said, you know, go get a better life that is more interesting. It's so funny. And so I listened to her advice and I went for software. When I was going to grad school, this was around 2017. I worked for a few years before then, but when I went back to grad school, this was right.

Speaker 2

when AI was really taking off. You could see from the news that it was beating the best humans at certain tasks, classifying cats from dogs from cars, and although these seem like toy tasks, AI was definitely improving at a drastic pace. And I thought, okay, that's really cool technology, and how do we make it useful? And I go back to my childhood days, where my mother inspired me, and I always wanted to combine AI technology with the physical world. And where are physical things made? In factories, yeah. So I always had this dream, this inspiration: I want to go to the source of where these physical objects are made and apply AI somehow to help make these things more beautifully, with higher quality, you know. So my dream come true is applying AI to the manufacturing of these physical objects in the world.

Speaker 1

Yeah, that's an interesting way. Well, it's a very useful way to intersect those two joys that you like: the physical products, how they're made and the quality of them, and then also your software engineering background. This is like the perfect time to merge these two things together. So you're looking to use AI to help in manufacturing production. Sometimes it's difficult for engineers to understand when is a good time to implement that, how far to go. Is their process really ready for something like that? How do they make some of those assessments, if these AI applications are right for them?

Speaker 2

Let me take a step back and describe the applications that we see in the market and how we help to solve them. It is inspection, especially quality inspection by vision. So, for example, a cast aluminum block can have some defects on it, for example porosities. These porosities can end up affecting the sealing surfaces and can cause leakage in a car, for example. We see this common pain point where, during production, there is a vision inspection for these porosities of these cast aluminum blocks. They are either done by visual inspectors or by a rule-based vision system, and the pain point is it's not very accurate. Well, because the human inspectors, they get tired after a couple hours, and you know, frankly, we as humans are not designed to be sitting in front of 5,000 pieces every day and look at something for five seconds and decide: is it a good part or a bad part? That's not what humans are really designed for. We would rather empower the humans to do more fulfilling, interesting, creative tasks in the factories. And on the other hand, the vision systems, they also suffer from escaping a critical defect, or sometimes overkilling, wrongfully rejecting a good part. So this is the problem we see, and the way that we solve it is we introduce AI. We have a camera and lighting system that looks at the part being made, and an AI system that processes the images and decides, okay, is this part good or bad, and we do that through computer vision. So we use a deep learning neural network to decide: okay, is there any porosity in this image? How many are there? How big are they? Where are they? Are they in the critical areas? And from there the AI decides, okay, is this good or bad, to help the factory improve their quality and improve their yield by reducing their scrap. So that's sort of a background of what we do.

Speaker 2

And back to your question about when is the right time to adopt this AI inspection in production. From our experience, it's when it reaches a certain volume in manufacturing, and a typical rule of thumb is when you have a dedicated production line that's making the same part. That's a good time, because the AI takes some samples to teach, and you want to make sure there are enough training samples to teach the AI. That's where this volume aspect comes in: you want to have enough parts, and not only good parts but also bad parts, to train the AI. That's one. The other compelling events that may justify this AI adoption are two things.

Speaker 2

One is when there's a quality issue, for example, when you produce defective parts and the defective parts were not caught during production and they escape to the customers.

Speaker 2

So that is a good time to consider introducing automated inspection with AI that can be much more accurate at catching those defects. The other compelling event is what we call high scrap, or overkill by the inspection system. For example, we run into this a lot where a customer may have a rule-based vision system inspecting parts; however, the vision system is overkilling a lot of good parts. Overkill means the system mistakes a good part for a bad part when in fact it's a good part, so the part is being rejected, overkilled, and this results in scrap. You are basically throwing away good parts. And that's also a good opportunity to evaluate AI, where the AI, compared to the rule-based vision, can understand the random, complex defects at a much deeper level and be much more accurate, rejecting only the bad parts and no more than that.

When to Adopt AI Inspection

Speaker 1

Yeah, so you mentioned a couple of interesting scenarios. One is like the risk-based scenario, where the risk of a nonconforming product is high and you want to make sure that you catch them. And the other one was just an unusually high scrap rate from what you're using for a vision system today. And you're mentioning training the AI to be able to recognize what's good and what's bad. What's involved in training an AI? So, if I've decided I'm running a factory with a lot of parts, there are visual inspections involved, and it's either high risk or we're scrapping too many of what we have. I have available samples of parts that are good and bad. What would be some of the next steps to be able to train an AI to use it in the way that you're describing?

Speaker 2

So it comes down to three key steps. The first step is to image the parts and make sure we have a very good image. This step would be installing a camera plus a lighting vision system to capture the parts, and a key criterion is that the defects need to be high contrast in the image. In order for the AI to work, the data is really the key, right? It's garbage in, garbage out. In order for the AI, for the smart brain, to be useful, the eyes need to be good enough. So the lighting is the key there to capture a good image. That's step one.

Speaker 2

Once you have a good image, step two is training the AI to learn to differentiate the bad parts from the good parts. So here we have a human-in-the-loop process where a human engineer, for example, would teach the AI: here are bad parts and here are the good parts. An engineer will come in and label, basically using a mouse and keyboard to draw a polygon around the defect and classify it: is this a porosity versus a dent? And this helps the AI to learn what is a bad part versus a good part.

Speaker 2

After that, the third step is qualifying, or verifying, in production. So we run a number of parts and we calculate what we call FA and FR. FA stands for false acceptance, which means escape, right: these are in fact bad parts, but they are mistakenly accepted by the system. On the other side there's FR, false rejection, which means this is indeed a good part; however, it is mistakenly overkilled, or mistakenly rejected, by the AI. So we get those two numbers, and then we present to the quality personnel and the production operations personnel, and we align on a standard that both parties can work with and agree on. And then it's qualified for SAT and it runs in production autonomously.
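For readers who want to make this concrete: the FA and FR numbers Keven describes reduce to a few lines of code. A minimal Python sketch (the function name and data layout are illustrative, not UnitX's actual tooling):

```python
def qualification_metrics(results):
    """Compute false-acceptance (escape) and false-rejection (overkill) rates.

    `results` is a list of (ground_truth, ai_decision) pairs, where each
    value is "good" or "bad", as judged during the qualification run.
    """
    bad_parts = [r for r in results if r[0] == "bad"]
    good_parts = [r for r in results if r[0] == "good"]
    # FA: truly bad parts the AI mistakenly accepted (escapes)
    fa = sum(1 for truth, decision in bad_parts if decision == "good")
    # FR: truly good parts the AI mistakenly rejected (overkill)
    fr = sum(1 for truth, decision in good_parts if decision == "bad")
    fa_rate = fa / len(bad_parts) if bad_parts else 0.0
    fr_rate = fr / len(good_parts) if good_parts else 0.0
    return fa_rate, fr_rate
```

Both sides then agree on acceptable FA and FR thresholds before the system is qualified for SAT.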

Speaker 1

How many samples need to be quantified before an AI is trained, and I guess it would also depend on the type of defects you're looking for and also the level of quality that you're looking for with the false accepts and false rejects. Is it in the hundreds? Is it in the thousands?

Speaker 2

Yeah, that's a really insightful question. Today, there is no standard in the industry on how to qualify an AI inspection system. We have a best practice, an SOP, on how to do it. However, we are in the process of rolling it out and educating the market about what's the most scientific way to qualify an AI vision system. Some of our customers are very rigorous when it comes to adopting AI and they want to see a ton of data. I think the most samples we have run in order to qualify our vision system is more than 300,000 pieces.

Speaker 1

Oh, wow.

Speaker 2

Yeah, and that took multiple months to get those data, and finally the customer agreed: yes, this is much more accurate than our previous inspection method, and they are very happy with it today. So that's the extreme. What we would recommend is a thoughtful way to qualify the AI by using repeatability in addition to accuracy: run representative samples repeatedly, say 10 times, through the system and see the variation of the system. And as long as the AI can repeatably identify these sample parts over and over again, then it's reaching a certain kappa, a repeatability capability. That will be much faster if we adopt that. Typically, for each defect type we would need up to 50 samples, and we will run those 10 times through the system. And let's say this application has 10 defect categories. It could be 20, it could be 100. The most we've run into is 127 defect categories for a single application.

Speaker 2

Yeah, but if you use that method, you're running hundreds of representative samples through the system 10 times. Then that could be done in a matter of days. It could be very quick.
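The repeatability check described above can be sketched simply: run each representative sample through the system several times and see whether the decisions agree. A hypothetical illustration in Python (a real attribute agreement study would compute a kappa statistic; this shows only the core idea):

```python
def repeatability(runs_per_sample):
    """Fraction of samples the inspection system classifies identically
    on every repeated pass.

    `runs_per_sample` maps a sample ID to the list of decisions the
    system produced across repeated runs (e.g. 10 passes per sample).
    """
    consistent = sum(
        1 for decisions in runs_per_sample.values()
        if len(set(decisions)) == 1  # same verdict every run
    )
    return consistent / len(runs_per_sample)
```

With, say, 50 samples per defect type each run 10 times, this kind of study finishes in days rather than the months a 300,000-piece accuracy trial took.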

Training AI Systems for Inspection

Speaker 1

And those samples that they're giving you, do they run the range of what they see? I mean, how particular are they in choosing the samples that they give you for training the AI?

Speaker 2

It's very important that the samples are as representative as possible. Meaning, for example, if there is porosity, there is a small porosity and there's a big porosity. It's very important, when we train the AI, that the AI has seen a broad spectrum, from as small to as big as possible and somewhere in between. The more representative the samples are, the more accurate the AI becomes. Teaching the AI is like teaching a child: it learns based on the parents' input and the input from the world, and if you give it a very comprehensive input, then the AI will also learn to be much more comprehensive in its own decision-making.

Speaker 1

Considering the process itself, and I was a process engineer at one point in my life, doing transfer to manufacturing, industrial engineering, I know that there's drift over time. You know this too, that processes drift over time, and AI vision systems can help to detect that drift to be able to make corrections. I'm wondering if the AI systems themselves experience the same kind of, or a similar, process drift, where they need to be readjusted or retrained on some new samples?

Speaker 2

Very interesting point. Yeah, we see that definitely a lot. So, to answer the question, the AI system is basically a neural network model. Right, once it's trained and you freeze the layers, the weights, that means you freeze that moment in time of that brain. Then it doesn't drift. The AI model itself doesn't drift. Given the same input, if you give the same image, it's always going to give the same output. However, you bring up a really good point.

Speaker 2

The problem is the physical world drifts. You know, like my casting process upstream sometimes produces more porosity on Tuesday than every other day, because the incoming material for that day happens to, you know, be more porous in nature. Yeah, that definitely can happen with the AI. One benefit is we're basically controlling the variable; we're controlling the inspection decision-making. So if something upstream, in the data, in the input, changes, we'd expect the output to change. So typically when a drift occurs, we will see an increase in the NG (no-good) rate. Yeah, we typically see that.

Speaker 2

Let's say, you know, for porosity, typically we only get 1% on a daily basis; however, suddenly it spikes to 3% or even 5%. Then we can troubleshoot and see: oh, something has changed. At that point we can get in a group with the quality and production operations teams and evaluate the data, the images, because those are saved. Now they're in digitized form, captured by camera, so they are saved on a hard disk. So now you can review the digitized history and evaluate: okay, does it make sense to readjust the AI or readjust the rules so that we are not rejecting, or are these in fact defects? A lot of times they are legitimate defects, and it's the process that needs to be readjusted back to reduce the scrap and improve the yield.

Speaker 1

So that's an important thing that you're bringing up here: the AI is monitoring the quality. You have upstream and downstream processes, and you have more data, I think, at your fingertips a little faster, because all the images are saved. So I would think that a root cause analysis, or understanding the scope of the problem, wouldn't take as long as it would traditionally, because you have the data, I guess, as long as it's accessible. Do you find that people have this data accessible so they can use it, or is that another step, another consideration they need to take when they're adopting these AI vision systems?

Speaker 2

Yeah, I think it's a very exciting direction to utilize this data that's captured by the cameras and processed by AI, that's now in this digitized form. There's a huge opportunity to improve the yield, the uptime, the throughput, the OEE of manufacturing. I'll give a story. Our first customer makes motors and they've been around for more than 20 years. Their customers are big automotive tier ones that make these motors into windshield wipers in a car. Before this customer worked with us, they received a lot of customer complaints, about 200 a year. They also hovered their yield around 85%. After they worked with us... Initially they were very skeptical. You know, there's AI.

Speaker 1

I haven't heard that before.

Speaker 2

Right, I haven't heard that before, and so this is the customer that asked to run hundreds of thousands of parts through the system to qualify it. Finally, they bought in and they installed one system, and today this factory has 90 of our systems installed.

Speaker 1

Oh my gosh.

Speaker 2

On their lines, yeah, and it's doing end-of-line inspection on every single line. What has happened to them? Over the course of five years, they have reduced the customer complaints by 10 times.

Speaker 1

Wow.

Process Drift and Data Management

Speaker 2

And because of that, the quality of what they make is better, and their revenue has grown by a third over that period. In addition, the AI has created a feedback loop to alert them of process drifts, and they have improved their yield from 85% to 90%. And that 5% difference is a big gross profit boost. Also, fun fact: the person that originally introduced our technology on the first line back then was an engineer, and now this person is a director in the company. He was promoted because of the quality improvement and the yield improvement this factory has seen from adopting AI. Yeah, so it's a really good story, and we are seeing this repeat over and over again. Different manufacturers are at different phases of adoption of AI; some of them are much more advanced than others.

Speaker 2

We have in our company what we call our vision of AI-powered manufacturing. That is, we want to empower our customers on this journey to achieve AI-powered manufacturing. So we mapped out a four-step process to get there and today we have a lot of customers in step one and some in step two, very few in step three, but none of them are quite in step four yet and our goal is to get all of our customers to step four. So what are these?

Speaker 1

Steps? Yes, tell me more about this.

Speaker 2

Sure. So step one is applying AI in end-of-line inspection. This is the final check, the final gatekeeper before the parts go out the door. So applying AI as the last step of manufacturing, stopping the bad parts from going out, preventing escapes: this is step one. And in this process you introduce AI as a concept to your workforce, getting the humans familiar and comfortable with AI. It's very, very important.

Speaker 1

That is important, yes, and after that.

Speaker 2

Then step two is applying AI further upstream, to the in-process checkpoints before end of line, because sometimes it's too late to find out at the end of the line: oh, you have a component that has a porosity. Okay, now I have to scrap the entire assembly or I have to rework it. It's very expensive. So, according to the lean manufacturing principle, we always want to identify the problem as close as possible to the root source, the root cause of it. This is about moving the checkpoints further upstream so we can inspect immediately after the process. Let's say, after casting, we inspect immediately after the process cools down and find the porosity there. If there is, okay, scrap this part, melt it right away, so it doesn't get further value added that will otherwise be wasted. So this is preventing waste early. That's step two. The step three after that is: now you have all these checkpoints throughout your production, both end-of-line and in-process, and these cameras are collecting the data in digitized form. You have all this data, and now the data can tell a story. For example, every Tuesday evening shift I see my porosity rate spike up by 100%. It doubles every Tuesday evening. Something is likely happening: either the incoming material for that shift has some issue, or maybe the machine setting for that shift is off. Something is going on, right? So it's turning the data into insights and identifying the problem early. Yeah, so this is the third step. But the data will only share insights. It's still up to the engineers to action it, whether to alert the supplier or to do something about it to fix the problem. So you still have that. We're just at the insights phase, but action is still done by the engineer.

Speaker 2

Step four, and this is where none of the customers we have are at today, but we hope to help our customers get there eventually, is what we call a self-driving factory. This is where the data turns into insights, the insights turn into actions, and the actions are automated, closed-loop, upstream. So let's say the AI agent finds out: okay, your porosity rate goes up every Wednesday evening shift. Okay, turns out every time that happens, my casting machine temperature is 20% higher than it should be. Okay, then we should slow down the process, let the temperature cool down, adjust the parameters of the machine, automate it. And this creates a self-feedback loop to fix the process drifts and reduce the scrap, automated. Now, this is not easy to achieve, and I think it's going to take a lot of effort and a long time. But ultimately, if we can achieve this, I think we can enable factories to produce with higher quality at a lower cost, and that is valuable, and we think AI can play a really big role in enabling this vision.
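One iteration of the closed loop Keven sketches could look like this. This is purely an illustrative toy, not any real controller: the thresholds, parameter names and the factor-of-two spike rule are all hypothetical assumptions.

```python
def control_step(defect_rate, baseline, temp, temp_limit, slowdown=0.9):
    """One hypothetical iteration of a 'self-driving factory' loop:
    if the defect rate spikes while the casting temperature runs hot,
    automatically slow the process so the machine can cool down,
    instead of waiting for an engineer to intervene.

    Returns the speed multiplier to apply (1.0 means no change).
    """
    spiked = defect_rate > 2 * baseline   # e.g. porosity rate has doubled
    running_hot = temp > temp_limit       # e.g. temperature above spec
    if spiked and running_hot:
        return slowdown  # reduce throughput so the temperature can drop
    return 1.0
```

In a real plant this single rule would be replaced by learned correlations across many sensors and process parameters, which is exactly why Keven says step four is hard.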

Four-Step Journey to AI Manufacturing

Speaker 1

Yeah, I guess one of the big challenges with step four is the AI having access to sensors on all the things that could be causing some of the problems at the end and making those correlations. So that would be an interesting state to get to and that's your step four.

Speaker 2

Absolutely.

Speaker 1

I wanted to ask a little bit more about step three, with telling data stories. This is the AI noticing, you know, that there are some trends or patterns and highlighting that to the engineers to make decisions. What does that communication look like? Is it something where engineers have to dig a lot into the data that the AI is capturing, or does the AI make a summary of "these are some things you may want to look at"? I'm really curious to hear what that looks like in practice.

Speaker 2

Yeah, there's so much data and it's easy to get lost in the huge amount of data out there, especially now with the AI cameras. Our vision system can collect up to 250 megabytes of data every second. That's a lot of high-resolution image data. So the AI can do a lot of work in summarizing and extracting the most important insights from the data. The format could be dashboards showing the numbers over time.

Speaker 2

Okay, in my last 12 hours I see a 100% increase in porosity rate compared to the rolling average for the last seven days.

Speaker 2

Now, that insight could be valuable, because now, as an engineer, I know: okay, there is a porosity spike and it's likely due to this process upstream. So this helps to pinpoint, not yet solve, the problem in a much more focused way. So that's one way: through dashboards. Other ways are through alerts. The engineers can set up rules. For example, the defect I care about the most is this porosity; if I see this defect rate spike by more than 20% compared to the rolling average over the last seven days, then send me an alert, say a text message, or call me, right? And then I know to go root-cause the problem right away, the earliest moment it happens, instead of waiting for hours until I already have, you know, a thousand defective parts made. That's too late; you're scrapping a lot of parts. So dashboards and alert notifications are some ways to do it, and AI can play a big role in extracting the insights, finding the needle in a haystack.
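The alert rule described here, a spike of more than 20% against a seven-day rolling average, is straightforward to express in code. A minimal sketch, assuming daily defect-rate aggregates (the function name and data shape are illustrative):

```python
def should_alert(daily_rates, threshold=0.20, window=7):
    """Flag a defect-rate spike against a rolling baseline.

    `daily_rates` is a chronological list of daily defect rates, with the
    last entry being today. Alerts when today's rate exceeds the trailing
    `window`-day average by more than `threshold` (20% by default).
    """
    if len(daily_rates) < window + 1:
        return False  # not enough history to form a baseline
    baseline = sum(daily_rates[-window - 1:-1]) / window
    today = daily_rates[-1]
    return baseline > 0 and (today - baseline) / baseline > threshold
```

In practice this check would run per defect category and per line, feeding the text-message or phone-call notifications Keven mentions.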

Speaker 1

Yeah, and in one of your examples you talked about over 120 defect categories on a part, and we know that not all defects have the same weight or importance, depending on what the part's being used for. Is AI able to help the engineer not only find the needle in the haystack, but also help promote some of the problems they're seeing with the more important defects versus the ones that aren't as important? Can it help an engineer prioritize activities?

Speaker 2

Typically, yeah, when there are 127 defect categories, you know some are more important than others. And we do work very closely with our customers, their quality engineers and their production operations teams, to prioritize the list of defects: what is truly important. Typically we categorize them into a few buckets, like what is critical. That means we cannot tolerate any escape of these; they can cause safety issues. For example, a lithium-ion battery: if it has a pinhole in a sealing feature, then it will cause a leakage of electrolyte. That's a safety-critical defect, and none of them are allowed. After that there could be important defects that affect certain functionality or performance of the battery cells, and after that would be cosmetic: they purely affect the look of the product but not the underlying function or safety. So we rank it by safety, function, cosmetic.
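The safety > function > cosmetic ranking lends itself to a simple triage rule: sort defect categories first by severity bucket, then by how often each is occurring. A hypothetical sketch (bucket names follow the conversation; the data layout is an assumption):

```python
# Lower number = higher priority, following the ranking in the conversation.
SEVERITY = {"safety": 0, "function": 1, "cosmetic": 2}

def prioritize(defects):
    """Order defect categories for engineering attention.

    `defects` is a list of (name, bucket, observed_rate) tuples.
    Safety-critical categories come first; within a bucket, the
    highest observed rate comes first.
    """
    return sorted(defects, key=lambda d: (SEVERITY[d[1]], -d[2]))
```

A dashboard built on this ordering surfaces the electrolyte-leak pinhole before any cosmetic scratch, no matter how common the scratches are.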

Data Storytelling and Prioritization

Speaker 1

Okay, and then that helps differentiate some of the data and decisions that are happening. If I can ask one more question about your four-step process, with step number two, which is in-process inspections further upstream of the last inspection: you mentioned suppliers, getting parts from suppliers and maybe having those inspected as they come in. I wanted to ask: have the companies that you're working with started to work with their suppliers to implement this kind of technology on the supplier's end? I know in medical devices and automotive there are a lot of suppliers feeding into other suppliers, and there are original equipment manufacturers that work and coordinate together to improve quality by implementing quality initiatives. I was just wondering if they're starting to do that with this technology too, if you've seen that yet.

Speaker 2

We are starting to, indeed. Yeah, so we are very fortunate to have the customers we have, and we have more than 160 customers today at UnitX. We have been very fortunate to be working with customers that really appreciate the quality improvement and the scrap reduction they see. Actually, just today I learned from one of our teammates that an automotive tier-one manufacturer is so happy with our system, they invited their OEM customer to visit their site, and the OEM, after seeing it, decided they want to introduce this to their other suppliers.

Speaker 1

Okay.

Speaker 2

Yeah, so they're introducing this AI vision to their other suppliers, and I can see some of those suppliers may feel, okay, is Big Brother now going to be watching over my shoulder? Is my customer going to see all my processes? But we are also seeing some of the most forward-looking manufacturers embracing this technology, because it is really helping them to improve their quality, reduce their scrap and improve their OEE. Different manufacturers are at different points in the adoption cycle; some are much further ahead than others. Yeah, this is a very exciting time with the technology.

Speaker 1

Well, I really like your four steps. They're doable and actionable, but they're also, like you said, forward-looking, aspiring to step four, which we're going to get to at some point. I also appreciate you publishing and sharing some of the test method validation and the other validations and verifications that you're doing for your systems, because that's going to help people understand how to do it for their businesses, and it's also setting a standard. So I think that's pretty cool. Those are some of the difficulties, I guess, when approaching this topic: well, how do I do this? You have this four-step program and you're going to be sharing your testing methods. Is there another place that people get stuck that would be sort of unusual? Is there anything they would need to understand, learn or adopt that would help them decide if this is the right avenue for them and for their company?

Speaker 2

Yeah, I would say the biggest hurdle in adopting AI we've seen is getting the team, the people, comfortable with it. And the people are across teams in the factory. The AI technology is really ready for prime time. We've deployed this across 160 customers, and the AI vision system, time and time again, is much more accurate than rule-based or manual visual inspection: the escape rate is up to 10 times lower, and it reduces scrap by half. So the technology is ready. The integration can be custom for each production line, but that is also easily solved by system integrators. There are many very capable system integrators out there, and we work very closely with them to integrate the technology onto the production line. What is the hardest, as you pointed out, is aligning the organization, aligning the people, getting them up to speed with AI and how it works, and getting them comfortable with it. That's the hardest.

Change Management and Adoption Challenges

Speaker 2

I'll give an example. When we deploy the vision system, typically the customer has a standard that is implemented by an operator, but because of the subjective nature of this inspection and how hard it is to quantify, sometimes it is not applied very repeatably in practice. For example, take porosity. The SOP book may say it is an NG, a reject, if there are three porosities within a circle five millimeters in diameter and any one of them is greater than half a millimeter. Now that's a very, very complicated decision.
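As an aside for readers who want to see why such a rule is hard to apply by eye: the porosity criterion Keven describes can be sketched in a few lines of code. This is one plausible reading of an admittedly ambiguous spoken rule (reject when any three pores fit inside a 5 mm circle and at least one of those three exceeds 0.5 mm); the function names and data layout are illustrative, not from UnitX.

```python
from itertools import combinations
from math import dist

def enclosing_diameter(p, q, r):
    """Diameter of the smallest circle enclosing three 2-D points."""
    a, b, c = sorted((dist(q, r), dist(p, r), dist(p, q)))
    if c * c >= a * a + b * b:
        # Right or obtuse triangle: the longest side spans the circle.
        return c
    # Acute triangle: use the circumscribed circle, area via cross product.
    area = abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2
    return a * b * c / (2 * area)

def is_reject(porosities, circle_diam_mm=5.0, max_size_mm=0.5):
    """porosities: list of ((x_mm, y_mm), pore_diameter_mm).
    Reject when some trio of pores fits inside a circle_diam_mm circle
    and at least one pore in that trio exceeds max_size_mm."""
    for trio in combinations(porosities, 3):
        pts = [pos for pos, _ in trio]
        if (enclosing_diameter(*pts) <= circle_diam_mm
                and any(size > max_size_mm for _, size in trio)):
            return True
    return False
```

Even this toy version needs exact positions and sizes for every pore, which is precisely what an AI vision system can measure on every part and a tired human inspector cannot.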

Speaker 1

I've seen those before.

Speaker 2

Yes, and I would probably not expect most human beings to be able to repeatedly do that after five hours of staring at 3,000 parts.

Speaker 1

With their calibrated eyeballs, that's right.

Speaker 2

That's right. Yeah, so now, for the first time in history, with AI technology, a camera plus AI is able to do it. And this capability sometimes is very surprising. So this needs some calibration within the factory, especially between the quality team and the production team, to really align them: okay, we all have the same goal, which is to ship as many units as possible with good quality. That's our common goal for this factory. We can ship with higher quality, we can ship with better yield and volume, and we can improve the business. That's the shared goal.

Speaker 2

And for us, a lot of our on-site deployment work is to align these teams toward this common goal, presenting them with data, with facts, not opinions but hard, real data, to align the team on a common baseline and then work toward the same goal. And that is not easy. I'd say that is definitely the hardest challenge in adopting AI. But people do get there, especially the most forward-looking manufacturers. They have the willingness and the motivation to get there. They understand this will make them better, more competitive in the market, and those are the customers we see benefiting the most from this AI technology.

Speaker 1

That's interesting, because if I were working as an engineer on the production line thinking about installing a new piece of equipment, I wouldn't really think of it as a change management process, which is how I'm hearing you describe the adoption of AI. It's less like putting in a new piece of equipment and more like change management within an organization, at least to start. That's an interesting observation, and I guess it's an important aspect that maybe some engineers aren't considering, because they're not used to having to consider it when they're making changes on the production floor.

Speaker 2

Right, and it's a good change. The technology itself is neutral; it's a matter of how we adopt it and how we apply it. We can use AI technology to benefit by improving quality and reducing scrap. And changes take time, all changes take time. So it's a multi-year process to really get the organization comfortable, aligned and applying this technology in production. It's a journey.

Speaker 1

It's so interesting. You had your four-step process where we were aspiring to things, and I remember 20 years ago we were aspiring to the point we are at today. It's just fascinating technology that, like you pointed out, can really make a difference on the bottom line. So I wanted to ask you for some last thoughts for the engineers listening to the podcast. Is there one thing they can do today to improve their knowledge base about AI, this technology and the uses we've been talking about? Is there a course or a podcast or a book or just a general topic they can start really digging into as soon as they press stop?

Resources and Closing Thoughts

Speaker 2

Absolutely, yeah. So there is A3, the Association for Advancing Automation. It's a fantastic group, and they organize a lot of trade shows in the space. A3 has a specific focus area on vision and imaging, so if you Google A3 Association for Advancing Automation Vision, you can go to their website. There are lots of good materials and training programs on AI vision in manufacturing. Our website also has a lot of good materials, including case studies and blog posts on AI, how to adopt it, and how it has helped manufacturers improve their quality and yield. It's unitxlabs.com, u-n-i-t-x-l-a-b-s dot com. I invite you to check it out.

Speaker 1

And I'll include links to both of those things in the show notes for this podcast episode. And just finally, how can the audience find out more about you and contact you, and what's the best way to do that?

Speaker 2

Absolutely, yeah. Feel free to send me an email. My email is keven, with two e's, so keven at unitxlabs.com. Feel free to shoot me a note anytime.

Speaker 1

Great, Keven. Thanks so much. It was really interesting talking with you, hearing about the actual application of AI in industry, the current state of how things are going and, even more interestingly, where you see it heading in the future. Thank you so much for being a part of the Quality During Design show.

Speaker 2

Thank you, Dianna. I appreciate the opportunity.

Speaker 1

That concludes the interview with Keven Wang from UnitX. Visit the blog for this podcast episode for extra links and show notes. You can find it through qualityduringdesign.com, or you can go directly to deeneyenterprises.com. This podcast is a production of Deeney Enterprises. Thanks for listening.
