What Owning a Manufacturing Business Taught Me About AI

FEB 12, 2026 · 4 MIN READ

Owning a manufacturing business makes you suspicious of AI hype very quickly.

Not anti-AI. Just suspicious.

Because once you've spent enough time around production, procurement, maintenance, delivery schedules, rework, and people trying to keep physical operations moving, you stop getting impressed by software demos alone.

A lot of AI conversations are shaped by software people.

That makes sense. They're building the tools. But it also creates a blind spot. Software people usually live in environments where iteration is cheap. If a feature breaks, you patch it. If a workflow changes, you update the code. If a test fails, you rerun it.

Manufacturing doesn't feel like that.

A mistake has weight.

It has material cost. It has lead time implications. It can affect safety. It can stall production. It can create delivery problems downstream. Even a small operational error can travel farther than expected because physical systems are less forgiving.

So when I hear people talk about AI as if every business should just "move fast" and experiment aggressively, I automatically translate that into operator language.

What is the failure mode?

Who absorbs the mistake?

What happens if the output is wrong?

How easy is it to recover?

Those questions matter a lot more in manufacturing than in most AI demos.

I think that's one of the first things manufacturing taught me about AI: reliability matters more than novelty.

A lot more.

In software circles, a rough but promising tool can still be exciting. In operations, rough but promising is often just another way of saying unstable. And unstable systems create hidden costs. More checking. More hesitation. More workarounds. More meetings to compensate for the fact that nobody fully trusts the tool.

That trust problem is not small.

If a system is going to be used around real operations, people need to know what it is good at, what it is bad at, and where human review still has to remain. Otherwise the tool doesn't reduce friction. It adds a new category of friction.

The second thing manufacturing taught me is that data is usually worse than people think.

AI demos love clean inputs.

Actual businesses do not have clean inputs.

You get missing records, inconsistent logs, handwritten notes, partial spreadsheets, machine data that isn't exposed properly, process knowledge sitting inside one employee's head, and old workflows that made sense ten years ago but nobody revisited.

So when an AI project struggles, I usually don't assume the model is the main issue.

A lot of the time the problem starts earlier.

Bad process creates bad data.

Bad data creates weak outputs.

Then people blame AI when the real problem was that the operation itself wasn't instrumented well enough in the first place.

Manufacturing forces you to respect that.

You can't build good intelligence on top of messy operational foundations and expect magic.

The third thing is that bottlenecks in physical businesses are more interconnected than they look.

If procurement is delayed, production gets affected.

If finance approval is slow, purchasing waits.

If scheduling is off, labor gets wasted.

If inventory visibility is weak, planning suffers.

That means AI is most useful when it helps reduce operational bottlenecks, not when it just produces impressive standalone outputs.

This is why I have a bias toward systems that help with monitoring, reporting, analysis, reminders, exception handling, and decision support. Those are not the most glamorous use cases, but they're closer to where the pain actually lives.

And pain is a better starting point than novelty.

I also think manufacturing changes how you think about automation speed.

In software, change can be immediate.

In physical operations, every change touches people, layout, habits, timing, materials, and coordination. Even when the idea is good, implementation is slower because reality has more moving parts. Staff need retraining. Existing habits fight back. Process changes have side effects.

So the question isn't just "Can this be automated?"

It's also:

Should this be automated now?

What dependency does it create?

How much supervision will it need?

Will it still work when real-world variability shows up?

That's a different mindset from pure software culture.

Maybe a more boring mindset. But I trust it more.

Owning a manufacturing business also made me less interested in the "replace workers" framing. I don't think that's where most practical value is, at least not in the near term for companies like ours. What I see more often is AI helping people handle complexity better.

Better reporting.

Faster issue detection.

Cleaner visibility.

Less time wasted on repetitive back-office work.

Better support for decisions that still need human judgment.

That's already a lot.

You don't need science fiction for AI to be useful.

You just need it to remove friction from the system.

And maybe that's the biggest lesson.

Manufacturing teaches you to respect constraints.

Heat is a constraint. Downtime is a constraint. Procurement is a constraint. Human skill is a constraint. Cash flow is a constraint. Safety is a constraint. Lead time is a constraint.

AI doesn't cancel those constraints.

It has to work inside them.

Once you see that clearly, your standards change.

You stop asking whether a tool looks smart.

You start asking whether it is dependable enough to live inside a messy operation without creating more work than it saves.

So far, that's been the most useful filter for me.

Not "Is this advanced?"

More like:

Would I trust this around actual operations?

If the answer is no, then it's still a demo.

If the answer is yes, then now we're talking.


Note: I use AI as a writing and thinking tool. The ideas, examples, and judgment in this post are mine.