01 — Origin
I think in systems.
I always have.
I trained as a chemical engineer. Not because I wanted to work in oil, but because I wanted to understand how things work. How inputs become outputs. How small variables change everything downstream. How the failure of one component quietly breaks the whole.
That's still how I think. I came into product through the unglamorous side: requirements gathering, QA, enterprise transformation work where the client doesn't tell you the real problem until week three. I learned to wait for it.
The unglamorous side turns out to be where all the real learning is.
02 — Approach
Discovery before solution. Always.
Most teams ship the wrong thing faster than the right thing. They optimise for velocity before they've earned the right to move fast. I've seen what happens: six months of sprint velocity and then a pivot because nobody stopped to ask whether the problem was real.
My default is to slow down at the beginning so I can move fast later. I'll spend twice as long in the problem space if it means I ship something that doesn't need to be thrown away.
The question nobody asked is usually the one that unlocks everything.
In practice: discovery sessions that feel like conversations. Prototypes honest about what they're testing. Hypotheses written before sprint planning, not after. And the willingness to say "we don't know enough yet" in a room full of people who want to ship.
03 — Philosophy
What I actually believe.
Not values from a workshop. Things I've been wrong about, then right about, enough times that I just hold them now.
The product is never the point. The human problem is. Every feature, every sprint — if I can't trace it back to a real human pain, I get suspicious.
Good questions age better than good answers. Frameworks change. The right question keeps doing work for years.
Slow down to ship faster. The rushed decision that skips discovery always costs more downstream.
Conviction is not certainty. I'll commit fully to a direction while staying genuinely open to being wrong.
The metric proves it worked. The human proves it mattered. I track both.
04 — On AI
A collaborator.
Not a shortcut.
I've spent the better part of the last two years building AI-native products and evaluation frameworks. That changes how you think about what "done" means — with LLMs, done isn't a state, it's a dial.
What I built at Tech1M
An evaluation and guardrails framework for LLM features: content relevance, groundedness scoring, PII safeguards, and rubric-based output evaluation. The goal was to stabilise quality before GA, not patch it after.
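To make that concrete, here is a minimal sketch of the shape such a framework can take: rubric criteria scored independently, with PII as a hard veto. It is an illustration, not the Tech1M implementation; the regex patterns, lexical-overlap scorers, and 0.6 threshold are placeholder assumptions standing in for model-based scoring and a vetted PII detector.

```python
import re
from dataclasses import dataclass, field

# Hypothetical PII patterns, for illustration only; a production guardrail
# would use a vetted detection library rather than two regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s-]{7,13}\d\b"),
}

@dataclass
class EvalResult:
    passed: bool = True
    scores: dict[str, float] = field(default_factory=dict)
    flags: list[str] = field(default_factory=list)

def _tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def overlap(text: str, reference: str) -> float:
    """Fraction of the text's tokens that appear in the reference.
    A crude lexical stand-in for a model-based scorer."""
    tokens = _tokens(text)
    if not tokens:
        return 0.0
    return len(tokens & _tokens(reference)) / len(tokens)

def evaluate(question: str, answer: str, sources: list[str],
             threshold: float = 0.6) -> EvalResult:
    result = EvalResult()

    # Guardrail: any PII match is a hard veto, independent of quality scores.
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(answer):
            result.flags.append(f"pii:{label}")
            result.passed = False

    # Rubric: each criterion scores 0..1; every one must clear the threshold.
    result.scores["groundedness"] = overlap(answer, " ".join(sources))
    result.scores["relevance"] = overlap(answer, question)
    if min(result.scores.values()) < threshold:
        result.passed = False
    return result

verdict = evaluate(
    question="What is the refund window?",
    answer="The refund window is 30 days.",
    sources=["The refund window is 30 days from the date of purchase."],
)
print(verdict)  # passed=True: both scores clear the threshold, no PII flags
```

The simplification doesn't change the design point: guardrails veto, rubric criteria score, and a single gate decides whether an output is release-ready. Swap the crude scorers for model-based ones and the shape stays the same.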
The teams getting the most from AI understand their problem space deeply enough to evaluate the output. You can't prompt your way out of not knowing what you're solving for.
I'm also thinking about what AI does to the humans using the products. That's the question I don't think enough PMs are asking. I'm asking it.
05 — Evidence
What the work actually produced.
Numbers are honest. They're also not the whole story — ask me about the experiments that failed.
2.3k signups in H1 2024.
Enterprise deployment at Tedbree, rolled out in under six months.
Led a full enterprise transformation at Tedbree serving 50,000+ partners as the sole technical analyst. The system held 99.9% uptime post-launch and reached 75% user adoption within three months.
The adoption number is the one I'm most proud of. Getting 75% of 50,000 people to actually change how they work in 90 days — that's a product and people problem solved together.
If this sounds like someone you want in the room:
I'm easy to reach and enjoy a good conversation about hard problems. No pitch needed.