I treat every system I build like a first version, not a final one. The model improves. The pipeline tightens. The latency drops. That's not a workflow - that's a belief about how good work actually happens.
Two books that changed how I work - not what I do, but why I do it the way I do.
I started in Electronics Engineering because I wanted to understand how things work at the level where you can't abstract the problem away. That instinct never left. When I moved into ML, I wasn't chasing the field - I was following a way of thinking. You build something, you watch it fail in a specific way, and that failure tells you exactly what to fix.
Reid Hoffman's idea of staying permanently in Beta resonated because it isn't motivational - it's structural. A system that's never finished is a system that keeps getting better. James Clear's framing of identity over goals landed the same way: you don't aim for a deployment, you become the kind of engineer who ships things that hold up. Not the demo. The thing that's still running six months later.
The credentials are in the tags below. What they don't capture is simpler: I genuinely enjoy the part where the model is live and something unexpected happens. That's where the real engineering starts.
Pipelines, monitoring, retraining triggers, failure modes. The model is 20% of the work. The system around it is the other 80%.
Give me a tight latency budget, limited memory, and no cloud dependency. Pressure like that forces decisions that open-ended projects never do.
A model that only works in a Jupyter notebook isn't finished. I build for wherever it needs to run - on-device, containerized, or serving thousands of requests a second.
$250K+ in documented business value. Every system I ship is tied to a business outcome, not just a metric.
Independent projects across multiple domains, each built to production standards.
Every skill is a habit. Every habit compounds.
Northeastern University. Graduated December 2025 with academic distinction.
Validated expertise in building, training and deploying ML models on AWS at production scale.
Peer-reviewed research on Customer and Sales Analytics with predictive modeling applications.
Verified impact across production ML systems, data engineering pipelines and AI deployments.
Full-stack ML expertise spanning Python, SQL, TensorFlow, PyTorch, AWS/Azure/GCP and MLOps.
Joint research and engineering portfolio spanning robotics, computer vision and production ML.
Actively seeking ML/AI Engineering roles where production matters and systems compound. Whether it's a role, an interesting problem or just talking shop - I'm in.
If you're in San Francisco or passing through, I'm always down for a coffee. Some of the best conversations about AI, F1 or geopolitics happen over a good flat white. Reach out and let's make it happen.