17+ years shipping production backends in Java, Go, Scala & Python across five industries. M.Sc. in AI/ML (With Distinction) specialising in Generative AI & Agentic AI. Currently shaping the future of cloud security at F-Secure.
I'm a polyglot engineer — Java is my first language, but I think in systems, not syntax. Production code in Java, Go, Scala, and Python. I've written backend services in Spring Boot, Play Framework, and raw Go. I pick the right tool for the problem, not the one I'm most comfortable with.
17 years building production systems across cybersecurity, mobile security, fintech, automotive, and telecom — real-time monitoring on Kafka, authentication protocols analysed for replay and timing attacks, enterprise platforms at 99.9% SLA.
At 37, I went back to school. Completed an M.Sc. in AI/ML from Liverpool John Moores University (With Distinction, via UpGrad) and an Executive PG Diploma from IIIT Bangalore (3.7/4.0, via UpGrad), both specialising in Generative AI & Agentic AI. My thesis showed you can optimise LLMs for $51 instead of $2,000+.
I write about what I learn. I build tools that solve real problems. I use Claude Code, Cursor, and GitHub Copilot daily — not as a crutch, but with quality-gating workflows that keep code honest.
Java, Go, Scala, Python — production code across all four. Spring Boot, Play Framework, Kafka, Elasticsearch. I don't have a comfort zone; I have a toolkit.
AI-CodeMedic: LLM-powered AIOps debugging engine — auto-scans logs, diagnoses bugs, generates PRs. HackWeek 2nd place, actively being evaluated for production. Java 21 + Spring Boot 3.x + OpenAI APIs.
RAG semantic search (LangChain, FAISS, ChromaDB), gesture recognition CNNs (94% accuracy), NLP recommendation systems, melanoma detection.
17 years of production engineering + 2 years of formal AI education. Building toward production AI systems — from thesis research to AIOps tools to daily AI-augmented development.
Java, Go, Scala, Python — production systems across five industries. I've seen what breaks at scale and know how to build so it doesn't.
Designed systems in cybersecurity, mobile security, fintech, automotive IoT, and telecom. Each domain taught a different constraint — latency, compliance, real-time processing, scale. I bring that cross-industry lens to every whiteboard session.
Started writing code at TCS. Led teams at Globant. Architected platforms at Lookout and F-Secure. At every stage, I shipped on deadline — 45-day sprints delivered early, zero SLA breaches, teams of 5 to 15 unblocked and aligned.
Not a weekend course. Two years of formal education in GenAI, Agentic AI, deep learning, and NLP. Thesis under review for Springer. I didn't just learn AI — I researched it.
My thesis optimised LLMs for $51 instead of $2,000+. I bring the same instinct to every AI decision — what's the cheapest way to get the best result without cutting corners?
Most AI engineers lack production experience. Most production engineers lack AI education. I have both — and the enthusiasm to bring them together at scale.
The part I love most is the blank whiteboard. Evaluating which language fits the problem — Java for enterprise reliability, Go for concurrency, Python for rapid ML prototyping. Choosing between Kafka and RabbitMQ based on throughput needs. Deciding whether a monolith serves better than microservices for the current scale. Every architectural choice is a bet on the future, and I take those bets seriously.
Evaluated 40+ services for retain, integrate, or overhaul. Designed the migration framework and architectural plans in collaboration with product leads, engineering managers, and platform teams. Decisions included which services to sunset, which to modernise, and which to rebuild from scratch — each with different technology choices based on the service's role in the unified platform.
Didn't just pick one auth approach — designed two competing protocols for different enterprise needs. Session-based PKCE with Redis caching for stateful clients, stateless HMAC-SHA256 with hybrid salt protection for lightweight proxies. Documented attack vectors (replay, timing, reverse engineering) and mitigation strategies for each. The client picks the architecture that fits their constraints.
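In sketch form, the stateless variant looks roughly like this: a per-request salt plus a timestamp folded into an HMAC-SHA256 over the canonical request. The header names, salt handling, and replay window below are illustrative placeholders, not the production protocol:

```python
import hashlib
import hmac
import secrets
import time

def sign_request(secret: bytes, method: str, path: str, body: bytes) -> dict:
    """Build stateless auth headers: a fresh salt per request and an
    HMAC-SHA256 over the salted canonical request. Illustrative only."""
    salt = secrets.token_hex(16)       # fresh salt defeats precomputation
    timestamp = str(int(time.time()))  # bounds the replay window
    canonical = b"\n".join([method.encode(), path.encode(),
                            timestamp.encode(), salt.encode(), body])
    sig = hmac.new(secret, canonical, hashlib.sha256).hexdigest()
    return {"X-Salt": salt, "X-Timestamp": timestamp, "X-Signature": sig}

def verify_request(secret: bytes, method: str, path: str, body: bytes,
                   headers: dict, max_skew: int = 300) -> bool:
    """Server side: recompute the HMAC. The timestamp check mitigates
    replay; compare_digest gives a constant-time comparison against
    timing attacks."""
    if abs(time.time() - int(headers["X-Timestamp"])) > max_skew:
        return False  # outside the allowed replay window
    canonical = b"\n".join([method.encode(), path.encode(),
                            headers["X-Timestamp"].encode(),
                            headers["X-Salt"].encode(), body])
    expected = hmac.new(secret, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])
```

Any change to the method, path, or body invalidates the signature, so a lightweight proxy can authenticate without server-side session state.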
Architected from scratch: third-party threat intelligence APIs for data ingestion, Kafka for event streaming at 1,000+ events/sec, webhooks for real-time alerting. Chose Kafka over RabbitMQ for throughput at scale. Reduced detection-to-notification time by 40%. Optimised backend for 50% increase in event volume without performance degradation. Delivered 50+ client-specific customisations within a 45-day deadline. Led 15 engineers end-to-end.
Built a subscription service delivering real-time breach reports to 100,000+ users at 99.95% uptime. Integrated with a headless CMS for content management. Optimised database schema, cutting data retrieval times by 25% and supporting 30% user growth without additional resources. Implemented failure recovery strategies that reduced operational disruptions by 50%. Delivered 5 days ahead of a 45-day deadline.
Applied the same architectural thinking to AI. Designed a two-tier model strategy: Claude Haiku for cheap, fast exploration (52,844 evaluations) and Claude Sonnet for expensive validation of top candidates. Five-stage progressive filtering pipeline that eliminated 95%+ of weak candidates early. The architecture decision itself — using model tiers strategically — is what made $51 work instead of $2,000+.
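The tiering works like a funnel: score everything with the cheap model, and only the survivors reach the expensive one. A minimal sketch of that shape — the stage fractions, scoring callables, and `top_k` below are placeholders, not the thesis pipeline:

```python
from typing import Callable

def two_tier_filter(candidates: list,
                    cheap_score: Callable,
                    strong_score: Callable,
                    stage_keep: list,
                    top_k: int) -> list:
    """Progressive filtering: successive cheap-model stages each keep
    only a fraction of survivors; the strong model then validates the
    final shortlist. Stage fractions are illustrative."""
    survivors = list(candidates)
    for keep in stage_keep:  # cheap exploration tiers, e.g. keep 30% each
        survivors.sort(key=cheap_score, reverse=True)
        survivors = survivors[:max(1, int(len(survivors) * keep))]
    survivors.sort(key=strong_score, reverse=True)  # expensive validation
    return survivors[:top_k]
```

With four cheap stages each keeping 30%, under 1% of candidates ever touch the expensive tier — that funnel shape, not any single model choice, is what turns a $2,000+ bill into $51.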
This is where I'm headed — bringing 17 years of system design instincts to AI/ML architecture. Choosing the right model for the task. Designing evaluation pipelines that don't burn budget. Building RAG systems where the retrieval architecture matters as much as the model. The patterns transfer. The thinking scales.
LJMU M.Sc. Thesis • 2025 • Under review for Springer publication
Five-stage automated methodology that achieved statistically significant LLM improvements (p < 0.001, Cohen’s d > 0.8) for $51.12 total — 40x cheaper than fine-tuning. Two-tier model strategy (Claude Haiku for exploration, Sonnet for validation) with progressive filtering. No specialised hardware required.
Technical articles on backend engineering, AI integration, and lessons from 17 years of shipping code.
Writing about what I learn — from optimising LLMs on a budget to designing authentication protocols that resist timing attacks. The blog is the thinking out loud.
Read on Medium →
Open-source projects, AI experiments, and the code behind the blog posts.
From sentiment-based recommendation systems to LLM tooling experiments. The repo is the proof of work.
View on GitHub →
I've spent 17 years proving I can build. I went back to school at 37 because I believed the next decade of engineering would look nothing like the last. Now I'm looking for the kind of role where engineering depth meets strategic impact — where I can architect systems today, shape engineering culture for the long term, and think well beyond the next sprint. The right role will find its own title. Let's talk.