Ramachandra Vikas Chamarthi

Bio

I’m Ramachandra Vikas Chamarthi, a systems and AI engineer and, more recently, a techie-turned-entrepreneur, working at the intersection of Python internals, high-performance computing, and production AI systems. I recently founded NavyaAI Private Limited, where I build and advise on agentic, performance-critical AI infrastructure with a strong focus on safety, observability, and cost efficiency.

I hold a Master’s degree in Electrical and Computer Engineering from the University of North Carolina at Charlotte, where my research centered on algorithmic optimization of convolution layers for custom hardware acceleration and hardware–software co-design for deep learning. That experience shaped a compute-first mindset I still carry today: I care deeply about how models execute on real hardware, how memory moves, and where latency and cost actually come from.

Over the years, I’ve built and led high-performance AI and MLOps systems across startups, enterprises, and research labs. My work spans extreme caching strategies, multi-agent orchestration engines, distributed async systems, and performance-critical computer vision pipelines. I’ve helped teams reduce inference costs by orders of magnitude, achieve real-time performance at scale, and improve production benchmark scores for agentic platforms.

I’ve worked with organizations including NEC Labs, Proscia, Code and Theory, and multiple AI infrastructure startups—often at the uncomfortable but rewarding boundary where Python meets C++, schedulers, queues, and hardware limits. These experiences pushed me from being “just a systems engineer” into building companies—because sometimes the cleanest architecture decision is to found the company yourself.

What differentiates my approach is a compute-aware perspective. I think in terms of cores, schedulers, isolation boundaries, execution graphs, and failure modes. This perspective strongly influences how I evaluate modern Python runtime changes—such as per-interpreter GILs, interpreter isolation, and low-overhead runtime hooks—and how they unlock new system designs for AI agents, HPC workloads, and safe parallel execution.

These days, I split my time between breaking abstractions, building infrastructure at NavyaAI, and helping teams ship ambitious AI systems that work reliably outside the lab.