Hello, I'm
I'm a final-year AI & Data Science student at ESLSCA University (graduating June 2026), but most of what I'm proud of was learned outside the classroom, by shipping things.
My focus sits at the seam where research-grade ML meets real, deployable systems. Training and tuning models is only half the job; the other half is the wiring around them: data versioning, experiment tracking, feature pipelines, model registries, drift monitoring, CI quality gates, and the boring-but-critical infrastructure that lets a model survive contact with production.
Lately I've been going deeper on two fronts: agentic AI, where I build RAG pipelines, retrieval-augmented codebase chatbots, and multi-step workflows with LangChain / LangGraph; and reinforcement learning, where I designed a contextual RL agent for live equity trading. I care about clean abstractions, reproducibility, and writing code my future self won't be ashamed of.
Currently looking for an AI / ML / MLOps role where I can keep learning fast, work alongside engineers who are sharper than me, and contribute to systems that ship.
Primary language for ML, data analysis, and backend services. Comfortable across the scientific stack (NumPy, Pandas, Polars) and fluent in async, typing, and packaging conventions.
LangChain, LangGraph, LangSmith, Google Gemini API, FAISS vector search, prompt engineering, RAG pipelines, structure-aware chunking, and n8n workflow automation for agent-to-tool integration.
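At the core of any RAG pipeline sits vector retrieval; FAISS does this at scale with approximate indexes, but the essential operation is cosine-similarity search. A minimal sketch with NumPy and hypothetical toy embeddings (a real system would use an embedding model and a FAISS index):

```python
import numpy as np

def retrieve(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 2) -> list[int]:
    """Return indices of the k most similar documents by cosine similarity."""
    # Normalize so the dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(scores)[::-1][:k].tolist()

# Toy 3-dimensional "embeddings" standing in for a real embedding model.
docs = np.array([
    [0.9, 0.1, 0.0],   # doc 0: about topic A
    [0.0, 1.0, 0.1],   # doc 1: about topic B
    [0.8, 0.2, 0.1],   # doc 2: also about topic A
])
query = np.array([1.0, 0.0, 0.0])  # a query about topic A
print(retrieve(query, docs))
```

Structure-aware chunking changes what ends up in `doc_vecs`; the retrieval step itself stays this simple.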
Designed a contextual RL agent for live position sizing in the Agentic Trading Bot, with reward shaping on net PnL and safety constraints. Familiar with policy-gradient and value-based methods.
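The key design choice in reward shaping here is rewarding net PnL rather than gross, so fees and slippage are priced into the policy. A hypothetical miniature of that idea (the real safety constraints, idempotent orders, daily resets, live in a separate layer, not the reward):

```python
def shaped_reward(gross_pnl: float, fees: float, position: float,
                  max_position: float = 1.0, breach_penalty: float = 10.0) -> float:
    """Reward = net PnL, with a hard penalty when the sizing limit is breached.

    Hypothetical shaping for illustration; parameter values are placeholders.
    """
    net = gross_pnl - fees          # reward on net PnL, not gross
    if abs(position) > max_position:
        net -= breach_penalty       # discourage violating the sizing constraint
    return net

print(shaped_reward(5.0, 0.5, 0.8))   # within limits: net PnL only
print(shaped_reward(5.0, 0.5, 1.5))   # limit breached: penalized
```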
Built supervised models with scikit-learn and XGBoost; applied feature engineering, cross-validation, hyperparameter tuning, calibration, and rigorous offline evaluation before promoting anything to production.
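The pattern behind that workflow, preprocessing and tuning kept inside the cross-validation folds so nothing leaks, looks like this as a scikit-learn sketch on synthetic data (the dataset and grid are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data standing in for engineered features.
X, y = make_classification(n_samples=300, n_features=10, random_state=42)

# Pipeline keeps scaling inside each CV fold, avoiding train/test leakage.
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=1000))])

# Small illustrative grid; a real search would cover more hyperparameters.
search = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=5, scoring="f1")
search.fit(X, y)
print(round(search.best_score_, 3), search.best_params_)
```

Swapping `LogisticRegression` for an XGBoost classifier leaves the structure unchanged, which is the point of the pipeline abstraction.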
PyTorch, TensorFlow, and Hugging Face Transformers. Evaluated NLP models using BLEU, ROUGE, BERTScore, and perplexity. Comfortable with embeddings, fine-tuning, and tokenizer internals.
DVC for data versioning, MLflow for experiment tracking and model registry, FastAPI for model serving, Docker & Docker Compose for packaging, and GitHub Actions for CI/CD with quality gates.
Strong on Pandas / Polars for transformation and Pandera for schema enforcement. Comfortable in PostgreSQL (Supabase) and the SAP BO Universe semantic layer for governed business data modeling.
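Schema enforcement means failing fast when data breaks its contract. A hand-rolled stand-in for what Pandera's `DataFrameSchema` expresses declaratively (the `orders` frame and dtype strings are illustrative):

```python
import pandas as pd

def enforce_schema(df: pd.DataFrame, schema: dict) -> pd.DataFrame:
    """Fail fast if columns are missing or dtypes disagree with the contract.

    A simplified sketch of the checks Pandera performs declaratively.
    """
    for col, dtype in schema.items():
        if col not in df.columns:
            raise ValueError(f"missing column: {col}")
        if str(df[col].dtype) != dtype:
            raise TypeError(f"{col}: expected {dtype}, got {df[col].dtype}")
    return df

orders = pd.DataFrame({"symbol": ["AAPL", "MSFT"], "qty": [10, 5]})
enforce_schema(orders, {"symbol": "object", "qty": "int64"})  # passes silently
```

Putting this check at pipeline boundaries turns silent data corruption into a loud, early failure.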
Power BI for stakeholder-facing interactive dashboards, SAP Web Intelligence for governed reporting, and Matplotlib / Plotly for analysis-time charts that help me actually understand a dataset.
FastAPI for typed Python APIs, Next.js for dashboards, and end-to-end deployment across Render, Vercel, and Supabase Postgres, including third-party API integration (Alpaca Markets, Gemini).
Evidently for data and model drift, Prometheus for service metrics, Pandera for schema validation, pytest for coverage gates, and flake8 for style. Drift and breakage get caught at the CI layer, not in production.
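Per-feature drift checks of the kind Evidently runs reduce, at their core, to comparing a live distribution against a reference one. A sketch using a two-sample Kolmogorov-Smirnov test from SciPy (the distributions and threshold are illustrative):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 1000)   # training-time feature distribution
shifted = rng.normal(1.5, 1.0, 1000)     # live data after a mean shift

def has_drift(ref: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Flag drift when the two-sample KS test rejects 'same distribution'."""
    return bool(ks_2samp(ref, live).pvalue < alpha)

print(has_drift(reference, reference[:500]))  # same distribution
print(has_drift(reference, shifted))          # mean-shifted distribution
```

Wiring a check like this into CI or a monitoring job is what turns drift from a production surprise into an alert.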
Git-flow, pull-request review, conventional commits, and team-friendly debugging. Linux-first dev workflow. Comfortable explaining technical work to non-technical stakeholders, picked up at Intercom & Raya.
Curious by default, low-ego when collaborating, and used to shipping under deadline. I have delivered internal technical sessions on LangChain and presentation skills; comfortable at the whiteboard, not just behind one.
A focused month inside the prompt-engineering function, where the work felt less like writing prompts and more like designing micro-products. Each prompt template became a contract between an unstructured input and a predictable output, and getting that contract right took the same discipline as writing a unit test. The day-to-day was turning analyst-authored technical documents into clean, structured design docs that downstream engineering teams could act on without a follow-up meeting. The loop was relentless: write, test on real cases, score outputs for completeness and structural fidelity, tighten, repeat. This is where I started to take agentic AI seriously as a craft, not just a topic to read about.
Coursework spanning machine learning, deep learning, statistics, data engineering, and software design. Most of my real growth has happened in self-directed projects on top of the curriculum; my graduation thesis is the Agentic Trading Bot below.
General secondary education, graduated 2023.
Graduation project, an autonomous intraday equity trading system built on a Predict-then-Modulate architecture. An XGBoost model provides directional bias from OHLCV features, and a contextual RL agent modulates position sizing, timing, and hold duration based on net PnL feedback per trading block.
A hard safety layer enforces idempotent order placement, partial-fill correction, slippage and fee modeling, and daily state resets. Deployed end-to-end with a Python backend on Render, a Next.js dashboard on Vercel, Supabase Postgres for state, and the Alpaca Markets API for live execution.
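The Predict-then-Modulate split can be reduced to a single function: the predictor supplies a directional probability, the RL agent supplies a risk scaler, and a hard clamp bounds the result. A toy sketch with a hypothetical interface (names and the sizing formula are illustrative, not the deployed logic):

```python
def target_position(direction_prob: float, risk_scale: float,
                    max_qty: int = 100) -> int:
    """Predict-then-Modulate in miniature (hypothetical interface).

    direction_prob: the predictor's P(price up) over the next trading block.
    risk_scale:     the RL agent's output in [0, 1], learned from net-PnL feedback.
    """
    bias = 2.0 * direction_prob - 1.0          # map [0, 1] -> [-1, +1]
    qty = int(round(bias * risk_scale * max_qty))
    return max(-max_qty, min(max_qty, qty))    # hard safety clamp

print(target_position(0.8, 0.5))   # bullish signal, half risk
print(target_position(0.3, 1.0))   # bearish signal, full risk
```

Keeping prediction and sizing separable is what lets either half be retrained, audited, or rolled back independently.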
End-to-end MLOps pipeline on the UCI Adult Income dataset for binary income classification. Implemented a 3-stage DVC pipeline (prepare, preprocess, train) with all parameters externalized to YAML. Tracked three model experiments in MLflow and registered the best to the Production stage.
Served predictions through a FastAPI app, monitored data drift with Evidently and Prometheus, and enforced quality gates in GitHub Actions CI (flake8, pytest coverage, Pandera schema validation, F1 threshold). Containerized the full stack with Docker Compose.
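The F1-threshold gate in that CI setup boils down to one comparison that can fail the pipeline. A minimal sketch, assuming a placeholder floor of 0.80 (the real gate would read its threshold from config):

```python
from sklearn.metrics import f1_score

F1_FLOOR = 0.80  # hypothetical threshold for illustration

def quality_gate(y_true, y_pred, floor: float = F1_FLOOR) -> bool:
    """CI-style gate: fail the pipeline if offline F1 drops below the floor."""
    return f1_score(y_true, y_pred) >= floor

y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_good = [1, 0, 1, 1, 0, 1, 0, 0]   # one missed positive
y_bad  = [0, 1, 0, 0, 1, 0, 1, 0]   # inverted predictions

print(quality_gate(y_true, y_good))
print(quality_gate(y_true, y_bad))
```

In GitHub Actions, a `False` here becomes a nonzero exit code, which blocks the merge.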
Open to AI/ML roles where I can grow alongside an experienced team and contribute to meaningful projects. Drop me a line; I read every message.