My Writing

Dive into my musings on life and tech in my latest posts: a blend of introspection and innovation. Keep an eye out for fresh insights and updates!

Advanced Reinforcement Learning Interview Questions #6 - The Initialization Gap Trap

You’re in a final-round interview for a Senior AI Engineer role at NVIDIA Robotics.

Posted on February 1, 2026

Advanced Reinforcement Learning Interview Questions #5 - The Success-Only Dataset Trap

You’re in a Research Scientist interview at Google DeepMind, and the lead researcher throws you a curveball: “I have a dataset of reasoning traces, but they’re all flawed.

Posted on January 31, 2026

Advanced Reinforcement Learning Interview Questions #4 - The LLM-as-a-Judge Trap

You’re in a Machine Learning interview at DeepSeek AI, and the lead researcher asks: “We want to train a reasoning model using Direct Prefer…

Posted on January 30, 2026

Advanced Reinforcement Learning Interview Questions #3 - The Covariate Shift Trap

You’re in a Machine Learning Engineer interview at OpenAI, and the lead researcher asks: “We have a massive dataset of human expert demonstrations for this task.

Posted on January 29, 2026

Advanced Reinforcement Learning Interview Questions #2 - The Mean Collapse Trap

You’re in a Machine Learning interview at Tesla, and the interviewer asks: “We have an imitation learning agent that is underfitting complex human driving data.

Posted on January 28, 2026

Advanced Reinforcement Learning Interview Questions #1 - The Stationarity Trap

You’re in a Machine Learning Engineer interview at Anthropic, and the interviewer drops this on you: “In Supervised Learning, we assume data is IID (Independent and Identically Distributed).

Posted on January 27, 2026

Computer Vision Interview Questions #25 – The Contrastive Shortcut Trap

You’re in a Computer Vision interview at OpenAI.

Posted on January 26, 2026

Computer Vision Interview Questions #24 - The Signal-to-Noise Trap

You’re in a Senior AI Interview at Google DeepMind.

Posted on January 25, 2026

Computer Vision Interview Questions #23 - The Flamingo Architecture Trap

You’re in a Senior AI Interview at Google DeepMind.

Posted on January 24, 2026

Computer Vision Interview Questions #22 - The Interactive Segmentation Trap

You’re in a final-round Computer Vision interview at OpenAI.

Posted on January 23, 2026

Computer Vision Interview Questions #21 – The Data Scaling Trap

You’re in a Senior Robotics interview at NVIDIA.

Posted on January 22, 2026

Computer Vision Interview Questions #20 - The Low-Contrast Bias Trap

You’re in a Senior Computer Vision interview at OpenAI.

Posted on January 21, 2026

Computer Vision Interview Questions #19 – The Fine-Grained Invariance Trap

You’re in a Senior Computer Vision interview at Google DeepMind.

Posted on January 20, 2026

Computer Vision Interview Questions #18 – The Compositionality Trap

You’re in a Senior Computer Vision interview at Google DeepMind.

Posted on January 19, 2026

Computer Vision Interview Questions #17 - The Counting Hallucination Trap

You’re in a Senior AI Interview at OpenAI.

Posted on January 18, 2026

Computer Vision Interview Questions #16 - The Contrastive Hard Negative Trap

How aggressive batch difficulty pushes CLIP from semantic understanding into pixel-level cheating.

Posted on January 17, 2026

Computer Vision Interview Questions #15 – The Multimodal Geometry Trap

How contrastive pretraining collapses spatial information, and why LLaVA-style models must use penultimate patch embeddings.

Posted on January 16, 2026

Computer Vision Interview Questions #14 – The Attention vs MLP Responsibility Trap

Why attention handles communication, but MLPs do the real computation in modern vision transformers.

Posted on January 15, 2026

Computer Vision Interview Questions #13 – The Generalization Gap Trap

Why disabling data augmentation during evaluation is the only way to measure real generalization.

Posted on January 14, 2026

Computer Vision Interview Questions #12 - The Large Batch Generalization Trap

Why linear learning-rate scaling silently kills SGD’s implicit regularization and destroys test accuracy.

Posted on January 13, 2026

Computer Vision Interview Questions #11 – The CLIP Prompt Variance Trap

Why single-text prompts are noisy estimates in high-dimensional space—and how centroid stabilization fixes zero-shot accuracy.

Posted on January 12, 2026

Computer Vision Interview Questions #10 – The Early vs Slow Fusion Trap

The hidden activation-memory cost of keeping time alive in deep video networks.

Posted on January 11, 2026

Computer Vision Interview Questions #9 – The Tiny Object Trap

Why Faster R-CNN still beats YOLO when defects are smaller than your receptive field.

Posted on January 10, 2026

Computer Vision Interview Questions #8 – The Zero-Padding Distribution Trap

Why injecting zeros at image borders silently breaks translation equivariance and corrupts edge statistics.

Posted on January 9, 2026

Computer Vision Interview Questions #7 – The Receptive Field Trap

Why replacing a 7×7 convolution with three 3×3 layers isn’t about parameters — it’s about nonlinear expressivity.

Posted on January 8, 2026

Computer Vision Interview Questions #6 – The Model Capacity Trap

Why shrinking an overfitting network makes optimization harder, and why over-parameterization is the safer bet.

Posted on January 7, 2026

Computer Vision Interview Questions #5 – The Dead ReLU Trap

Why lowering the learning rate can’t resurrect dead neurons, and how architectural gradient flow actually fixes it.

Posted on January 6, 2026

Computer Vision Interview Questions #4 - The L1 vs L2 Geometry Trap

Why rotating the feature space instantly exposes candidates who don’t understand metric invariance.

Posted on January 5, 2026

Computer Vision Interview Questions #3 - The Low Initial Loss Trap

Why a Softmax loss of 0.05 at step zero doesn’t mean your model is brilliant — it means your training pipeline is broken.

Posted on January 4, 2026

Computer Vision Interview Questions #2 – The Redundant Data Trap

Why labeling 500k more images from the same distribution won’t fix overfitting—and how active learning actually moves the decision boundary.

Posted on January 3, 2026

Top 25 ML System Design Interview Questions

Posted on January 1, 2026

Computer Vision Interview Questions #1 – The Translation Equivariance Efficiency Trap

Why CNNs learn one visual feature once, while dense networks must relearn it at every pixel.

Posted on December 31, 2025

Advanced NLP Interview Questions #25 – The Back-Translation Direction Trap

Why generating synthetic sources (not targets) is the only way to preserve decoder fluency in production NMT systems.

Posted on December 30, 2025

Advanced NLP Interview Questions #24 – The Confidence Calibration Trap

Why model cascades fail not on routing logic, but on overconfident cheap models that never escalate.

Posted on December 29, 2025

Advanced NLP Interview Questions #23 – The Curriculum Learning Trap

Why shuffling General, Code, and Math data together silently caps reasoning performance, and how staged pretraining unlocks true chain-of-thought.

Posted on December 28, 2025

AI Foundations - From LeNet-1 in 1989 to Today’s Deep Learning Revolution

Explore the origins of modern deep learning with a look back at Yann LeCun's groundbreaking LeNet-1 demo from 1989. This article delves into the foundational concepts of convolutional neural networks, their evolution, and what today's AI engineers can learn from the elegant simplicity of early models.

Posted on June 11, 2025

Building Robust RAG Systems - Addressing Hallucination and Retrieval Challenges

This doc covers everything from the basics of RAG to advanced techniques for addressing hallucination and retrieval challenges. It also includes practical insights and best practices for implementing RAG in real-world applications.

Posted on February 22, 2025

Understanding the Foundations of Repository-Level AI Software Engineering with RepoGraph

Introducing RepoGraph, a graph-based module that maps out the structure of an entire codebase.

Posted on February 20, 2025

Comprehensive Overview of Running and Fine-tuning Open Source LLMs

Running and fine-tuning open-source LLMs have become essential practices in the field of natural language processing (NLP). This guide provides a detailed overview of the processes involved, the tools and frameworks used, and best practices for optimizing performance.

Posted on February 16, 2025

What is an AI Agent?

AI agents are software programs that perform tasks autonomously, using natural language to interact with users and other systems. They are designed to learn and adapt to new situations, making them increasingly useful across a wide range of applications.

Posted on February 16, 2025

DeepSeek-R1 and V3 - Advancing AI Reasoning with Reinforcement Learning

A deep dive into DeepSeek's latest models, exploring their architecture, training methodology, and emergent reasoning capabilities.

Posted on February 15, 2025

© 2026 Aria
