Beyond Vectors: The Case for Sparse Embeddings & SPLADE
Dense vectors are magical at capturing semantics, but they fail when you need exact matches. This article unpacks the Vocabulary Mismatch Problem and introduces SPLADE, a neural approach that combines the precision of keyword search with the intelligence of transformers. Learn why sparse embeddings matter and how to architect hybrid search for production.
RAG Systems Engineering: The Structure-Aware Data Pipeline
Building production RAG systems is fundamentally an ETL (Extract, Transform, Load) challenge. We explore why documents must be treated as hierarchical data structures, not string soup. Discover structure-aware splitting, metadata injection, and multi-resolution indexing strategies that transform data quality and eliminate hallucinations.
Context Plumbing: From Request-Response to Event Sourcing for Agents
We are watching the AI industry commit the original sin of the web all over again. For the last two years, we've obsessed over Context Engineering, treating Agents like static, PHP-era websites. When a user asks a question, the system performs a "database fetch" on demand, pulling context just in time to generate an answer. We haven't reinvented software; we've just replaced the mouse click with a prompt, keeping the same brittle, pull-based architecture underneath....
How Do LLMs Read?
Everyone talks about the Neural Network, but the Tokenizer is the unsung hero of LLMs. This post explains what a Tokenizer actually does, why we use Byte Pair Encoding (BPE), and how these tokens bridge the gap between rigid integers and meaningful vector embeddings in models like GPT-4.
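To make the BPE idea concrete, here is a toy merge loop in Python: starting from individual characters, it repeatedly fuses the most frequent adjacent pair into a new token. This is only a sketch of the training-time merge rule, not GPT-4's actual tokenizer.

```python
from collections import Counter

def most_common_pair(tokens):
    """Count adjacent token pairs and return the most frequent one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with one merged token."""
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

# Start from single characters and apply a few merges.
tokens = list("low lower lowest")
for _ in range(3):
    tokens = merge_pair(tokens, most_common_pair(tokens))
```

After three merges the common substring "low" has become a single token, which is exactly how frequent subwords end up as one integer ID in a real vocabulary.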
The Thesis ā Why Dictation is the New Interface
High-bandwidth input is the bottleneck of modern computing. Voice agents are for delegation; Voice dictation is for creation. This post explores why we need āAgentic Dictationā to match the speed of our thoughts.
Deep Dive: Keyword Search
Conventional keyword search matches the words in a user's query against the words in documents. It relies on an inverted index data structure for efficient matching and ranking by relevance.
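The core idea fits in a few lines of Python. This toy inverted index (made-up documents, no stemming or relevance scoring) maps each word to the set of documents containing it, so a query only touches the postings for its own words instead of scanning every document:

```python
from collections import defaultdict

docs = {
    0: "the quick brown fox",
    1: "the lazy dog",
    2: "quick brown dogs are quick",
}

# Build the inverted index: word -> set of document IDs containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    """Return doc IDs containing every query word (boolean AND)."""
    postings = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*postings) if postings else set()

search("quick brown")  # -> {0, 2}
```

Production engines layer tokenization, stemming, and a ranking function such as BM25 on top, but the index shape is the same.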
Building RAG: All things retrieval
Retrieval is the backbone of RAG. We explore the critical steps often missed by developers: proper chunking strategies, the "Librarian" analogy for vector vs. keyword search, and solving the math problem of Hybrid Search using Reciprocal Rank Fusion (RRF).
Revolutionizing Question-and-Answer Systems
LLMs revolutionize question-and-answer systems with their exceptional language understanding and writing skills. But training is a form of lossy compression, so retrieving precise facts from model weights alone is unreliable. Leveraging LLMs' reading-comprehension abilities instead turns question answering into a reading task over retrieved text, forming the basis of RAG systems, which shift the hard part of answering questions to efficient knowledge-base search.