Master Data Science

Latest posts

LLM_log #012: Introduction to Diffusion Models — From Noise to Geometry to Sampling

Highlights: In this post we build a complete understanding of diffusion models from the ground up — what they are, how images are represented, how the network is trained, what it geometrically learns, and finally how we turn that geometry into samples using DDIM and DDPM. Every formula is accompanied by concrete numbers you can verify by hand. So let’s begin! Tutorial Overview: What Are Diffusion Models? How Images Are Represented The Denoiser Network Noise…
Read more

LLM_log #011: Diffusion Models — From Noise to Wolves, Training from Scratch

In this post we build a complete diffusion model from scratch — training a UNet on a custom dataset, implementing the full DDPM pipeline, and understanding the math that makes iterative denoising work. We cover noise schedules, the reparameterization trick, FID evaluation, and three diffusion objectives (ε, x₀, v). By the end you’ll have generated novel images from pure Gaussian noise, and understand why diffusion models overtook GANs as the dominant paradigm for image generation.…
Read more
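The reparameterization trick mentioned in the excerpt above is what lets us jump to any noise level t in a single step instead of adding noise iteratively. A minimal NumPy sketch, assuming a linear beta schedule and T = 1000 (illustrative settings, not necessarily the post's exact configuration):

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)          # cumulative product, shrinks toward 0

def q_sample(x0, t, eps):
    """Sample x_t ~ q(x_t | x_0) in closed form (reparameterization trick)."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)             # a toy 8-dim "image"
eps = rng.standard_normal(8)
x_mid = q_sample(x0, 500, eps)          # halfway: a mix of signal and noise
x_end = q_sample(x0, T - 1, eps)        # near t = T: almost pure noise
```

Because alpha_bar[T-1] is tiny, x_end is nearly indistinguishable from the noise eps, which is exactly why generation can start from pure Gaussian noise.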

LLM_log #010: Understanding Diffusion Models Through 1D Experiments — From DDPM to Manifold Compactness

Highlights: We implement a complete DDPM from scratch on 1D sine waves — same math as image diffusion, but every intermediate state is plottable. We track 100 parallel trajectories, measure when the model “commits” to a specific sample, then design a controlled experiment that reveals manifold compactness as the key factor determining whether diffusion succeeds or fails. So let’s begin! Tutorial Overview: Why 1D? The Dataset Forward Process Model and Training Generating from Noise What…
Read more

LLM_log #009: An Image is Worth 16×16 Words — From Transformers to Vision Transformers and SWIN

Highlights: In this post, we take a deep dive into the architecture that changed everything — the Transformer — and trace its evolution from NLP into computer vision. We start with the original encoder-decoder model, walk through self-attention and multi-head attention step by step, and then show how Vision Transformers (ViT) apply the exact same mechanism to image patches instead of words. Along the way, we answer the questions that trip everyone up: if we…
Read more
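The "16×16 words" idea in the title above is easy to see in code: a ViT just slices the image into non-overlapping patches and flattens each one into a vector. A minimal NumPy sketch, assuming an illustrative 64×64 single-channel image:

```python
import numpy as np

def patchify(img, patch=16):
    """Split an (H, W) image into non-overlapping patch×patch tiles,
    each flattened into a vector — the 'words' a ViT attends over."""
    H, W = img.shape
    assert H % patch == 0 and W % patch == 0
    return (img.reshape(H // patch, patch, W // patch, patch)
               .transpose(0, 2, 1, 3)          # (rows, cols, patch, patch)
               .reshape(-1, patch * patch))    # (num_patches, patch²)

img = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
tokens = patchify(img)
print(tokens.shape)   # (16, 256): a 4×4 grid of patches, each a 256-dim vector
```

A real ViT then linearly projects each flattened patch and adds positional embeddings, but the sequence-of-patches structure is already visible here.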

LLM_log #008: CLIP — Understanding Multimodal AI Through Step-by-Step Experiments

Highlights: In this post, you’ll learn how CLIP connects images and text in a shared embedding space — enabling zero-shot image classification, semantic search, and visual perception scoring without any task-specific training. We start from the ground up with Vision Transformers, walk through CLIP’s contrastive learning architecture, run hands-on embedding experiments, and then push CLIP to its limits with a real-world challenge: can it tell cheap bedrooms from expensive ones using actual house sale…
Read more

LLM_log #007: From Random Text to Coherent Language – Pretraining Your First Large Language Model

Highlights: In this guide, you’ll learn how to pretrain a large language model from scratch — implementing training loops, evaluation metrics, and advanced text generation strategies. We’ll build a complete GPT-style training pipeline, watch it evolve from random gibberish to coherent text, and explore techniques like temperature scaling and top-k sampling. By the end, you’ll load professional pretrained weights into your own architecture. Source: This is part of our ongoing “Building LLMs from Scratch” series…
Read more

LLM_log #006: Implementing GPT-2 from Scratch – Raschka

Highlights: In this post, we build a complete GPT-2 model (124 million parameters) from scratch in PyTorch. We implement every component — layer normalization, GELU activations, the feed forward network, shortcut connections — and wire them into a transformer block that we stack 12 times to create the full architecture. By the end, you will have a structurally complete GPT model that can generate text token by token. We also weave in key insights from…
Read more

LLM_log #005: Implementing Attention Mechanisms — From Simplified Self-Attention to Multi-Head Attention

Highlights: In this post, we will implement four types of attention mechanisms step by step. We start with a simplified self-attention to build intuition, then move to self-attention with trainable weight matrices that form the backbone of modern LLMs. Next, we add causal masking and dropout to enforce temporal order during text generation. Finally, we extend everything to multi-head attention — the workhorse behind GPT, Claude, and LLaMA. Every formula is accompanied by concrete numbers…
Read more
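The first, simplified variant described above fits in a few lines of NumPy: attention scores are plain dot products between token embeddings, softmaxed into weights that mix the rows. The 3-token, 4-dim embeddings below are toy numbers, not from the post:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# 3 tokens, each a 4-dim embedding (toy values)
X = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 0.0]])

scores = X @ X.T            # similarity of every token with every other token
weights = softmax(scores)   # each row is a probability distribution
context = weights @ X       # each token becomes a weighted mix of all tokens
print(weights.sum(axis=1))  # [1. 1. 1.]
```

The trainable variant replaces the raw X in the score computation with learned query, key, and value projections of X; the softmax-and-mix machinery stays the same.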

LLM_log #004: From Scratch: Working with Text Data — Embeddings for LLMs

Highlights: Before we can build or train a Large Language Model, we need to solve a fundamental problem — LLMs cannot process raw text. In today’s post, we’ll walk through the complete pipeline that converts human-readable text into numerical vectors that a neural network can work with. We’ll cover tokenization, vocabulary building, byte pair encoding, sliding window sampling, and how token and positional embeddings come together to form the final input to a GPT-like transformer.…
Read more
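The sliding-window sampling mentioned above pairs each chunk of token IDs with the same chunk shifted by one position, so the model learns next-token prediction. A minimal pure-Python sketch (the token IDs are stand-ins, not real BPE output):

```python
def sliding_windows(token_ids, context_len, stride):
    """Yield (input, target) pairs; each target is its input shifted by one token."""
    pairs = []
    for i in range(0, len(token_ids) - context_len, stride):
        x = token_ids[i : i + context_len]
        y = token_ids[i + 1 : i + context_len + 1]
        pairs.append((x, y))
    return pairs

ids = list(range(10))                          # stand-in for BPE token IDs
pairs = sliding_windows(ids, context_len=4, stride=4)
print(pairs[0])   # ([0, 1, 2, 3], [1, 2, 3, 4])
```

With stride equal to context_len the windows tile the text without overlap; a smaller stride produces overlapping training examples.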

LLM_log #003: Understanding Large Language Models – An Illustrative Explanation of Transformers

🚀 Understanding Large Language Models: A Complete Visual Guide. 🎯 What You’ll Learn: Large Language Models like GPT-4, Claude, and Llama have transformed how we interact with AI, but understanding what actually happens when you type “The cat sat on the” and the model predicts “mat” can feel like opening a black box. In this comprehensive guide, we’ll demystify the entire process by following a single example through every step of a Transformer’s architecture. You’ll…
Read more

LLM_log #002: Tokenization in Large Language Modelling

Understanding Tokenization in Large Language Models — Why GPT-4 Can’t Count the Letters in “Strawberry”. Vladimir Matic, PhD – DataHacker.rs – January 2025. 🍓 “How many letter ‘r’s are in the word strawberry?” GPT-4’s answer: “two”. The correct answer: three (s-t-r-a-w-b-e-r-r-y). In early 2024, this simple question stumped GPT-4. A billion-dollar AI model failed at counting letters in a 10-letter word. Why? The answer lies…
Read more
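The counting failure described above comes down to what the model actually sees: subword tokens, not letters. A sketch in pure Python — the split ["str", "aw", "berry"] and the IDs are purely illustrative, not an actual GPT-4 tokenization:

```python
# What you see: individual characters — counting 'r' is trivial.
word = "strawberry"
print(word.count("r"))          # 3

# What an LLM sees: opaque subword token IDs. This split is
# illustrative; real BPE vocabularies differ.
tokens = ["str", "aw", "berry"]
token_ids = [1042, 675, 8299]   # hypothetical vocabulary IDs

# The model receives only the ID sequence; the letter 'r' appears
# nowhere in [1042, 675, 8299], which is why letter-level questions
# are hard for it.
```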

LLM_log #001: Understanding Large Language Models: From Word Counting to Neural Networks

Part 1: The Evolution of Text Representation. DataHacker.rs | January 2026. 🚀 The AI Revolution: How We Got Here. The period from 2012 to today marked a fundamental transformation in artificial intelligence. Deep neural networks enabled systems that can understand and generate human language with unprecedented accuracy. (Figure 0: From word embeddings to reasoning-capable AI systems – the complete LLM timeline.) The ChatGPT Moment: November 2022 brought ChatGPT – an application that reached 1 million…
Read more

Featured posts

#004 How to smooth and sharpen an image in OpenCV?

Highlight: In our previous posts we mastered some basic image processing techniques, and now we are ready to move on to more advanced concepts. In this post, we are going to explain how to blur and sharpen images. When we want to blur or sharpen an image, we need to apply a linear filter. You will learn about several types of filters that are often used in image processing. In addition, we will also show…
Read more

#000 How to access and edit pixel values in OpenCV with Python?

Highlight: Welcome to another datahacker.rs post series! We are going to talk about digital image processing using OpenCV in Python. In this series, you will be introduced to the basic concepts of OpenCV and you will be able to start writing your first scripts in Python. Our first post will provide you with an introduction to the OpenCV library and some basic concepts that are necessary for building your computer vision applications. You will learn…
Read more

#006 Linear Algebra – Inner or Dot Product of two Vectors

Highlight: In this post we will review one of the fundamental operators in Linear Algebra. It is known as the Dot product or the Inner product of two vectors. Most of you are already familiar with this operator, and it’s actually quite easy to explain. And yet, we will give some additional insights as well as some basic info on how to use it in Python. Tutorial Overview: Dot product: Definition and properties; Linear functions…
Read more
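As a quick companion to the excerpt above, here is the dot product computed both by definition and via NumPy, plus its geometric reading. The vectors are arbitrary toy values:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -5.0, 6.0])

# By definition: sum of elementwise products
manual = sum(x * y for x, y in zip(a, b))   # 1·4 + 2·(−5) + 3·6 = 12

# Equivalent NumPy forms
print(np.dot(a, b))   # 12.0
print(a @ b)          # 12.0

# Geometric reading: a·b = |a||b|cosθ, so the normalized dot product
# is the cosine of the angle between the two vectors.
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
```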

#008 Linear Algebra – Eigenvectors and Eigenvalues

Highlight: In this post we will talk about eigenvalues and eigenvectors. These concepts can be quite puzzling for many machine learning and linear algebra practitioners. However, with the solid introduction that we provided in our previous posts, we will be able to explain eigenvalues and eigenvectors and build a good visual interpretation and intuition for these topics. We give some Python recipes as well. Tutorial Overview: Intuition about eigenvectors…
Read more

#011 TF How to improve model performance with Data Augmentation?

Highlights: In this post we will show the benefits of data augmentation techniques as a way to improve the performance of a model. This method is especially beneficial when we do not have enough data at our disposal. Tutorial Overview: Training without data augmentation; What is data augmentation?; Training with data augmentation; Visualization. 1. Training without data augmentation. A familiar question is “why should we use data augmentation?”. So, let’s see the answer. In order…
Read more
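To give a minimal flavor of what augmentation does, here is a NumPy flip on a tiny toy "image". Real pipelines also rotate, crop, shift, and jitter color; the 2×2 array below is only for illustration:

```python
import numpy as np

img = np.array([[1, 2],
                [3, 4]])

# Horizontal flip: mirrors the columns — a label-preserving transform
# for most natural images, effectively doubling the dataset for free.
flipped = np.fliplr(img)
print(flipped)   # [[2 1]
                 #  [4 3]]

# Stack originals and flips to form an augmented batch
batch = np.stack([img, flipped])
print(batch.shape)   # (2, 2, 2)
```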