The rider doing 15 kmph is not the product. The forecasting model that put the right item in the right store 12 hours before you ordered it, that is the product.
The Wrong Question Everyone Asks
When Deepinder Goyal posted that 10-minute delivery runs at an average of 15 kmph, most people were surprised. They assumed riders were sprinting through traffic. They were not.
The engineering problem was never "how do we move a package fast."
The real problem is: how do we know what to stock in 451 tiny warehouses, across an entire city, before a single customer has placed a single order today?
That is a fundamentally different problem. And it is solved by a combination of distributed systems, in-memory databases, and ML-based demand forecasting, not by putting a timer on a rider's screen.
What a Dark Store Actually Is
A dark store is a small warehouse, typically 2,500 to 4,000 sq ft, with no walk-in customers, no retail display, and no billing counter. It is a pure fulfillment center designed for one thing: picking and packing an order in under 90 seconds.
Blinkit operates 451+ of these across India. Zepto, Swiggy Instamart, and BigBasket BB Now operate similar networks.
The 2 km coverage radius per store is not a business decision. It is a physics constraint.
- Average rider speed: ~15 kmph
- Target delivery time: 10 minutes
- Max distance in 10 min: 15 x (10/60) = 2.5 km
- Subtract pack time (~2.5 min): effective riding time = 7.5 min
- Maximum viable radius: 15 x (7.5/60) ~= 1.875 km ~= 2 km
Every quick-commerce player in India independently converged on roughly a 2 km radius. This is not a coincidence; the math gives exactly one answer given Indian urban density and traffic conditions.
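The arithmetic above is simple enough to sketch directly; the speed, SLA, and pack-time figures are the ones quoted in the list.

```go
package main

import "fmt"

// viableRadiusKm computes the maximum dark-store coverage radius given
// average rider speed (kmph), the delivery SLA, and in-store pack time.
func viableRadiusKm(speedKmph, slaMin, packMin float64) float64 {
	ridingMin := slaMin - packMin       // time actually spent on the road
	return speedKmph * ridingMin / 60.0 // distance = speed x time
}

func main() {
	r := viableRadiusKm(15, 10, 2.5)
	fmt.Printf("max viable radius: %.3f km\n", r) // prints 1.875 km
}
```

Plug in any realistic combination of Indian rider speeds and a 10-minute SLA and the answer stays pinned near 2 km, which is why every player lands on the same radius.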
The Knapsack Problem Running in Production
Each dark store has finite shelf space. You cannot stock all 10,000+ SKUs in a 3,000 sq ft space. So the system must decide, for each store, which 2,000 to 3,000 products to carry.
This is the 0/1 Knapsack Problem.
- Each SKU has a weight: shelf space it occupies
- Each SKU has a value: expected demand x margin for that pin code
- Constraint: total shelf space is fixed
- Objective: maximize value under the space limit
This is NP-hard in the general case. In practice, it is solved approximately using ML: a demand forecasting model trained per pin code, per time window, supplies the value estimates.
The model runs continuously. It does not wait for orders to arrive. It pre-positions inventory based on predicted demand.
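The production allocation is driven by learned forecasts, but the shape of the decision can be sketched with the classic greedy value-density heuristic for 0/1 knapsack. SKU names and numbers below are illustrative, not real catalog data.

```go
package main

import (
	"fmt"
	"sort"
)

// SKU models one product for the shelf-allocation decision.
type SKU struct {
	Name  string
	Space int     // shelf units occupied (knapsack weight)
	Value float64 // expected demand x margin for this pin code (knapsack value)
}

// planShelf greedily fills the shelf by value density (value per shelf
// unit). This is an approximation; exact 0/1 knapsack is NP-hard.
func planShelf(skus []SKU, capacity int) []SKU {
	sort.Slice(skus, func(i, j int) bool {
		return skus[i].Value/float64(skus[i].Space) >
			skus[j].Value/float64(skus[j].Space)
	})
	var picked []SKU
	used := 0
	for _, s := range skus {
		if used+s.Space <= capacity {
			picked = append(picked, s)
			used += s.Space
		}
	}
	return picked
}

func main() {
	catalog := []SKU{
		{"milk", 2, 9.0},
		{"protein-bar", 1, 6.0},
		{"detergent", 4, 5.0},
		{"chips", 1, 3.5},
	}
	for _, s := range planShelf(catalog, 4) {
		fmt.Println(s.Name) // protein-bar, milk, chips
	}
}
```

The real system swaps the hand-written Value field for the per-pin-code forecast output, and re-runs the allocation as predictions change.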
The Forecasting Signals
The demand forecast model consumes multiple signals simultaneously.
Historical Order Data Per Pin Code
Every order ever placed in a 500m radius is a training point. The model knows that a store in Koramangala sells 3x more protein bars on weekday mornings than a store in Laxmi Nagar. Different pin codes, different demand curves, different optimal stock plans.
Real-Time Weather
Rain in Mumbai spikes demand for Maggi, chai ingredients, and hot beverages within 20 to 30 minutes of rain starting. The forecasting pipeline ingests weather APIs and adjusts restock recommendations before the demand wave hits.
Festival and Local Calendar
Diwali demand in West Delhi looks nothing like Eid demand in Old Delhi. The model encodes regional festival calendars and adjusts per-store stock plans weeks in advance. A store near a stadium gets restocked differently on match days.
Time-of-Day Demand Patterns
7 AM in a residential area: eggs, bread, milk. 11 PM: chips, soft drinks, ice cream. The store's stock composition is not static; it shifts based on time-of-day demand curves learned from historical data.
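Taken together, the four signals above form the model's input for one (store, SKU, time window) prediction. The struct and heuristic below are a stand-in to show the shape of that input; the field names and multipliers are illustrative, not any company's actual schema or model.

```go
package main

import "fmt"

// ForecastInput bundles the signals described above for one
// (store, SKU, time-window) prediction. Field names are illustrative.
type ForecastInput struct {
	PinCode        string
	SKU            string
	HourOfDay      int
	IsFestival     bool    // regional festival / local-event calendar flag
	RainMMLastHour float64 // real-time weather signal
	TrailingDemand float64 // historical orders for this window and pin code
}

// predictDemand stands in for the learned model: a hand-written
// heuristic that bumps demand for rain and festivals.
func predictDemand(in ForecastInput) float64 {
	d := in.TrailingDemand
	if in.RainMMLastHour > 0 {
		d *= 1.3 // rain spike, e.g. Maggi and chai ingredients
	}
	if in.IsFestival {
		d *= 1.5
	}
	return d
}

func main() {
	in := ForecastInput{PinCode: "560034", SKU: "maggi-70g",
		HourOfDay: 19, RainMMLastHour: 4.2, TrailingDemand: 100}
	fmt.Printf("predicted units: %.0f\n", predictDemand(in))
}
```

The real model replaces the hard-coded multipliers with learned weights, but the input contract is the same: every signal is a feature, and the output is a per-store, per-window stock recommendation.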
Redis: Why Not Postgres
When you open Blinkit or Zepto, the app shows you only items that are physically in stock at your nearest dark store right now. It does not show you the full catalog.
That filtering happens via a Redis lookup: GET stock:{store_id}:{item_id} returning the current quantity or 0.
Why Redis and Not a Relational Database?
This lookup runs for every user, every session, every scroll event. At 10M+ daily active users browsing simultaneously, a Postgres query per item per user would collapse the database under read load.
Redis handles this because everything lives in memory, with no disk I/O, single-digit microsecond read latency, atomic DECR operations when stock is reserved at checkout, and TTL-based expiry for stale data.
Restock levels in Redis are written by the forecasting pipeline, not by user orders; checkout only decrements them. When the model decides that a store should carry 200 units of a product, that figure gets written into the online inventory layer before the items are physically moved to the store, and the app then reflects the ground truth of what is on that shelf right now.
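The checkout-time reservation mentioned above can be sketched with Redis's atomic DECRBY/INCRBY pair (go-redis exposes both as DecrBy and IncrBy). The small interface below stands in for the client so the logic is testable without a server; Redis's DECRBY happily goes negative, so the sketch compensates when the decrement overshoots.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// StockCounter is the slice of Redis behaviour the reservation needs;
// a *redis.Client's DecrBy/IncrBy calls provide the same semantics.
type StockCounter interface {
	DecrBy(key string, n int64) (int64, error)
	IncrBy(key string, n int64) (int64, error)
}

// ReserveStock atomically decrements stock at checkout and rolls its
// own decrement back if the result goes negative (insufficient stock).
func ReserveStock(c StockCounter, storeID, itemID string, qty int64) error {
	key := fmt.Sprintf("stock:%s:%s", storeID, itemID)
	left, err := c.DecrBy(key, qty)
	if err != nil {
		return err
	}
	if left < 0 {
		c.IncrBy(key, qty) // compensate: this order cannot be filled
		return errors.New("insufficient stock")
	}
	return nil
}

// memCounter is an in-memory stand-in for Redis, for demonstration.
type memCounter struct {
	mu sync.Mutex
	m  map[string]int64
}

func (c *memCounter) DecrBy(key string, n int64) (int64, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] -= n
	return c.m[key], nil
}

func (c *memCounter) IncrBy(key string, n int64) (int64, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] += n
	return c.m[key], nil
}

func main() {
	c := &memCounter{m: map[string]int64{"stock:blr-042:maggi-70g": 3}}
	fmt.Println(ReserveStock(c, "blr-042", "maggi-70g", 2)) // <nil>
	fmt.Println(ReserveStock(c, "blr-042", "maggi-70g", 2)) // insufficient stock
}
```

Because each failed reservation compensates only its own decrement, concurrent overshoots stay consistent; a reader may briefly observe a negative count, which is an accepted trade-off of this pattern.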
The Order Flow: What Happens in 10 Minutes
A simplified quick-commerce flow looks like this:
- T+0:00 Customer taps "Place Order" -> Store Assignment Service runs haversine distance across active stores within 3 km, picks the nearest one with inventory, and writes the order to DynamoDB
- T+0:05 Picker at the dark store gets a handheld notification showing exact shelf coordinates and an optimized pick path
- T+2:30 Order is packed, sealed, and labelled while rider assignment runs in parallel
- T+3:00 Rider picks up the order and leaves the store
- T+8:00 Order is delivered
The 2.5 minute pack time is not humans being fast. It is a store layout designed by engineers, not merchandisers. Every SKU in a dark store is positioned to minimize the picker's walking distance across the most common order combinations. Shelf position is an optimization output, not a display decision.
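The pick-path idea can be sketched as a greedy nearest-neighbour walk over shelf coordinates. This is a common routing heuristic, not the actual layout optimisation these companies run; coordinates and SKU names are made up.

```go
package main

import (
	"fmt"
	"math"
)

// Shelf is a pick location inside the store, in metres from the entrance.
type Shelf struct {
	SKU  string
	X, Y float64
}

func dist(a, b Shelf) float64 {
	return math.Hypot(a.X-b.X, a.Y-b.Y)
}

// pickPath orders one order's items by a greedy nearest-neighbour walk
// starting from the entrance at (0,0). A heuristic, not optimal.
func pickPath(items []Shelf) []string {
	cur := Shelf{X: 0, Y: 0}
	remaining := append([]Shelf(nil), items...)
	var path []string
	for len(remaining) > 0 {
		best := 0
		for i := 1; i < len(remaining); i++ {
			if dist(cur, remaining[i]) < dist(cur, remaining[best]) {
				best = i
			}
		}
		cur = remaining[best]
		path = append(path, cur.SKU)
		remaining = append(remaining[:best], remaining[best+1:]...)
	}
	return path
}

func main() {
	order := []Shelf{{"ice-cream", 12, 3}, {"bread", 2, 1}, {"milk", 4, 2}}
	fmt.Println(pickPath(order)) // [bread milk ice-cream]
}
```

The production version inverts this: rather than routing around a fixed layout, it chooses shelf positions so that the most common order combinations produce short walks in the first place.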
DynamoDB and Why Zepto Migrated
Zepto migrated their order management system from MongoDB to DynamoDB and reported 60% faster order creation as a result.
The reasons are straightforward for this use case:
- Single-digit millisecond latency at scale with no tuning
- No schema migrations as order structure evolves
- Auto-scaling during demand spikes
- Native event-driven integration for order state transitions
Order state is modeled as a finite state machine: PLACED -> ASSIGNED -> PICKING -> PACKED -> DISPATCHED -> DELIVERED.
Each transition emits an event consumed by downstream services like customer notifications, analytics, rider payout calculation, and inventory decrement.
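The state machine and its transition guard follow directly from the states listed above. This is a minimal sketch; the emit callback stands in for whatever event bus the downstream services actually consume.

```go
package main

import "fmt"

type OrderState string

const (
	Placed     OrderState = "PLACED"
	Assigned   OrderState = "ASSIGNED"
	Picking    OrderState = "PICKING"
	Packed     OrderState = "PACKED"
	Dispatched OrderState = "DISPATCHED"
	Delivered  OrderState = "DELIVERED"
)

// transitions encodes the legal forward edges of the order FSM.
var transitions = map[OrderState]OrderState{
	Placed:     Assigned,
	Assigned:   Picking,
	Picking:    Packed,
	Packed:     Dispatched,
	Dispatched: Delivered,
}

type Order struct {
	ID    string
	State OrderState
}

// Advance moves the order to its next state and emits an event for
// downstream consumers (notifications, analytics, payouts, inventory).
func (o *Order) Advance(emit func(orderID string, from, to OrderState)) error {
	next, ok := transitions[o.State]
	if !ok {
		return fmt.Errorf("no transition out of %s", o.State)
	}
	from := o.State
	o.State = next
	emit(o.ID, from, next)
	return nil
}

func main() {
	o := &Order{ID: "ord-1", State: Placed}
	for o.State != Delivered {
		o.Advance(func(id string, from, to OrderState) {
			fmt.Printf("%s: %s -> %s\n", id, from, to)
		})
	}
}
```

Keeping the legal edges in a single map means an out-of-order write (say, DELIVERED before DISPATCHED) is rejected at the application layer rather than silently stored.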
Zepto Maps: Why Google Maps Is Not Enough
Google Maps knows roads. It does not know which gate of a gated society is the actual delivery entrance, that building C is a four-minute walk from the main gate, that a lane becomes one-way after 8 PM, or that the fastest path to an apartment cuts through a parking lot invisible on standard maps.
Zepto built a proprietary mapping layer trained on millions of completed deliveries. Every trip is a data point: the actual GPS path the rider took, the actual time taken, and the actual gate used. Over time, the system learns the real last-50-metre path to every delivery address in its network.
That is the difference between "navigate to the pin" and "navigate to the door."
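One simple way such a layer could learn the real drop point is to aggregate the GPS endpoints of completed deliveries per address. The centroid sketch below illustrates the idea only; it is not Zepto's actual method, and the address keys and coordinates are invented.

```go
package main

import "fmt"

// Point is a GPS fix recorded at the moment a delivery was completed.
type Point struct{ Lat, Lng float64 }

// learnedDropPoints averages completed-delivery endpoints per address,
// so repeated trips pull the pin toward the gate riders actually use.
func learnedDropPoints(endpoints map[string][]Point) map[string]Point {
	out := make(map[string]Point)
	for addr, pts := range endpoints {
		var sLat, sLng float64
		for _, p := range pts {
			sLat += p.Lat
			sLng += p.Lng
		}
		n := float64(len(pts))
		out[addr] = Point{sLat / n, sLng / n}
	}
	return out
}

func main() {
	history := map[string][]Point{
		"tower-c-gate": {{12.9352, 77.6245}, {12.9354, 77.6243}},
	}
	fmt.Println(learnedDropPoints(history)["tower-c-gate"])
}
```

A production system would also learn the path, timing, and entrance choice per address, but the principle is the same: every completed trip is a labelled example of where "the door" really is.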
Why All Four Companies Converged on the Same Stack
Blinkit, Zepto, Swiggy Instamart, and BigBasket BB Now all independently arrived at roughly the same architecture because the constraints force similar decisions.
- Real-time inventory: Redis for in-memory reads and atomic updates
- Order state: DynamoDB or another NoSQL store for flexible schema and low-latency writes
- Demand forecasting: ML per pin code because geographic demand variance is extreme
- Store radius: ~2 km because rider speed and SLA leave no room for more
- Pack target: under 90 seconds because transit time already consumes most of the SLA
- Last-mile routing: proprietary hyperlocal mapping because generic maps miss the final 50 metres
This convergence is not industry coordination. It is the same math and the same constraints producing the same optimal answers independently.
The Go Prototype: Core Store Assignment
A minimal implementation of the core flow:
```go
package main

import (
	"context"
	"fmt"
	"math"
	"sort"

	"github.com/redis/go-redis/v9"
)

type Store struct {
	ID   string
	Lat  float64
	Lng  float64
	Name string
}

// haversine returns distance in km between two lat/lng points
func haversine(lat1, lon1, lat2, lon2 float64) float64 {
	const R = 6371.0
	dLat := (lat2 - lat1) * math.Pi / 180
	dLon := (lon2 - lon1) * math.Pi / 180
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(lat1*math.Pi/180)*math.Cos(lat2*math.Pi/180)*
			math.Sin(dLon/2)*math.Sin(dLon/2)
	return R * 2 * math.Atan2(math.Sqrt(a), math.Sqrt(1-a))
}

// checkStock queries Redis for current stock at a given store
func checkStock(ctx context.Context, rdb *redis.Client,
	storeID, itemID string) (int, error) {
	key := fmt.Sprintf("stock:%s:%s", storeID, itemID)
	val, err := rdb.Get(ctx, key).Int()
	if err == redis.Nil {
		return 0, nil // key missing = out of stock
	}
	return val, err
}

// assignStore finds the nearest store within 2 km that has the item in stock
func assignStore(ctx context.Context, userLat, userLng float64,
	itemID string, stores []Store, rdb *redis.Client) (*Store, float64, error) {

	// sort stores by distance from the user
	sort.Slice(stores, func(i, j int) bool {
		di := haversine(userLat, userLng, stores[i].Lat, stores[i].Lng)
		dj := haversine(userLat, userLng, stores[j].Lat, stores[j].Lng)
		return di < dj
	})

	for i := range stores {
		dist := haversine(userLat, userLng, stores[i].Lat, stores[i].Lng)
		if dist > 2.0 {
			break // beyond the 2 km radius, stop searching
		}
		stock, err := checkStock(ctx, rdb, stores[i].ID, itemID)
		if err != nil {
			continue // skip stores whose stock lookup failed
		}
		if stock > 0 {
			return &stores[i], dist, nil
		}
	}
	return nil, 0, fmt.Errorf("item %s unavailable within 2 km", itemID)
}
```
This is the core of the entire system. Everything else (the demand forecasting, the pack optimization, the proprietary maps) exists to make this store assignment return the right answer, reliably, in under a millisecond.
The Actual Insight
The 10-minute delivery promise is not a logistics achievement. It is a prediction achievement.
By the time you open the app, the system has already predicted what your neighbourhood is likely to order in the next few hours, moved those items to the store nearest your home, and exposed them through the inventory layer for your session.
The delivery is just confirming that the prediction was correct.
The hard engineering problem is not the last mile. It is the 12 hours before the first mile.
