This module is a Haskell port, with some simplifications, of rack-attack
https://github.com/rack/rack-attack/blob/main/lib/rack/attack.rb
and follows the structure of the original rack-attack code,
Copyright (c) 2016 by Kickstarter, PBC, under the MIT License.
Oleksandr Zhabenko added several implementations of the window
algorithm alongside the initial counting algorithm, using AI chatbots:
TinyLRU, sliding window, token bucket, and leaky bucket.
IP-zone functionality was added to allow separate caches per IP zone.
Overview
========
This module provides WAI middleware for declarative, IP-zone-aware
rate limiting with multiple algorithms:
- Fixed Window
- Sliding Window
- Token Bucket
- Leaky Bucket
- TinyLRU
Key points
----------
- Plugin-friendly construction: build an environment once
(Env) from RateLimiterConfig and produce a pure WAI
Middleware. This matches common WAI patterns and avoids
per-request setup or global mutable state.
- Concurrency model: all shared structures inside Env use STM
TVar, not IORef. This ensures thread-safe updates
under GHC's lightweight (green) threads.
- Zone-specific caches: per-IP-zone caches are stored in a HashMap
keyed by zone identifiers. Zones are derived from a configurable
strategy (ZoneBy), with a default.
- No global caches in Keter: you can build one Env per
compiled middleware chain and cache that chain externally (e.g.,
per-vhost + middleware-list), preserving counters/windows across
requests.
Quick start
-----------
1) Declarative configuration (e.g., parsed from JSON/YAML):
let cfg = RateLimiterConfig
{ rlZoneBy = ZoneDefault
, rlThrottles =
[ RLThrottle "api" 1000 3600 FixedWindow IdIP Nothing
, RLThrottle "login" 5 300 TokenBucket IdIP (Just 600)
]
}
2) Build Env once and obtain a pure Middleware:
env <- buildEnvFromConfig cfg
let mw = buildRateLimiterWithEnv env
app = mw baseApplication
Alternatively:
mw <- buildRateLimiter cfg -- convenience: Env creation + Middleware
app = mw baseApplication
Usage patterns
--------------
Declarative approach (recommended):
import Keter.RateLimiter.WAI
import Keter.RateLimiter.Cache (Algorithm(..))
main = do
let config = RateLimiterConfig
{ rlZoneBy = ZoneIP
, rlThrottles =
[ RLThrottle "api" 100 3600 FixedWindow IdIP Nothing
]
}
middleware <- buildRateLimiter config
let app = middleware baseApp
run 8080 app
Programmatic approach (advanced):
import Keter.RateLimiter.WAI
import Keter.RateLimiter.Cache (Algorithm(..))
main = do
env <- initConfig (\req -> "zone1")
let throttleConfig = ThrottleConfig
{ throttleLimit = 100
, throttlePeriod = 3600
, throttleAlgorithm = FixedWindow
, throttleIdentifierBy = IdIP
, throttleTokenBucketTTL = Nothing
}
env' <- addThrottle env "api" throttleConfig
let middleware = buildRateLimiterWithEnv env'
app = middleware baseApp
run 8080 app
Configuration reference
-----------------------
Client identification strategies (IdentifierBy):
- IdIP                     - Identify by client IP address
- IdIPAndPath              - Identify by IP address and request path
- IdIPAndUA                - Identify by IP address and User-Agent header
- IdHeader headerName      - Identify by custom header value
- IdCookie cookieName      - Identify by cookie value
- IdHeaderAndIP headerName - Identify by header value combined with IP
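As a sketch of how these strategies combine, the following hypothetical
throttle list (reusing the RLThrottle shape from the quick start above)
mixes per-IP, per-header, and per-IP-and-path identification; the rule
names and limits are illustrative, not prescribed:

```haskell
-- Illustrative only: field order follows the quick-start example
-- (name, limit, period, algorithm, identifier, token-bucket TTL).
exampleThrottles :: [RLThrottle]
exampleThrottles =
  [ RLThrottle "per-ip"  100  3600 FixedWindow IdIP                       Nothing
  , RLThrottle "per-key" 1000 3600 FixedWindow (IdHeader "X-Api-Key")     Nothing
  , RLThrottle "login"   5    300  TokenBucket IdIPAndPath                (Just 600)
  ]
```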
Zone derivation strategies (ZoneBy):
- ZoneDefault           - All requests use the same cache (no zone separation)
- ZoneIP                - Separate zones by client IP address
- ZoneHeader headerName - Separate zones by custom header value
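For example, a multi-tenant setup might derive zones from a tenant
header so each tenant gets its own caches. This is a sketch assuming
the RateLimiterConfig shape from the quick start; the header name is
hypothetical:

```haskell
-- Sketch: one cache set per value of X-Tenant-Id (assumed header name).
tenantCfg :: RateLimiterConfig
tenantCfg = RateLimiterConfig
  { rlZoneBy    = ZoneHeader "X-Tenant-Id"
  , rlThrottles = [ RLThrottle "api" 100 3600 FixedWindow IdIP Nothing ]
  }
```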
Rate limiting algorithms:
- FixedWindow   - Traditional fixed-window counting
- SlidingWindow - Precise sliding window with timestamp tracking
- TokenBucket   - Allows bursts up to capacity, refills over time
- LeakyBucket   - Smooth rate limiting with a configurable leak rate
- TinyLRU       - Least-recently-used eviction for memory efficiency