Redis session storage
ToolHive uses Redis for two distinct purposes, each with a different configuration model:
- Embedded authorization server sessions — stores upstream tokens so users don't need to re-authenticate after pod restarts. Uses Redis Sentinel with ACL-based authentication and a fixed `thv:auth:*` key pattern. See Embedded auth server session storage.
- MCPServer and VirtualMCPServer horizontal scaling — shares MCP session state across pod replicas so any pod can handle any request. Uses a standalone Redis instance with a simple password. See Horizontal scaling session storage.
Redis is also required for rate limiting, which stores token bucket counters independently of session data.
You can reuse the same Redis instance for all three purposes by using different `keyPrefix` values or different databases — see Sharing a Redis instance for details.
Embedded auth server session storage
Configure Redis Sentinel as the session storage backend for the ToolHive embedded authorization server. By default, sessions are stored in memory, which means upstream tokens are lost when pods restart and users must re-authenticate. Redis Sentinel provides persistent storage with automatic master discovery, ACL-based access control, and optional failover when replicas are configured.
Before you begin, ensure you have:
- A Kubernetes cluster with the ToolHive Operator installed
- `kubectl` configured to access your cluster
- Familiarity with the embedded authorization server setup
If you need help installing the ToolHive Operator, see the Kubernetes quickstart guide.
Deploy Redis Sentinel
Deploy a Redis master and a three-node Sentinel cluster. The following manifests create the Redis and Sentinel StatefulSets with ACL authentication and persistent storage.
Create the redis namespace:
kubectl create namespace redis
Save the following manifests to a file called redis-sentinel.yaml.
The ACL Secret defines a toolhive-auth user with permissions restricted to the
thv:auth:* key pattern that ToolHive uses for session data. An init container
copies the ACL file into the Redis data directory so it persists across
restarts.
Generate a random password and use it in the ACL Secret and Kubernetes Secret below:
openssl rand -base64 32
In the ACL entry, the `>` prefix before the password is Redis ACL syntax meaning "set this user's password." Replace `YOUR_REDIS_ACL_PASSWORD` with the generated value.
# --- Redis ACL Secret
apiVersion: v1
kind: Secret
metadata:
name: redis-acl
namespace: redis
type: Opaque
stringData:
users.acl: >-
user toolhive-auth on >YOUR_REDIS_ACL_PASSWORD ~thv:auth:* &* +GET +SET
+SETNX +DEL +EXISTS +EXPIRE +PEXPIRE +PTTL +MGET +SADD +SREM +SMEMBERS +EVAL
+MULTI +EXEC +EVALSHA +PING
---
# --- Redis headless Service
apiVersion: v1
kind: Service
metadata:
name: redis
namespace: redis
spec:
clusterIP: None
selector:
app: redis
ports:
- name: redis
port: 6379
---
# --- Redis master StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
namespace: redis
spec:
serviceName: redis
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
initContainers:
- name: init-acl
image: redis:7-alpine
command: ['cp', '/etc/redis-acl/users.acl', '/data/users.acl']
volumeMounts:
- name: redis-acl
mountPath: /etc/redis-acl
- name: redis-data
mountPath: /data
containers:
- name: redis
image: redis:7-alpine
ports:
- containerPort: 6379
command:
- redis-server
- --bind
- '0.0.0.0'
- --aclfile
- /data/users.acl
readinessProbe:
exec:
command: ['redis-cli', 'PING']
initialDelaySeconds: 5
periodSeconds: 5
resources:
requests:
cpu: 100m
memory: 256Mi
limits:
cpu: 500m
memory: 512Mi
volumeMounts:
- name: redis-data
mountPath: /data
- name: redis-acl
mountPath: /etc/redis-acl
readOnly: true
volumes:
- name: redis-acl
secret:
secretName: redis-acl
volumeClaimTemplates:
- metadata:
name: redis-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
The next section deploys a three-node Sentinel cluster that monitors the Redis master. With a single master and no replicas, Sentinel provides master discovery for ToolHive but cannot perform automatic failover. To enable failover, add Redis replicas to the StatefulSet and configure replication.
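To make the failover note concrete: a sketch of how a replica might be wired is shown below. The `replicaof` directive is standard Redis configuration, but applying it per-pod is an assumption about your topology, not part of the manifests in this guide; pod `redis-0` stays the initial master and Sentinel promotes a replica if it fails.

```
# Sketch: applied only on redis-1 and later pods (for example from an init
# script that checks the pod ordinal). The hostname follows the headless
# Service defined above.
replicaof redis-0.redis.redis.svc.cluster.local 6379
```

You would also raise `replicas` on the Redis StatefulSet so the extra pods exist.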
# --- Sentinel configuration
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-sentinel-config
namespace: redis
data:
sentinel.conf: |
sentinel resolve-hostnames yes
sentinel announce-hostnames yes
# quorum: 2 of 3 sentinels must agree to trigger failover
sentinel monitor mymaster redis-0.redis.redis.svc.cluster.local 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 10000
sentinel parallel-syncs mymaster 1
---
# --- Sentinel headless Service
apiVersion: v1
kind: Service
metadata:
name: redis-sentinel
namespace: redis
spec:
clusterIP: None
selector:
app: redis-sentinel
ports:
- name: sentinel
port: 26379
---
# --- Sentinel StatefulSet (3 replicas for quorum)
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis-sentinel
namespace: redis
spec:
serviceName: redis-sentinel
replicas: 3
selector:
matchLabels:
app: redis-sentinel
template:
metadata:
labels:
app: redis-sentinel
spec:
initContainers:
- name: copy-config
image: redis:7-alpine
command:
['cp', '/etc/sentinel-ro/sentinel.conf', '/data/sentinel.conf']
volumeMounts:
- name: sentinel-config-ro
mountPath: /etc/sentinel-ro
- name: sentinel-data
mountPath: /data
containers:
- name: sentinel
image: redis:7-alpine
ports:
- containerPort: 26379
name: sentinel
command: ['redis-sentinel', '/data/sentinel.conf']
readinessProbe:
exec:
command: ['redis-cli', '-p', '26379', 'PING']
initialDelaySeconds: 5
periodSeconds: 5
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 200m
memory: 256Mi
volumeMounts:
- name: sentinel-data
mountPath: /data
volumes:
- name: sentinel-config-ro
configMap:
name: redis-sentinel-config
volumeClaimTemplates:
- metadata:
name: sentinel-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
Apply the manifests and wait for all pods to be ready:
kubectl apply -f redis-sentinel.yaml
kubectl wait --for=condition=ready pod \
-l 'app in (redis, redis-sentinel)' \
--namespace redis \
--timeout=300s
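Optionally, confirm the ACL scoping before wiring up ToolHive. This is a sketch using the names from the manifests above: the first write targets the allowed `thv:auth:*` pattern and should succeed, while the second falls outside it and should be rejected with a `NOPERM` error.

```shell
kubectl exec -n redis redis-0 -- redis-cli \
  --user toolhive-auth --pass 'YOUR_REDIS_ACL_PASSWORD' \
  SET thv:auth:smoke-test ok

kubectl exec -n redis redis-0 -- redis-cli \
  --user toolhive-auth --pass 'YOUR_REDIS_ACL_PASSWORD' \
  SET unscoped-key ok
```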
The manifests above don't disable the Redis default user, which has full access
with no password. For production deployments, add `user default off` to the
`users.acl` entry in the `redis-acl` Secret. If you disable the default user,
you must also configure Sentinel to authenticate to Redis by adding
`sentinel auth-user` and `sentinel auth-pass` to the Sentinel ConfigMap, and
update the readiness probe commands to authenticate.
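As a sketch, a hardened setup could look like the fragment below. The dedicated `sentinel-user` name is illustrative, and its command list follows the ACL guidance in the Redis Sentinel documentation; validate the exact command set against your Redis version.

```
# users.acl: disable the open default user, keep the scoped ToolHive user,
# and add a user that Sentinel can authenticate with
user default off
user toolhive-auth on >YOUR_REDIS_ACL_PASSWORD ~thv:auth:* &* +GET +SET +SETNX +DEL +EXISTS +EXPIRE +PEXPIRE +PTTL +MGET +SADD +SREM +SMEMBERS +EVAL +MULTI +EXEC +EVALSHA +PING
user sentinel-user on >YOUR_SENTINEL_PASSWORD allchannels +multi +slaveof +ping +exec +subscribe +config|rewrite +role +publish +info +client|setname +client|kill +script|kill

# sentinel.conf additions: tell Sentinel which credentials to use
sentinel auth-user mymaster sentinel-user
sentinel auth-pass mymaster YOUR_SENTINEL_PASSWORD
```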
Create Kubernetes secrets
Create a Secret in the ToolHive namespace containing the Redis ACL credentials. The username and password must match the ACL user defined above:
kubectl create secret generic redis-acl-secret \
--namespace toolhive-system \
--from-literal=username=toolhive-auth \
--from-literal=password="YOUR_REDIS_ACL_PASSWORD"
Configure MCPExternalAuthConfig
Add the storage block to your MCPExternalAuthConfig resource. The following
example shows a working configuration with Redis Sentinel storage using Sentinel
service discovery, which automatically resolves Sentinel endpoints from the
headless Service deployed above:
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPExternalAuthConfig
metadata:
name: embedded-auth-server
namespace: toolhive-system
spec:
type: embeddedAuthServer
embeddedAuthServer:
issuer: 'https://mcp.example.com'
signingKeySecretRefs:
- name: auth-server-signing-key
key: signing-key
hmacSecretRefs:
- name: auth-server-hmac-secret
key: hmac-key
storage:
type: redis
redis:
sentinelConfig:
masterName: mymaster
sentinelService:
name: redis-sentinel
namespace: redis
aclUserConfig:
usernameSecretRef:
name: redis-acl-secret
key: username
passwordSecretRef:
name: redis-acl-secret
key: password
upstreamProviders:
- name: google
type: oidc
oidcConfig:
issuerUrl: 'https://accounts.google.com'
clientId: '<YOUR_GOOGLE_CLIENT_ID>'
clientSecretRef:
name: upstream-idp-secret
key: client-secret
scopes:
- openid
- profile
- email
Save the manifest to a file (for example `embedded-auth-with-redis.yaml`) and apply it:
kubectl apply -f embedded-auth-with-redis.yaml
Using explicit Sentinel addresses
sentinelAddrs and sentinelService are mutually exclusive. Use
sentinelService when your Sentinel instances run in the same cluster, or
sentinelAddrs when you need to specify exact endpoints.
Instead of service discovery, you can list Sentinel addresses explicitly. This is useful when Sentinel instances are in a different namespace or outside the cluster:
storage:
type: redis
redis:
sentinelConfig:
masterName: mymaster
sentinelAddrs:
- redis-sentinel-0.redis-sentinel.redis.svc.cluster.local:26379
- redis-sentinel-1.redis-sentinel.redis.svc.cluster.local:26379
- redis-sentinel-2.redis-sentinel.redis.svc.cluster.local:26379
aclUserConfig:
usernameSecretRef:
name: redis-acl-secret
key: username
passwordSecretRef:
name: redis-acl-secret
key: password
For the complete list of storage configuration fields, see the Kubernetes CRD reference.
Enable TLS
Without TLS, Redis credentials and session tokens travel in plaintext between ToolHive and Redis. You should enable TLS for any deployment beyond local development.
Configure the tls block in your storage config. ToolHive needs the CA
certificate that signed the Redis server certificate so it can verify the
connection.
This step only covers the ToolHive client-side TLS configuration. Your Redis and Sentinel instances must also be configured to serve TLS — see the Redis TLS documentation for server-side setup.
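For orientation only, the server side amounts to starting Redis with its TLS options, along these lines (file paths are placeholders, and whether to disable the plaintext port is your call):

```
# Sketch: serve TLS on 6379 and disable the plaintext port. With Redis's
# default of tls-auth-clients yes, the server also demands client
# certificates; set it to "no" when ToolHive only verifies the server,
# as in this guide's storage config (a CA cert but no client cert).
redis-server \
  --tls-port 6379 \
  --port 0 \
  --tls-cert-file /tls/redis.crt \
  --tls-key-file /tls/redis.key \
  --tls-ca-cert-file /tls/ca.crt \
  --tls-auth-clients no \
  --aclfile /data/users.acl
```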
Create a CA certificate Secret
Store your CA certificate in a Secret in the ToolHive namespace:
kubectl create secret generic redis-ca-cert \
--namespace toolhive-system \
--from-file=ca.crt=<PATH_TO_CA_CERTIFICATE>
Configure TLS in MCPExternalAuthConfig
Add the tls block to the redis section of your storage config:
storage:
type: redis
redis:
sentinelConfig:
masterName: mymaster
sentinelService:
name: redis-sentinel
namespace: redis
aclUserConfig:
usernameSecretRef:
name: redis-acl-secret
key: username
passwordSecretRef:
name: redis-acl-secret
key: password
tls:
caCertSecretRef:
name: redis-ca-cert
key: ca.crt
When you set only tls, ToolHive automatically uses the same TLS configuration
for Sentinel connections. This is the recommended setup when both Redis and
Sentinel use certificates from the same CA.
Separate TLS config for Sentinel
If your Sentinel instances use a different CA or require different TLS settings,
add a sentinelTls block:
storage:
type: redis
redis:
sentinelConfig:
masterName: mymaster
sentinelService:
name: redis-sentinel
namespace: redis
aclUserConfig:
usernameSecretRef:
name: redis-acl-secret
key: username
passwordSecretRef:
name: redis-acl-secret
key: password
tls:
caCertSecretRef:
name: redis-ca-cert
key: ca.crt
sentinelTls:
caCertSecretRef:
name: sentinel-ca-cert
key: ca.crt
When sentinelTls is set, ToolHive uses separate TLS configurations for master
and Sentinel connections. Each connection type uses its own CA certificate for
verification.
Verify the integration
After applying the configuration, verify that ToolHive can connect to Redis. The
examples below use weather-server-embedded as the MCPServer name — substitute
your own.
Check that the MCPServer pod is running:
kubectl get pods -n toolhive-system \
-l app.kubernetes.io/name=weather-server-embedded
Check the proxy logs for Redis connection messages:
kubectl logs -n toolhive-system \
-l app.kubernetes.io/name=weather-server-embedded \
| grep -i redis
Look for log entries that confirm a successful Redis Sentinel connection. If the connection fails, the proxy logs contain error details.
Test the OAuth flow end-to-end by connecting with an MCP client. After authenticating, restart the proxy pod and verify that your session persists without requiring re-authentication:
# Restart the proxy pod
kubectl rollout restart deployment \
-n toolhive-system weather-server-embedded-proxy
# Wait for the new pod to be ready
kubectl rollout status deployment \
-n toolhive-system weather-server-embedded-proxy
If your MCP client can continue making requests without re-authenticating, Redis session storage is working correctly.
Troubleshooting
Connection refused or timeout errors
- Verify the Redis Sentinel pods are running: `kubectl get pods -n redis`
- Check that the Sentinel addresses in your config match the actual pod DNS names: `kubectl get endpoints -n redis`
- Ensure network policies allow traffic from the `toolhive-system` namespace to the `redis` namespace
- Verify the `masterName` matches the name in your Sentinel configuration (`mymaster` in the example manifests above)
ACL authentication failures
- Verify the Secret exists and contains the correct credentials: `kubectl get secret redis-acl-secret -n toolhive-system -o yaml`
- Connect to Redis directly to verify the ACL user exists: `kubectl exec -n redis redis-0 -- redis-cli ACL LIST`
- Ensure the ACL user has the required permissions (the `~thv:auth:*` key pattern and the commands listed in the ACL Secret)
TLS handshake or certificate errors
- Verify the CA certificate Secret exists in the `toolhive-system` namespace: `kubectl get secret redis-ca-cert -n toolhive-system`
- Confirm the CA certificate matches the one that signed the Redis server certificate
- Check proxy logs for TLS-specific errors: `kubectl logs -n toolhive-system -l app.kubernetes.io/name=weather-server-embedded | grep -i "tls\|x509\|certificate"`
- If using self-signed certificates for testing, you can set `insecureSkipVerify: true` to bypass verification (not recommended for production)
- When using separate Sentinel TLS, ensure both `tls` and `sentinelTls` are configured with the correct CA certificates for their respective services
Sessions lost after Redis failover
- Check Sentinel logs for failover events: `kubectl logs -n redis -l app=redis-sentinel`
- Verify that the master is reachable from Sentinel: `kubectl exec -n redis redis-sentinel-0 -- redis-cli -p 26379 SENTINEL masters`
- Ensure Sentinel quorum is met (at least 2 of 3 Sentinel instances must be running)
Horizontal scaling session storage
When you run multiple replicas of an MCPServer proxy runner or a
VirtualMCPServer, MCP sessions must be shared across pods so that any replica
can handle any client request. ToolHive stores this session state in Redis using
a simple password — no ACL user, no Sentinel.
Deploy a standalone Redis instance
A single Redis pod with a password is sufficient for most horizontal scaling
deployments. The manifests below create Redis in the toolhive-system namespace
alongside your ToolHive workloads.
Generate a random password and use it as `YOUR_REDIS_PASSWORD` in the Secret below:
openssl rand -base64 32
# --- Redis password Secret
apiVersion: v1
kind: Secret
metadata:
name: redis-password
namespace: toolhive-system
type: Opaque
stringData:
password: YOUR_REDIS_PASSWORD
---
# --- Redis Service
apiVersion: v1
kind: Service
metadata:
name: redis
namespace: toolhive-system
spec:
selector:
app: redis
ports:
- name: redis
port: 6379
targetPort: 6379
---
# --- Redis Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
namespace: toolhive-system
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis:7-alpine
args:
- redis-server
- --requirepass
- $(REDIS_PASSWORD)
env:
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis-password
key: password
ports:
- containerPort: 6379
readinessProbe:
exec:
# Probe exec commands don't expand $(VAR) references, so invoke a shell
# that reads the container's REDIS_PASSWORD environment variable
command: ['sh', '-c', 'redis-cli -a "$REDIS_PASSWORD" PING']
initialDelaySeconds: 5
periodSeconds: 5
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 256Mi
Apply the manifests:
kubectl apply -f redis-scaling.yaml
kubectl wait --for=condition=available deployment/redis \
--namespace toolhive-system --timeout=120s
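As a quick sanity check (a sketch reusing the names above), verify the password works before pointing workloads at the instance; the command should print `PONG`:

```shell
kubectl exec -n toolhive-system deploy/redis -- \
  sh -c 'redis-cli -a "$REDIS_PASSWORD" PING'
```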
Configure MCPServer session storage
Reference the Redis Service and Secret in your MCPServer spec:
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServer
metadata:
name: my-server
namespace: toolhive-system
spec:
image: ghcr.io/example/my-mcp-server:latest
replicas: 2
sessionStorage:
provider: redis
address: redis.toolhive-system.svc.cluster.local:6379
db: 0
keyPrefix: mcp-sessions
passwordRef:
name: redis-password
key: password
Configure VirtualMCPServer session storage
The sessionStorage field is identical for VirtualMCPServer:
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: VirtualMCPServer
metadata:
name: my-vmcp
namespace: toolhive-system
spec:
replicas: 2
sessionStorage:
provider: redis
address: redis.toolhive-system.svc.cluster.local:6379
db: 0
keyPrefix: vmcp-sessions
passwordRef:
name: redis-password
key: password
backends:
- name: my-backend
url: http://my-mcp-server.toolhive-system.svc.cluster.local:8080
Verify session storage is working
After applying your configuration, check that ToolHive has connected to Redis successfully.
Check the SessionStorageWarning condition:
kubectl describe mcpserver my-server -n toolhive-system
When Redis is properly configured, the SessionStorageWarning condition is
absent or set to False:
Conditions:
Type: Ready
Status: True
...
Type: SessionStorageWarning
Status: False
Reason: SessionStorageConfigured
If SessionStorageWarning is True, Redis is not configured or the
configuration is invalid. Check the proxy runner pod logs:
kubectl logs -n toolhive-system \
-l app.kubernetes.io/name=my-server \
| grep -i "redis\|session"
Test cross-pod session reconstruction:
Scale down to one replica, connect an MCP client and start a session, then scale back up and delete the original pod. If Redis session storage is working, your client can continue making requests without reconnecting:
# Start with 1 replica
kubectl scale deployment vmcp-my-vmcp -n toolhive-system --replicas=1
# Connect your MCP client and establish a session, then:
kubectl scale deployment vmcp-my-vmcp -n toolhive-system --replicas=2
# Delete the original pod — your client should stay connected
kubectl delete pod -n toolhive-system \
-l app.kubernetes.io/name=my-vmcp --field-selector='status.podIP=<ORIGINAL_POD_IP>'
Sharing a Redis instance
You can reuse the same Redis instance for embedded auth server sessions,
MCPServer scaling, and VirtualMCPServer scaling by using different keyPrefix
values per use case. The embedded auth server uses thv:auth:* by default; set
distinct prefixes for your scaling workloads:
| Use case | Suggested keyPrefix |
|---|---|
| Embedded auth server | thv:auth (fixed, set by ToolHive) |
| MCPServer scaling | mcp-sessions |
| VirtualMCPServer scaling | vmcp-sessions |
Alternatively, use separate db values (Redis databases 0–15) to provide hard
namespace isolation without requiring separate Redis instances.
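For example, the two scaling workloads could share one Redis instance but land in different databases via the `db` field shown earlier (the values here are illustrative):

```yaml
# MCPServer spec fragment: sessions in database 1
sessionStorage:
  provider: redis
  address: redis.toolhive-system.svc.cluster.local:6379
  db: 1
  keyPrefix: mcp-sessions
  passwordRef:
    name: redis-password
    key: password
```

A VirtualMCPServer would use the same block with its own `keyPrefix` and, say, `db: 2`.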
Next steps
- Configure token exchange to let MCP servers authenticate to backend services
- Monitor server activity with OpenTelemetry and Prometheus