Configuration¶
This guide provides a comprehensive overview of the configuration options available for both Neo4jEnterpriseCluster and Neo4jEnterpriseStandalone custom resources. The operator allows for a declarative approach to managing your Neo4j deployments: you define the desired state in a YAML file, and the operator continuously reconciles the deployment to match it.
CRD Specification¶
The full CRD specifications, which detail every possible configuration field, can be found in the API Reference:
- Neo4jEnterpriseCluster - For clustered deployments
- Neo4jEnterpriseStandalone - For single-node deployments
Key Configuration Fields¶
Below are some of the most important fields you will use to configure your cluster. For a complete list, please consult the API reference.
* spec.image: The Neo4j Docker image to use. Requires Neo4j Enterprise 5.26+ or 2025.x. You can specify the repository (e.g., neo4j), tag (e.g., 5.26-enterprise), pull policy, and pull secrets for private registries.
Private Registry / Image Pull Secrets¶
To pull Neo4j images from a private registry (ECR, GCR, ACR, or a private Docker Hub account), create a Kubernetes image pull secret and reference it in your cluster spec:
# Create the pull secret
kubectl create secret docker-registry my-registry-secret \
--docker-server=<registry-url> \
--docker-username=<username> \
--docker-password=<password>
spec:
image:
repo: my-private-registry.example.com/neo4j
tag: "2025.01.0-enterprise"
pullSecrets:
- my-registry-secret
The pullSecrets field accepts a list of secret names. Secrets must exist in the same namespace as the cluster. The operator automatically propagates the secrets to the StatefulSet's imagePullSecrets field.
Cloud-managed registries: For ECR (AWS), GCR (Google Cloud), or ACR (Azure), use workload identity / IRSA to avoid long-lived credentials where possible. The pullSecrets field supports any Kubernetes kubernetes.io/dockerconfigjson secret.
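As an illustration for ECR, a short-lived pull secret can be created from an AWS CLI login token. This is a sketch: the account ID, region, and secret name are placeholders, and note that ECR tokens expire (after roughly 12 hours), so automated rotation or IRSA is preferable for long-running clusters:

```shell
# Sketch: create an ECR pull secret from a temporary registry token
kubectl create secret docker-registry ecr-pull-secret \
  --docker-server=<account-id>.dkr.ecr.<region>.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region <region>)"
```

Reference the resulting secret name (here ecr-pull-secret) in spec.image.pullSecrets as shown above.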
* spec.topology: (Cluster only) Defines the architecture of your cluster. Specify the total number of servers (minimum 2) that will self-organize into primary and secondary roles based on database requirements. You can optionally configure server role constraints.
* spec.storage: Configures the persistent storage for the cluster, including storage class, size, and retention policy.
* spec.auth: Manages authentication, allowing you to specify the provider (native, LDAP, etc.) and the secret containing credentials.
* spec.resources: Allows you to set specific CPU and memory requests and limits for the Neo4j pods, which is crucial for performance tuning.
* spec.backups: (Deprecated) Use the separate Neo4jBackup CRD for backup management. The operator now uses a centralized backup StatefulSet for resource efficiency.
* spec.monitoring: Enable monitoring, Prometheus metrics exposure, and query logging.
Live Diagnostics: When enabled: true and the cluster is Ready, the operator automatically runs SHOW SERVERS and SHOW DATABASES and writes the results to status.diagnostics. Two new conditions, ServersHealthy and DatabasesHealthy, reflect cluster health without requiring kubectl exec. See the Monitoring Guide for full details.
* Plugin management: Use separate Neo4jPlugin CRDs to install plugins like APOC, GDS, Bloom, GenAI, and N10s. The operator automatically handles Neo4j 5.26+ compatibility requirements (see the Neo4jPlugin API Reference).
* spec.mcp: Optional Neo4j MCP server deployment for client integrations (HTTP or STDIO). Requires the APOC plugin via Neo4jPlugin; HTTP uses per-request auth and supports Service/Ingress/Route exposure with optional TLS.
* spec.tls: Configure TLS/SSL encryption. Set mode to cert-manager and provide an issuerRef for automatic certificate management.
* spec.config: Add custom Neo4j configuration settings as key-value pairs. These are added to neo4j.conf.
* spec.env: Add environment variables to Neo4j pods. Note that NEO4J_AUTH and NEO4J_ACCEPT_LICENSE_AGREEMENT are managed by the operator.
* spec.service: Configure service type (ClusterIP, NodePort, LoadBalancer), annotations, and external access settings (Ingress; OpenShift Route).
* spec.propertySharding: (Neo4j 2025.12+) Enable property sharding for horizontal scaling of large datasets. See the Property Sharding Guide for detailed configuration options.
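To illustrate how several of these fields fit together, here is a sketch of a partial spec combining resources, custom configuration, environment variables, and service settings. The values are examples, not recommendations, and the exact shape of spec.env (standard Kubernetes name/value entries) is an assumption to verify against the API reference:

```yaml
spec:
  resources:
    requests:
      cpu: "2"
      memory: 8Gi
    limits:
      memory: 8Gi          # memory limit matching the request avoids OOM surprises
  config:
    db.transaction.timeout: "30s"   # example setting merged into neo4j.conf
  env:
    - name: TZ             # assumed EnvVar-style entries; NEO4J_AUTH is operator-managed
      value: UTC
  service:
    type: ClusterIP
```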
Storage and PVC Retention¶
The spec.storage section configures persistent volumes for Neo4j data. The most important field users overlook is retentionPolicy, which controls what happens to your data when a cluster or standalone is deleted.
Retention Policy¶
| Value | Behavior | Use When |
|---|---|---|
| Delete (default) | PVCs are permanently deleted when the cluster/standalone is removed | Development, testing, temporary deployments |
| Retain | PVCs are preserved after deletion and can be manually recovered or reused | Production, valuable data, compliance requirements |
Data loss warning: The default is Delete. If you delete a Neo4jEnterpriseCluster or Neo4jEnterpriseStandalone resource without changing this default, all data on the associated PVCs will be permanently lost. There is no undo. For production deployments, always set retentionPolicy: Retain.
Configuration¶
spec:
storage:
className: premium-rwo # Your StorageClass
size: "100Gi"
retentionPolicy: Retain # Keep PVCs on deletion (recommended for production)
This applies identically to both Neo4jEnterpriseCluster and Neo4jEnterpriseStandalone.
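For example, a standalone deployment with retained storage looks like this (a minimal sketch assembled from the fields shown elsewhere in this guide; names and sizes are placeholders):

```yaml
apiVersion: neo4j.neo4j.com/v1beta1
kind: Neo4jEnterpriseStandalone
metadata:
  name: graph-dev
spec:
  image:
    repo: neo4j
    tag: "5.26.0-enterprise"
  storage:
    className: premium-rwo      # your StorageClass
    size: "100Gi"
    retentionPolicy: Retain     # PVCs survive deletion of this resource
```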
What happens with each policy¶
With Delete (default):
1. You run kubectl delete neo4jenterprisecluster my-cluster
2. The operator deletes the StatefulSet and all associated PVCs
3. The underlying PersistentVolumes are released and reclaimed per the StorageClass reclaimPolicy
4. Data is gone
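Because step 3 defers to the StorageClass, the final fate of the underlying volume depends on its reclaimPolicy. You can inspect it with a standard kubectl query (the StorageClass name here is a placeholder):

```shell
# Print the reclaimPolicy (typically Delete or Retain) of a StorageClass
kubectl get storageclass premium-rwo -o jsonpath='{.reclaimPolicy}'
```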
With Retain:
1. You run kubectl delete neo4jenterprisecluster my-cluster
2. The operator deletes the StatefulSet but leaves PVCs intact
3. PVCs remain in the namespace with their data
4. You can inspect the data, attach it to a new deployment, or manually delete when ready
Checking current policy¶
# Check the retention policy of a running cluster
kubectl get neo4jenterprisecluster my-cluster -o jsonpath='{.spec.storage.retentionPolicy}'
# List PVCs that would be affected
kubectl get pvc -l app=my-cluster
Recovering retained PVCs¶
If you deleted a cluster with Retain and want to redeploy using the same data, create a new cluster with the same name and storage configuration. The StatefulSet will reattach to the existing PVCs (matched by name).
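For instance, if a cluster named graph-prod was deleted with Retain, a sketch of the replacement resource looks like the following. The name, server count, and storage block must match the original so the StatefulSet's PVC names line up:

```yaml
apiVersion: neo4j.neo4j.com/v1beta1
kind: Neo4jEnterpriseCluster
metadata:
  name: graph-prod            # same name as the deleted cluster
spec:
  image:
    repo: neo4j
    tag: "2025.01.0-enterprise"
  topology:
    servers: 3                # same count so each pod reattaches to its PVC
  storage:
    className: standard       # must match the original storage configuration
    size: 50Gi
    retentionPolicy: Retain
```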
Best practices¶
- Production: Always set retentionPolicy: Retain and rely on backups (via the Neo4jBackup CRD) for disaster recovery
- Development: Delete is fine for ephemeral environments and keeps namespaces clean
- CI/CD: Use Delete in test pipelines to avoid PVC accumulation
- Before deletion: Always verify the retention policy before deleting a cluster:
kubectl get neo4jenterprisecluster <name> -o jsonpath='{.spec.storage.retentionPolicy}'
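If that check returns Delete (or nothing) on a cluster whose data you want to keep, one way to switch the policy before deleting is a standard merge patch. This is a sketch and assumes the operator permits in-place updates to this field; otherwise update the original manifest and re-apply it:

```shell
# Flip the retention policy to Retain before deleting the cluster
kubectl patch neo4jenterprisecluster <name> --type merge \
  -p '{"spec":{"storage":{"retentionPolicy":"Retain"}}}'
```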
MCP Server¶
The operator can deploy an optional Neo4j MCP server alongside a cluster or standalone deployment. It uses the official mcp/neo4j image (Docker Hub, source) — the supported Neo4j product MCP server.
The MCP server runs as a separate Deployment and connects to the Neo4j service inside the namespace. For client configuration and HTTP/STDIO usage, see the MCP Client Setup Guide.
Requirements¶
- APOC: MCP requires APOC for the get-schema tool. Install APOC using the Neo4jPlugin CRD (see the Neo4jPlugin API Reference).
- Image: If spec.mcp.image is omitted, the operator defaults to mcp/neo4j:latest. Pin a version with spec.mcp.image.tag.
Transport Modes¶
- HTTP (default): No static credentials in the MCP pod. Each client request carries a Basic Auth or Bearer token Authorization header; the server uses those credentials to connect to Neo4j per request. The operator creates a Service (<name>-mcp:8080) and optionally an Ingress or OpenShift Route. The endpoint path is /mcp (fixed).
  - Benefits: per-request auth, multi-user, works well with desktop clients (Claude Desktop, VS Code).
- STDIO (in-cluster only): The operator injects NEO4J_USERNAME and NEO4J_PASSWORD from the admin secret (or a custom secret via spec.mcp.auth). No Service/Ingress/Route is created. Use for in-cluster automation.
TLS for HTTP¶
The official image supports container-level TLS. Provide a Kubernetes TLS secret via spec.mcp.http.tls.secretName; the operator mounts it and sets NEO4J_MCP_HTTP_TLS_ENABLED=true.
Default ports: 8080 (no TLS) or 8443 (with TLS). Override with spec.mcp.http.port.
Tip: For most deployments, terminate TLS at the Ingress layer and leave spec.mcp.http.tls unset.
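If you do need container-level TLS, here is a sketch using the fields described above. The secret name my-mcp-tls is a placeholder and is assumed to be a standard kubernetes.io/tls secret:

```yaml
mcp:
  enabled: true
  transport: http
  http:
    port: 8443                 # default port when TLS is enabled
    tls:
      secretName: my-mcp-tls   # mounted by the operator, which then sets NEO4J_MCP_HTTP_TLS_ENABLED=true
```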
Example: Cluster MCP (HTTP)¶
apiVersion: neo4j.neo4j.com/v1beta1
kind: Neo4jEnterpriseCluster
metadata:
name: graph-prod
spec:
image:
repo: neo4j
tag: 2025.01.0-enterprise
topology:
servers: 3
storage:
className: standard
size: 50Gi
mcp:
enabled: true
# image defaults to mcp/neo4j:latest — no need to specify
transport: http
readOnly: true
http:
service:
type: ClusterIP
Example: Cluster MCP (HTTP with Ingress)¶
mcp:
enabled: true
transport: http
readOnly: true
http:
service:
type: ClusterIP
ingress:
enabled: true
host: neo4j-mcp.example.com
className: nginx
tlsSecretName: neo4j-mcp-tls
Example: Standalone MCP (STDIO)¶
apiVersion: neo4j.neo4j.com/v1beta1
kind: Neo4jEnterpriseStandalone
metadata:
name: graph-dev
spec:
image:
repo: neo4j
tag: 5.26.0-enterprise
storage:
className: standard
size: 10Gi
auth:
adminSecret: neo4j-admin-secret
mcp:
enabled: true
transport: stdio
# auth defaults to the cluster admin secret; override if needed:
# auth:
# secretName: my-readonly-user