bitnamicharts/schema-registry

Verified Publisher

By VMware

Updated 8 months ago

Bitnami Helm chart for Confluent Schema Registry


Bitnami Secure Images Helm chart for Confluent Schema Registry

Confluent Schema Registry provides a RESTful interface by adding a serving layer for your metadata on top of Kafka. It extends Kafka with support for Apache Avro, JSON, and Protobuf schemas.

Overview of Confluent Schema Registry

Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.

TL;DR

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/schema-registry

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository.

Introduction

This chart bootstraps a Schema Registry StatefulSet on a Kubernetes cluster using the Helm package manager.

Before you begin

  • Kubernetes 1.23+
  • Helm 3.8.0+
  • PV provisioner support in the underlying infrastructure

Installing the chart

To install the chart with the release name my-release:

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/schema-registry

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.
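For example, with the Bitnami values above substituted, the command becomes:

```shell
helm install my-release oci://registry-1.docker.io/bitnamicharts/schema-registry
```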

These commands deploy Schema Registry on the Kubernetes cluster with the default configuration. The Parameters section lists the parameters that can be configured during installation.

Note: List all releases using helm list.

Configuration and installation details

This section describes resource settings, rolling tags, Kafka authentication, TLS, and other options.

Resource requests and limits

Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are configured using the resources value (check the parameter table). Setting requests is essential for production workloads, and they should be adapted to your specific use case.

To make this process easier, the chart provides the resourcesPreset value, which automatically sets the resources section according to different presets. Check these presets in the bitnami/common chart. However, using resourcesPreset is discouraged in production workloads as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
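For instance, a values.yaml snippet with explicit values might look like this (the figures below are illustrative placeholders, not recommendations; in Bitnami charts an explicit resources value takes precedence over resourcesPreset):

```yaml
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi
```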

Rolling vs immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami releases a new chart with updated containers whenever a new version of the main container is available, or when significant changes or critical vulnerabilities affect it.
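For example, you could pin the image in values.yaml (the tag below is a hypothetical placeholder; per the parameter table, a digest, if set, overrides the tag):

```yaml
image:
  registry: REGISTRY_NAME
  repository: REPOSITORY_NAME/schema-registry
  # Hypothetical immutable tag; pin the exact tag you have tested
  tag: 7.6.0-debian-12-r0
  # Alternatively, pin by digest, which overrides the tag when set
  # digest: "sha256:aa..."
```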

Enable authentication for Kafka

You can configure different authentication protocols for each listener you configure in Kafka. For instance, you can use SASL over TLS authentication for client communications, while using TLS for inter-broker communications. This table shows the available protocols and the security they provide:

| Method        | Authentication               | Encryption using TLS |
|---------------|------------------------------|----------------------|
| plaintext     | None                         | No                   |
| TLS           | None                         | Yes                  |
| mTLS          | Yes (two-way authentication) | Yes                  |
| SASL          | Yes (using SASL)             | No                   |
| SASL over TLS | Yes (using SASL)             | Yes                  |

Configure the authentication protocols for client and inter-broker communications by setting the kafka.auth.clientProtocol and kafka.auth.interBrokerProtocol parameters, respectively.

If you enabled SASL authentication on any listener, you can set the SASL credentials using the parameters below:

  • kafka.auth.jaas.clientUsers/kafka.auth.jaas.clientPasswords: when enabling SASL authentication for communications with clients.
  • kafka.auth.jaas.interBrokerUser/kafka.auth.jaas.interBrokerPassword: when enabling SASL authentication for inter-broker communications.
  • kafka.auth.jaas.zookeeperUser/kafka.auth.jaas.zookeeperPassword: when the ZooKeeper chart is deployed with SASL authentication enabled.

For instance, you can deploy the chart with the following parameters:

kafka.auth.clientProtocol=sasl
kafka.auth.jaas.clientUsers[0]=clientUser
kafka.auth.jaas.clientPasswords[0]=clientPassword
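The same parameters can be passed on the command line; quoting protects the bracketed keys from shell globbing:

```shell
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/schema-registry \
  --set kafka.auth.clientProtocol=sasl \
  --set "kafka.auth.jaas.clientUsers[0]=clientUser" \
  --set "kafka.auth.jaas.clientPasswords[0]=clientPassword"
```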

Securing traffic using TLS

In order to configure TLS authentication/encryption, you can create a secret per Kafka broker in the cluster containing the Java Key Stores (JKS) files: the truststore (kafka.truststore.jks) and the keystore (kafka.keystore.jks). Then, you need to pass the secret names with the kafka.auth.tls.existingSecrets parameter when deploying the chart.

Note: If the JKS files are password protected (recommended), you will need to provide the password to get access to the keystores. To do so, use the kafka.auth.tls.password parameter to provide your password.

For instance, to configure TLS authentication on a cluster with 2 Kafka brokers, and 1 Schema Registry replica use the commands below to create the secrets:

kubectl create secret generic schema-registry-jks --from-file=schema-registry.truststore.jks=./schema-registry.truststore.jks --from-file=schema-registry-0.keystore.jks=./schema-registry-0.keystore.jks
kubectl create secret generic kafka-jks-0 --from-file=kafka.truststore.jks=./kafka.truststore.jks --from-file=kafka.keystore.jks=./kafka-0.keystore.jks
kubectl create secret generic kafka-jks-1 --from-file=kafka.truststore.jks=./kafka.truststore.jks --from-file=kafka.keystore.jks=./kafka-1.keystore.jks

Note: The commands above assume you already created the truststore and keystore files. This script can help you with the JKS file generation.
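If you prefer to generate the files manually, a minimal sketch using the JDK keytool looks like the following. This produces a self-signed certificate for testing only (use CA-signed certificates in production); the alias, passwords, and CN are placeholder values:

```shell
# Generate a keystore containing a self-signed key pair (placeholder values)
keytool -genkeypair -alias schema-registry -keyalg RSA -keysize 2048 \
  -keystore schema-registry-0.keystore.jks -storepass jksPassword \
  -validity 365 -dname "CN=schema-registry"

# Export the certificate and import it into a truststore
keytool -exportcert -alias schema-registry -keystore schema-registry-0.keystore.jks \
  -storepass jksPassword -file schema-registry.crt
keytool -importcert -alias schema-registry -file schema-registry.crt \
  -keystore schema-registry.truststore.jks -storepass jksPassword -noprompt
```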

Then, deploy the chart with the following parameters:

auth.kafka.jksSecret=schema-registry-jks
auth.kafka.keystorePassword=some-password
auth.kafka.truststorePassword=some-password
kafka.replicaCount=2
kafka.auth.clientProtocol=tls
kafka.auth.tls.existingSecrets[0]=kafka-jks-0
kafka.auth.tls.existingSecrets[1]=kafka-jks-1
kafka.auth.tls.password=jksPassword

In case you want to skip hostname verification on Kafka certificates, set the auth.kafka.tls.endpointIdentificationAlgorithm parameter to an empty string "". In this case, you can reuse the same truststore and keystore for every Kafka broker and Schema Registry replica. For instance, to configure TLS authentication on a cluster with 2 Kafka brokers and 1 Schema Registry replica, use the commands below to create the secrets:

kubectl create secret generic schema-registry-jks --from-file=schema-registry.truststore.jks=common.truststore.jks --from-file=schema-registry-0.keystore.jks=common.keystore.jks
kubectl create secret generic kafka-jks --from-file=kafka.truststore.jks=common.truststore.jks --from-file=kafka.keystore.jks=common.keystore.jks
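Then deploy the chart with parameters analogous to the previous example, pointing every broker at the shared secret (a hedged sketch; exact parameter semantics may vary between chart versions):

```text
auth.kafka.jksSecret=schema-registry-jks
auth.kafka.keystorePassword=some-password
auth.kafka.truststorePassword=some-password
auth.kafka.tls.endpointIdentificationAlgorithm=""
kafka.replicaCount=2
kafka.auth.clientProtocol=tls
kafka.auth.tls.existingSecrets[0]=kafka-jks
kafka.auth.tls.existingSecrets[1]=kafka-jks
kafka.auth.tls.password=jksPassword
```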

FIPS parameters

The FIPS parameters only take effect if you are using images from the Bitnami Secure Images catalog.

For more information on this new support, please refer to the FIPS Compliance section.

Backup and restore

To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool. Find the instructions for using Velero in this guide.

Adding extra flags

In case you want to add extra environment variables to Schema Registry, you can use the extraEnvVars parameter. For instance:

extraEnvVars:
  - name: FOO
    value: BAR

Using custom configuration

This helm chart supports using custom configuration for Schema Registry.

You can specify the configuration for Schema Registry using the configuration parameter. Alternatively, you can provide an existing ConfigMap containing the configuration file by setting the existingConfigmap parameter.
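For example, you could create the ConfigMap from a local properties file and reference it at install time (this sketch assumes the ConfigMap exposes the file under a schema-registry.properties key, which kubectl derives from the filename):

```shell
kubectl create configmap my-schema-registry-conf \
  --from-file=schema-registry.properties
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/schema-registry \
  --set existingConfigmap=my-schema-registry-conf
```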

Sidecars and Init Containers

If you have a need for additional containers to run within the same pod as Schema Registry (e.g. an additional metrics or logging exporter), you can do so using the sidecars config parameter. Simply define your container according to the Kubernetes container spec.

sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234

Similarly, you can add extra init containers using the initContainers parameter.

initContainers:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234

Using an external Kafka

Sometimes you may want to have Schema Registry connect to an external Kafka cluster rather than installing one as a dependency. To do this, the chart allows you to specify credentials for an existing Kafka cluster under the externalKafka parameter. You should also disable the bundled Kafka installation by setting kafka.enabled to false.

For example, use the parameters below to connect Schema Registry with an existing Kafka installation using SASL authentication:

kafka.enabled=false
externalKafka.brokers=SASL_PLAINTEXT://kafka-0.kafka-headless.default.svc.cluster.local:9092
externalKafka.auth.protocol=sasl
externalKafka.auth.jaas.user=myuser
externalKafka.auth.jaas.password=mypassword

Alternatively, you can use an existing secret with a key "client-passwords":

kafka.enabled=false
externalKafka.brokers=SASL_PLAINTEXT://kafka-0.kafka-headless.default.svc.cluster.local:9092
externalKafka.auth.protocol=sasl
externalKafka.auth.jaas.user=myuser
externalKafka.auth.jaas.existingSecret=my-secret
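Such a secret could be created beforehand as follows (my-secret and the password are placeholder values):

```shell
kubectl create secret generic my-secret \
  --from-literal=client-passwords=mypassword
```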

Gateway API

This chart provides support for exposing Schema Registry using the Gateway API and its HTTPRoute resource. If you have a Gateway controller installed on your cluster, such as APISIX, Contour, Envoy Gateway, NGINX Gateway Fabric, or Kong Ingress Controller, you can utilize it to serve your application. To enable Gateway API integration, set httpRoute.enabled to true. The Gateway to be used can be customized by setting the httpRoute.parentRefs parameter. By default, it will reference a Gateway named gateway in the same namespace as the release.

You can specify the list of hostnames to be mapped to the deployment using the httpRoute.hostnames parameter. Additionally, you can customize the rules used to route the traffic to the service by modifying the httpRoute.matches and httpRoute.filters parameters or adding new rules using the httpRoute.extraRules parameter.
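A minimal values.yaml sketch, assuming a Gateway named my-gateway in the default namespace and a hypothetical hostname:

```yaml
httpRoute:
  enabled: true
  hostnames:
    - schema-registry.example.com   # hypothetical hostname
  parentRefs:
    - name: my-gateway              # omit to use the default Gateway named "gateway"
      namespace: default
```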

Ingress

This chart provides support for Ingress resources. If you have an ingress controller installed on your cluster, such as nginx-ingress-controller or contour, you can utilize the ingress controller to serve your application. To enable Ingress integration, set ingress.enabled to true. The ingress.hostname property can be used to set the host name.

Hosts

Most likely you will only want one hostname that maps to this Schema Registry installation. If that's your case, the ingress.hostname property will set it. However, it is possible to have more than one host. To facilitate this, the ingress.extraHosts array can be used to specify additional hosts. You can also use ingress.extraTLS to add the TLS configuration for the extra hosts.

For each host indicated at ingress.extraHosts, please indicate a name, path, and any annotations that you may want the ingress controller to know about.

For annotations, please see this document. Not all annotations are supported by all ingress controllers, but this document does a good job of indicating which annotation is supported by many popular ingress controllers.
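Putting it together, a values.yaml sketch with one extra host might look like this (the hostnames and TLS secret name are hypothetical):

```yaml
ingress:
  enabled: true
  hostname: schema-registry.example.com
  extraHosts:
    - name: schema-registry.internal.example.com
      path: /
  extraTLS:
    - hosts:
        - schema-registry.internal.example.com
      secretName: schema-registry-internal-tls
```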

Parameters

The following subsections list global, common, and component-specific parameters.

Global parameters
| Name | Description | Value |
|------|-------------|-------|
| global.imageRegistry | Global Docker image registry | "" |
| global.imagePullSecrets | Global Docker registry secret names as an array | [] |
| global.defaultStorageClass | Global default StorageClass for Persistent Volume(s) | "" |
| global.defaultFips | Default value for the FIPS configuration (allowed values: '', restricted, relaxed, off). Can be overridden by the 'fips' object | restricted |
| global.security.allowInsecureImages | Allows skipping image verification | false |
| global.compatibility.openshift.adaptSecurityContext | Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | auto |
| kubeVersion | Override Kubernetes version | "" |
Common parameters
| Name | Description | Value |
|------|-------------|-------|
| nameOverride | String to partially override common.names.fullname template with a string (will prepend the release name) | "" |
| fullnameOverride | String to fully override common.names.fullname template with a string | "" |
| namespaceOverride | String to fully override common.names.namespace | "" |
| commonLabels | Labels to add to all deployed objects | {} |
| commonAnnotations | Annotations to add to all deployed objects | {} |
| clusterDomain | Kubernetes cluster domain name | cluster.local |
| extraDeploy | Array of extra objects to deploy with the release | [] |
| usePasswordFiles | Mount credentials as files instead of using environment variables | true |
| diagnosticMode.enabled | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | false |
| diagnosticMode.command | Command to override all containers in the deployment | ["sleep"] |
| diagnosticMode.args | Args to override all containers in the deployment | ["infinity"] |
Schema Registry parameters
| Name | Description | Value |
|------|-------------|-------|
| image.registry | Schema Registry image registry | REGISTRY_NAME |
| image.repository | Schema Registry image repository | REPOSITORY_NAME/schema-registry |
| image.digest | Schema Registry image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | "" |
| image.pullPolicy | Schema Registry image pull policy | IfNotPresent |
| image.pullSecrets | Schema Registry image pull secrets | [] |
| image.debug | Enable image debug mode | false |
| command | Override default container command (useful when using custom images) | [] |
| args | Override default container args (useful when using custom images) | [] |
| automountServiceAccountToken | Mount Service Account token in pod | false |
| hostAliases | Schema Registry pods host aliases | [] |
| podLabels | Extra labels for Schema Registry pods | {} |
| configuration | Specify content for schema-registry.properties. Auto-generated based on other parameters when not specified | {} |
| existingConfigmap | Name of existing ConfigMap with Schema Registry configuration | "" |
| log4j | Schema Registry Log4J Configuration (optional) | {} |
| existingLog4jConfigMap | Name of existing ConfigMap containing a custom log4j.properties file | "" |
| auth.tls.enabled | Enable TLS configuration to be used when a listener uses HTTPS | false |
| auth.tls.jksSecret | Existing secret containing the truststore and one keystore per Schema Registry replica | "" |
| auth.tls.keystorePassword | Password to access the keystore when it's password-protected | "" |
| auth.tls.truststorePassword | Password to access the truststore when it's password-protected | "" |
| auth.tls.clientAuthentication | Client authentication configuration | |

Note: the README for this chart is longer than the DockerHub length limit of 25000, so it has been trimmed. The full README can be found at https://techdocs.broadcom.com/us/en/vmware-tanzu/bitnami-secure-images/bitnami-secure-images/services/bsi-app-doc/apps-charts-schema-registry-index.html

Tag summary

Content type: Image
Digest: sha256:766c1ed4d
Size: 7.8 kB
Last updated: 8 months ago
