v0.21 release

Published on: 2021-02-27, Revised on: 2023-08-03

Announcing Knative v0.21 Release

A new version of Knative is now available across multiple components. Follow the instructions in the Installing Knative documentation for the respective component.

Highlights

  • Kubernetes minimum version has changed to v1.18
  • Serving now supports Istio 1.9 and Contour 1.12
  • Fix for DomainMapping when using Kourier with AutoTLS
  • The Eventing PingSource binary mode has breaking changes.
  • Eventing sinks now have the ability to know when a reply is expected; see the event reply header contract specification for more details.
  • The CLI kn 0.21.0 comes with some bug fixes and minor feature enhancements. It's mostly a polishing release. It is also the first release that brings two kn plugins to the Knative release train.
  • The Knative Operator now supports net-kourier
  • The Knative Operator now supports latest as a special version

Serving v0.21

🚨 Breaking or Notable

  • Kubernetes minimum version has changed to v1.18
  • GC v1 and Labeler v1 deprecated and removed from the code base
  • Webhooks certificates now use Ed25519 instead of RSA/2048 and have an expiry of one week (knative/pkg#1998)

πŸ’« New Features & Changes

  • Introduces autocreateClusterDomainClaim in the config-network config map. This allows DomainMappings to be safely used in shared clusters by disabling automatic ClusterDomainClaim creation. With this option set to "false", cluster administrators must explicitly delegate domain names to namespaces by creating a ClusterDomainClaim with an appropriate spec.Namespace set (see the sketch after this list). (#10537)
  • Domain mappings disallow mapping from cluster local domain names (generally domains under "cluster.local") (#10798)
  • Allow setting ReadOnlyRootFilesystem on the container's SecurityContext (#10560)
  • A container's readiness probe FailureThreshold & TimeoutSeconds now default to 3 and 1 respectively when a user opts into non-aggressive probing (i.e., PeriodSeconds > 1) (#10700)
  • Avoids implicitly adding an "Accept-Encoding: gzip" header to proxied requests if one was not already present. (#10691)
  • Gradual rollout can be set on individual Revisions using the serving.knative.dev/rolloutDuration annotation (#10561)
  • Support Istio 1.9 ([knative-extensions/net-istio#515](https://github.com/knative-extensions/net-istio/pull/515))
  • Support Contour 1.12 ([knative-extensions/net-contour#414](https://github.com/knative-extensions/net-contour/pull/414))
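
A minimal sketch of the ClusterDomainClaim delegation flow described above, assuming the config-network key name given in the release note and the networking.internal.knative.dev/v1alpha1 API group for ClusterDomainClaim; verify both against your cluster before use:

```bash
# Disable automatic ClusterDomainClaim creation (key name as given above;
# check your version's config-network ConfigMap for the exact key).
kubectl patch configmap config-network -n knative-serving \
  --type merge -p '{"data":{"autocreateClusterDomainClaim":"false"}}'

# Explicitly delegate a domain name to a namespace so a DomainMapping
# in that namespace is allowed to claim it.
cat <<EOF | kubectl apply -f -
apiVersion: networking.internal.knative.dev/v1alpha1
kind: ClusterDomainClaim
metadata:
  name: app.example.com        # the domain being delegated (placeholder)
spec:
  namespace: team-a            # namespace allowed to map this domain (placeholder)
EOF
```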

🐞 Bug Fixes

  • Fixes a problem with DomainMapping when working with AutoTLS and the Kourier challenge (#10811)
  • Fixed a bug where the activator's metrics could get stuck and thus scale to and from zero didn't work as expected. (#10729)
  • Fixes a race in Queue Proxy drain logic that could, in a very unlikely edge case, lead to the pre-stop hook not exiting even though draining has finished (#10781)
  • Avoid slow out-of-memory issue related to metrics (knative/pkg#2005)
  • Stop reporting reflector metrics since they were removed upstream (knative/pkg#2020)

Eventing v0.21

🚨 Breaking or Notable

  • BREAKING CHANGE: PingSource binary mode now sends the actual binary data in the event body instead of a base64-encoded form. ([#4851](https://github.com/knative/eventing/pull/4851), @eclipselu)
  • You need to run the storage migration tool after the upgrade to migrate pingsources.sources.knative.dev resources from v1beta1 to v1beta2. ([#4750](https://github.com/knative/eventing/pull/4750), @eclipselu)
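
A hedged sketch of running that migration, assuming the post-install job artifact that Eventing releases of this era shipped (the exact asset name should be verified against the v0.21.0 release page):

```bash
# Apply the storage-version migration job (asset name is an assumption;
# confirm it on the knative/eventing v0.21.0 release page).
kubectl apply -f https://github.com/knative/eventing/releases/download/v0.21.0/eventing-post-install-jobs.yaml

# Wait for the migration job to finish.
kubectl wait --for=condition=Complete job --all -n knative-eventing --timeout=300s
```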

πŸ’« New Features & Changes

  • Adding HorizontalPodAutoscaler and PodDisruptionBudget for the eventing webhook (#4792)
  • Add event reply header contract to the spec (#4560)
  • CloudEvent traces are now available for PingSource (#4877)
  • CloudEvents sent to dead-letter endpoints include an extension attribute called ce-knativedispatcherr, which contains encoded HTTP response error information from the final dispatch attempt. (#4760, @travis-minke-sap)
  • The message receiver supports customized liveness and readiness checks (#4730)
  • The imc-dispatcher service adds new trace span attributes to be consistent with broker-ingress.knative-eventing & broker-filter.knative-eventing services. The new attributes are messaging.destination, messaging.message_id, messaging.protocol and messaging.system (#4659)
  • Add Trigger.Delivery field which allows configuration of Delivery per Trigger. (#4654)
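
To illustrate the new per-Trigger delivery configuration, a hedged sketch assuming the eventing.knative.dev/v1 Trigger API; the subscriber and dead-letter Service names are placeholders:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger
  namespace: default
spec:
  broker: default
  delivery:                      # per-Trigger delivery settings
    retry: 3                     # number of redelivery attempts
    backoffPolicy: exponential
    backoffDelay: PT0.5S         # ISO 8601 duration for the initial backoff
    deadLetterSink:              # where undeliverable events end up
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: dlq-handler        # placeholder
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-handler        # placeholder
EOF
```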

🐞 Bug Fixes

  • Fix availability of zipkin traces for APIServerSource (#4842)
  • Fix bug where sometimes the pods were not deemed up during setup. (#4725, #4741)
  • Fix bug where v1beta1 would allow mutations to immutable fields. v1beta1 trigger.spec.broker is immutable. (#4843)

🧹 Clean up

  • The config-imc-event-dispatcher values are no longer configurable on the fly; if you need to change them, you must redeploy the dispatcher deployment (#4543)
  • PingSource: remove special handling of JSON data since events are always sent in binary mode. (#4858)
  • Cleanup channel duck types internals (#4749)

Eventing Extensions

Eventing RabbitMQ v0.21

🚨 Breaking or Notable

  • Kubernetes minimum version has changed to v1.18
    • Upgrade to v0.19.7 of k8s libraries. Minimum k8s version is now 1.18. (#213)

πŸ’« New Features & Changes

  • Support new releases of the RabbitMQ cluster operator: v1.0, v1.1, v1.2, and v1.3. (#204)

πŸ“– Documentation

  • Added user-facing documentation for the RabbitMQ source (#201)

🧹 Clean up

  • Use Go 1.15 in kind e2e tests. (#196)
  • Use Go 1.15 in go.mod. (#215)
  • Update the comments in cmd/failer/main.go to match reality. (#210)
  • Use scripts from hack to determine pod readiness. (#209)

Eventing Kafka Source, Channel v0.21

πŸ’« New Features & Changes

  • Adds a new optional field named sasltype to the default kafka-secret to enable Kafka SASL methods other than PLAIN; SCRAM-SHA-256 and SCRAM-SHA-512 are supported (see the first sketch after this list). (#332)
  • Adding tls.enabled flag for public cert usage and allowing skipping CA/User certs and key (#359)
  • KafkaSource and KafkaChannel now use the config-leader-election ConfigMap for their configs by default (#231)
  • Removed support for pooling Azure EventHub Namespaces; only a single Namespace/Authentication is now supported, which limits Azure EventHub usage to their constrained number of EventHubs (Kafka topics). (#297)
  • The "distributed" KafkaChannel configuration YAML now includes the KafkaChannel WebHook which provides conversion. (#187)
  • The KafkaSource is now installed in the knative-eventing namespace, and the old controller in knative-sources is scaled to 0 (#224)
  • Add a new alternative KafkaSource implementation in which a single global StatefulSet handles all KafkaSource instances. (#186)
  • It is now possible to define Sarama config defaults for KafkaSource in the config-kafka configmap with a sarama field (see the second sketch after this list). (#337)
  • It is now possible to define Sarama config defaults for consolidated channel in config-kafka configmap with a sarama field. (#305)
  • KafkaChannel CustomResourceDefinition now uses apiextensions.k8s.io/v1 APIs (#132)
  • The KafkaSource scale subresource can now be used to scale up and down the underlying deployment (#138)
  • Defaulting the connection args to a sane value (#353)
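
A hedged sketch of creating the kafka-secret with the new sasltype field; the username and password key names follow common eventing-kafka conventions but are assumptions to verify against your setup:

```bash
kubectl create secret generic kafka-secret -n knative-eventing \
  --from-literal=sasltype=SCRAM-SHA-512 \
  --from-literal=username=my-sasl-user \
  --from-literal=password=my-sasl-password
```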
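
And a hedged sketch of Sarama defaults under the new sarama field in config-kafka; the nested keys mirror Sarama's Go config structure, and the exact supported subset should be verified for your release (merge this into your existing config-kafka rather than replacing it):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-kafka
  namespace: knative-eventing
data:
  sarama: |
    Net:
      TLS:
        Enable: true                   # example default applied to all clients
    Metadata:
      RefreshFrequency: 300000000000   # 5m, expressed in nanoseconds (Go duration)
EOF
```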

🐞 Bug Fixes

  • Fixed a bug in the consolidated KafkaChannel where subscriptions would show up in the channel's status.subscribers before the dispatcher was ready to dispatch messages for those subscribers.
    • The consolidated KafkaChannel dispatcher's horizontal scaling now works seamlessly with reconciler leader election. (#182)
  • Fix concurrent modification of consumer groups map, which causes undefined behaviours while running reconciliation in the dispatcher (#352, @slinkydeveloper)
  • Fix crash in Kafka consumer when a rebalance occurs (#263, @lionelvillard)
  • Fix race on error channel in consumer factory (#364)
  • The KafkaSource dispatchers now expose metrics and profiling information (#221)
  • The consolidated KafkaChannel now relies by default on SyncProducer for safer event production. (#181)

Eventing Kafka Broker v0.21

πŸ’« New Features & Changes

🐞 Bug Fixes

  • Consume topic from the earliest offset (#557)
  • Fix offset management (#557)
  • Data plane reconciler handles failed reconciliation. (#568)
  • Fix TimeoutException and DnsNameResolverTimeoutException. (#539)

Client v0.21

🚨 Breaking or Notable

Revision naming

In this version, kn changes the default of how revisions are named. Up to now, the name was selected by the client itself, leveraging the "bring-your-own" (BYO) revision name support of Knative serving.

However, it turned out that this mode has several severe drawbacks:

  • If you create a service with client-side revision naming, you have to provide a new revision name on every update. This is especially tedious when using clients other than kn, such as editing the resource directly in the cluster or using tools like the OpenShift Developer console. Assuming that kn is the only client in use is too bold an assumption.
  • SinkBindings do not work with BYO revision names
  • kn service apply can't use client-generated revision names, so kn service apply ignores the --revision-name option and always uses server-side generated revision names. The same is true if you want to use kubectl apply after you have created a service in BYO revision name mode with kn.
  • Revision names are random and, unlike server-side generated revision names, do not reflect a generational order
  • There are issues with a new revision being created when a service is updated with the same image name again (see #398)

Please refer to issue #1144 (and issues referencing this issue) for more details about the reasoning for this breaking change.

ACTION REQUIRED

If you rely on client-side revision naming, you have to add --revision-name {{.Service}}-{{.Random 5}}-{{.Generation}} to kn service create to get back the previous default behaviour. However, in most cases you should not worry about whether the revision names are created by kn or by the Knative serving controller.
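
For example, to restore the old client-side naming on a new service (the service name and image are placeholders):

```bash
kn service create hello \
  --image gcr.io/knative-samples/helloworld-go \
  --revision-name "{{.Service}}-{{.Random 5}}-{{.Generation}}"
```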

In case of issues with this change, please let us know and we will fix it asap. We are committed to supporting you with any issues caused by this change.

πŸ’« New Features & Changes

  • Options --context and --cluster allow you to select the parameters for connecting to your Kubernetes cluster. These options work the same as for kubectl (see the example after this list).
  • Some cleanup of cluster-specific runtime information when doing a kn export.
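
For instance (the context and cluster names are placeholders):

```bash
# Use a specific kubeconfig context, as with kubectl
kn service list --context my-context

# Or address a specific cluster entry from your kubeconfig
kn service list --cluster my-cluster
```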

CLI Plugins

πŸ’« New Features & Changes

CLI kn Plugins jump on the release train

With release v0.21, Knative also ships its first set of kn plugins, which are aligned with respect to their dependencies so that they can be easily inlined.

The plugins included in version v0.21 are kn-plugin-admin and kn-plugin-source-kafka.

To give those plugins a try, just download them and put the binaries into your execution path. You then get help with kn admin --help and kn source kafka --help, respectively.

Operator v0.21

πŸ’« New Features & Changes

The latest network ingress v0.21.0 artifacts, bundled within the image of this operator, include net-istio.yaml, net-contour.yaml and kourier.yaml.

  • Allow configuring the Kourier gateway service type (#470)
  • Adds support for extension custom manifests (#468)
  • Add HA support for autoscaler (#480)
  • Support spec.deployments to override the configuration of system deployments (see the sketch after this list) (#472)
  • Add HA eventing master (#444)
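
A hedged sketch of a spec.deployments override on a KnativeServing resource; the override shape (per-deployment name plus per-container resources) is assumed from the operator's deployment-override convention and should be verified against your operator version:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  deployments:
  - name: webhook              # system deployment to override
    resources:
    - container: webhook       # container within that deployment
      requests:
        cpu: 300m              # illustrative values
        memory: 60Mi
EOF
```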

🐞 Bug Fixes

  • Transition to the new upgrade framework for upgrade tests (#437)
  • Add ingress configuration support (#312)

🧹 Clean up

  • Add latest as a special version supported by the operator (see the example after this list) (#443)
  • Rewrite the tests for serving and eventing upgrade (#441)
  • Allow specifying the build platform for test images (#451)
  • Bump a few assorted dependencies to their latest versions (#463)
  • Align all used YAML modules (#462)
  • Move the Istio gateway's override setting into spec.ingress.istio (#469)
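
For example, to have the operator track the latest available release (apiVersion assumed to be operator.knative.dev/v1alpha1 for this era):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  version: latest   # special version resolved by the operator
EOF
```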

Thank you contributors v0.21

Learn more

Knative is an open source project that anyone in the community can use, improve, and enjoy. We'd love you to join us!
