Knative Eventing

Knative Eventing is a system designed to address a common need for cloud-native development: it provides composable primitives to enable late-binding event sources and event consumers.

Design overview

Knative Eventing is designed around the following goals:

  1. Knative Eventing services are loosely coupled. These services can be developed and deployed independently, on and across a variety of platforms (for example Kubernetes, VMs, SaaS, or FaaS).
  2. Event producers and event consumers are independent. Any producer (or source), can generate events before there are active event consumers that are listening. Any event consumer can express interest in an event or class of events, before there are producers that are creating those events.
  3. Other services can be connected to the Eventing system. These services can perform the following functions:
    • Create new applications without modifying the event producer or event consumer.
    • Select and target specific subsets of the events from their producers.
  4. Ensure cross-service interoperability. Knative Eventing is consistent with the CloudEvents specification that is developed by the CNCF Serverless WG.

Event consumers

To enable delivery to multiple types of Services, Knative Eventing defines two generic interfaces that can be implemented by multiple Kubernetes resources:

  1. Addressable objects are able to receive and acknowledge an event delivered over HTTP to an address defined in their status.address.url field. As a special case, the core Kubernetes Service object also fulfils the Addressable interface.
  2. Callable objects are able to receive an event delivered over HTTP and transform the event, returning 0 or 1 new events in the HTTP response. These returned events may be further processed in the same way that events from an external event source are processed.

Event brokers and triggers

As of v0.5, Knative Eventing defines Broker and Trigger objects to make it easier to filter events.

A Broker provides a bucket of events which can be selected by attribute. It receives events and forwards them to subscribers defined by one or more matching Triggers.

A Trigger describes a filter on event attributes which should be delivered to an Addressable. You can create as many Triggers as necessary.

Broker Trigger Diagram
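A Trigger manifest might look like the following minimal sketch. The apiVersion, the filter shape, and all names here are illustrative assumptions (they varied across early releases), not taken from this document:

```yaml
apiVersion: eventing.knative.dev/v1alpha1   # illustrative; check your installed release
kind: Trigger
metadata:
  name: my-service-trigger
spec:
  broker: default                    # select events from this Broker's bucket
  filter:
    sourceAndType:
      type: dev.knative.foo.bar      # only deliver events with this CloudEvents type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: my-service               # the Addressable that receives matching events
```

Creating several Triggers against the same Broker fans the filtered event stream out to several subscribers independently.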

Event registry

As of v0.6, Knative Eventing defines an EventType object to make it easier for consumers to discover the types of events they can consume from the different Brokers.

The registry consists of a collection of event types. The event types stored in the registry contain all the information required for a consumer to create a Trigger without resorting to some other out-of-band mechanism.
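An EventType entry in the registry might look like the following sketch; the apiVersion, names, and repository URL are illustrative assumptions:

```yaml
apiVersion: eventing.knative.dev/v1alpha1    # illustrative; check your installed release
kind: EventType
metadata:
  name: dev.knative.source.github.push       # illustrative registry entry name
spec:
  type: dev.knative.source.github.push       # CloudEvents type a consumer can filter on
  source: https://github.com/my-org/my-repo  # origin of the events (illustrative)
  broker: default                            # Broker these events are available on
```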

To learn how to use the registry, see the Event Registry documentation.

Event channels and subscriptions

Knative Eventing also defines an event forwarding and persistence layer, called a Channel. Each channel is a separate Kubernetes Custom Resource. Events are delivered to Services or forwarded to other channels (possibly of a different type) using Subscriptions. This allows message delivery in a cluster to vary based on requirements, so that some events might be handled by an in-memory implementation while others would be persisted using Apache Kafka or NATS Streaming.
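A Channel plus a Subscription that delivers its events to a Service might be sketched as follows; the apiVersions (the messaging group moved between early releases) and all names are illustrative assumptions:

```yaml
apiVersion: messaging.knative.dev/v1alpha1   # illustrative; group varied by release
kind: InMemoryChannel                        # in-memory implementation; swap for Kafka, NATS, etc.
metadata:
  name: my-channel
---
apiVersion: messaging.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: my-subscription
spec:
  channel:                                   # the Channel to receive events from
    apiVersion: messaging.knative.dev/v1alpha1
    kind: InMemoryChannel
    name: my-channel
  subscriber:                                # where the events are delivered
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: my-service
```

Because the Subscription references the Channel by kind and name, swapping the persistence strategy means changing only the Channel resource, not the consumers.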

See the List of Channel implementations.

Higher Level eventing constructs

There are cases where you may want to use a set of cooperating functions together, and for those use cases Knative Eventing provides two additional resources:

  1. Sequence provides a way to define an in-order list of functions.
  2. Parallel provides a way to define a list of branches for events.
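A Sequence might be sketched as follows; the apiVersion (Sequence moved API groups across releases) and the step names are illustrative assumptions:

```yaml
apiVersion: messaging.knative.dev/v1alpha1   # illustrative; check your installed release
kind: Sequence
metadata:
  name: my-sequence
spec:
  channelTemplate:                           # Channel kind used to wire the steps together
    apiVersion: messaging.knative.dev/v1alpha1
    kind: InMemoryChannel
  steps:                                     # Callables invoked in order; each step's
    - ref:                                   # response event feeds the next step
        apiVersion: serving.knative.dev/v1alpha1
        kind: Service
        name: first-step
    - ref:
        apiVersion: serving.knative.dev/v1alpha1
        kind: Service
        name: second-step
```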

Future design goals

The focus for the next Eventing release will be to enable easy implementation of event sources. Sources manage registration and delivery of events from external systems using Kubernetes Custom Resources. Learn more about Eventing development in the Eventing working group.


Installation

Knative Eventing currently requires Knative Serving installed with either Istio version >=1.0, Contour version >=1.1, or Gloo version >=0.18.16. Follow the instructions to install on the platform of your choice.


Architecture

The eventing infrastructure supports two forms of event delivery at the moment:

  1. Direct delivery from a source to a single Service (an Addressable endpoint, including a Knative Service or a core Kubernetes Service). In this case, the Source is responsible for retrying or queueing events if the destination Service is not available.
  2. Fan-out delivery from a source or Service response to multiple endpoints using Channels and Subscriptions. In this case, the Channel implementation ensures that messages are delivered to the requested destinations and should buffer the events if the destination Service is unavailable.

Control plane object model

The actual message forwarding is implemented by multiple data plane components which provide observability, persistence, and translation between different messaging protocols.

Data plane implementation


Sources

Each Source is a separate Kubernetes custom resource. This allows each type of Source to define the arguments and parameters needed to instantiate it. Knative Eventing defines the following Sources in its API group. The types below are declared in Go format, but may be expressed as simple lists, and so on, in YAML. All Sources are part of the sources category, so you can list all existing Sources with kubectl get sources. The currently-implemented Sources are described below.

In addition to the core sources (explained below), there are other sources that you can install.

If you need a Source not covered by the available Source implementations, there is a tutorial on writing your own Source with kubebuilder as well as an extended tutorial on writing a Source with Receive Adapter.

If your code needs to send events as part of its business logic and doesn't fit the model of a Source, consider feeding events directly to a Broker.


KubernetesEventSource

The KubernetesEventSource fires a new event each time a Kubernetes Event is created or updated.

Spec fields:

  • namespace: string The namespace to watch for events.
  • serviceAccountName: string The name of the ServiceAccount used to connect to the Kubernetes apiserver.
  • sink: ObjectReference A reference to the object that should receive events.

See the Kubernetes Event Source example.
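The Spec fields above might be combined into a manifest like this minimal sketch; the apiVersion and all names are illustrative assumptions:

```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1  # illustrative; check your installed release
kind: KubernetesEventSource
metadata:
  name: k8s-events
spec:
  namespace: default               # watch Kubernetes Events in this namespace
  serviceAccountName: events-sa    # hypothetical ServiceAccount allowed to read Events
  sink:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Broker
    name: default                  # deliver the resulting events to this Broker
```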


GitHubSource

The GitHubSource fires a new event for selected GitHub event types.

Spec fields:

  • ownerAndRepository: string The GitHub owner/org and repository to receive events from. The repository may be left off to receive events from an entire organization.
  • eventTypes: []string A list of event types in “Webhook event name” format (lower_case).
  • accessToken.secretKeyRef: SecretKeySelector containing a GitHub access token for configuring a GitHub webhook. One of this or secretToken must be set.
  • secretToken.secretKeyRef: SecretKeySelector containing a GitHub secret token for configuring a GitHub webhook. One of this or accessToken must be set.
  • serviceAccountName: string The name of the ServiceAccount to run the container as.
  • sink: ObjectReference A reference to the object that should receive events.
  • githubAPIURL: string Optional field to specify the base URL for API requests. Defaults to the public GitHub API if not specified, but can be set to a domain endpoint to use with GitHub Enterprise. This base URL should always be specified with a trailing slash.

See the GitHub Source example.
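A GitHubSource manifest might be sketched as follows; the apiVersion, the Secret name and keys, and the repository are illustrative assumptions:

```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1  # illustrative; check your installed release
kind: GitHubSource
metadata:
  name: github-source
spec:
  eventTypes:
    - pull_request                 # "Webhook event name" format (lower_case)
  ownerAndRepository: my-org/my-repo
  accessToken:
    secretKeyRef:
      name: githubsecret           # hypothetical Secret holding the tokens
      key: accessToken
  secretToken:
    secretKeyRef:
      name: githubsecret
      key: secretToken
  sink:
    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    name: github-event-display     # hypothetical Knative Service receiving the events
```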


GcpPubSubSource

The GcpPubSubSource fires a new event each time a message is published on a Google Cloud Platform PubSub topic.

Spec fields:

  • googleCloudProject: string The GCP project ID that owns the topic.
  • topic: string The name of the PubSub topic.
  • serviceAccountName: string The name of the ServiceAccount used to access the gcpCredsSecret.
  • gcpCredsSecret: ObjectReference A reference to a Secret which contains a GCP refresh token for talking to PubSub.
  • sink: ObjectReference A reference to the object that should receive events.

See the GCP PubSub Source example.
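A GcpPubSubSource manifest might be sketched as follows; the apiVersion, project, topic, and Secret details are all illustrative assumptions:

```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1  # illustrative; check your installed release
kind: GcpPubSubSource
metadata:
  name: gcppubsub-source
spec:
  googleCloudProject: my-gcp-project  # GCP project that owns the topic
  topic: my-topic
  serviceAccountName: gcppubsub-sa    # hypothetical ServiceAccount able to read the Secret
  gcpCredsSecret:                     # Secret containing a GCP refresh token
    name: google-cloud-key
    key: key.json
  sink:
    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    name: event-display
```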


AwsSqsSource

The AwsSqsSource fires a new event each time a message is published to an AWS SQS queue.

Spec fields:

  • queueURL: URL of the SQS queue to pull events from.
  • awsCredsSecret: credential to use to poll the AWS SQS queue.
  • sink: ObjectReference A reference to the object that should receive events.
  • serviceAccountName: string The name of the ServiceAccount used to access the awsCredsSecret.
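The Spec fields above might be combined into a manifest like this sketch; the apiVersion, the queue URL, and the Secret details are illustrative assumptions:

```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1  # illustrative; check your installed release
kind: AwsSqsSource
metadata:
  name: awssqs-source
spec:
  queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue  # hypothetical queue
  awsCredsSecret:                   # Secret holding AWS credentials for polling
    name: aws-credentials
    key: credentials
  serviceAccountName: awssqs-sa     # hypothetical ServiceAccount able to read the Secret
  sink:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Broker
    name: default
```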


ContainerSource

The ContainerSource will instantiate a container image which can generate events until the ContainerSource is deleted. This may be used (for example) to poll an FTP server for new files or generate events at a set time interval.

Spec fields:

  • image (required): string A docker image of the container to be run.
  • args: []string Command-line arguments. If no --sink flag is provided, one will be added and filled in with the DNS address of the sink object.
  • env: map[string]string Environment variables to be set in the container.
  • serviceAccountName: string The name of the ServiceAccount to run the container as.
  • sink: ObjectReference A reference to the object that should receive events.
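A ContainerSource manifest might be sketched as follows; the apiVersion, image, and names are illustrative assumptions (the --sink flag is filled in automatically, per the args field above):

```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1  # illustrative; check your installed release
kind: ContainerSource
metadata:
  name: heartbeat-source
spec:
  image: docker.io/example/heartbeats:latest  # hypothetical image that emits events
  args:
    - --period=1                    # container-specific flag; --sink is injected if absent
  env:
    POD_NAMESPACE: default          # environment variables set in the container
  serviceAccountName: heartbeat-sa  # hypothetical ServiceAccount to run the container as
  sink:
    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    name: event-display
```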


CronJobSource

The CronJobSource fires events based on a given Cron schedule.

Spec fields:

  • schedule (required): string A Cron format string, such as 0 * * * * or @hourly.
  • data: string Optional data sent to downstream receiver.
  • serviceAccountName: string The name of the ServiceAccount to run the container as.
  • sink: ObjectReference A reference to the object that should receive events.

See the Cronjob Source example.
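A CronJobSource manifest might be sketched as follows; the apiVersion, payload, and names are illustrative assumptions:

```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1  # illustrative; check your installed release
kind: CronJobSource
metadata:
  name: cronjob-source
spec:
  schedule: "*/2 * * * *"              # fire every two minutes
  data: '{"message": "Hello world!"}'  # optional payload sent downstream
  sink:
    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    name: event-display
```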


KafkaSource

The KafkaSource reads events from an Apache Kafka cluster and passes them to a Knative Serving application so that they can be consumed.

Spec fields:

  • consumerGroup: string Name of a Kafka consumer group.
  • bootstrapServers: string Comma separated list of hostname:port pairs for the Kafka Broker.
  • topics: string Name of the Kafka topic to consume messages from.
  • net: Optional network configuration.
    • sasl: Optional SASL authentication configuration.
      • enable: boolean If true, use SASL for authentication.
      • user.secretKeyRef: SecretKeySelector containing the SASL username to use.
      • password.secretKeyRef: SecretKeySelector containing the SASL password to use.
    • tls: Optional TLS configuration.
      • enable: boolean If true, use TLS when connecting.
      • cert.secretKeyRef: SecretKeySelector containing the client certificate to use.
      • key.secretKeyRef: SecretKeySelector containing the client key to use.
      • caCert.secretKeyRef: SecretKeySelector containing a server CA certificate to use when verifying the server certificate.

See the Kafka Source example.
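A KafkaSource manifest without SASL or TLS might be sketched as follows; the apiVersion, bootstrap address, topic, and names are illustrative assumptions:

```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1  # illustrative; check your installed release
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  consumerGroup: knative-group
  bootstrapServers: my-cluster-kafka-bootstrap.kafka:9092  # hypothetical hostname:port pair
  topics: knative-demo-topic
  net:
    sasl:
      enable: false               # set true and add user/password secretKeyRefs for SASL
    tls:
      enable: false               # set true and add cert/key/caCert secretKeyRefs for TLS
  sink:
    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    name: event-display
```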


CamelSource

A CamelSource is an event source that can represent any existing Apache Camel component that provides a consumer side, and enables publishing events to an addressable endpoint. Each Camel endpoint has the form of a URI where the scheme is the ID of the component to use.

CamelSource requires Camel-K to be installed into the current namespace.

Spec fields:

  • source: information on the kind of Camel source that should be created.
    • component: the default kind of source, enables creating an EventSource by configuring a single Camel component.
      • uri: string contains the Camel URI that should be used to push events into the target sink.
      • properties: key/value map contains Camel global options or component specific configuration. Options are available in the documentation of each existing Apache Camel component.
  • serviceAccountName: string an optional service account that can be used to run the source pod.
  • image: string an optional base image to use for the source pod, mainly for development purposes.

See the CamelSource example.
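A CamelSource using the Camel timer component might be sketched as follows; the apiVersion and names are illustrative assumptions:

```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1  # illustrative; check your installed release
kind: CamelSource
metadata:
  name: camel-timer-source
spec:
  source:
    component:
      uri: timer:tick?period=3s   # scheme "timer" selects the Camel timer component
  sink:
    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    name: event-display
```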

Configuration


  • Default Channels provide a way to choose the persistence strategy for Channels across the cluster.