diff --git a/cncf/CNCF project - GTR.md b/cncf/CNCF project - GTR.md
new file mode 100644
index 0000000..972e71f
--- /dev/null
+++ b/cncf/CNCF project - GTR.md
@@ -0,0 +1,350 @@
+
+
+# General Technical Review - NetObserv / Sandbox
+
+- **Project:** NetObserv
+- **Project Version:** 1.11 and above
+- **Website:** https://netobserv.io/
+- **Date Updated:** 2026-02-10
+- **Template Version:** v1.0
+- **Description:**
+
+NetObserv is a set of components used to observe network traffic by generating NetFlows from eBPF agents with zero instrumentation, enriching those flows using a Kubernetes-aware configurable pipeline, exporting them in various ways (logs, metrics, Kafka, IPFIX...), and finally providing a comprehensive visualization tool for making sense of that data, a network health dashboard, and a CLI. Those components are mainly designed to be deployed in Kubernetes via an integrated Operator, although they can also be used standalone.
+
+The enriched NetFlows consist of basic 5-tuple information (IPs, ports…), metrics (bytes, packets, drops, latency…), kube metadata (pods, namespaces, services, owners), cloud data (zones), CNI data (network policy events), DNS (codes, qname) and more.
+
+The Network Health dashboard comes with its own set of health information derived from NetObserv data, and can also integrate data from other or third-party components, or customized data from users. An API is also provided for users to fully customize the generated metrics for their own use (e.g. for customized alerts).
+
+The CLI is a separate tool, independent from the Operator, that provides similar functionality tailored for on-demand monitoring (as opposed to 24/7) and adds packet capture (pcap) functionality.
+
+NetObserv is largely CNI-agnostic, although some specific features can relate to a particular CNI (e.g. getting network events from ovn-kubernetes).
+
+
+## Day 0 - Planning Phase
+
+### Scope
+
+ * Describe the roadmap process, how scope is determined for mid to long term features, as well as how the roadmap maps back to current contributions and maintainer ladder?
+
+NetObserv is the upstream of Red Hat [Network Observability](https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/network_observability/index) for OpenShift. As such, a large part of the roadmap comes from the requirements on that downstream product, while the upstream benefits from it equally (there are no downstream-only features).
+
+TBC...
+
+ * Describe the target persona or user(s) for the project?
+
+The project targets both cluster administrators and project teams. Cluster administrators have a cluster-wide view over all the network traffic, the full topology, and access to metrics and alerts. They can run packet captures and configure the cluster-scoped flow collection process.
+
+Through multi-tenancy, project teams have access to a subset of the traffic and the related topology. They have limited configuration options, such as per-namespace sampling or traffic flagging.
+
+ * Explain the primary use case for the project. What additional use cases are supported by the project?
+
+Observing the network runtime traffic with different levels of granularity and aggregation, receiving network health information such as saturation, degraded latency, DNS issues, etc. Troubleshooting network issues, narrowing down to specific pods or services, deep-diving into netflow data or pcap. Receiving alerts.
+
+With the OVN-Kubernetes CNI, network policy troubleshooting and network isolation (UDN) visualization.
+
+ * Explain which use cases have been identified as unsupported by the project.
+
+Currently, network policy troubleshooting with CNIs other than OVN-Kubernetes is not supported.
+
+L7 observability is not planned at this time (no insight into HTTP-specific data such as error codes or URLs; NetObserv operates at a lower level).
+
+ * Describe the intended types of organizations who would benefit from adopting this project. (i.e. financial services, any software manufacturer, organizations providing platform engineering services)?
+
+All types of organizations may benefit from network observability.
+
+ * Please describe any completed end user research and link to any reports.
+
+### Usability
+
+* How should the target personas interact with your project?
+
+Configuration is done entirely through the CRD APIs, managed by a Kubernetes operator. It is GitOps-friendly. A web console is provided for network traffic and network health visualization. Metrics and alerts are provided for Prometheus, meaning that users can leverage their existing tooling if they already have it. A command-line interface tool is also provided, independently from the operator, allowing users to troubleshoot the network from the command line.
+
+* Describe the user experience (UX) and user interface (UI) of the project.
+
+The provided web console offers two views: Network Traffic (flows visualization) and Network Health (health rules and alerts visualization). The Network Traffic view itself consists of three subviews:
+ - an overview of the traffic, showing various charts
+ - a table displaying a flat list of network flows
+ - a topology graph
+
+In all these views, traffic can be filtered by any data (e.g. by pod, namespace, IP, port, drop cause, DNS error, etc.)
+
+In the traffic overview and topology, traffic can be aggregated at different levels (e.g. per namespace, per cloud availability zone, etc.)
+
+Special attention is paid to UX details, such as quickly filtering on a displayed element or stepping into an aggregated topology element.
+
+* Describe how this project integrates with other projects in a production environment.
+
+NetObserv can generate many metrics, ingested by Prometheus, and alerting rules for Alertmanager. Users who already run them can leverage their existing setup.
+
+For comprehensive observability, NetObserv can also send the network flows to Grafana Loki, and/or export them to other systems by different means: using the IPFIX standard, the OpenTelemetry protocol (as logs or as metrics), or a Kafka broker. These exporting options allow integration with many different systems (Splunk, Elasticsearch, etc.), as sketched below.
+
+In OpenShift, the web console comes as a plugin for the OpenShift Console, ensuring a smooth integration.
+
+In the future, we may investigate other UI integrations, such as with Headlamp.
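+
+As an illustration of these exporting options, here is a sketch of what additional exporters could look like in the `FlowCollector` configuration (the main CRD, described further below). Field names and values are assumptions for illustration only; the FlowCollector API reference is authoritative.
+
+```bash
+# Illustrative excerpt only: exporter field names/values are assumptions, check
+# the FlowCollector API reference for the exact schema before applying.
+cat <<'EOF' > flowcollector-exporters-excerpt.yaml
+apiVersion: flows.netobserv.io/v1beta2
+kind: FlowCollector
+metadata:
+  name: cluster
+spec:
+  exporters:
+    - type: Kafka
+      kafka:
+        address: my-kafka-bootstrap.netobserv:9092   # hypothetical broker address
+        topic: network-flows-export                  # hypothetical topic
+    - type: IPFIX
+      ipfix:
+        targetHost: ipfix-collector.example.com      # hypothetical IPFIX collector
+        targetPort: 4739
+EOF
+# This excerpt would be merged into the existing FlowCollector rather than applied as-is.
+```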
+
+### Design
+
+ * Explain the design principles and best practices the project is following.
+
+The project design principles and best practices are broadly shared with other Red Hat products. The development philosophy is "upstream first", meaning that there is no hidden code or feature that only downstream users would get. In fact, there is no downstream-specific repository at all.
+
+All contributions happen on our GitHub repositories, which are public, and go through code review, automated testing, and generally manual testing. Special attention is given to performance: regressions are tracked with several tools, based on kube-burner.
+
+We expect a reasonably high code quality standard, without being too picky on style matters. The goal is not to discourage new contributors.
+
+All architectural decisions are made with care and must be weighed against their drawbacks; we expect a thoughtful discussion of the pros and cons. One aspect that is often overlooked at first is the impact on maintenance and support workloads.
+
+ * Outline or link to the project’s architecture requirements? Describe how they differ for Proof of Concept, Development, Test and Production environments, as applicable.
+
+??
+
+ * Define any specific service dependencies the project relies on in the cluster.
+
+Both the NetObserv operator and the `flowlogs-pipeline` component interact with the Kube API server to watch resources and, for the operator, to create or update them.
+
+As mentioned before, NetObserv has dependencies on Loki and Prometheus. NetObserv does not install either of them; they must be installed separately (except for Loki when configured in "demo" mode). The provided Helm chart includes those dependencies as optional, to simplify the installation, but they remain unmanaged. Using Loki is not required, though: it can be disabled in the configuration, in which case NetObserv relies solely on Prometheus metrics, at the cost of some precision (data in Prometheus is more aggregated).
+
+Optionally, Kafka can be used at a pre-ingestion stage for a production-grade, high-availability deployment (e.g. using Strimzi).
+
+Finally, several services require TLS certificates, which are generally provided by cert-manager or OpenShift Service Certificates.
+
+ * Describe how the project implements Identity and Access Management.
+
+On the ingestion side, there is no Identity and Access Management other than the components' service accounts themselves, associated with RBAC permissions.
+
+On the consuming side, NetObserv does not implement Identity and Access Management by itself; however, all queries run against Loki or Prometheus forward the Authorization header, delegating this aspect to those backends. In a production-grade environment, Thanos and the Loki Operator can be used to enable multi-tenancy. This is how it is implemented in OpenShift.
+
+ * Describe how the project has addressed sovereignty.
+
+Being fully open source addresses independence.
+
+NetObserv does not store any data directly; this is delegated to Loki and/or Prometheus and the aforementioned exporting methods. All these options offer considerable flexibility and interoperability in terms of storage, which should not cause any independence blockers.
+
+ * Describe any compliance requirements addressed by the project.
+
+??
+
+Downstream builds are FIPS compliant (those build recipes are open-source as well).
+
+ * Describe the project’s High Availability requirements.
+
+High availability can be implemented by using the Kafka deployment model (e.g. with Strimzi) and an autoscaler for the `flowlogs-pipeline` component. Loki and Prometheus should be configured for high availability as well (this aspect is not managed by NetObserv itself; Thanos and the Loki Operator can serve this purpose).
+
+ * Describe the project’s resource requirements, including CPU, Network and Memory.
+
+Resource requirements highly depend on the cluster network topology: how many nodes and pods you have, how much traffic, etc.
+While eBPF ensures a minimal impact on workload performance, the generated network flows can represent a significant amount of data, which impacts node CPU, memory and bandwidth. Some [recommendations](https://github.com/netobserv/network-observability-operator/blob/main/config/descriptions/ocp.md#resource-considerations) are provided, but your mileage may and will vary. Some statistics are documented [here](https://docs.redhat.com/en/documentation/openshift_container_platform/4.21/html/network_observability/configuring-network-observability-operators#network-observability-resource-recommendations_network_observability).
+
+Mitigating high resource requirements can be done in several ways, such as increasing the sampling interval, adding filters, or considering whether or not to use Loki. More information [here](https://github.com/netobserv/network-observability-operator/tree/main?tab=readme-ov-file#configuration).
+
+ * Describe the project’s storage requirements, including its use of ephemeral and/or persistent storage.
+
+Storage is not directly managed by NetObserv, and is to be configured via Prometheus and/or Loki. Retention (TTL) is important to consider. Loki is often configured with an S3 storage backend, but other options exist, such as ODF. Just like memory, storage requirements highly depend on the cluster network topology, and can be mitigated in the same ways as mentioned above.
+
+ * Please outline the project’s API Design:
+
+NetObserv defines several APIs:
+- The [FlowCollector CRD](https://github.com/netobserv/network-observability-operator/blob/main/docs/FlowCollector.md) contains the main, cluster-wide configuration for NetObserv.
+- The [FlowMetric CRD](https://github.com/netobserv/network-observability-operator/blob/main/docs/FlowMetric.md) allows users to entirely customize metrics derived from network flows.
+- The [FlowCollectorSlice CRD](https://github.com/netobserv/network-observability-operator/blob/main/docs/FlowCollectorSlice.md) allows cluster admins to delegate some of the configuration to project teams.
+- The [flows format reference](https://github.com/netobserv/network-observability-operator/blob/main/docs/flows-format.adoc) describes the structure and content of a network flow, which can be consumed in various ways.
+- Additionally, [there is some documentation](https://github.com/netobserv/network-observability-operator/blob/main/docs/HealthRules.md#creating-your-own-rules-that-contribute-to-the-health-dashboard) on how users can leverage the Network Health dashboard with customized metrics and alerts, involving a less formal API.
+
+The project CRDs follow standard Kubernetes API conventions, as well as the OpenShift ones on a best-effort basis. Deviating from them is not impossible but must be justified.
+
+The project is designed to work well with minimal configuration. This is especially true in OpenShift, thanks to its opinionated nature, but less true in other environments.
+
+The default configuration is designed to work well on small to mid-sized clusters, i.e. between roughly 5 and 50 nodes, with a default sampling interval set to 50 in order to limit resource usage (as opposed to an interval of 1, which would capture all the traffic). On bigger cluster topologies, careful tuning is recommended.
+
+A best effort is made to achieve security by default, but this sometimes depends too much on the environment. For instance, while a network policy is installed by default in OpenShift, it is not when running in a different environment, as this may break with some CNIs. In that case, enabling the network policy must be done explicitly, or the user can configure their own policy.
+
+Loki must be configured according to its installation, disabled, or enabled in "demo" mode. The Prometheus querier URL must be configured. It is recommended to enable the embedded network policy, or to install one. In OpenShift, Prometheus and the network policy are enabled and configured automatically. A minimal configuration sketch is shown below.
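+
+To make the defaults above concrete, here is a minimal sketch of what a `FlowCollector` could look like. Field names and values are illustrative assumptions; the FlowCollector reference linked above is authoritative.
+
+```bash
+# Minimal, illustrative FlowCollector (field names/values are assumptions; refer
+# to the FlowCollector API reference for the authoritative schema).
+kubectl apply -f - <<'EOF'
+apiVersion: flows.netobserv.io/v1beta2
+kind: FlowCollector
+metadata:
+  name: cluster
+spec:
+  namespace: netobserv
+  deploymentModel: Direct   # "Kafka" for production-grade, high-availability setups
+  agent:
+    type: eBPF
+    ebpf:
+      sampling: 50          # the default; 1 captures all traffic, at a higher cost
+  loki:
+    enable: true            # set to false to rely solely on Prometheus metrics
+EOF
+```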
+
+ * Describe any new or changed API types and calls - including to cloud providers - that will result from this project being enabled and used
+ * Describe compatibility of any new or changed APIs with API servers, including the Kubernetes API server
+ * Describe versioning of any new or changed APIs, including how breaking changes are handled
+
+The project release process is split between upstream and downstream releases. For both of them, content can be tracked from the repositories, which are public.
+
+Upstream releases happen from the `main` branches without a well-defined cadence. They use GitHub workflows to generate images and artifacts, triggered by git tags. Versions are suffixed with `-community`, e.g. `v1.11.0-community`. A Helm chart is manually updated after each component is released. The release process is described [here](https://github.com/netobserv/network-observability-operator/blob/main/RELEASE.md).
+
+Downstream releases happen from release branches (e.g. `release-1.11`) and use Konflux / Tekton. They produce an OLM bundle and OLM catalog fragments. They are loosely aligned with OpenShift releases.
+
+Upstream and downstream versioning is aligned on "major.minor", but not necessarily on ".patch". For instance, downstream `v1.2.3` and upstream `v1.2.3-community` should have the same features (in `1.2`) but not necessarily the same fixes/patches (in `.3`).
+
+### Installation
+
+Upstream releases can be installed via Helm, as [documented here](https://github.com/netobserv/network-observability-operator/blob/main/README.md#getting-started). From a fresh/vanilla cluster (e.g. using KIND), it can be done in 5 commands (installing cert-manager, installing NetObserv, configuring a `FlowCollector`), as sketched below.
+
+Testing and validating the installation can be done by port-forwarding the web console URL and checking its content. This is described in the same link above.
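+
+For illustration, that flow could look like the following on a vanilla cluster. The chart repository, chart name and console service name are placeholders or assumptions; the README linked above documents the exact, supported commands.
+
+```bash
+# Illustrative sequence only; follow the README for the exact commands.
+
+# 1. Install cert-manager, which provides TLS certificates to NetObserv services
+kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml
+
+# 2. Install the NetObserv operator via its Helm chart (repository URL and chart name: see the README)
+helm repo add netobserv <netobserv-chart-repository-url>
+helm install netobserv netobserv/netobserv-operator   # chart name is an assumption
+
+# 3. Create a FlowCollector resource to start collecting flows (see the sketch above)
+kubectl apply -f flowcollector.yaml
+
+# 4. Validate by port-forwarding the web console service and browsing its pages
+kubectl port-forward -n netobserv svc/netobserv-plugin 9001:9001   # service name/port are assumptions
+```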
+
+### Security
+
+
+- [Self assessment](./Security%20Self-Assessment.md)
+- On TAG Security whitepaper:
+1. Make security a design requirement
+Security measures have been baked in from GA day 0, and continuously improved over time. For instance, from day 0, TLS / mTLS has been recommended when going through Kafka; RBAC and multi-tenancy are supported via the Loki Operator; the eBPF agents, running with elevated privileges, are segregated in a different namespace; fine-grained capabilities are favored whenever possible. A threat modeling exercise has been done internally at Red Hat.
+2. Applying secure configuration has the best user experience
+Security by default is preferred, although not always possible. Servers use TLS by default. eBPF agents run in non-privileged mode by default.
+The network policy is unfortunately not always installed by default, as it may block communications unexpectedly with some CNIs, but it is in OpenShift.
+3. Selecting insecure configuration is a conscious decision
+Features that require the eBPF agent privileged mode will not automatically enable it: it remains a conscious decision.
+4. Transition from insecure to secure state is possible
+All the configuration is managed through the Operator with a typical reconciliation, which ensures transitions work seamlessly, in one way or another.
+5. Secure defaults are inherited
+NetObserv does not override any known secure defaults.
+6. Exception lists have first class support
+N/A
+7. Secure defaults protect against pervasive vulnerability exploits
+Containers run as non-root; the release pipeline includes vulnerability scans.
+8. Security limitations of a system are explainable
+While security limitations are not hidden, they may not be very visible. This is something to add to the roadmap.
+
+TBC
+
+
+## Day 1 - Installation and Deployment Phase
+
+### Project Installation and Configuration
+
+
+
+### Project Enablement and Rollback
+
+
+
+### Rollout, Upgrade and Rollback Planning
+
+
+
+## Day 2 - Day-to-Day Operations Phase
+
+### Scalability/Reliability
+
+
+
+Load tests are performed regularly on different cluster sizes (25 and 250 nodes) to track any performance regression, using prow and kube-burner-ocp. Not all configurations can be tested this way, so the focus is on the high end of production-grade installations, with Kafka, the Loki Operator, all features enabled, and maximum sampling (capturing all the traffic).
+
+[This page](https://docs.redhat.com/en/documentation/openshift_container_platform/4.21/html/network_observability/configuring-network-observability-operators) shows a short summary of these tests, along with resource limit recommendations. More information can be obtained from prow runs, which are publicly available ([here's an example](https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/logs/periodic-ci-openshift-eng-ocp-qe-perfscale-ci-netobserv-perf-tests-netobserv-aws-4.21-nightly-x86-node-density-heavy-25nodes/2020627868538638336/artifacts/node-density-heavy-25nodes/openshift-qe-orion/artifacts/data-netobserv-perf-node-density-heavy-AWS-25w.csv)).
+
+
+### Observability Requirements
+
+
+
+NetObserv's own observability relies heavily on Prometheus metrics, and to a lesser extent, unstructured logs and profiling. There is no plan at this time to bake tracing or structured logs directly into the code.
+
+Of the four components involved in the operator-based deployment (the eBPF agent, flowlogs-pipeline, the web console, and the operator itself), the eBPF agent and flowlogs-pipeline are the two most critical to observe. They both provide metrics such as:
+- Error counters, labeled by code and component.
+- Gauges tracking persistent data structure sizes.
+- Message / event counters.
+- Some histograms tracking operation latency.
+
+In OpenShift, a Health dashboard is provided to track the most meaningful metrics, along with more general ones (CPU, memory, file descriptors, goroutines...). For non-OpenShift environments, a similar dashboard could be created.
+
+Two Prometheus alerting rules are created to detect the absence of flows: one for flows received by flowlogs-pipeline, the other for flows written to Loki. Those alerts fire when something prevents NetObserv from running normally.
+
+In addition to the metrics, potential configuration or deployment issues are reported by the operator as FlowCollector conditions.
+
+Profiling (pprof) can be enabled by configuring ports in the FlowCollector; note that this triggers a restart of the profiled workloads.
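+
+Outside of OpenShift, these signals can still be consumed directly; the sketch below checks the operator-reported conditions and peeks at the flowlogs-pipeline metrics endpoint. Resource names, labels and ports are assumptions to adapt to the actual deployment.
+
+```bash
+# Illustrative checks only: names, labels and ports are assumptions.
+
+# Configuration or deployment issues surface as conditions on the FlowCollector resource
+kubectl get flowcollector cluster \
+  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'
+
+# flowlogs-pipeline exposes Prometheus metrics (error counters, gauges, histograms)
+FLP_POD=$(kubectl -n netobserv get pods -l app=flowlogs-pipeline -o name | head -n 1)
+kubectl -n netobserv port-forward "$FLP_POD" 9102:9102 &   # metrics port is an assumption
+curl -s http://localhost:9102/metrics | head
+```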
+
+### Dependencies
+
+
+
+### Troubleshooting
+
+
+
+### Compliance
+
+
+
+
+### Security
+
+
diff --git a/cncf/Security Self-Assessment.md b/cncf/Security Self-Assessment.md
new file mode 100644
index 0000000..67f267d
--- /dev/null
+++ b/cncf/Security Self-Assessment.md
@@ -0,0 +1,179 @@
+# NetObserv Self-Assessment
+
+Security reviewers: Joël Takvorian
+
+This document is the Security Self-Assessment required for CNCF sandbox projects.
+
+## Table of Contents
+
+* [Metadata](#metadata)
+  * [Security links](#security-links)
+* [Overview](#overview)
+  * [Background](#background)
+  * [Actors](#actors)
+  * [Actions](#actions)
+  * [Goals](#goals)
+  * [Non-goals](#non-goals)
+* [Self-assessment use](#self-assessment-use)
+* [Security functions and features](#security-functions-and-features)
+* [Project compliance](#project-compliance)
+* [Secure development practices](#secure-development-practices)
+* [Security issue resolution](#security-issue-resolution)
+* [Appendix](#appendix)
+
+## Metadata
+
+### Software
+
+- https://github.com/netobserv/network-observability-operator
+- https://github.com/netobserv/flowlogs-pipeline
+- https://github.com/netobserv/netobserv-ebpf-agent
+- https://github.com/netobserv/network-observability-console-plugin
+- https://github.com/netobserv/network-observability-cli
+
+### Security Provider?
+
+No.
+
+### Languages
+
+- Go
+- TypeScript
+- C (eBPF)
+- Bash
+
+### Software Bill of Materials
+
+SBOMs of downstream builds are publicly available (e.g. https://quay.io/repository/redhat-user-workloads/ocp-network-observab-tenant/network-observability-operator-ystream, see the `.sbom`-suffixed tags). While upstream builds don't have SBOMs attached, they should be mostly identical, as upstream and downstream builds share the same code and base images. Minor differences should be expected, though.
+
+### Security Links
+
+TODO
+
+## Overview
+
+NetObserv is a set of components used to observe network traffic by generating NetFlows from eBPF agents, enriching those flows with Kubernetes metadata, exporting them in various ways (logs, metrics, Kafka, IPFIX...), and finally providing a comprehensive visualization tool for making sense of that data, a network health dashboard, and a CLI. Those components are mainly designed to be deployed in Kubernetes via an integrated Operator.
+
+### Background
+
+Kubernetes can be complex, and so can Kubernetes networking, especially as it can differ from one CNI to another. Cluster admins often find it important to have good observability over the network that clearly maps to Kubernetes resources (Services, Pods, Nodes...). This is what NetObserv aims to offer. Additionally, it aims to identify network issues and raise alerts. While it is not designed as a security tool, the data that it provides can be leveraged, for instance, to detect network threat patterns.
+
+### Actors
+
+1. The [operator](https://github.com/netobserv/network-observability-operator) orchestrates the deployment of all related components (listed below), based on the supplied configuration. It operates at the cluster scope.
+2. The [eBPF agent](https://github.com/netobserv/netobserv-ebpf-agent) and [flowlogs-pipeline](https://github.com/netobserv/flowlogs-pipeline) collect network flows from the hosts (nodes) and process them before sending them to storage or custom exporters.
+3. The [web console](https://github.com/netobserv/network-observability-console-plugin) reads data from the stores to display dashboards.
+4. The [CLI](https://github.com/netobserv/network-observability-cli) is an independent piece that also starts the eBPF agents and flowlogs-pipeline for on-demand monitoring, from the command line.
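+
+In a typical deployment, these actors materialize as a handful of workloads, which could be listed as shown below. The namespaces and workload names reflect a default install and are assumptions that may differ in other setups.
+
+```bash
+# Indicative layout only: namespaces and workload names may differ across installs.
+kubectl get pods -n netobserv              # operator, flowlogs-pipeline, web console plugin
+kubectl get pods -n netobserv-privileged   # eBPF agents (DaemonSet, one pod per node)
+```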
+
+### Actions
+
+The operator reads the main configuration (the FlowCollector CRD) to determine how to deploy and configure the related components.
+
+The eBPF agents are deployed one per node (DaemonSet), with elevated privileges; they load their eBPF payload into the host kernel and start collecting network flows. Those flows are sent to flowlogs-pipeline, which correlates them with Kubernetes resources and performs various transformations, before sending them to a log store (Loki) and/or exposing them as Prometheus metrics. Other exporting options exist. Loki, Prometheus and any receiving system are not part of the NetObserv payload; they must be installed and managed separately.
+
+Optionally, Apache Kafka can be used as an intermediary between the eBPF agents and flowlogs-pipeline.
+
+The web console fetches the network flows from the stores (Loki and/or Prometheus) to display dashboards. It does not connect directly to other NetObserv components.
+
+The architecture is described in more detail [here](https://github.com/netobserv/network-observability-operator/blob/main/docs/Architecture.md), with diagrams included.
+
+### Goals
+
+NetObserv intends to provide visibility into the cluster network traffic, and to help troubleshoot network issues.
+
+In terms of security, because the NetObserv operator has cluster-wide access to many resources, and because the eBPF agents have elevated privileges on nodes, neither of them must be accessible to non-admins.
+
+Additionally, NetObserv MUST NOT:
+
+- Leak any network data or metadata to unauthorized users.
+- Cause any harm by being gamed while reading (untrusted) network packets.
+- Allow connections from untrusted workloads to any ingest-side component, which could alter the data produced.
+
+### Non-Goals
+
+- Enforce RBAC when querying backend stores: this is the responsibility of the components that manage those stores (e.g. the Loki Operator comes with a Gateway that enforces RBAC; NetObserv connects to that Gateway).
+
+## Self-assessment Use
+
+This self-assessment is created by the NetObserv team to perform an internal analysis of the project's security. It is not intended to provide a security audit of NetObserv, or function as an independent assessment or attestation of NetObserv's security health.
+
+This document serves to provide NetObserv users with an initial understanding of NetObserv's security, where to find existing security documentation, NetObserv's plans for security, and a general overview of NetObserv's security practices, covering both the development of NetObserv and NetObserv itself.
+
+This document provides NetObserv maintainers and stakeholders with additional context to help inform the roadmap creation process, so that security and feature improvements can be prioritized accordingly.
+
+## Security functions and features
+
+| Component | Applicability | Description of Importance |
+| ------------------------- | ---------------- | --------------------------------------------------------------------------------- |
+| Namespace segregation | Critical | For hardened security, the components that require elevated privileges are deployed in their own namespace, flagged as privileged, which should only be accessible to cluster admins. |
+| Non-root eBPF agents | SecurityRelevant | Whenever possible, the eBPF agents run with fine-grained privileges (e.g. CAP_BPF) instead of full privileges. Some features, however, do require full privileges. |
+| Network policies | SecurityRelevant | A network policy can be installed automatically to better isolate the communications of the NetObserv workloads. However, because policies are somewhat CNI-dependent and carry an inherent risk of breaking communications with untested CNIs, this feature is not enabled by default, except in OpenShift. |
+| Encrypted traffic | SecurityRelevant | All servers are configured with TLS by default. |
+| Authorized traffic (mTLS) | SecurityRelevant | Traffic between the eBPF agents and flowlogs-pipeline can be authorized on both sides (mTLS) when used with Kafka. Bringing mTLS to the other modes, without Kafka, is planned. When not using mTLS, it is highly recommended to protect the netobserv namespace with a network policy. |
+| RBAC-enforced stores | SecurityRelevant | Multi-tenancy can be achieved when supported by the backend stores: e.g. Loki with the Loki Operator, Prometheus with Thanos. In that case, NetObserv can be configured to forward user tokens. |
+
+## Project Compliance
+
+N/A: the project has not been evaluated against compliance standards as of today.
+
+### Future State
+
+Compliance can be evaluated based on demand.
+
+## Secure Development Practices
+
+A high security standard is observed, enforced by company policy (Red Hat).
+
+### Deployment Pipeline
+
+In order to secure the SDLC from development to deployment, the following measures are in place:
+
+- Branch protection on the default (`main`) branch and release branches (`release-*`):
+  - Require a pull request before merging
+  - Require approvals: 1
+  - Dismiss stale pull request approvals when new commits are pushed
+  - Require review from Code Owners
+  - Require status checks to pass before merging
+    - Build, linting, tests and clean state checks must pass
+    - In the eBPF agent, BPF bytecode is verified
+  - Force-push not allowed
+- Code owners need to have 2FA enabled.
+- Vulnerabilities in dependencies, and dependency upgrades, are managed via Dependabot and Renovate.
+- Some weaknesses are reported by linters (golangci-lint, eslint).
+  - `govulncheck` usage is to be added to the roadmap.
+- The downstream release process is automated.
+  - It includes vulnerability scans, FIPS-compliance checks, immutable images, SBOMs and signing.
+- The upstream release process is partly automated (the Helm chart bundling is not, at this time).
+  - More security measures are to be added to the roadmap.
+
+### Communication Channels
+
+- Internal communications among NetObserv maintainers working at Red Hat happen in private Slack channels.
+- Communications with maintainers external to Red Hat happen in the public Slack channel (`#netobserv-project` on http://cloud-native.slack.com/).
+- Inbound communications are accepted through that same channel, through GitHub Issues, or through the GitHub discussion pages.
+- Outbound messages to users can be made via documentation, release notes, blogs, social media and the public Slack channel.
+
+## Security Issue Resolution
+
+Since NetObserv is the upstream of a Red Hat product, security issues and procedures are described on the Red Hat [Security Contacts and Procedures](https://access.redhat.com/security/team/contact/?extIdCarryOver=true&sc_cid=701f2000001Css5AAC) page.
+
+### Responsible Disclosure Practice
+
+The same page mentioned above describes the Responsible Disclosure Practice. An email should be sent to the Red Hat Product Security team, who will engage in discussion with the project maintainers and respond to the reporter.
+
+### Incident Response
+
+In the event that a vulnerability is reported, the maintainer team, the Red Hat Product Security team and the reporter will collaborate to determine the validity and criticality of the report. Based on these findings, the fix will be triaged and the maintainer team will work to issue a patch in a timely manner.
+
+Patches will be made to the `main` and the latest release branches, and new releases (upstream and downstream) will be triggered. Information will be disseminated to the community through all appropriate outbound channels as soon as possible based on the circumstances.
+
+## Appendix
+
+- Known Issues Over Time
+  - Known issues are tracked in the project roadmap. There are currently no known vulnerabilities in the supported version.
+- OpenSSF Best Practices
+  - The process to get a Best Practices badge is not yet on the roadmap.
+- Case Studies
+  - TBC
+- Related Projects / Vendors
+  - Similar to: Cilium Hubble, Pixie, Microsoft Retina. A differentiator is that NetObserv is fully open-source, CNI-independent, and actively maintained. It has some unique features, such as its FlowMetric API. It also aims to differentiate itself with a polished UX.