PMP is an open-source, modularly designed, programmable platform for collecting, exposing and visualising data from data sources across the Cloud Continuum. In addition, it provides threat detection by analysing network traffic, alerting and notifying on anomalous behaviour. Finally, PMP uses tool-agnostic Sigma rules to configure the tools.
- 🌀 Data collection in real time
- 🔌 Process automation
- 🔔 Alerts and notifications
- 🔨 Dynamic configuration
- 📊 Data visualisation
- ➕ Modular
- 🚀 RESTful Public API for programmatic access
- 🐳 Dockerized deployment for easy setup
🔒 Developed
- Fluentd
- Telegraf
- Falco
- Tshark
- Filebeat
- Kafka
- Snort3
- MongoDB
- CICFlowMeter
- Prometheus
- Logstash
- OpenSearch
🚧 Future development
- Grafana
- InfluxDB
- Sigma translator
- Clone the repository:
gh repo clone CyberDataLab/ROBUST-6G_PMP
- Navigate to the project directory:
cd ROBUST-6G_PMP/
- Usage and deployment with the general option, which activates all modules:
python3 ./Launcher/start_containers.py all
- Usage and deployment exploiting the modularity of PMP. Use `-m` to name each module, followed by `-t` with the simple name of the tools to be deployed. Tools can be concatenated using spaces or commas. If you need to use all the tools in a module, you can use `-t all`:
sudo python3 ./Launcher/start_containers.py -m moduleName -t all
For example:
sudo python3 ./Launcher/start_containers.py -m moduleName -t toolName1,toolName2
A complete example combining several modules:
sudo python3 ./Launcher/start_containers.py -m alert_module -t all -m db_module -t all -m communication_module -t all -m flow_module -t all -m collection_module -t tshark,fluentd,telegraf
Do not use the `docker-compose.yml` file directly, as the PMP requires an environment file to run correctly.
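After launching, you can confirm that the selected containers are up. This is a generic Docker check rather than a PMP-specific command; the container names listed depend on the modules you deployed:

```bash
# List running containers with their status and exposed ports
sudo docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
```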
- Deletes containers, volumes, and Docker networks, but not the data generated.
python3 ./Launcher/remove_containers.py
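To confirm the cleanup, generic Docker commands can be used; the names of any remaining volumes or networks depend on your deployment:

```bash
# No PMP containers should remain after the script finishes
sudo docker ps -a
# Inspect remaining volumes and networks if needed
sudo docker volume ls
sudo docker network ls
```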
Table of the modules and tools currently implemented:
| Modules | Tool 1 | Tool 2 | Tool 3 | Tool 4 | Tool 5 |
|---|---|---|---|---|---|
| alert_module | alert_module | | | | |
| communication_module | kafka | filebeat | | | |
| collection_module | fluentd | telegraf | tshark | falco | info* |
| flow_module | flow_module | | | | |
| db_module | mongodb | | | | |
| aggregation_module | prometheus | opensearch | | | |
* info: This container exposes the endpoint addresses of the data collection tools deployed on the target device, as well as its `machine_id`, which identifies it. Use it only if `prometheus` is to be deployed or is already deployed.
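As an illustration only, the exposed endpoint can be queried with a plain HTTP request; the port and response format below are assumptions, not values documented here:

```bash
# Hypothetical example: replace <device-ip> and <info-port> with the real values
curl http://<device-ip>:<info-port>/
```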
Some tools have additional associated containers that provide necessary supporting services:
- Collection_Module > falco-exporter: `falco` has a current exporter plugin, developed by the official organisation, to expose information to `prometheus`. It is automatically implemented with `falco`.
- Aggregation_module > init-prometheus: Used to change the owner of the /prometheus folder to user 65534 (nobody). It is necessary to manage `prometheus` data from the Docker volume. It changes the owner and is removed when its job is done. It is automatically deployed with `prometheus`.
- Aggregation_module > discovery-agent: Continuously scans the network to discover devices that expose their data from the `info` container. `prometheus` extracts the information from the endpoint of this container. It is automatically deployed with `prometheus`.
- Aggregation_module > init-opensearch: Performs the same task as the `init-prometheus` container, but is used with `opensearch` to correct OpenSearch data permissions (UID 1000). It is automatically implemented with `opensearch`.
- Aggregation_module > opensearch-dashboards: Official dashboard implementation for `opensearch`. Used to visualise data exposed to `opensearch` in Elastic Common Schema format. Automatically deployed with `opensearch`.
- Aggregation_module > logstash: This container implements `logstash` to transform information from `kafka` topics into Elastic Common Schema. It sends the information to `opensearch`. It is automatically deployed with `opensearch`.
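For a quick look at what is flowing through the communication bus, the standard Kafka console tools bundled in the Kafka image can be used. The container name, broker address and topic below are placeholders, not values defined by PMP:

```bash
# List the topics known to the broker (container name and port are assumptions)
sudo docker exec -it <kafka-container> kafka-topics.sh --bootstrap-server localhost:9092 --list

# Read a few messages from a topic of interest
sudo docker exec -it <kafka-container> kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic <topic-name> --from-beginning --max-messages 5
```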
- Docker 28.5.1 or higher and `docker-compose` 1.29.2 or higher. Please do not use the individual `docker-compose` module; Docker 28.5.1 or higher utilises the updated `docker compose`, which has the appropriate functionalities to run the PMP.
- Python 3.12 or higher.
The tool containers already satisfy their own requirements, so no additional installation by the user is needed.
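You can verify the installed versions with the usual commands:

```bash
docker --version          # should report 28.5.1 or higher
docker compose version    # the Compose plugin, not the standalone docker-compose
python3 --version         # should report 3.12 or higher
```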
PMP is open-source and distributed under the GNU AGPLv3 License. See LICENSE for more information.
- Community Edition — released under the GNU Affero GPL v3.0.
- Enterprise Edition — proprietary license & premium support available.
Contact alberto.garciap@um.es and josemaria.jorquera@um.es for commercial terms.
If `filebeat.yml` is showing errors, change its permissions manually with:
sudo chmod 644 /Communication_Bus/Configuration_Files/filebeat.yml
sudo chown root:root /Communication_Bus/Configuration_Files/filebeat.yml
If you are using PMP as a test on your local machine, remember to update the /etc/hosts file to avoid issues with DNS addressing on the Kafka brokers. For example:
sudo nano /etc/hosts
Write the following line below the `127.0.1.1` entry:
yourIP kafka_robust6g-node1.lan
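To check that the name now resolves locally, standard resolution tools can be used:

```bash
# Both should return the IP you added to /etc/hosts
getent hosts kafka_robust6g-node1.lan
ping -c 1 kafka_robust6g-node1.lan
```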