## Introduction
This page introduces the workings of all services which are part of the "OpenStack Credits Service" setup. It shows how the setup of the master thesis *Accounting and Reporting of OpenStack Cloud Instances via Prometheus* actually works, with the help of the development profile of this repository. See the graphic below to understand how all components interact with each other. We recommend reading the relevant sections of the thesis first and then coming back here to reproduce the steps on your local machine.

- The usage information is obtained from the OpenStack API by the Usage Exporter.
- The resources of the running machines are multiplied by weights (depending on their values); see the sketch after this list. The weights are obtained from the Cloud API.
- The information is scraped by a Prometheus instance, which is hidden behind a proxy.
- The Prometheus instance is scraped by the Prometheus instance of the portal and the data gets written into the InfluxDB.
- The InfluxDB pushes the data to the credits service.
- The credits service bills the projects by updating the used credits value via the Perun attribute field.
- The credits service saves a history of billings in InfluxDB.
- The Cloud API gets the credits values for a running project from Perun. The history can be pulled from the credits service.
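As a rough illustration of that weighting, here is a minimal sketch with made-up resource values and weights; the real metric names and weight values come from the Usage Exporter and the Cloud API and will differ:

```bash
# Hypothetical example: the usage value of one machine is the weighted sum
# of its resources. All numbers here are illustrative, not the real weights.
vcpus=4; ram_gb=8                 # resources of a running machine
weight_vcpu=1; weight_ram=0.3     # weights as if returned by the Cloud API
echo "$vcpus * $weight_vcpu + $ram_gb * $weight_ram" | bc
# => 6.4  (the per-measurement usage value that Prometheus scrapes)
```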
Make sure that the Perun groups used for this setup, `os_credits_1` and `os_credits_2`, have a positive value for `credits_granted`, and delete the attribute Credits Current (Credits Used once it has been renamed).
To ease further commands, export all environment variables used by the development profile into your current shell and use the following helper function and query string:

```bash
export $(bin/project_usage-compose.py --print-env dev | xargs)

container_ip() {
  docker inspect "$1" | jq --raw-output '.[].NetworkSettings.Networks | to_entries | .[0].value.IPAddress'
}

whitelist_query='match%5B%5D=%7Bjob%3D%22project_usages%22%2C__name__%3D~%22project_.%2A_usage%22%7D'
```

Note: This requires jq.
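The `whitelist_query` string is simply a URL-encoded Prometheus federation selector. If you are curious which series it matches, you can decode it locally (assuming `python3` is available):

```bash
# Decode the URL-encoded selector; this only prints text, nothing is queried
python3 -c 'import sys, urllib.parse; print(urllib.parse.unquote(sys.argv[1]))' "$whitelist_query"
# => match[]={job="project_usages",__name__=~"project_.*_usage"}
```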
These steps assume two running dummy Site stacks, site-a and site-b.
Launch them via `bin/project_usage-compose.py dev up -d`.
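To verify that both stacks are up, you can list their containers; this assumes the container names are prefixed with `site-a_` and `site-b_`, as used throughout this page:

```bash
# List all containers belonging to the two dummy sites
docker ps --format '{{.Names}}' | grep -E '^site-(a|b)_'
```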
- See the exported metrics directly:

```bash
curl http://$(container_ip site-a_exporter):8080/metrics
# Reduced to usage metrics
curl http://$(container_ip site-a_exporter):8080/metrics | grep project_
```

- Take a look at the values inside the Site Prometheus instance.
- Or the local Grafana instance:

```bash
# echo the credentials in use
echo $GF_SECURITY_ADMIN_USER $GF_SECURITY_ADMIN_PASSWORD
```

- Open http://localhost:3001, enter the credentials, and under 'Home' select 'Project Usages'.
- Follow the steps described here and add the dashboard to see the metrics of Grafana itself, which are exported to Prometheus and retrieved from there for visualization.
- See how the HAProxy only allows the whitelisted queries with the correct access token:

```bash
docker logs -f site-a_prometheus_proxy
# this will also show you the valid requests from the portal Prometheus instance
```

- Issue some valid and invalid requests:

```bash
# no bearer token
curl "http://$(container_ip site-a_prometheus_proxy)/federate?$whitelist_query"
# valid request with correct query and bearer token
curl "http://$(container_ip site-a_prometheus_proxy)/federate?$whitelist_query" -H "Authorization: Bearer $PORTAL_AUTH_TOKEN"
# invalid query
curl "http://$(container_ip site-a_prometheus_proxy)/federate?match%5B%5D=.%2A" -H "Authorization: Bearer $PORTAL_AUTH_TOKEN"
```
- See the usage values of all locations in the Prometheus and Grafana instances running inside the de.NBI office:
  - Grafana - same credentials as the Site instance
  - Prometheus
- By now, both projects for which we are emulating usage values have been billed several times; take a look at their credits history.
  - The visualization is only a prototype and might contain too many billing points. The reason for this behaviour is that credits are currently billed every 5 seconds and the metrics are probably high enough that every measurement leads to a billing.
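To put that interval into perspective, a quick back-of-the-envelope calculation:

```bash
# One billing every 5 seconds yields this many potential billing points per hour
echo $(( 3600 / 5 ))   # => 720
```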
- We can query the API directly from the Swagger UI.
  - Use `{start,end}_date` to narrow the time window. Note that the service is running in UTC; take a look at the logging timestamps for a hint: `docker logs portal_credits | tail -n10`
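As a sketch, such a query could also be issued from the shell. Host, port, endpoint path and the `project_name` parameter below are placeholders; take the real ones from the Swagger UI:

```bash
# Hypothetical request; consult the Swagger UI for the actual endpoint and parameters
today=$(date -u +%Y-%m-%d)   # the service clock is UTC
curl "http://localhost:8000/api/credits_history?project_name=os_credits_1&start_date=$today"
```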
- Take a look at the output of `docker logs portal_smtp_server`
- If no projects have received a 50% notification yet:
  - Start following the log by appending `-f` to the previous command.
  - Change the Credits Granted value of one of the projects inside Perun to a value whose half will soon be exceeded by the project's used credits.
  - Monitor how a notification is sent.
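A minimal way to watch for the outgoing mail, assuming the dummy SMTP server simply prints received messages to its log (the grep pattern is only a guess at typical mail headers):

```bash
# Follow the SMTP server log and highlight lines that look like a notification mail
docker logs -f portal_smtp_server | grep -i -E 'to:|subject:|credits'
```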
- Explore the other endpoints inside the Swagger UI.
- Read the official documentation of the OpenStack Credits Service; its Endpoints page shows how to query the expected costs per hour.
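For illustration, such a request might look like the following; the endpoint path, host, port and payload are placeholders, and the authoritative description is on the Endpoints page:

```bash
# Hypothetical request; see the Endpoints page for the real path and payload
curl -X POST "http://localhost:8000/api/costs_per_hour" \
  -H "Content-Type: application/json" \
  -d '{"cpu": 4, "ram": 8192}'
```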