diff --git a/ci/vale/dictionary.txt b/ci/vale/dictionary.txt index 71255e2131e..472fb0e3b27 100644 --- a/ci/vale/dictionary.txt +++ b/ci/vale/dictionary.txt @@ -2852,6 +2852,7 @@ wazuh wc wchar wchar_t +Weaviate webadmin webalizer webapp diff --git a/docs/guides/applications/messaging/install-mastodon-on-ubuntu-1604/docker-compose.yml b/docs/guides/applications/messaging/install-mastodon-on-ubuntu-1604/docker-compose.yml index 89ce6bee265..f67fccb52ea 100644 --- a/docs/guides/applications/messaging/install-mastodon-on-ubuntu-1604/docker-compose.yml +++ b/docs/guides/applications/messaging/install-mastodon-on-ubuntu-1604/docker-compose.yml @@ -81,24 +81,6 @@ services: - ./public/system:/mastodon/public/system - ./public/packs:/mastodon/public/packs -## Uncomment to enable federation with tor instances along with adding the following ENV variables -## http_proxy=http://privoxy:8118 -## ALLOW_ACCESS_TO_HIDDEN_SERVICE=true -# tor: -# build: https://github.com/usbsnowcrash/docker-tor.git -# networks: -# - external_network -# - internal_network -# -# privoxy: -# build: https://github.com/usbsnowcrash/docker-privoxy.git -# command: /opt/sbin/privoxy --no-daemon --user privoxy.privoxy /opt/config -# volumes: -# - ./priv-config:/opt/config -# networks: -# - external_network -# - internal_network - nginx: build: context: ./nginx @@ -121,4 +103,4 @@ services: networks: external_network: internal_network: - internal: true \ No newline at end of file + internal: true diff --git a/docs/guides/security/vulnerabilities/linux-red-team-defense-evasion-rootkits/index.md b/docs/guides/security/vulnerabilities/linux-red-team-defense-evasion-rootkits/index.md index 3c3991911ba..e913a19e16a 100644 --- a/docs/guides/security/vulnerabilities/linux-red-team-defense-evasion-rootkits/index.md +++ b/docs/guides/security/vulnerabilities/linux-red-team-defense-evasion-rootkits/index.md @@ -79,8 +79,6 @@ We can leverage the ability to load Apache2 modules to load our own rootkit modu Command 
injection vulnerabilities allow attackers to execute arbitrary commands on the target operating system. -To achieve this, we will be using the apache-rootkit module that can be found here: https://github.com/ChristianPapathanasiou/apache-rootkit - Apache-rootkit is a malicious Apache module with rootkit functionality that can be loaded into an Apache2 configuration with ease and with minimal artifacts. The following procedures outline the process of setting up the apache-rootkit module on a target Linux system: @@ -97,10 +95,7 @@ The following procedures outline the process of setting up the apache-rootkit mo cd /tmp -1. The next step will involve cloning the apache-rootkit repository on to the target system, this can be done by running the following command: - - git clone https://github.com/ChristianPapathanasiou/apache-rootkit.git - +1. The next step will involve cloning the apache-rootkit repository onto the target system. 1. After cloning the repository you will need to navigate to the “apache-rootkit” directory: cd apache-rootkit @@ -215,4 +210,4 @@ Given that the target server is running the LAMP stack, we can create a PHP mete ![Meterpreter session receiving connection from Commix PHP backdoor](meterpreter-session-receiving-connection-from-commix-php-backdoor.png "Meterpreter session receiving connection from Commix PHP backdoor") - We have been able to successfully set up the apache-rootkit module and leverage the command injection functionality afforded by the module to execute arbitrary commands on the target system as well as upload a PHP backdoor that will provide you with a meterpreter session. \ No newline at end of file + We have successfully set up the apache-rootkit module, leveraged the command injection functionality it affords to execute arbitrary commands on the target system, and uploaded a PHP backdoor that provides a meterpreter session.
diff --git a/docs/marketplace-docs/guides/elk-cluster/elastic-home.png b/docs/marketplace-docs/guides/elk-cluster/elastic-home.png new file mode 100644 index 00000000000..b80ec977d56 Binary files /dev/null and b/docs/marketplace-docs/guides/elk-cluster/elastic-home.png differ diff --git a/docs/marketplace-docs/guides/elk-cluster/elastic-login.png b/docs/marketplace-docs/guides/elk-cluster/elastic-login.png new file mode 100644 index 00000000000..381e16db81f Binary files /dev/null and b/docs/marketplace-docs/guides/elk-cluster/elastic-login.png differ diff --git a/docs/marketplace-docs/guides/elk-cluster/elasticstack-overview.png b/docs/marketplace-docs/guides/elk-cluster/elasticstack-overview.png new file mode 100644 index 00000000000..e6152d14972 Binary files /dev/null and b/docs/marketplace-docs/guides/elk-cluster/elasticstack-overview.png differ diff --git a/docs/marketplace-docs/guides/elk-cluster/index.md b/docs/marketplace-docs/guides/elk-cluster/index.md new file mode 100644 index 00000000000..d317d32a25d --- /dev/null +++ b/docs/marketplace-docs/guides/elk-cluster/index.md @@ -0,0 +1,183 @@ +--- +title: "Deploy the Elastic Stack through the Linode Marketplace" +description: "This guide helps you configure the Elastic Stack using the Akamai Compute Marketplace." 
+published: 2025-12-05 +modified: 2025-12-05 +keywords: ['elk stack', 'elk', 'kibana', 'logstash', 'elasticsearch', 'logging', 'siem', 'cluster', 'elastic stack'] +tags: ["marketplace", "linode platform", "cloud manager", "elk", "logging"] +aliases: ['/products/tools/marketplace/guides/elastic-stack/'] +external_resources: +- '[Elastic Stack Documentation](https://www.elastic.co/docs)' +authors: ["Akamai"] +contributors: ["Akamai"] +license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)' +marketplace_app_id: 804144 +marketplace_app_name: "Elastic Stack" +--- +## Cluster Deployment Architecture + +!["Elastic Stack Cluster Architecture"](elasticstack-overview.png "Elastic Stack Cluster Architecture") + +The Elastic Stack is a unified observability platform that brings together search, data processing, and visualization through Elasticsearch, Logstash, and Kibana. It provides an end-to-end pipeline for ingesting, transforming, indexing, and exploring operational data at scale. Elasticsearch delivers distributed search and analytics with near real-time indexing, while Logstash enables flexible data collection and enrichment from diverse sources. Kibana offers an interactive interface for visualizing log streams, building dashboards, and performing advanced analysis. + +This solution is well-suited for log aggregation, application monitoring, infrastructure observability, and security analytics. Its open architecture and extensive ecosystem make it adaptable to a wide range of use cases—including distributed system debugging, SIEM workflows, API performance monitoring, and centralized logging. + +This Marketplace application stands up a multi-node Elastic Stack cluster using an automated deployment script configured by Akamai. 
+ ## Deploying a Marketplace App {{% content "deploy-marketplace-apps-shortguide" %}} {{% content "marketplace-verify-standard-shortguide" %}} {{< note title="Estimated deployment time" >}} A 5-node cluster should be fully installed within 5 to 10 minutes. Larger clusters take longer to provision; as a rough estimate, allow 8 minutes for every 5 nodes. {{< /note >}} ## Configuration Options ### Elastic Stack Options - **Linode API Token** *(required)*: Your API token is used to deploy additional Compute Instances as part of this cluster. At a minimum, this token must have Read/Write access to *Linodes*. If you do not yet have an API token, see [Get an API Access Token](/docs/products/platform/accounts/guides/manage-api-tokens/) to create one. - **Email address (for the Let's Encrypt SSL certificate)** *(required)*: Your email is used for Let's Encrypt renewal notices. An SSL certificate is obtained through certbot and installed on the Kibana instance in the cluster. This allows you to visit Kibana securely through a browser. {{% content "marketplace-required-limited-user-fields-shortguide" %}} {{% content "marketplace-special-character-limitations-shortguide" %}} #### TLS/SSL Certificate Options The following fields are used when creating the self-signed TLS/SSL certificates for the cluster. - **Country or region** *(required)*: Enter the country or region for you or your organization. - **State or province** *(required)*: Enter the state or province for you or your organization. - **Locality** *(required)*: Enter the town or other locality for you or your organization. - **Organization** *(required)*: Enter the name of your organization. - **Email address** *(required)*: Enter the email address you wish to use for your certificate file. - **CA Common name**: This is the common name for the self-signed Certificate Authority.
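The deployment generates these self-signed certificates for you, but for reference, the fields above map to the standard X.509 subject attributes. A self-signed CA with an equivalent subject could be created manually with `openssl`; all subject values below are hypothetical examples, not values used by the deployment:

```command
# Create a self-signed CA certificate and key with example subject fields
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout ca.key -out ca.crt \
  -subj "/C=US/ST=Massachusetts/L=Cambridge/O=Example Org/emailAddress=admin@example.com/CN=Example Org CA"
```

Inspect the resulting subject with `openssl x509 -in ca.crt -noout -subject` to see how each form field ends up in the certificate.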
+ #### Picking the Correct Instance Plan and Size In the **Cluster Settings** section, you can designate the size of each component in your Elastic deployment. The size of the cluster depends on your needs. If you are looking for a faster deployment, stick with the defaults provided. - **Kibana Size**: This deployment creates a single Kibana instance with Let's Encrypt certificates. This option cannot be changed. - **Elasticsearch Cluster Size**: The total number of nodes in your Elasticsearch cluster. - **Logstash Cluster Size**: The total number of nodes in your Logstash cluster. Next, associate your Elasticsearch and Logstash clusters with a corresponding instance plan option. - **Elasticsearch Instance Type**: This is the plan type used for your Elasticsearch cluster. - **Logstash Instance Type**: This is the plan type used for your Logstash cluster. {{< note title="Kibana instance type" >}} In order to choose the Kibana instance type, you first need to select a deployment region and then pick a plan from the **[Linode Plan](https://techdocs.akamai.com/cloud-computing/docs/create-a-compute-instance#choose-a-linode-type-and-plan)** section. {{< /note >}} #### Additional Configuration - **Filebeat IP addresses allowed to access Logstash**: If you have Filebeat agents already installed, you can provide their IP addresses for an allowlist. The IP addresses must be comma-separated. - **Logstash username to be created for index**: This is the username created with access to the index below, so that you can begin ingesting logs after deployment. - **Elasticsearch index to be created for log ingestion**: This lets you start ingesting logs right away. Edit the index name for your specific use case. For example, if you want to aggregate logs for a WordPress application, the index name `wordpress-logs` would be appropriate.
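For context, the username and index configured above typically come together in the Elasticsearch output of a Logstash pipeline. A sketch of what such an output block can look like follows; the hostname and file path are hypothetical, and the actual pipeline configuration is generated for you by the deployment:

```file {title="/etc/logstash/conf.d/output.conf"}
output {
  elasticsearch {
    hosts    => ["https://elasticsearch-1:9200"]
    index    => "wordpress-logs"
    user     => "{{< placeholder "LOGSTASH_USERNAME" >}}"
    password => "{{< placeholder "PASSWORD" >}}"
  }
}
```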
+ ## Getting Started After Deployment ### Accessing the Elastic Frontend Once your cluster has finished deploying, you can log into your Elastic cluster using your local browser. 1. Log into the provisioner node as your limited sudo user, replacing `{{< placeholder "USER" >}}` with the sudo username you created, and `{{< placeholder "IP_ADDRESS" >}}` with the instance's IPv4 address: ```command ssh {{< placeholder "USER" >}}@{{< placeholder "IP_ADDRESS" >}} ``` {{< note title="The provisioner node is also the Kibana node" >}} Your provisioner node is the first Linode created in your cluster and is also the instance running Kibana. To identify the node in your list of Linodes, look for the instance whose label begins with "kibana". For example: `kibana-76f0443c` {{< /note >}} 1. Open the `.credentials` file with the following command. Replace `{{< placeholder "USER" >}}` with your sudo username: ```command sudo cat /home/{{< placeholder "USER" >}}/.credentials ``` 1. In the `.credentials` file, locate the Kibana URL. Paste the URL into your browser of choice, and you should be greeted with a login page. !["Elastic Login Page"](elastic-login.png "Elastic Login Page") 1. To access the console, enter `elastic` as the username along with the password found in the `.credentials` file. A successful login redirects you to the welcome page. From there, you can add integrations, create visualizations, and make other configuration changes. !["Elastic Welcome Page"](elastic-home.png "Elastic Welcome Page") #### Configure Filebeat (Optional) Follow the next steps if you already have Filebeat configured on a system. 1. Create a backup of your `/etc/filebeat/filebeat.yml` configuration: ```command cp /etc/filebeat/filebeat.yml{,.bak} ``` 1. Update your Filebeat inputs: ```file {title="/etc/filebeat/filebeat.yml" lang="yaml"} filebeat.inputs: # Each - is an input.
Most options can be set at the input level, so # you can use different inputs for various configurations. # Below are the input-specific configurations. # filestream is an input for collecting log messages from files. - type: filestream # Unique ID among all inputs, an ID is required. id: web-01 # Change to true to enable this input configuration. #enabled: false enabled: true # Paths that should be crawled and fetched. Glob based paths. paths: - /var/log/apache2/access.log ``` In this example, the `id` must be unique to the instance so that you know the source of each log. Ideally, this is the hostname of the instance; this example uses the value **web-01**. Update `paths` to point to the logs that you want to send to Logstash. 1. While still in `/etc/filebeat/filebeat.yml`, update the Filebeat output directive: ```file {title="/etc/filebeat/filebeat.yml" lang="yaml"} output.logstash: # Logstash hosts hosts: ["logstash-1.example.com:5044", "logstash-2.example.com:5044"] loadbalance: true # List of root certificates for HTTPS server verifications ssl.certificate_authorities: ["/etc/filebeat/certs/ca.pem"] ``` The `hosts` parameter accepts the IP addresses or FQDNs of your Logstash hosts. In this example, **logstash-1.example.com** and **logstash-2.example.com** are added to the `/etc/hosts` file. 1. Add a Certificate Authority (CA) certificate by adding the contents of `ca.crt` to your `/etc/filebeat/certs/ca.pem` file. To obtain your `ca.crt`, open a separate terminal session, and log into your Kibana node. Navigate to the `/etc/kibana/certs/ca` directory, and view the file contents with the `cat` command: ```command cd /etc/kibana/certs/ca sudo cat ca.crt ``` Copy the file contents and add them to the `ca.pem` file on your Filebeat system. 1.
Once you've added the certificate to your `ca.pem` file, start and enable the Filebeat service: ```command sudo systemctl start filebeat sudo systemctl enable filebeat ``` Once complete, you should be able to start ingesting logs into your cluster using the index you created. \ No newline at end of file diff --git a/docs/marketplace-docs/guides/weaviate/index.md b/docs/marketplace-docs/guides/weaviate/index.md new file mode 100644 index 00000000000..660676bcba2 --- /dev/null +++ b/docs/marketplace-docs/guides/weaviate/index.md @@ -0,0 +1,69 @@ +--- +title: "Deploy Weaviate through the Linode Marketplace" +description: "Learn how to deploy Weaviate, an AI-native vector database with GPU-accelerated semantic search capabilities, on an Akamai Compute Instance." +published: 2025-12-05 +modified: 2025-12-05 +keywords: ['vector database','database','weaviate'] +tags: ["marketplace", "linode platform", "cloud manager"] +external_resources: +- '[Weaviate Official Documentation](https://docs.weaviate.io/weaviate)' +aliases: ['/products/tools/marketplace/guides/weaviate/','/guides/weaviate-marketplace-app/'] +authors: ["Akamai"] +contributors: ["Akamai"] +license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)' +marketplace_app_id: 1902904 # Need to update +marketplace_app_name: "Weaviate" +--- + [Weaviate](https://www.weaviate.io/) is an open-source AI-native vector database designed for building advanced AI applications. It stores and indexes both data objects and their vector embeddings, enabling semantic search, hybrid search, and Retrieval Augmented Generation (RAG) workflows. This deployment includes GPU acceleration for transformer models and comes pre-configured with the sentence-transformers model for high-performance semantic search capabilities.
+ ## Deploying a Marketplace App {{% content "deploy-marketplace-apps-shortguide" %}} {{% content "marketplace-verify-standard-shortguide" %}} {{< note title="Estimated deployment time" >}} Weaviate should be fully installed within 5-10 minutes after your instance has finished provisioning. {{< /note >}} ## Configuration Options - **Supported distributions**: Ubuntu 24.04 LTS - **Recommended plan**: All GPU plan types and sizes can be used. ### Weaviate Options {{% content "marketplace-required-limited-user-fields-shortguide" %}} {{% content "marketplace-custom-domain-fields-shortguide" %}} {{% content "marketplace-special-character-limitations-shortguide" %}} ## Getting Started After Deployment ### Obtain Your API Keys Weaviate is a database service accessed programmatically through its API rather than through a web-based user interface. Your deployment includes two API keys stored in a credentials file. 1. Log in to your instance via SSH or Lish. See [Connecting to a Remote Server Over SSH](/docs/guides/connect-to-server-over-ssh/) for assistance, or use the [Lish Console](/docs/products/compute/compute-instances/guides/lish/). 1. Once logged in, retrieve your API keys from the `.credentials` file: ```command sudo cat /home/$USER/.credentials ``` The credentials file contains two API keys: - **Admin API Key**: Full read/write access to all Weaviate operations - **User API Key**: Read-only access for querying data ### Connect Your Application to Weaviate To integrate Weaviate into your application, use one of the official client libraries. Weaviate provides native clients for multiple programming languages, allowing you to perform all database operations, including creating schemas, importing data, and running vector searches. See the [Weaviate Client Libraries documentation](https://docs.weaviate.io/weaviate/client-libraries) for installation instructions and API references.
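As an illustration, a minimal connection sketch using the Python client (the `weaviate-client` v4 package) might look like the following. The helper below is an assumption-laden example: the host address and API key are placeholders for the values in your `.credentials` file, and the default Weaviate HTTP (8080) and gRPC (50051) ports are assumed.

```python
def connect_to_weaviate(host: str, api_key: str):
    """Return a Weaviate client authenticated with an API key.

    Assumes the `weaviate-client` (v4) package is installed and that
    Weaviate is listening on its default HTTP and gRPC ports.
    """
    import weaviate
    from weaviate.classes.init import Auth

    return weaviate.connect_to_local(
        host=host,        # your instance's IP address or domain (placeholder)
        port=8080,        # default Weaviate HTTP port
        grpc_port=50051,  # default Weaviate gRPC port
        auth_credentials=Auth.api_key(api_key),
    )


# Example usage (placeholder values):
# client = connect_to_weaviate("192.0.2.10", "ADMIN_API_KEY")
# client.is_ready()
# client.close()
```

Use the Admin key for write operations such as creating collections and importing data, and the read-only User key for query-only workloads.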
+ +For complete examples and advanced usage, refer to the [Weaviate Quickstart Guide](https://docs.weaviate.io/weaviate/quickstart) and the client library documentation for your preferred language. + +{{% content "marketplace-update-note-shortguide" %}} \ No newline at end of file