The Neuron Morphology Community Toolbox is a collection of services for managing annotated neuron data.
The current implementation uses Docker with Docker Compose to manage the multiple independent services. The containers should also be compatible with Podman, although this is untested.
A standard installation of Docker is sufficient.
The databases use Docker data volumes. Starting, stopping, and removing/updating the service containers will not remove database contents.
Most provided scripts require copying `.env-template` to `.env`, copying `options-template.sh` to `options.sh`, and setting the following values:

- `NMCP_AUTH_CLIENT_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` - a secret key used between services internally, e.g., a random UUID or similar
- `NMCP_SLICE_LOCATION`, `NMCP_ONTOLOGY_LOCATION`, and `NMCP_ONTOLOGY_PATH` must be set - they can be set to `/tmp` for deployment testing; otherwise, set them to the actual locations of the data on the host machine
- `NMCP_PRECOMPUTED_OUTPUT` - a location URL supported by the cloud-volume package for saving data sets generated in the Neuroglancer precomputed format
- `NMCP_SECRETS_VOLUME` - host location of any required secrets files for cloud-volume, mapped into the necessary containers
- `DATABASE_PW` - can be set to any value
- `NMCP_COMPOSE_PROJECT` - can be set to any value; used as a Docker container prefix, e.g., `nmcp`
- `NMCP_LOG_VOLUME` - can be set to `/tmp` for testing deployment, otherwise to any desired permanent log location on the host
- `NMCP_SERVICES_FILE` - an optional reference to an alternate compose file with the required services, used in place of the production one (`docker-compose.services.yml`). `docker-compose.services.staging.yml` is one possible alternate value; it is included in this repository and uses images generated from the develop branch rather than main.
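As a concrete illustration, a minimal `.env` for deployment testing might look like the following. Every value here is a placeholder, not a default; the UUID, bucket name, and paths are assumptions for illustration only:

```shell
# .env - illustrative values only; substitute your own secrets and paths
NMCP_AUTH_CLIENT_ID=4f7c2a1e-9b3d-4e6f-8a21-0c5d7e9f1b23   # any random UUID shared between services
NMCP_SLICE_LOCATION=/tmp
NMCP_ONTOLOGY_LOCATION=/tmp
NMCP_ONTOLOGY_PATH=/tmp
NMCP_PRECOMPUTED_OUTPUT=http://localhost:9000/nmcp-bucket/ngv01
NMCP_SECRETS_VOLUME=/tmp/secrets
DATABASE_PW=changeme
NMCP_COMPOSE_PROJECT=nmcp
NMCP_LOG_VOLUME=/tmp
NMCP_SERVICES_FILE=docker-compose.services.yml
```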
Most Docker Compose operations are wrapped in batch scripts to ensure the required flags are correct, e.g., test vs. production.

- `./up.sh` creates and starts the service containers.
- `./stop.sh` stops the service containers without removing them.
- `./down.sh` destroys existing service containers but does not remove any data volumes. Newer container images can then be pulled and the services restarted with the up script.
- `./logs.sh` brings up the correct set of Compose logs for this container set (vs. test or any other deployment on the same host).
- `./dev.sh` starts only those services that are not developed as part of the project, e.g., generic database instances. The rest of the services are presumed to be run locally from code.
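The actual scripts live in this repository; as a rough sketch only (not the real contents), a wrapper in the style of `up.sh` typically amounts to pinning the project name and compose files so every invocation targets the same deployment:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of an up.sh-style wrapper; see the repository for the real script.
set -euo pipefail

source options.sh   # provides NMCP_COMPOSE_PROJECT, NMCP_SERVICES_FILE, etc.

# Pin the project name and compose files so this deployment never collides
# with a test deployment running on the same host.
docker compose \
  --project-name "${NMCP_COMPOSE_PROJECT}" \
  -f docker-compose.yml \
  -f "${NMCP_SERVICES_FILE:-docker-compose.services.yml}" \
  up -d
```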
Precomputed skeletons and other system-generated Neuroglancer data sources that are typically stored in S3 or similar can instead be hosted locally via an S3-compatible service by setting `NMCP_PRECOMPUTED_OUTPUT` for `nmcp-client` and configuring `nmcp-precomputed` to use that service.
One option is to run a local MinIO Docker container as an S3-compatible destination. By default, the MinIO container will run on port 9000.
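For example, a throwaway MinIO instance can be started from the official image. The container name, credentials, and host data path below are illustrative choices, not required values:

```shell
# Start a local MinIO server on the default port 9000 (web console on 9001).
# Credentials and paths are placeholders; choose your own.
docker run -d --name nmcp-minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  -v /tmp/minio-data:/data \
  minio/minio server /data --console-address ":9001"
```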
- Create a bucket in MinIO
- Set the expected env var `NMCP_PRECOMPUTED_OUTPUT` to `http://localhost:9000/<your-bucket>/ngv01` when running `nmcp-client`
- In your Python environment for `nmcp-precomputed`, use the `cloudfiles` command line interface to configure the MinIO instance as an alias
- Configure a `minio-secrets.json` file for this instance, as described in the `cloudvolume` and `cloudfiles` documentation
- Pass the MinIO alias as the output location (`-o` argument) when running `nmcp-precomputed`
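A secrets file for a local MinIO instance generally carries S3-style credentials matching whatever the MinIO server was started with. One plausible shape is sketched below; verify the exact filename, location, and key names against the `cloudvolume` and `cloudfiles` documentation before relying on it:

```json
{
  "AWS_ACCESS_KEY_ID": "minioadmin",
  "AWS_SECRET_ACCESS_KEY": "minioadmin"
}
```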
Additional details on using an alias with cloudfiles can be found in the nmcp-precomputed README.