BirdNET-Tiny Forge simplifies the training of BirdNET-Tiny models and their deployment on embedded devices.
BirdNET-Tiny Forge is a collaboration between BirdNET and fold ecosystemics.
Please find the documentation here.
## Supported boards

- BirdWatcher PUC
## Project status

This project is in an early development stage and is not recommended for production use:
- APIs may change without notice
- Features might be incomplete
- Documentation may be outdated
## Requirements

- Ubuntu 24.04 LTS. Other Debian-based OSs and Ubuntu versions will most likely work with minimal tweaks, if any, but are not officially supported.
- Python 3.10 or 3.11 and the corresponding `python-dev` library (`python3.10-dev` and `python3.11-dev` in the Ubuntu apt repository)
- bazel (needed to build a patched Python wheel of tflite-micro)
- docker
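On Ubuntu, the Python interpreter and headers can typically be installed via apt; the package names below follow from the requirements above, but whether they are available directly or need an extra archive depends on your Ubuntu release, so treat this as a sketch:

```bash
# Install the Python interpreter and matching dev headers (adjust 3.11 to 3.10 if preferred);
# availability of these packages varies by Ubuntu release
sudo apt update
sudo apt install python3.11 python3.11-dev

# bazel and docker are typically installed per their upstream instructions:
# https://bazel.build/install/ubuntu
# https://docs.docker.com/engine/install/ubuntu/
```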
## Installation

```bash
# Download and build patched version of tflite
# (it makes custom signal ops available, and pins the tensorflow version to the one used in this repo)
./build_tflite.sh tflite-micro

# Project uses poetry for python dependency management
pip install poetry

# Finally, install all deps
poetry install
```

## Getting training data

If using xeno-canto data to train your network, please make sure to review its terms to check that your usage is compatible with them.
- Please create an account on https://xeno-canto.org, which grants you the API key we will use to download bird recordings to train our network.
- Create a `species.txt` file, where each line contains the scientific name of a species you want to train on.
- Call `xc-download --api-key <your API key> --species-file <path to species file> --n-recs <number of recordings per species to download>` (see the example below).
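As a concrete sketch, a `species.txt` for two kiwi species and a matching download call might look like this (the species names, file paths, and recording count are placeholders, and depending on your setup you may need to prefix the command with `poetry run`):

```bash
# species.txt: one scientific name per line (example species)
cat > species.txt <<EOF
Apteryx mantelli
Apteryx owenii
EOF

# Download up to 50 recordings per species with your xeno-canto API key (placeholder shown)
xc-download --api-key YOUR_XC_API_KEY --species-file species.txt --n-recs 50
```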
To download data from AudioSet:

- Create an `audioset.txt` file, with each line being a class in AudioSet you wish to download data for.
- Run `audioset-download --labels-file <path to audioset labels> --limit <max number of files to download per class>`.
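For illustration, an `audioset.txt` with a couple of AudioSet class names and a matching call might look as follows (the class names and limit are placeholders; as above, you may need a `poetry run` prefix):

```bash
# audioset.txt: one AudioSet class per line (placeholder classes)
cat > audioset.txt <<EOF
Rain
Wind
EOF

# Download at most 100 files per class
audioset-download --labels-file audioset.txt --limit 100
```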
If you have your own recordings you'd like to train on, first isolate the bird calls you're interested in, creating audio clips. Place your clips in `data/01_raw/audio_clips`, with the following structure:

```
audio_clips
├── <label, e.g. Apteryx mantelli>
│   ├── <recording, e.g. abc123.wav>
│   ├── ...
├── <label, e.g. Apteryx owenii>
├── ...
```
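As one way to prepare such clips, you could cut them out of longer recordings with ffmpeg (not part of this project's tooling; the filenames, species label, start time, and clip length below are placeholders):

```bash
# Create the label directory and extract a 3-second clip starting at 00:00:12 from a longer recording
mkdir -p "data/01_raw/audio_clips/Apteryx mantelli"
ffmpeg -i field_recording.wav -ss 00:00:12 -t 3 "data/01_raw/audio_clips/Apteryx mantelli/abc123.wav"
```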
## Development

Install the development dependencies with:

```bash
poetry install --with dev
```

## License

See LICENSE file at project root.
## Contributing

See our contribution documentation.
## Funding

This project is supported by Jake Holshuh (Cornell class of ’69) and The Arthur Vining Davis Foundations. Our work in the K. Lisa Yang Center for Conservation Bioacoustics is made possible by the generosity of K. Lisa Yang to advance innovative conservation technologies to inspire and inform the conservation of wildlife and habitats.
The development of BirdNET is supported by the German Federal Ministry of Education and Research through the project “BirdNET+” (FKZ 01|S22072). The German Federal Ministry for the Environment, Nature Conservation and Nuclear Safety contributes through the “DeepBirdDetect” project (FKZ 67KI31040E). In addition, the Deutsche Bundesstiftung Umwelt supports BirdNET through the project “RangerSound” (project 39263/01).
BirdNET is a joint effort of partners from academia and industry. Without these partnerships, this project would not have been possible. Thank you!



