HazeFlow : Revisit Haze Physical Model as ODE and Non-Homogeneous Haze Generation for Real-World Dehazing (ICCV2025)
Junseong Shin*, Seungwoo Chung*, Yunjeong Yang, Tae Hyun Kim†
This is the official implementation of ICCV2025 "HazeFlow: Revisit Haze Physical Model as ODE and Non-Homogeneous Haze Generation for Real-World Dehazing" [paper] / [project page]
More qualitative and quantitative results can be found on the [project page].
```
git clone https://github.com/cloor/HazeFlow.git
cd HazeFlow
pip install -r requirements.txt
```
or
```
git clone https://github.com/cloor/HazeFlow.git
cd HazeFlow
conda env create -f environment.yaml
```
Checkpoints can be downloaded here.
Figure: Example of non-homogeneous haze synthesized via MCBM. (a) Generated hazy image. (b) Transmission map T_MCBM. (c) Spatially varying density coefficient map β̃.
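For orientation, a transmission map and a spatially varying density map like those shown above plug into the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)) with t(x) = exp(−β(x)·d(x)). A minimal NumPy sketch of that synthesis step (the function and argument names are illustrative, not the repository's API):

```python
import numpy as np

def synthesize_haze(clear, depth, beta_map, airlight=0.9):
    """Atmospheric scattering model: I = J * t + A * (1 - t),
    with per-pixel transmission t(x) = exp(-beta(x) * d(x))."""
    t = np.exp(-beta_map * depth)      # (H, W) transmission
    t3 = t[..., None]                  # broadcast over RGB channels
    return clear * t3 + airlight * (1.0 - t3)
```

With β = 0 everywhere the output equals the clear image; as β·d grows, pixels converge to the airlight A, which is why a spatially varying β̃ yields non-homogeneous haze.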
You can generate haze density maps using MCBM by running the command below:
```
python haze_generation/brownian_motion_generation.py
```
Please download and organize the datasets as follows:
| Dataset | Description | Download Link |
|---|---|---|
| RIDCP500 | 500 clear RGB images | rgb_500 / da_depth_500 |
| RTTS | Real-world task-driven testing set | Link |
| URHI | Urban and rural haze images (duplicate-removed version) | Link |
```
HazeFlow/
├── datasets/
│   ├── RIDCP500/
│   │   ├── rgb_500/
│   │   ├── da_depth_500/
│   │   ├── MCBM/
│   ├── RTTS/
│   ├── URHI/
│   └── custom/
```
Before training, make sure the datasets are properly structured as shown above.
Additionally, prepare the MCBM-based haze density maps and corresponding depth maps.
To estimate depth maps, follow the instructions provided in the Depth Anything V2 repository and place the depth maps in the datasets/RIDCP500/da_depth_500/ directory.
Once depth maps are ready, you can proceed to training and inference as described below.
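As a rough intuition for what a non-homogeneous density map looks like, the following toy sketch accumulates visits of a 2D lattice random walk, blurs the histogram, and rescales it to a β range. This is illustrative only, not the repository's `brownian_motion_generation.py`; all names and parameters are made up:

```python
import numpy as np

def toy_density_map(h=128, w=128, n_steps=20000, k=15,
                    beta_min=0.3, beta_max=2.0, seed=0):
    """Toy spatially varying density map from a 2D random walk."""
    rng = np.random.default_rng(seed)
    steps = rng.integers(-1, 2, size=(n_steps, 2))    # lattice random walk
    pos = np.mod(np.cumsum(steps, axis=0), (h, w))    # wrap around the grid
    density = np.zeros((h, w))
    np.add.at(density, (pos[:, 0], pos[:, 1]), 1.0)   # visit histogram
    kernel = np.ones(k) / k                           # separable box blur
    density = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode='same'), 1, density)
    density = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode='same'), 0, density)
    d = (density - density.min()) / (density.max() - density.min() + 1e-8)
    return beta_min + d * (beta_max - beta_min)       # spatially varying beta
```

Regions the walk visits often end up with a high density coefficient, giving smoothly varying, patchy haze rather than a uniform β.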
We propose using a color loss to reduce color distortion.
You can configure the loss type by editing --config.training.loss_type in pretrain.sh.
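The available loss types are defined by the config. As one common formulation of a color loss in image restoration, an angular RGB-consistency term penalizes hue shifts independently of brightness; this is an illustrative sketch, not necessarily the paper's exact loss:

```python
import numpy as np

def color_angle_loss(pred, target, eps=1e-8):
    """Mean angular deviation (1 - cosine) between RGB vectors."""
    p = pred.reshape(-1, 3)
    t = target.reshape(-1, 3)
    cos = np.sum(p * t, axis=1) / (
        np.linalg.norm(p, axis=1) * np.linalg.norm(t, axis=1) + eps)
    return float(np.mean(1.0 - np.clip(cos, -1.0, 1.0)))
```

Because the term is invariant to per-pixel intensity scaling, it targets color distortion specifically and is typically combined with a pixel-wise reconstruction loss.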
```
sh pretrain.sh
```
Specify the pretrained checkpoint from the pretrain phase by editing --config.flow.pre_train_model in reflow.sh.
```
sh reflow.sh
```
Specify the checkpoint obtained from the reflow phase by editing --config.flow.pre_train_model in distill.sh.
```
sh distill.sh
```
To run inference on your own images, place them in the datasets/custom/ directory.
Then, configure the following options in sampling.sh:
- `--config.sampling.ckpt`: path to your trained model checkpoint
- `--config.data.dataset`: name of your dataset (`rtts` or `custom`)
- `--config.data.test_data_root`: path to your input images
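Filled in, the relevant part of sampling.sh might look like this; the entry-point script name and the paths are placeholders, and only the flag names come from the options above:

```shell
python main.py \
  --config.sampling.ckpt checkpoints/hazeflow.pth \
  --config.data.dataset custom \
  --config.data.test_data_root datasets/custom
```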
Finally, run:
```
sh sampling.sh
```
Our implementation is based on RectifiedFlow and SlimFlow. We sincerely thank the authors for their contributions to the community.
If you use this code or find our work helpful, please cite our paper:
```
@inproceedings{shin2025hazeflow,
  title={HazeFlow: Revisit Haze Physical Model as ODE and Non-Homogeneous Haze Generation for Real-World Dehazing},
  author={Shin, Junseong and Chung, Seungwoo and Yang, Yunjeong and Kim, Tae Hyun},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={6263--6272},
  year={2025}
}
```
If you have any questions, please contact junsung6140@hanyang.ac.kr.

