
Fix tuple error, mentioned in some of the issues #289

Open

ankuPRK wants to merge 4 commits into advimman:main from geomagical:fix_tuple_error

Conversation

@ankuPRK (Contributor) commented Jan 8, 2024

Summary

  • Several GitHub issues report failures when using the refinement step with non-Fourier models. This PR adds a fix for that.
  • (minor) We also correct a typo in the link to the Geomagical.com website.

Issues

#274
#167

Problem

In the lama-regular model, the latent feature is a single tensor, while in lama-fourier and big-lama it is a tuple of two tensors. Our refinement code assumed the feature was always a tuple.
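
To illustrate the mismatch, here is a minimal sketch of the two kinds of value the refinement code can receive (the shapes and channel count are made up for illustration, not taken from the repo):

```python
import torch

# lama-regular: the latent feature is a single tensor
feat_regular = torch.randn(1, 512, 32, 32)

# lama-fourier / big-lama: the latent feature is a tuple of two tensors
feat_fourier = (torch.randn(1, 512, 32, 32), torch.randn(1, 512, 32, 32))

print(isinstance(feat_regular, torch.Tensor))  # True
print(isinstance(feat_fourier, tuple))         # True; code assuming a tuple breaks on feat_regular
```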

Solution

This PR adds a function that adapts the feature appropriately. We can't simply keep it as-is, because PyTorch optimizers (like Adam) don't take a tuple as input.
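
For illustration, a minimal sketch of what such an adapter could look like (the helper name `to_feature_list` and its exact behavior are assumptions for this sketch, not necessarily the function added by the PR):

```python
import torch

def to_feature_list(feats):
    """Hypothetical adapter: normalize the latent feature to a list of
    leaf tensors that an optimizer can update.

    Wraps a bare tensor (lama-regular) into a one-element tuple so the
    same code path also works for the tuple-valued features of
    lama-fourier and big-lama.
    """
    if torch.is_tensor(feats):
        feats = (feats,)
    return [f.detach().clone().requires_grad_(True) for f in feats]

# The resulting list is a valid `params` argument for torch.optim.Adam:
feats = to_feature_list(torch.randn(1, 512, 32, 32))
optimizer = torch.optim.Adam(feats, lr=2e-3)
```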

Visual Results

  • Input Images [2 images]

  • Big LaMa: before and after refinement [4 images]

  • LaMa-Fourier: before and after refinement (parameters tuned specifically to this model could improve the results further) [4 images]

  • LaMa-Regular: the model on which refinement was failing [4 images]

@jf1957 commented Jan 30, 2026

Dear Author,

I would like to kindly ask you some questions regarding training on the CelebA dataset. Specifically, based on LaMa-Fourier, what small modifications or fine-tuning strategies could be applied to slightly improve the final performance metrics?

I am currently working on my undergraduate graduation thesis, and these suggestions would be extremely helpful to my research. I would sincerely appreciate it if you could share some advice or insights.

Thank you very much for your time and help.
