max-downloads seems not to delete the file #120

Closed
opened 2026-01-19 18:29:02 +00:00 by michael · 7 comments

Originally created by @fnpanic on GitHub.

Hi,

Great tool! Thank you very much.

I am using the `Max-Downloads: 1` header on the files. It works perfectly for non-images. Then I noticed that when an image is embedded in the preview page and displayed, the download button is still there. When I click it, it returns 404, which is the desired behaviour. But when I press back in the browser, it downloads the file as often as I want.

I am running transfer.sh behind nginx under the path /transfer with the latest Docker image; this works great.

Why is the file not removed after download?

Thanks in advance.
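
For reference, an upload with that header looks roughly like this as a Go sketch (the URL and file name are placeholders for my setup):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	f, err := os.Open("picture.png") // placeholder file
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// PUT the file and limit it to a single download, per the README.
	req, err := http.NewRequest(http.MethodPut,
		"https://example.com/transfer/picture.png", f)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Max-Downloads", "1")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // the response body is the download URL
}
```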


@stefanbenten commented on GitHub:

Yeah, seems like we need to add no-cache directives to the actual content being served.

Additionally, purging only deletes files after the set retention time, not once `max-downloads` has been reached. We could implement automatic deletion if we reach this point and a user keeps requesting it: the download that brings the count up to `max-downloads` would then delete the file afterwards.
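
A hypothetical sketch of that idea (illustrative types, not the actual transfer.sh code): the handler that serves the last permitted download deletes the file right after streaming it.

```go
package sketch

import "net/http"

// Illustrative metadata and storage; not the real transfer.sh types.
type metadata struct {
	Downloads    int
	MaxDownloads int
}

type storage interface {
	Meta(token, filename string) (metadata, error)
	PutMeta(token, filename string, m metadata) error
	Copy(w http.ResponseWriter, token, filename string) error
	Delete(token, filename string) error
}

type server struct{ store storage }

// serve streams a file and, on the last permitted download, deletes it.
func (s *server) serve(w http.ResponseWriter, r *http.Request, token, filename string) {
	meta, err := s.store.Meta(token, filename)
	if err != nil || meta.Downloads >= meta.MaxDownloads {
		http.NotFound(w, r)
		return
	}
	meta.Downloads++
	_ = s.store.PutMeta(token, filename, meta)

	_ = s.store.Copy(w, token, filename) // stream the content

	// The counter just reached max-downloads: purge the file right away.
	if meta.Downloads >= meta.MaxDownloads {
		_ = s.store.Delete(token, filename)
	}
}
```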


@stefanbenten commented on GitHub:

To me it's more a question of what the point is in keeping the file on the service/server if it can not be downloaded anymore. Either it's missing a feature, something like an override for the owner or a way to raise the value?
But at that point it's likely more a CDN-type deal than a temporary file share.
For the operator of the server it might also be good to save cost/space, but that should not be a deciding factor here.

I do agree that fixing the documentation is likely sufficient, but I fully understand the confusion that users might run into.

Let's fix the caching in the browser first, then we can discuss the second part?
I'll put up a draft for the no-cache piece.
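
Something along these lines, as a minimal sketch assuming a plain net/http handler (the real patch would go into transfer.sh's GET handling):

```go
package sketch

import "net/http"

// noCache asks browsers and intermediaries not to reuse a cached copy, so
// a history "back" navigation revalidates instead of replaying the file.
func noCache(w http.ResponseWriter) {
	w.Header().Set("Cache-Control", "no-store, no-cache, must-revalidate")
	w.Header().Set("Pragma", "no-cache") // for HTTP/1.0 intermediaries
	w.Header().Set("Expires", "0")
}
```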


@paolafrancesca commented on GitHub:

> To me it's more a question of what the point is in keeping the file on the service/server if it can not be downloaded anymore. Either it's missing a feature, something like an override for the owner or a way to raise the value?

Good point, but I was more focused on decoupling the deletion part from the expiration part.

It's not easy to implement without keeping state for the files past `max-downloads` and having a goroutine regularly purge them. The problem is how to make that state persist across restarts of the service (indeed there already is one: the metadata file) and/or how to avoid scanning every uploaded file to purge the expired ones asynchronously.

So maybe yes, in the end the best solution is to delete those files as soon as they receive a GET request and we know they are expired. But beware that even in this case, if there are no "exceeding" GET requests, they won't be purged anyway; and if that is acceptable, why not rely only on the already existing purging mechanism?
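
For illustration, the kind of background purge loop mentioned above could look roughly like this; `purgeExpired` and the interval are placeholders for whatever the service already uses to drop uploads past their retention time:

```go
package sketch

import (
	"log"
	"time"
)

// startPurger runs purgeExpired on a fixed interval until done is closed.
func startPurger(interval time.Duration, purgeExpired func() error, done <-chan struct{}) {
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				if err := purgeExpired(); err != nil {
					log.Printf("purge failed: %v", err)
				}
			case <-done:
				return
			}
		}
	}()
}
```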


@paolafrancesca commented on GitHub:

> Additionally, purging only deletes files after the set retention time, not once `max-downloads` has been reached

Yes, but indeed I'm quite positive about this behaviour: it is `max-downloads`, not `delete-after-downloads` ;)

> The download that brings the count up to `max-downloads` would then delete the file afterwards.

As a consequence of my opinion above, I would say we must think this through thoroughly before adding extra complexity that might not be necessary.
What would we achieve with such behaviour?

  • reliably prevent unwanted downloads (like the "bug" we have now, since we send no caching directive)
  • prevent any transfer.sh host from storing uploads after the `max-downloads` value is reached: this is a matter of trust and expectations between the host and the users

Does this behaviour really have an impact on those two goals?

  • I have to investigate circumvention of the no-cache directive. The problem I can see is that if circumvention is possible, it does not matter whether we delete the upload or not: if the content was already cached, the circumvention could still happen.
  • Unless the transfer.sh host is using a custom frontend, the storage period of uploads is clearly stated on the homepage: for me that is enough to set expectations. We could add documentation to the README about the fact that reaching the `max-downloads` value doesn't delete the file.
    The `max-days` documentation is indeed wrong and must be fixed (https://github.com/dutchcoders/transfer.sh#max-days)

what do you think, @stefanbenten ?


@paolafrancesca commented on GitHub:

Hello @fnpanic, the `max-downloads` header doesn't delete the file, but rather just blocks access to it.

to delete a file you have two options:

  1. a `DELETE` request (https://github.com/dutchcoders/transfer.sh#deleting), as sketched below
  2. the purging rules (`PURGE_DAYS`/`PURGE_INTERVAL`)
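
A minimal Go sketch of option 1, assuming the deletion URL that transfer.sh hands back at upload time (the URL below is a placeholder):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The deletion URL comes from the upload response; this one is made up.
	req, err := http.NewRequest(http.MethodDelete,
		"https://example.com/transfer/BAYh0/picture.png/some-delete-token", nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```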

The fact that navigating back in the browser history triggers the file download is probably due to caching, either in nginx or in the browser.
Can you share the nginx configuration?
Also, please check in the browser's network tab whether the request is served from cache.

@stefanbenten we might want to add headers to prevent caching to solve this, what do you think?


@paolafrancesca commented on GitHub:

To summarise, I see two different scenarios:

  1. the person who runs the service has purging enabled: in the end, the files that "expired" due to the max number of downloads will be deleted anyway, and we do not need to handle a special-case trigger for deletion
  2. the person who runs the service does not have purging enabled: can we assume that saving space for files that are no longer reachable is not the admin's concern? If so, there is no need for an extra feature here either

The specific scenario this discussion started from was the user's point of view: a missing cache header prevented the setting that blocks downloads from being honoured.


@stefanbenten commented on GitHub:

I agree, the header should fix this scenario!

Reference: dutchcoders/transfer.sh#120