mirror of
https://github.com/dutchcoders/transfer.sh.git
synced 2026-02-03 06:03:25 +00:00
max-downloads seems not to delete the file #120
Originally created by @fnpanic on GitHub.
Hi,
Great tool! Thank you very much.
I am using the max-downloads: 1 header on uploaded files. It works perfectly for non-image files. Then I noticed that for images, the image is embedded in the page and displayed, but the download button is still there. When I click it, it returns 404, which is the desired behaviour. However, when I press back in the browser, it downloads the file as often as I want.
I am running transfer.sh behind nginx under the path /transfer with the latest Docker image; this works great.
Why is the file not removed after download?
Thanks in advance.
@stefanbenten commented on GitHub:
Yeah, seems like we need to add no-cache directives to the actual content being served.
Additionally, purging only deletes files after the set retention time, not when max-downloads has been reached. We could implement automatic deletion for the case where this point is reached and a user keeps requesting the file: the last download that brings the download count up to the max-downloads value would then delete it afterwards.
@stefanbenten commented on GitHub:
To me it's more a question of what the point is of keeping the file on the service/server if it can not be downloaded anymore. Either it's missing a feature, something like an override for the owner, or a way to raise the value?
But at that point it's likely more a CDN type deal than a temporary file share.
For the operator of the server it might also be good to save cost/space, but that should not be a deciding factor here.
I do agree that fixing the documentation is likely sufficient, but I fully understand the confusion users might run into.
Let's fix the caching in the browser first, then we can discuss the second part?
I'll put up a draft for the no-cache piece.
@paolafrancesca commented on GitHub:
Good point, but I was more focused on decoupling the deletion part from the expiration part.
It is not easy to implement without keeping state for files past max-downloads and having a goroutine regularly purging them. The problem is how to make that state persist across restarts of the service (there is already a metadata file for this) and/or how to avoid scanning every uploaded file in order to purge the expired ones asynchronously.
So maybe, in the end, the best solution is to delete those files as soon as they receive a GET request and we know they are expired. Beware that even in this case, if there are no "exceeding" GET requests, they won't be purged anyway. But if that is acceptable, why not rely only on the already existing purging mechanism?
@paolafrancesca commented on GitHub:
Yes, but indeed I'm quite positive about this behaviour: it is max-downloads, not delete-after-downloads ;) As a consequence of my opinion above, I would say we must consider thoroughly before adding extra complexity that might not be necessary.
What will we achieve with such behaviour?
The file is no longer served once the max-downloads value is reached: this is a matter of trust and expectations between the host and the users.
Does this behaviour really have an impact on those achievements?
Reaching the max-downloads value doesn't delete the file, and the max-days documentation is indeed wrong and must be fixed (https://github.com/dutchcoders/transfer.sh#max-days).
What do you think, @stefanbenten?
@paolafrancesca commented on GitHub:
Hello @fnpanic,
The max-downloads header doesn't delete the file, it just blocks access to it. To delete a file you have two options:
a DELETE request (https://github.com/dutchcoders/transfer.sh#deleting), or the automatic purge (PURGE_DAYS/PURGE_INTERVAL).
The fact that navigating back in the browser history triggers the file download is probably due to caching, either in nginx or in the browser.
can you share the nginx configuration?
Also, please check in the browser's network tab whether the request is served from cache or not.
@stefanbenten we might want to add headers to prevent caching to solve this, what do you think?
@paolafrancesca commented on GitHub:
To summarise, I see two different scenarios:
The specific scenario this discussion started from was from the user's point of view: a missing cache header prevented the download-blocking setting from being fulfilled.
@stefanbenten commented on GitHub:
I agree, the header should fix this scenario!