I work on an HPC system and often have to share files with other users. The most approachable solution is an external cloud storage that everyone syncs back and forth. However, some projects are quite heavy (several TB), which makes that unfeasible. We do not have a shared group. The following is the only solution I have found short of just setting all permissions to 777, and I still don’t like it.
Create a directory and set an ACL to give access to the selected users. This works fine when users create new files in place, but it breaks when they copy files in from somewhere else, because the default umask of 022 strips group write from the copied files. The only clean fix I found is to change the default umask to 002, which however affects every file a user creates, not just those in the shared directory. The alternative is to fix permissions by hand every time you copy something, but you all know very well that is not going to happen.
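To make the failure mode concrete, here is a minimal sketch (plain POSIX shell, in a throwaway temp directory) showing that the group-write bit is decided by the umask in effect at creation time, which is why files written under the default 022 arrive without group write:

```shell
#!/bin/sh
# Demonstrate that umask decides the group-write bit at file creation.
tmp=$(mktemp -d)

umask 022
touch "$tmp/with_022"    # mode 644: group gets read only

umask 002
touch "$tmp/with_002"    # mode 664: group keeps write

# The 6th character of the mode string is the group-write bit.
ls -l "$tmp/with_022" | cut -c6    # prints "-"
ls -l "$tmp/with_002" | cut -c6    # prints "w"
```

The same thing happens on copy: `cp` requests the source file's mode, and anything not already group-writable stays that way in the destination.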
Does it really have to be such a pain in the ass?


I’m grateful for all the help and advice in here. Duplicating the data is not a problem; we can keep several copies of the data on the server.
Having the data on an external server, on the other hand, may be a problem, because that would require quite a large amount of storage capacity.
I’m unsure what you mean by accessing on demand: the data is already on the server and people can access it. My main pain point is that when people copy files in rather than creating them in place, I do not get write access by default.
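For what it's worth, a default ACL on the shared directory can get most of the way there. This is only a sketch, assuming Linux with POSIX ACLs enabled on the filesystem; the user `alice` and the path `/scratch/shared` are placeholders:

```shell
# Give alice access to existing content, and make it the default
# for anything created inside the directory from now on:
setfacl -m u:alice:rwx /scratch/shared       # hypothetical user and path
setfacl -d -m u:alice:rwx /scratch/shared    # -d sets the *default* ACL

# A default ACL makes the kernel ignore the umask for files created here.
# But cp copies the source mode (e.g. 644), and its group bits become the
# ACL mask, clipping alice back to read-only. Asking GNU cp not to
# preserve the mode lets the default ACL take effect instead:
cp --no-preserve=mode big_file /scratch/shared/

# Or repair after the fact (capital X: execute only on directories):
setfacl -R -m u:alice:rwX /scratch/shared
```

This still depends on people remembering the `cp` flag, so it doesn't fully remove the per-copy step, but the after-the-fact repair is one command rather than a permissions audit.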
The copyparty software looks interesting for other applications, and I may pick it up for something else, but it is not something that would work in this case. As I have explained extensively, while spawning a file server would not be impossible, it would be a huge hassle with no real advantage.