Ability to get a segment from a video asset (not the whole file) during order processing.
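For illustration, extracting a segment at order time could look roughly like this (a minimal sketch; the function name, paths and timecodes are made up, and Razuna’s actual order-processing code is not shown):

```python
import subprocess

def extract_segment(src, dst, start, duration):
    """Cut a segment out of a video without re-encoding the whole asset.

    -ss seeks to the start offset, -t limits the output duration,
    and -c copy does a stream copy instead of a full transcode.
    """
    subprocess.run(
        ["ffmpeg", "-ss", str(start), "-i", src,
         "-t", str(duration), "-c", "copy", dst],
        check=True,
    )

# e.g. deliver 30 seconds starting one minute into the asset
extract_segment("asset.mp4", "segment.mp4", "00:01:00", "30")
```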
Rendition concurrency. Depending on how many CPUs the server has, ffmpeg doesn’t utilize the full CPU power. Two or more renditions could run at the same time on modern servers.
It seems version control doesn’t work. I couldn’t make Razuna add a new version of either a DOC file or a video file.
One more idea: add some kind of check-in/check-out functionality. The user shouldn’t have to care about it, but when a user uploads a file, the system should check whether this user, or other users within the same group(s), has previously worked with a file of the same name and type. If such a file and such activity are found, the system should ask whether the file being uploaded should be saved as a new version of the existing one.
This would be useful for many assets such as editing projects, images, etc. E.g. a remote graphic designer downloads some files, works with them, and uploads the edited versions.
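Roughly, the check could look like this (a sketch only; the function name and the data model are made up for illustration and are not Razuna’s real API):

```python
def handle_upload(user, filename, existing_assets):
    """Decide whether an upload should become a new version of an existing asset.

    existing_assets is assumed to be a list of dicts such as
    {"name": "logo.psd", "owner": "alice", "groups": {"design"}}.
    """
    for asset in existing_assets:
        same_name_and_type = asset["name"].lower() == filename.lower()
        worked_on_before = (
            asset["owner"] == user["name"] or asset["groups"] & user["groups"]
        )
        if same_name_and_type and worked_on_before:
            # Prompt instead of silently creating a duplicate asset.
            return f"Save '{filename}' as a new version of the existing asset?"
    return "Store as a new asset."

# e.g. a designer in the "design" group re-uploading an edited logo.psd
print(handle_upload(
    {"name": "bob", "groups": {"design"}},
    "logo.psd",
    [{"name": "logo.psd", "owner": "alice", "groups": {"design"}}],
))
```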
I don’t see why you say that ffmpeg doesn’t utilize the full CPU. It certainly works with many renditions at the same time on our system (Linux), where we see 100 renditions running simultaneously. Furthermore, if you run Razuna, MySQL and ffmpeg on the same machine, you don’t want any of these processes to take up all the CPU, or else the other applications will be slow to respond.
Please state why version control doesn’t work for you. A simple “doesn’t work” is a useless statement. Check-in/check-out is planned for Version 2.0.
About renditions: I meant that if one makes a batch upload with renditions, every uploaded file is rendered sequentially. I expected this to run in parallel threads.
About versions: I simply couldn’t add a version of any type of asset. I’m surprised if this does work for you.
Renditions: They are rendered sequentially because, imagine throwing 1000 uploads at the server and then running three renditions from each one (something that many of our customers do); this means the server needs to create 3000 renditions. Since an agency might run not just one client but many clients on the same server, this can easily multiply. It is designed this way so as not to crash the server.
Ability to retry an upload to AWS if it fails.
Files are stored in the incoming directory and could be retried. This could be interesting for large files.
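Something along these lines, for example (a sketch in Python using boto3 purely to illustrate the retry idea; Razuna’s actual storage code is not shown, and the function name is made up):

```python
import time
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

def upload_with_retry(local_path, bucket, key, attempts=3):
    """Retry an S3 upload a few times before giving up.

    The file stays in the incoming directory between attempts, so a
    transient network failure on a large file does not force the user
    to upload it again.
    """
    s3 = boto3.client("s3")
    for attempt in range(1, attempts + 1):
        try:
            s3.upload_file(local_path, bucket, key)
            return True
        except (ClientError, EndpointConnectionError):
            if attempt == attempts:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff
```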
Retry for uploads is built into the latest release.
Possibility to use Microsoft Azure Blob Storage?
I don’t know the impact on development, but it could be interesting for attracting more users who already have a Microsoft Azure account.
This could be a plugin in Razuna 2.0. I’m not saying we will, but it is doable. Also, Razuna 2.0 comes with an extensible plugin architecture, and anyone can contribute.
Thanks.
As a new feature, I propose being able to upload a folder without a zip archive.
I don’t think all these 1000 files should be rendered at the same time. As I am one of those who upload terabytes of files, I thought there should be one rendition queue per server with tunable concurrency, and each rendition (as I found, Razuna generates a shell script for it) should be put into this queue FIFO and executed when its turn comes. The admin should have control over how many renditions may run simultaneously.
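Something like this is what I have in mind (a sketch only; the setting name and script file names are made up, and this is not how Razuna is actually implemented):

```python
import queue
import subprocess
import threading

MAX_CONCURRENT_RENDITIONS = 2  # hypothetical admin-tunable setting

rendition_queue = queue.Queue()  # FIFO: jobs run in arrival order

def worker():
    while True:
        script = rendition_queue.get()  # e.g. the shell script Razuna generates
        try:
            # A failed job should not kill the worker, so don't raise on errors.
            subprocess.run(["sh", script], check=False)
        finally:
            rendition_queue.task_done()

# Start exactly as many workers as the admin allows.
for _ in range(MAX_CONCURRENT_RENDITIONS):
    threading.Thread(target=worker, daemon=True).start()

# A batch upload then just enqueues one job per rendition...
for script in ["rendition_0001.sh", "rendition_0002.sh", "rendition_0003.sh"]:
    rendition_queue.put(script)

rendition_queue.join()  # ...and the queue drains at the configured concurrency
```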