Hi,
We are testing Razuna to store and manage our media.
Everything looks OK, but when we try to transfer large files (more than 4 GB) to S3, they stay in the incoming directory and are never transferred to S3. I think it is due to a limit on the HTTP POST size; we hit the same limitation with tools like the CloudBerry utility when browsing S3 buckets.
Is there another way to transfer large files to S3?
We are running Razuna on AWS; the VM is an r3.xlarge (30 GB memory / 4 vCPUs).
Razuna version is 1.6.5
The upcoming 1.7.1 release will have this fixed, as we use multipart upload for all transfers to S3. If you want to test it before the release (in a couple of weeks), you can clone the “develop” branch from our GitHub repository; it already contains the updated S3 code.
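For anyone curious what multipart upload looks like at the SDK level, here is a minimal sketch using the AWS SDK for Java’s TransferManager. This is only an illustration of the technique, not Razuna’s actual CFML code; the bucket name, key, and file path are placeholders.

import java.io.File;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

public class MultipartUploadSketch {
    public static void main(String[] args) throws InterruptedException {
        // TransferManager splits large files into parts and uploads them in
        // parallel, so no single HTTP request has to carry the whole file.
        TransferManager tm = TransferManagerBuilder.standard().build();
        try {
            // Placeholder bucket, key, and local path.
            Upload upload = tm.upload("my-bucket", "videos/large.mxf",
                    new File("/data/incoming/large.mxf"));
            upload.waitForCompletion(); // blocks until every part is uploaded
        } finally {
            tm.shutdownNow();
        }
    }
}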
I upgraded to the latest version from GitHub.
We tried to transfer a 16 GB video file and got this error message:
Type: Internal
Tag Context:
CFSET (/data/razuna_tomcat_1_7/tomcat/webapps/razuna/global/cfc/amazon.cfc, Line=98, Column=5)
+- CFFUNCTION (/data/razuna_tomcat_1_7/tomcat/webapps/razuna/global/cfc/amazon.cfc, Line=87, Column=2)
+- CFINVOKE (/data/razuna_tomcat_1_7/tomcat/webapps/razuna/global/cfc/assets.cfc, Line=3786, Column=7)
Source (snippet from the underlying CFML):
95 : bucket=arguments.awsbucket,
96 : key=arguments.key,
97 : file=arguments.theasset
98 : )>
99 : <!--- Return --->
Stack Trace:
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
at java.util.Arrays.copyOf(Arrays.java:2271)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
at sun.net.www.http.PosterOutputStream.write(PosterOutputStream.java:78)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
On the JVM, we have these stats:
Memory Allocation
Memory Total : 7993 MB
Memory Free : 4239 MB
Memory Max : 12185 MB
Memory Used : 3753 MB
We can increase the value, but it already looks very high.
Guessing from the path, you are on Linux. You should not set -Xmx and -Xms higher than 6 GB.
Our chunk size might be too high for Java to release memory properly. We will adjust this and commit the changes. Once done, we would appreciate another test.
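For reference, with the AWS SDK for Java the part size is tunable on the TransferManager; smaller parts mean less data buffered in memory at once. A minimal sketch, assuming the SDK’s builder API (the 16 MB / 64 MB values are illustrative, not Razuna’s actual settings):

import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;

public class PartSizeSketch {
    public static void main(String[] args) {
        // Keep each uploaded part small so the JVM never has to grow a huge
        // byte array, and switch to multipart well below the 4 GB mark.
        TransferManager tm = TransferManagerBuilder.standard()
                .withMinimumUploadPartSize(16L * 1024 * 1024)    // 16 MB parts
                .withMultipartUploadThreshold(64L * 1024 * 1024) // multipart above 64 MB
                .build();
        // ... use tm.upload(...) as above, then tm.shutdownNow()
    }
}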
Access denied errors usually mean the bucket is not configured properly in Razuna. Please see this article for proper AWS setup and make sure the AWS settings are validated.
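A quick way to check this outside of Razuna is to hit the bucket with the same credentials via a small test against the AWS SDK for Java; a 403 here reproduces the problem independently of the application. A sketch with placeholder region and bucket name:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class BucketAccessCheck {
    public static void main(String[] args) {
        // Picks up credentials from the default chain (environment variables,
        // ~/.aws/credentials, or the EC2 instance profile).
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion("eu-west-1")   // placeholder region
                .build();
        String bucket = "my-bucket";       // placeholder bucket
        System.out.println("bucket reachable: " + s3.doesBucketExistV2(bucket));
        // A write is the real test; AccessDenied would surface here as a 403.
        s3.putObject(bucket, "razuna-write-test.txt", "hello");
        System.out.println("write OK");
    }
}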
I double-checked; the AWS settings are OK.
I restarted the application and tried to upload a new file, and I got the same error message.
I use the same credentials to browse the files on AWS S3 with the CloudBerry utility, and I do have access to the S3 bucket with this key.
The error message in Razuna is “Error in function amazon.upload”:
Explanation: Forbidden; Status Code: 403; Filecontent:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>693C76F88CB30605</RequestId>
  <HostId>ck5/dH4BMhiGr1g2l45Rd+vMUhqv619aU/EQv398nABMY3JA+0LwcMe3i0b5xTvr</HostId>
</Error>
Line 365
Hi,
Yes, I see it; the AWS key has full access to the bucket.
Before upgrading to the latest update from GitHub, we were able to upload to AWS, but it failed for large files (more than 4 GB).
Now, with the latest version, neither small nor large files are uploaded to AWS anymore.
I think it could be an error in the URL sent to AWS, but I don’t know how to capture it.
We deployed the very same code to many of our managed servers, and we know for a fact that it now works reliably, even with very large files.
What you are seeing must be related to your setup. Make sure that you have the correct bucket location, the correct bucket name and especially the correct permissions set.
Also make sure to flush the template cache, and even restart Tomcat, after updating the storage settings.
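One more easy mismatch to rule out is the bucket location: the SDK can report the region a bucket actually lives in, which has to match what is configured in Razuna. A sketch with a placeholder bucket name:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class BucketLocationCheck {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // Prints the region the bucket was created in, e.g. "eu-west-1"
        // (buckets in US Standard report "US").
        System.out.println(s3.getBucketLocation("my-bucket")); // placeholder
    }
}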
I created a new bucket and new keys, and reinstalled a fresh copy of Razuna from the latest develop version.
We are now able to upload files.
But we still have an error with a large file (16 GB):
All AWS parts could not be successfully uploaded for file 103332303.mxf.
Please see ‘AWS_Errors log’ in OpenBD admin for more details.
I don’t know where the OpenBD admin log is.
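For what it’s worth, when parts fail like this, S3 keeps the incomplete multipart upload around (and its parts keep consuming storage) until it is aborted. The leftovers can be inspected and cleaned up with the SDK; a sketch with a placeholder bucket, ignoring result pagination:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;
import com.amazonaws.services.s3.model.ListMultipartUploadsRequest;
import com.amazonaws.services.s3.model.MultipartUpload;

public class StaleUploadCleanup {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        String bucket = "my-bucket"; // placeholder
        // List uploads that were started but never completed.
        for (MultipartUpload mu : s3
                .listMultipartUploads(new ListMultipartUploadsRequest(bucket))
                .getMultipartUploads()) {
            System.out.println("stale: " + mu.getKey() + " / " + mu.getUploadId());
            // Abort so the orphaned parts are discarded.
            s3.abortMultipartUpload(new AbortMultipartUploadRequest(
                    bucket, mu.getKey(), mu.getUploadId()));
        }
    }
}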
We tried to transfer smaller files of 4 GB each. The size on AWS is 2.7 GB, and on the Razuna side we cannot see the video; the file size shown on the Razuna web page is 0.01 MB.
We don’t see any error log (neither in the Razuna Admin nor in the rtelogs on the server).
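To pin down whether the truncation happens during the upload itself, the object’s size on S3 can be compared against the local file before it leaves the incoming directory. A sketch; the bucket, key, and path are placeholders:

import java.io.File;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class SizeCheck {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        File local = new File("/data/incoming/video.mxf");           // placeholder
        long remote = s3.getObjectMetadata("my-bucket", "video.mxf") // placeholders
                        .getContentLength();
        System.out.println("local=" + local.length() + " remote=" + remote
                + (local.length() == remote ? " (match)" : " (TRUNCATED)"));
    }
}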