Large file transfer to AWS S3 bucket

Hi,
We are testing Razuna to store and manage our media.
Everything looks OK, but when we try to transfer large files (more than 4 GB) to S3, they stay in the incoming directory and are not transferred to S3. I think it is due to the limit on HTTP POST; we see the same limitation with tools like the CloudBerry utility we use to browse S3 buckets.
Is there any other solution to transfer large files to S3?

We are running Razuna on AWS; the VM is an r3.xlarge (30 GB memory / 4 vCPUs).
The Razuna version is 1.6.5.

Thanks

Hi,

The upcoming 1.7.1 release will have this fixed, as we use multipart upload for all transfers to S3. If you want to test it before the release (in a couple of weeks), you can clone the “develop” branch from our GitHub repository, as it already contains the updated S3 code.
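
For background, the multipart API sends the file in fixed-size parts instead of one huge POST, which is what removes the size limit. The flow looks roughly like the sketch below, shown with the AWS SDK for Java purely for illustration (Razuna's own implementation is CFML on OpenBD, and the 100 MB part size is an assumed value):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class MultipartUploadSketch {

    public static void upload(String bucket, String key, File file) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Start the multipart upload and remember the upload id.
        String uploadId = s3
                .initiateMultipartUpload(new InitiateMultipartUploadRequest(bucket, key))
                .getUploadId();

        long partSize = 100L * 1024 * 1024; // assumed 100 MB per part
        List<PartETag> partETags = new ArrayList<>();

        try {
            long offset = 0;
            for (int partNumber = 1; offset < file.length(); partNumber++) {
                long size = Math.min(partSize, file.length() - offset);

                // Each part is read from the file at its offset, so the whole
                // file is never held in memory at once.
                UploadPartRequest part = new UploadPartRequest()
                        .withBucketName(bucket)
                        .withKey(key)
                        .withUploadId(uploadId)
                        .withPartNumber(partNumber)
                        .withFile(file)
                        .withFileOffset(offset)
                        .withPartSize(size);

                partETags.add(s3.uploadPart(part).getPartETag());
                offset += size;
            }

            s3.completeMultipartUpload(
                    new CompleteMultipartUploadRequest(bucket, key, uploadId, partETags));
        } catch (RuntimeException e) {
            // Abort so S3 does not keep the already-uploaded parts around.
            s3.abortMultipartUpload(new AbortMultipartUploadRequest(bucket, key, uploadId));
            throw e;
        }
    }
}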

Hope this helps.

Hi,

I upgraded to the latest version from GitHub.
We tried to transfer a 16 GB video file and got this error message:
Type: Internal
Tag Context:
  CFSET (/data/razuna_tomcat_1_7/tomcat/webapps/razuna/global/cfc/amazon.cfc, Line=98, Column=5)
  |
  +- CFFUNCTION (/data/razuna_tomcat_1_7/tomcat/webapps/razuna/global/cfc/amazon.cfc, Line=87, Column=2)
  |
  +- CFINVOKE (/data/razuna_tomcat_1_7/tomcat/webapps/razuna/global/cfc/assets.cfc, Line=3786, Column=7)

Source:
  95 : bucket=arguments.awsbucket,
  96 : key=arguments.key,
  97 : file=arguments.theasset
  98 : )>
  99 : <!--- Return --->
  ^ Snippet from underlying CFML source

Stack Trace:
  java.lang.OutOfMemoryError: Requested array size exceeds VM limit
    at java.util.Arrays.copyOf(Arrays.java:2271)
    at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
    at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
    at sun.net.www.http.PosterOutputStream.write(PosterOutputStream.java:78)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)

On the JVM, we have these stats:
Memory Allocation
Memory Total : 7993 MB
Memory Free : 4239 MB
Memory Max : 12185 MB
Memory Used : 3753 MB

We can increase the value, but it already looks very high.

Guessing from the path, it is Linux. You should not have -Xmx and -Xms set higher than 6 GB.

We might have a chunk size that is too high for Java to release memory properly. We will adjust this and commit the changes. Once that is done, we would appreciate another test.
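
For reference, the chunk (part) size is what bounds how much of the file sits in memory at once. In the AWS SDK for Java the equivalent knobs look like the sketch below (illustration only; Razuna's upload code is CFML, and the sizes, bucket, key and path here are assumptions, not the values we will commit):

import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

import java.io.File;

public class PartSizeSketch {

    public static void main(String[] args) throws InterruptedException {
        // Smaller parts mean smaller in-flight buffers that the JVM can
        // reclaim between parts; both values below are assumptions.
        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(AmazonS3ClientBuilder.defaultClient())
                .withMinimumUploadPartSize(16L * 1024 * 1024)    // 16 MB per part
                .withMultipartUploadThreshold(64L * 1024 * 1024) // multipart above 64 MB
                .build();

        // Placeholder bucket, key and path.
        Upload upload = tm.upload("my-razuna-bucket", "assets/large-video.mxf",
                new File("/data/incoming/large-video.mxf"));
        upload.waitForCompletion();
        tm.shutdownNow();
    }
}

Smaller parts keep the per-request buffers small, at the cost of more requests.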

Thank you.

Yes, we are on Linux.

I got the updated code from the develop branch, put it on the server, and tried again.
We now have an error message on the browser side:

I don't find any error on the server side.
I double-checked the Amazon config; everything looks OK.

Regards

When we try with small files, we have this error message on the server side:

44: <!--- Save content --->
45:
46:
47:

An error occurred: http://#cgi.server_name##cgi.script_name#?#cgi.query_string#

48: Time: #dateFormat(now(), "short")# #timeFormat(now(), "short")#

and this also:

Type: Expression
Function(s): download (/data/razuna_tomcat_1_7/tomcat/webapps/razuna/global/cfc/amazon.cfc, Line=134, Column=2)
Tag Context:
  CFSET (/data/razuna_tomcat_1_7/tomcat/webapps/razuna/global/cfc/amazon.cfc, Line=144, Column=5)
  |
  +- CFFUNCTION (/data/razuna_tomcat_1_7/tomcat/webapps/razuna/global/cfc/amazon.cfc, Line=134, Column=2)
  |
  +- CFINVOKE (/data/razuna_tomcat_1_7/tomcat/webapps/razuna/global/cfc/videos.cfc, Line=1474, Column=5)

Source:
  141: bucket=arguments.awsbucket,
  142: key=arguments.key,
  143: file=arguments.theasset
  144: )>
  145: <!--- Return --->
  ^ Snippet from underlying CFML source

Access denied errors usually mean the bucket is not configured properly in Razuna. Please see this article for proper AWS setup and make sure the AWS settings are validated.

http://wiki.razuna.com/display/ecp/Amazon+Web+Services

I double-checked again; the AWS settings are OK.
I restarted the application, tried to upload a new file, and got the same error message.
I use the same credentials to browse the files on AWS S3 with the CloudBerry utility, and I have access to the S3 bucket with this key.

Flush the template cache, too.

Same issue after flushing the cache.

Please make sure the AWS user account has proper LIST and GET access permissions.
See http://docs.aws.amazon.com/redshift/latest/dg/s3serviceexception-error.html
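
If it helps, you can verify those two permissions with the same access key outside of Razuna. A minimal sketch with the AWS SDK for Java (the bucket name is a placeholder, and the client picks up whatever credentials you configure):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class PermissionCheckSketch {

    public static void main(String[] args) {
        // Uses the same access key / secret that Razuna is configured with.
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        String bucket = "my-razuna-bucket"; // placeholder

        // LIST: enumerating keys needs s3:ListBucket on the bucket.
        ObjectListing listing = s3.listObjects(bucket);
        for (S3ObjectSummary summary : listing.getObjectSummaries()) {
            System.out.println(summary.getKey() + " (" + summary.getSize() + " bytes)");
        }

        // GET: reading an object (here only its metadata via HEAD) needs s3:GetObject.
        if (!listing.getObjectSummaries().isEmpty()) {
            String firstKey = listing.getObjectSummaries().get(0).getKey();
            System.out.println(s3.getObjectMetadata(bucket, firstKey).getContentLength() + " bytes");
        }
    }
}

If either call comes back with a 403, the key is missing s3:ListBucket or s3:GetObject on that bucket.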

I'm sure about the key; this key was working before the last upgrade.

The error message on Razuna is “Error in function amazon.upload”
Explanation: Forbidden; Status Code: 403; Filecontent:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>693C76F88CB30605</RequestId>
  <HostId>ck5/dH4BMhiGr1g2l45Rd+vMUhqv619aU/EQv398nABMY3JA+0LwcMe3i0b5xTvr</HostId>
</Error>
Line 365

Hi,
Did you see my last message? You need to ensure that the AWS user account has proper LIST and GET access permissions.

Best
Harkirat

Hi,
Yes, I saw it; the AWS key has full access to the bucket.
Before the upgrade to the latest update we got from GitHub, we were able to upload to AWS, but it failed for large files (more than 4 GB).
Now, with the latest version, neither small nor large files are uploaded to AWS anymore.
I think there could be an error in the URL sent to AWS, but I don't know how to capture it.

We deployed the very same code to many of our managed servers. We know for a fact that the code now works reliably, even with very large files.

What you are seeing must be related to your setup. Make sure that you have the correct bucket location, the correct bucket name and especially the correct permissions set.
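
A quick way to confirm the bucket name and its location with your credentials, sketched here with the AWS SDK for Java (the bucket name is a placeholder):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class BucketLocationSketch {

    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        String bucket = "my-razuna-bucket"; // placeholder

        // "US" (or an empty string) means the classic us-east-1 region.
        System.out.println("Bucket exists:   " + s3.doesBucketExist(bucket));
        System.out.println("Bucket location: " + s3.getBucketLocation(bucket));
    }
}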

Also make sure to flush the template cache and even restart Tomcat after the update to the storage.

We can’t say more than that.

OK, I will try to restart Tomcat, and if that does not help, I will deploy a fresh version and let you know.

Thanks for your quick answers

Hi,

I created a new bucket and new keys, and reinstalled a fresh copy of Razuna with the latest development version.
We are now able to upload files.
But we still have an error with the large file (16 GB):
All AWS parts could not be successfully uploaded for file 103332303.mxf.
Please see ‘AWS_Errors log’ in OpenBD admin for more details.

I don’t know where the OpenBD admin log is.

We tried to transfer smaller files of 4 GB. The size on AWS is 2.7 GB, and on the Razuna side we cannot see the video; the file size shown on the Razuna web page is 0.01 MB.
We don't have any error log (neither in the Razuna Admin nor in the rtelogs on the server).

The test with a 100 MB file is OK.

Is it possible to access your DAM? If so please send the DAM URL and we can take a look at the issue.
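
You could also check whether the failed transfers left incomplete multipart uploads behind on the bucket; those parts stay on S3 until the upload is completed or aborted. A rough sketch with the AWS SDK for Java (the bucket name is a placeholder):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListMultipartUploadsRequest;
import com.amazonaws.services.s3.model.MultipartUpload;

public class IncompleteUploadsSketch {

    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        String bucket = "my-razuna-bucket"; // placeholder

        // Parts of a failed multipart upload remain on S3 (and are billed)
        // until the upload is completed or aborted.
        for (MultipartUpload u : s3
                .listMultipartUploads(new ListMultipartUploadsRequest(bucket))
                .getMultipartUploads()) {
            System.out.println(u.getKey() + " started " + u.getInitiated()
                    + " (uploadId " + u.getUploadId() + ")");
        }
    }
}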

Information sent to you directly.