TransferManager upload times out after 6 to 7 minutes, stops at 50.1%


The process ends with an exception and this error message: "Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed. (Service: S3, Status Code: 400, ". The file I'm trying to upload is 3 GB. I ran this four times: the first time it stopped at about 35%, but the last three times it stopped at 50.1%. Here are the code and the Gradle build file:

import java.nio.file.Paths;
import java.time.LocalDateTime;

import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.CompletedFileUpload;
import software.amazon.awssdk.transfer.s3.model.FileUpload;
import software.amazon.awssdk.transfer.s3.model.UploadFileRequest;
import software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener;

public class S3Upload {

    private static final long MB = 1024 * 1024; // 1 MiB

    private final S3AsyncClient s3AsyncClient;
    private final S3TransferManager transferManager;

    public S3Upload() {
        s3AsyncClient =
                S3AsyncClient.crtBuilder()
                        .credentialsProvider(DefaultCredentialsProvider.create())
                        .region(Region.US_EAST_1)
                        .targetThroughputInGbps(20.0)
                        .minimumPartSizeInBytes(8 * MB)
                        .build();

        transferManager =
                S3TransferManager.builder()
                        .s3Client(s3AsyncClient)
                        .build();
    }

    public void writeFileToS3(String bucketName, String fileName, String filePath) {
        UploadFileRequest uploadFileRequest =
                UploadFileRequest.builder()
                        .putObjectRequest(b -> b.bucket(bucketName).key(fileName))
                        .addTransferListener(LoggingTransferListener.create())
                        .source(Paths.get(filePath))
                        .build();

        FileUpload fileUpload = transferManager.uploadFile(uploadFileRequest);
        CompletedFileUpload uploadResult = fileUpload.completionFuture().join();
        System.out.println(uploadResult);
    }

    public static void main(String[] args) {
        S3Upload fileUpload = new S3Upload();
        String bucket = args[0];
        String destFile = args[1];
        String srcFile = args[2];

        System.out.println("Start date and time: " + LocalDateTime.now());
        fileUpload.writeFileToS3(bucket, destFile, srcFile);
        System.out.println("End date and time: " + LocalDateTime.now());
    }
}
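For context on the multipart behavior: with minimumPartSizeInBytes(8 * MB), the transfer manager splits the file into a few hundred parts that the CRT client uploads concurrently. A minimal sketch of that arithmetic (assuming the file is exactly 3 GiB and MB means 1 MiB; class name PartMath is mine, not from the question):

```java
public class PartMath {
    public static void main(String[] args) {
        long fileSize = 3L * 1024 * 1024 * 1024;           // 3 GiB
        long partSize = 8L * 1024 * 1024;                  // 8 MiB minimum part size
        long parts = (fileSize + partSize - 1) / partSize; // ceiling division
        System.out.println(parts + " parts of 8 MiB");
        // -> 384 parts of 8 MiB
    }
}
```

Each in-flight part holds a connection open, which is why an overly aggressive concurrency/throughput setting on a slow link can leave connections idle long enough for S3 to close them.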

plugins {
    id 'java'
}

group 'org.example'
version '1.0-SNAPSHOT'

repositories {
    mavenCentral()
}

dependencies {
    implementation platform('software.amazon.awssdk:bom:2.20.56')
    implementation 'software.amazon.awssdk:s3'
    implementation 'software.amazon.awssdk:s3-transfer-manager'
    implementation 'software.amazon.awssdk.crt:aws-crt:0.21.14'
    implementation 'org.slf4j:slf4j-api:2.0.6'
    implementation 'org.slf4j:slf4j-simple:2.0.6'
    testImplementation 'org.junit.jupiter:junit-jupiter-api:5.7.0'
    testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.7.0'
}

test {
    useJUnitPlatform()

}

Asked a year ago · 837 views
Accepted Answer

I don't know if this is the only solution, but I ended up setting the throughput target to a value closer to my actual network bandwidth (home internet) instead of the 20 Gbps from the AWS example:

s3AsyncClient =
        S3AsyncClient.crtBuilder()
                .credentialsProvider(DefaultCredentialsProvider.create())
                .region(Region.US_EAST_1)
                .targetThroughputInGbps(.2)  // <== changed from 20.0
                .minimumPartSizeInBytes(8 * MB)
                .build();

The upload completed in 43 minutes for the 3 GB file.
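That timing also suggests why the original setting failed: 3 GiB in 43 minutes is roughly 0.01 Gbps of effective throughput, so a 20 Gbps target overshoots the real link by about three orders of magnitude, and connections the CRT client opens for that target sit idle until S3 closes them. A quick sanity check of the numbers (assuming an exactly 3 GiB file; class name ThroughputCheck is mine):

```java
public class ThroughputCheck {
    public static void main(String[] args) {
        double bits = 3.0 * 1024 * 1024 * 1024 * 8; // 3 GiB in bits
        double seconds = 43 * 60;                   // 43 minutes
        double gbps = bits / seconds / 1e9;         // effective throughput in Gbps
        System.out.printf("~%.3f Gbps effective (target was 20.0)%n", gbps);
        // prints roughly 0.010 Gbps
    }
}
```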

answered a year ago