The process ends with an exception and this error message: "Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed. (Service: S3, Status Code: 400, " The file I'm trying to upload is 3 GB. I've run this four times: the first time it stopped at about 35%, but the last three times it stopped at 50.1%. Here are the code and the Gradle file.
import java.nio.file.Paths;
import java.time.LocalDateTime;

import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.CompletedFileUpload;
import software.amazon.awssdk.transfer.s3.model.FileUpload;
import software.amazon.awssdk.transfer.s3.model.UploadFileRequest;
import software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener;

public class S3Upload {

    private static final long MB = 1024 * 1024;

    private final S3AsyncClient s3AsyncClient;
    private final S3TransferManager transferManager;

    public S3Upload() {
        s3AsyncClient = S3AsyncClient.crtBuilder()
                .credentialsProvider(DefaultCredentialsProvider.create())
                .region(Region.US_EAST_1)
                .targetThroughputInGbps(20.0)
                .minimumPartSizeInBytes(8 * MB)
                .build();
        transferManager = S3TransferManager.builder()
                .s3Client(s3AsyncClient)
                .build();
    }

    public void writeFileToS3(String bucketName, String fileName, String filePath) {
        UploadFileRequest uploadFileRequest = UploadFileRequest.builder()
                .putObjectRequest(b -> b.bucket(bucketName).key(fileName))
                .addTransferListener(LoggingTransferListener.create())
                .source(Paths.get(filePath))
                .build();
        FileUpload fileUpload = transferManager.uploadFile(uploadFileRequest);
        CompletedFileUpload uploadResult = fileUpload.completionFuture().join();
        System.out.println(uploadResult);
    }

    public static void main(String[] args) {
        S3Upload fileUpload = new S3Upload();
        String bucket = args[0];
        String destFile = args[1];
        String srcFile = args[2];
        System.out.println("Start date and time: " + LocalDateTime.now());
        fileUpload.writeFileToS3(bucket, destFile, srcFile);
        System.out.println("End date and time: " + LocalDateTime.now());
    }
}
plugins {
id 'java'
}
group 'org.example'
version '1.0-SNAPSHOT'
repositories {
mavenCentral()
}
dependencies {
implementation platform('software.amazon.awssdk:bom:2.20.56')
implementation 'software.amazon.awssdk:s3'
implementation 'software.amazon.awssdk:s3-transfer-manager'
implementation 'software.amazon.awssdk.crt:aws-crt:0.21.14'
implementation 'org.slf4j:slf4j-api:2.0.6'
implementation 'org.slf4j:slf4j-simple:2.0.6'
testImplementation 'org.junit.jupiter:junit-jupiter-api:5.7.0'
testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.7.0'
}
test {
useJUnitPlatform()
}
Unfortunately, I think the link is to a Java SDK 1.x version. My original upload program used the 1.x version, but the upload would get stuck at 99%. So, based on https://stackoverflow.com/questions/65207720/multipart-upload-using-aws-java-sdk-hangs-at-99, I looked for the Java SDK 2.x flavor of TransferManager; the example I found is at https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/transfer-manager.html. The documentation for the .targetThroughputInGbps() method indicates the default is 10, and that hung. The documentation also says the target should be set to the maximum network bandwidth of the host, so I set the target to 0.2. It has managed to run for more than 8 minutes so far, but it is progressing more slowly. I will continue to monitor.
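For a rough sanity check on that slower run (assuming, hypothetically, a ~200 Mbit/s uplink, which is what a target of 0.2 Gbps corresponds to), the best-case transfer time for a 3 GB file works out as follows:

```java
public class TransferTimeEstimate {
    public static void main(String[] args) {
        // Hypothetical link speed matching targetThroughputInGbps(0.2).
        double linkGbps = 0.2;
        // A 3 GB file expressed in gigabits (1 byte = 8 bits).
        double fileGbits = 3.0 * 8;
        // Best-case wall-clock time at full link utilization, in seconds.
        double seconds = fileGbits / linkGbps;
        System.out.println("Minimum transfer time: " + seconds + " s ("
                + (seconds / 60) + " min)");
    }
}
```

So even at full utilization the upload takes at least two minutes at that target, and with protocol and multipart overhead a run lasting well past that is not by itself a sign of a hang.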