I am trying to read a gzipped file from S3 in two stages. The first lambda computes byte ranges and passes them to the second lambda. The second lambda uses a byte range to build a GetObjectRequest and calls GetObjectAsync on the S3 client. I am using the SharpZipLib library for decompression. Below is my .NET code, where I write the result to a JSON stream.
private static async Task DownloadZippedFileRangeToStreamAsync(
    IAmazonS3 s3Client,
    Stream jsonStream,
    GetObjectRequest objectRequest
)
{
    // Download the requested (possibly partial) byte range from S3.
    using var getObjectResponse = await s3Client.GetObjectAsync(objectRequest);

    // Buffer the response in memory so it can be decompressed from the start.
    using var memoryStream = new MemoryStream();
    await getObjectResponse.ResponseStream.CopyToAsync(memoryStream);
    memoryStream.Seek(0, SeekOrigin.Begin);

    // Decompress with SharpZipLib and read the JSONL content line by line.
    using var decompressedStream = new GZipInputStream(memoryStream);
    using var streamReader = new StreamReader(decompressedStream);
    using var jsonWriter = new Utf8JsonWriter(jsonStream);

    jsonWriter.WriteStartArray();
    while (!streamReader.EndOfStream)
    {
        // Read each line of the JSONL file
        string jsonLine = streamReader.ReadLine();

        // Parse and write the line as a JSON array item
        if (jsonLine != null)
        {
            using JsonDocument doc = JsonDocument.Parse(jsonLine);
            doc.WriteTo(jsonWriter);
        }
    }
    jsonWriter.WriteEndArray();
}
The above code works if the byte range starts at 0. When the byte range specified in the GetObjectRequest starts partway into the gzip file, it throws the exception "Error GZIP header, first magic byte doesn't match".
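For reference, the second lambda builds the ranged request and calls the method roughly like this; the bucket, key, and offsets are placeholders, not the actual values produced by the first lambda:

var objectRequest = new GetObjectRequest
{
    BucketName = "my-bucket",              // placeholder
    Key = "data/records.jsonl.gz",         // placeholder
    // Byte range produced by the first lambda; any range not starting at 0 triggers the error
    ByteRange = new ByteRange(1_048_576, 2_097_151)
};
await DownloadZippedFileRangeToStreamAsync(s3Client, jsonStream, objectRequest);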
I understand the file could also be read with a SelectObjectContentRequest by specifying the CompressionType in the InputSerialization parameter, but that does not support ScanRange with gzip.
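For completeness, this is roughly the S3 Select request I mean; the bucket, key, and expression are illustrative, and the exact constant names may differ slightly between SDK versions:

var selectRequest = new SelectObjectContentRequest
{
    Bucket = "my-bucket",                  // placeholder
    Key = "data/records.jsonl.gz",         // placeholder
    Expression = "SELECT * FROM S3Object s",
    ExpressionType = ExpressionType.SQL,
    InputSerialization = new InputSerialization
    {
        // Gzip input is accepted here...
        CompressionType = CompressionType.Gzip,
        JSON = new JSONInput { JsonType = JsonType.Lines }
    },
    OutputSerialization = new OutputSerialization { JSON = new JSONOutput() }
    // ...but a ScanRange cannot be combined with gzip-compressed input:
    // ScanRange = new ScanRange { Start = 1_048_576, End = 2_097_151 }
};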
Can you please help?