Is Athena Iceberg ~10x slower compared to a regular external table?


We have time-series data, and we always need only the latest item for a given ID. Once data is collected, we usually extract a single record by its ID.
We have about 5*10^8 records per day arriving in ~1K batches; a month of data is ~100 TB compressed. I thought Iceberg would be a perfect candidate if we used MERGE INTO for each new portion of data.
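
For context, this is roughly the upsert I had in mind. Here new_batch and record_id are illustrative stand-ins for our staging table and our real unique key (the toy table below doesn't have one):

-- Illustrative upsert: keep only the latest row per record_id
MERGE INTO my_table t
USING new_batch s
    ON t.record_id = s.record_id
WHEN MATCHED
    THEN UPDATE SET my_data = s.my_data
WHEN NOT MATCHED
    THEN INSERT (record_id, my_data) VALUES (s.record_id, s.my_data)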

However, after a basic benchmark I found that Iceberg is roughly 10x slower than a regular external table. For the sake of simplicity I minimized the benchmark as much as I could, and it seems my observation holds even with a 12-row table.

Here is the benchmark:

**Iceberg**

CREATE TABLE my_test (
    my_partition bigint,
    my_data string
)
PARTITIONED BY (
    my_partition
)
LOCATION 's3://my-data/iceberg'
TBLPROPERTIES (
    'table_type' = 'ICEBERG',
    'format' = 'parquet'
)


insert into my_test 
values 
(1, 'Some data'),
(1, 'Some data'),
(1, 'Some data'),
(1, 'Some data'),
(1, 'Some data'),
(1, 'Some data'),
(1, 'Some data'),
(1, 'Some data'),
(1, 'Some data'),
(2, 'Some data'),
(2, 'Some data'),
(2, 'Some data'),
(2, 'Matching data')
-- Run time: 3.586 sec


select count(*) from my_test
-- Run time: 8.293 sec

select count(*)
from my_test
where my_partition = 2
-- Run time: 7.485 sec

select *
from my_test
where my_partition = 2 and my_data = 'Matching data'
-- Run time: 7.766 sec

**External table**

CREATE EXTERNAL TABLE my_test_non_ice (
    my_data string
)
PARTITIONED BY (
    my_partition int
)
STORED AS PARQUET
LOCATION 's3://my-data/parquet'


insert into my_test_non_ice
values
('Some data', 1),
('Some data', 1),
('Some data', 1),
('Some data', 1),
('Some data', 1),
('Some data', 1),
('Some data', 1),
('Some data', 1),
('Some data', 1),
('Some data', 2),
('Some data', 2),
('Some data', 2),
('Matching data', 2)
-- Run time: 1.422 sec


select count(*) from my_test_non_ice
-- Run time: 1.087 sec

select count(*)
from my_test_non_ice
where my_partition = 2
-- Run time: 1.18 sec

select *
from my_test_non_ice
where my_partition = 2 and my_data = 'Matching data'
-- Run time: 790 ms

As you can see, every single query is much faster against the plain external table; the last one (the most interesting for me) is ~10x faster than its Iceberg counterpart. Our goal is to extract a single record in three-digit milliseconds. Is that achievable at all with Iceberg tables?

To me, taking 8+ seconds to count 12 rows is rather strange.

Any suggestions about what I'm doing wrong would be highly appreciated.

Smotrov
asked a year ago · 2344 views
1 Answer
Accepted Answer

An "ICEBERG" table works with snapshots and metadatas files.

Broadly, it lays files out differently from a standard EXTERNAL table, even duplicates data, and relies on a set of JSON + Avro metadata files to know how to query the data correctly. So there is more work in reading all those files than in reading a simple EXTERNAL TABLE through the plain Parquet SerDe, which only reads each Parquet file's internal metadata.
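
You can actually see that extra layer from Athena itself: every Iceberg table exposes hidden metadata tables such as "$snapshots" and "$files". For the table from the question, something like this shows what a query has to consider:

-- Snapshots the table has accumulated so far
SELECT * FROM "my_test$snapshots"

-- Underlying data files and their row counts
SELECT file_path, record_count FROM "my_test$files"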

With a tiny example dataset this is not very noticeable, but keep in mind that S3 storage usage can grow quickly with ICEBERG depending on the process. If you run a MERGE that applies many updates and some deletes, the space used on S3 can quickly multiply, and performance gets even worse (because the data ends up split across many snapshots/files).

However, running the OPTIMIZE command can then bring performance back to the initial state of the ICEBERG table, by combining files into new ones and updating the metadata to ignore the old files.
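
In Athena that looks like the following (as far as I know, BIN_PACK is the only rewrite strategy currently offered):

-- Compact small data/delete files into larger ones
OPTIMIZE my_test REWRITE DATA USING BIN_PACK

-- Or compact a single partition only
OPTIMIZE my_test REWRITE DATA USING BIN_PACK
WHERE my_partition = 2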

ICEBERG has advantages (much simpler maintenance, querying partitioned data without writing explicit conditions on those partitions, ...) and disadvantages (slower reads, storage that grows quickly, ...).

Personally, I have not managed to observe automatic disk-space optimization at work when using VACUUM and OPTIMIZE; I just know that the table uses fewer files afterwards and that deleting files which are no longer referenced does not break my Iceberg table.
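
For reference, what VACUUM is allowed to remove is controlled by snapshot-retention table properties; here is a sketch with purely illustrative values:

-- Retention settings (example values, tune to your needs)
ALTER TABLE my_test SET TBLPROPERTIES (
    'vacuum_min_snapshots_to_keep' = '1',
    'vacuum_max_snapshot_age_seconds' = '86400'
)

-- Expire old snapshots and delete the files they leave orphaned
VACUUM my_test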

If the goal is to squeeze out some performance, I would advise trying a different Parquet compression, though that will probably not change everything. (Also, if you have already run multiple deletes/inserts on your table, try an OPTIMIZE to restore performance.)
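
For example, switching the table's Parquet compression through the write_compression property (ZSTD here is just one option; SNAPPY and GZIP are also accepted):

-- Affects newly written files; run OPTIMIZE afterwards to rewrite old ones
ALTER TABLE my_test SET TBLPROPERTIES (
    'write_compression' = 'ZSTD'
)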

answered a year ago
