Hi,
I am trying to use a custom plugin with a Debezium Postgres source connector to capture database changes into my MSK Serverless cluster. From my reading, the maximum number of partitions per serverless cluster is 2,400. However, I am getting the following connect error:
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.InvalidRequestException: Quota exceeded for maximum number of partitions
I find it hard to believe that I am exceeding the allotted number of partitions in my cluster, especially since the cluster is freshly created and contains no other topics. I am also using a provisioned MSK Connect configuration with a single worker.
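To double-check that the cluster really is nearly empty, I counted partitions with a small AdminClient program along these lines (a minimal sketch; the PartitionCount class name and bootstrap server are placeholders, and it assumes kafka-clients 3.1+ and the aws-msk-iam-auth library on the classpath):

import java.util.Map;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListTopicsOptions;
import org.apache.kafka.clients.admin.TopicDescription;

public class PartitionCount {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "omitted:9098"); // placeholder MSK Serverless IAM endpoint
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "AWS_MSK_IAM");
        props.put("sasl.jaas.config", "software.amazon.msk.auth.iam.IAMLoginModule required;");
        props.put("sasl.client.callback.handler.class",
                "software.amazon.msk.auth.iam.IAMClientCallbackHandler");

        try (AdminClient admin = AdminClient.create(props)) {
            // Include internal topics, since they count toward the partition total too.
            Set<String> names = admin.listTopics(new ListTopicsOptions().listInternal(true))
                    .names().get();
            // allTopicNames() is the Kafka 3.1+ accessor; older clients use all() instead.
            Map<String, TopicDescription> topics = admin.describeTopics(names).allTopicNames().get();
            int total = 0;
            for (Map.Entry<String, TopicDescription> e : topics.entrySet()) {
                System.out.println(e.getKey() + ": " + e.getValue().partitions().size());
                total += e.getValue().partitions().size();
            }
            System.out.println("Total partitions: " + total);
        }
    }
}

On a fresh cluster this shows a total far below 2,400.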
Here is my connector configuration:
connector.class=io.debezium.connector.postgresql.PostgresConnector
tasks.max=1
plugin.name=pgoutput
topic.prefix=omitted
database.server.name=omitted
database.hostname=omitted
database.port=omitted
database.user=omitted
database.password=omitted
database.dbname=omitted
table.include.list=purchased_tickets

auto.create.topics.enable=true
topic.creation.default.partitions=5

key.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=false
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schemas.enable=true
value.converter.schema.registry.url=omitted
value.converter.schemaAutoRegistrationEnabled=true

transforms=unwrap,extractKeyFromStruct,copyIdToKey,AddNamespace
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
transforms.unwrap.drop.tombstones=false
transforms.unwrap.delete.handling.mode=rewrite
transforms.unwrap.add.fields=op,source.ts_ms
transforms.extractKeyFromStruct.type=org.apache.kafka.connect.transforms.ExtractField$Key
transforms.extractKeyFromStruct.field=id
transforms.copyIdToKey.type=org.apache.kafka.connect.transforms.ValueToKey
transforms.copyIdToKey.fields=id
transforms.AddNamespace.type=org.apache.kafka.connect.transforms.SetSchemaMetadata$Value
transforms.topicRename.type=org.apache.kafka.connect.transforms.RegexRouter
transforms.topicRename.replacement=$1

database.history.kafka.topic=dbhistory.omitted
database.history.kafka.bootstrap.servers=omitted
database.history.producer.security.protocol=SASL_SSL
database.history.producer.sasl.mechanism=AWS_MSK_IAM
database.history.producer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.history.producer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.history.consumer.security.protocol=SASL_SSL
database.history.consumer.sasl.mechanism=AWS_MSK_IAM
database.history.consumer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.history.consumer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
As you can see, I even set "topic.creation.default.partitions=5". Note that this is a development environment.
So I do not know how I am exceeding the quota. Has anyone else run into this issue? Please advise, thank you.