Resolution
"Last operation failed" error
When MSK Connect can't create the connector and the connector moves to the Failed state, you receive the following error message:
"There is an issue with the connector Code: UnknownError.UnknownMessage: The last operation failed. Retry the operation."
To find the cause of the failure, review the log events for MSK Connect.
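If you turned on delivery of the connector's logs to Amazon CloudWatch Logs, then you can search the log group for errors. The following is a minimal sketch in Python (boto3); the log group name and Region are examples:

import boto3

# Assumption: the connector delivers its worker logs to this CloudWatch Logs log group (example name).
LOG_GROUP = "/aws/mskconnect/example-connector"

logs = boto3.client("logs", region_name="us-east-1")

# Search the log events for errors and exceptions.
paginator = logs.get_paginator("filter_log_events")
for page in paginator.paginate(logGroupName=LOG_GROUP, filterPattern="?ERROR ?Exception"):
    for event in page["events"]:
        print(event["timestamp"], event["message"].strip())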
"Required field is missing" error
If you use a carriage return (\r) character at the end of a configuration, then you receive the following error message:
"Invalid parameter connectorConfiguration: The following required field is missing or has invalid value: tasks.max"
To resolve this issue, take the following actions:
- Manually enter the configuration information in the connector configuration dialog box. Don't copy and paste the configuration information from another source.
- For Windows operating systems (OSs), use a text editor to remove the carriage return and line feed (CRLF) end-of-line characters. Copy and paste the configuration into a text editor. In the text editor, choose View, Show Symbol, and then Show All Characters. Replace all the CRLF characters, \r\n, with line feed (LF) characters, \n. Or, use a script to normalize the line endings, as shown in the sketch after this list.
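The following is a minimal sketch in Python that normalizes the line endings of a local copy of the configuration before you paste its contents; the file name is an example:

# Assumption: connector-config.properties is your local copy of the connector configuration (example name).
path = "connector-config.properties"

with open(path, "rb") as f:
    data = f.read()

# Replace Windows CRLF (\r\n) and bare CR (\r) line endings with LF (\n).
data = data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")

with open(path, "wb") as f:
    f.write(data)

print("Line endings normalized to LF.")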
"Invalid parameter" error
When you use the MSK Connect service-linked role to create a connector, you receive the following error message:
"Invalid parameter serviceExecutionRoleArn: A service linked role ARN cannot be provided as service execution role ARN."
You can't use the AWSServiceRoleForKafkaConnect service-linked role as the service execution role. Instead, you must create a separate service role. Then, specify the role that you want the connector to use.
"Failed to find any class that implements Connector" error
You receive the following error message:
"org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches..."
To resolve this issue, take the following actions:
- Remove CRLF characters that are in the connector configuration.
- If the connector plugin requires multiple files, then include all the files in your zipped file. The JAR files in the zipped file must also have the file structure that the plugin expects. It's a best practice to turn on logs for MSK Connect and review the logs to confirm that the file structure is correct. To check the archive's contents before you upload it, see the sketch after this list.
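The following is a minimal sketch in Python that lists the contents of the plugin archive so that you can confirm that the required JAR files are present; the archive name is an example:

import zipfile

# Assumption: custom-plugin.zip is the archive that you upload to Amazon S3 for the custom plugin (example name).
archive = "custom-plugin.zip"

with zipfile.ZipFile(archive) as zf:
    names = zf.namelist()

jars = [name for name in names if name.endswith(".jar")]
print(f"{len(names)} entries, {len(jars)} JAR files:")
for name in sorted(jars):
    print(" ", name)

if not jars:
    print("No JAR files found. Confirm that the plugin classes are included in the archive.")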
"TimeOutException" error
If the connector can't reach your MSK cluster, then you receive the following error message:
"org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call. Call: fetchMetadata"
To resolve this issue, confirm that the connector can reach your MSK cluster. Verify that the cluster's security groups allow inbound traffic from the connector on the broker ports, and that the subnets that the connector uses can route to the brokers.
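The following is a minimal sketch in Python (boto3) that retrieves the cluster's bootstrap brokers and tests TCP connectivity to each broker from a host in the cluster's VPC; the cluster ARN and Region are examples:

import socket
import boto3

# Assumption: replace the example ARN and Region with the values for your MSK cluster.
CLUSTER_ARN = "arn:aws:kafka:us-east-1:111122223333:cluster/example-cluster/abc-123"

kafka = boto3.client("kafka", region_name="us-east-1")
brokers = kafka.get_bootstrap_brokers(ClusterArn=CLUSTER_ARN)

# Use the broker string that matches your authentication method, for example SASL/IAM.
broker_string = brokers.get("BootstrapBrokerStringSaslIam", "")

for endpoint in filter(None, broker_string.split(",")):
    host, port = endpoint.rsplit(":", 1)
    try:
        with socket.create_connection((host, int(port)), timeout=5):
            print(f"Reachable: {endpoint}")
    except OSError as err:
        print(f"Not reachable: {endpoint} ({err})")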
"SaslAuthenticationException" error
If your MSK cluster runs on a kafka.t3.small broker type with AWS Identity and Access Management (IAM) access control, then review your connection quota. The kafka.t3.small instance type accepts only one TCP connection for each broker per second.
When you exceed your connection quota, the connector creation fails and you receive the following error message:
"org.apache.kafka.common.errors.SaslAuthenticationException: Too many connects"
For more information about MSK clusters and IAM access control, see How Amazon MSK works with IAM.
To resolve the "SaslAuthenticationException" error, take one of the following actions:
- In your MSK Connect worker configuration, update the values for reconnect.backoff.ms and reconnect.backoff.max.ms to 1000 or higher. For an example worker configuration, see the sketch after this list.
- Upgrade to a larger broker instance type, such as kafka.m5.large or higher. For more information, see Amazon MSK broker types and Best practices for Standard brokers.
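The following is a minimal sketch in Python (boto3) that creates an MSK Connect worker configuration with higher reconnect backoff values; the configuration name and property values are examples:

import base64
import boto3

# Assumption: the configuration name and property values are examples. Adjust them for your workload.
properties = "\n".join([
    "key.converter=org.apache.kafka.connect.storage.StringConverter",
    "value.converter=org.apache.kafka.connect.storage.StringConverter",
    "reconnect.backoff.ms=1000",
    "reconnect.backoff.max.ms=10000",
])

kafkaconnect = boto3.client("kafkaconnect", region_name="us-east-1")

# CreateWorkerConfiguration expects the properties file content to be base64 encoded.
response = kafkaconnect.create_worker_configuration(
    name="example-worker-config-backoff",
    propertiesFileContent=base64.b64encode(properties.encode("utf-8")).decode("utf-8"),
)
print(response["workerConfigurationArn"])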
"Unable to connect to S3" error
When the connector can't connect to Amazon Simple Storage Service (Amazon S3), you receive the following error message:
"org.apache.kafka.connect.errors.ConnectException: com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to s3.us-east-1.amazonaws.com:443 failed: connect timed out"
To resolve this issue, you must create an Amazon Virtual Private Cloud (Amazon VPC) endpoint from the cluster's VPC to Amazon S3.
Complete the following steps. Or, to create the endpoint programmatically, use the sketch after the steps:
- Open the Amazon VPC console.
- In the navigation pane, choose Endpoints.
- Choose Create endpoint.
- For Type, select AWS services.
- Under Services, choose the Service Name filter, and then select com.amazonaws.region.s3.
Note: Replace region with your AWS Region.
- Choose the Type filter, and then choose Gateway.
- For VPC, select the cluster's VPC.
- Under Route tables, select the route table that's associated with the cluster's subnets.
- Choose Create endpoint.
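The following is a minimal sketch in Python (boto3) that creates the same gateway endpoint; the Region, VPC ID, and route table ID are examples:

import boto3

# Assumption: replace the example Region and IDs with the values for your cluster's VPC.
REGION = "us-east-1"
VPC_ID = "vpc-0123456789abcdef0"
ROUTE_TABLE_IDS = ["rtb-0123456789abcdef0"]  # route tables associated with the cluster's subnets

ec2 = boto3.client("ec2", region_name=REGION)

# Create a gateway endpoint from the cluster's VPC to Amazon S3.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName=f"com.amazonaws.{REGION}.s3",
    RouteTableIds=ROUTE_TABLE_IDS,
)
print(response["VpcEndpoint"]["VpcEndpointId"])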
"Unable to execute HTTP request" error for Firehose
When the connector can't connect to Amazon Data Firehose, you receive the following error message:
"org.apache.kafka.connect.errors.ConnectException: com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to firehose.us-east-2.amazonaws.com:443 failed: connect timed out"
To resolve this issue, follow the steps in the preceding section to create a VPC endpoint from the cluster's VPC to Amazon Data Firehose. Use the Service name filter com.amazonaws.region.kinesis-firehose.
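Amazon Data Firehose uses an interface endpoint instead of a gateway endpoint, so you specify the connector's subnets and a security group instead of route tables. The following is a minimal sketch in Python (boto3); the Region and IDs are examples:

import boto3

# Assumption: replace the example Region and IDs with the values for your cluster's VPC.
REGION = "us-east-2"
VPC_ID = "vpc-0123456789abcdef0"
SUBNET_IDS = ["subnet-0123456789abcdef0"]       # the connector's subnets
SECURITY_GROUP_IDS = ["sg-0123456789abcdef0"]   # must allow HTTPS (443) from the connector

ec2 = boto3.client("ec2", region_name=REGION)

# Create an interface endpoint from the cluster's VPC to Amazon Data Firehose.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId=VPC_ID,
    ServiceName=f"com.amazonaws.{REGION}.kinesis-firehose",
    SubnetIds=SUBNET_IDS,
    SecurityGroupIds=SECURITY_GROUP_IDS,
    PrivateDnsEnabled=True,
)
print(response["VpcEndpoint"]["VpcEndpointId"])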
"Access denied" error
When the IAM user for MSK Connect doesn't have the required permissions to create a connector, you receive the following error message:
"Connection to node - 1 (b1.<cluster>.<region>.amazonaws.com) failed authentication due to : Access Denied"
When you create a connector with MSK Connect, you must specify an IAM role to use with it. Your service execution role must have the following trust policy so that MSK Connect can assume the role:
{ "Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Service": "kafkaconnect.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"aws:SourceAccount": "Account-ID"
},
"ArnLike": {
"aws:SourceArn": "MSK-Connector-ARN"
}
}
}]
}
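The following is a minimal sketch in Python (boto3) that creates a service execution role with the preceding trust policy; the role name, account ID, and connector ARN pattern are examples:

import json
import boto3

# Assumption: the role name, account ID, and connector ARN pattern are examples. Replace them with your values.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "kafkaconnect.amazonaws.com"},
        "Action": "sts:AssumeRole",
        "Condition": {
            "StringEquals": {"aws:SourceAccount": "111122223333"},
            "ArnLike": {"aws:SourceArn": "arn:aws:kafkaconnect:us-east-1:111122223333:connector/*"},
        },
    }],
}

iam = boto3.client("iam")
response = iam.create_role(
    RoleName="example-msk-connect-execution-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
print(response["Role"]["Arn"])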
If the MSK cluster that you want to use with your connector uses IAM authentication, then you must add the following permissions policy to the connector's service execution role:
{ "Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"kafka-cluster:Connect",
"kafka-cluster:DescribeCluster"
],
"Resource": [
"cluster-arn"
]
},
{
"Effect": "Allow",
"Action": [
"kafka-cluster:ReadData",
"kafka-cluster:DescribeTopic"
],
"Resource": [
"ARN of the topic that you want a sink connector to read from"
]
},
{
"Effect": "Allow",
"Action": [
"kafka-cluster:WriteData",
"kafka-cluster:DescribeTopic"
],
"Resource": [
"ARN of the topic that you want a source connector to write to"
]
},
{
"Effect": "Allow",
"Action": [
"kafka-cluster:CreateTopic",
"kafka-cluster:WriteData",
"kafka-cluster:ReadData",
"kafka-cluster:DescribeTopic"
],
"Resource": [
"arn:aws:kafka:region:account-id:topic/cluster-name/cluster-uuid/__amazon_msk_connect_*"
]
},
{
"Effect": "Allow",
"Action": [
"kafka-cluster:AlterGroup",
"kafka-cluster:DescribeGroup"
],
"Resource": [
"arn:aws:kafka:region:account-id:group/cluster-name/cluster-uuid/__amazon_msk_connect_*",
"arn:aws:kafka:region:account-id:group/cluster-name/cluster-uuid/connect-*"
]
}
]
}
For more information, see Authorization policy resources.
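The following is a minimal sketch in Python (boto3) that attaches a policy such as the preceding one to the service execution role as an inline policy; the role name, policy name, and cluster ARN are examples, and the policy is truncated to the first statement:

import json
import boto3

# Assumption: the role name, policy name, and cluster ARN are examples. Replace them with your values.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["kafka-cluster:Connect", "kafka-cluster:DescribeCluster"],
        "Resource": ["arn:aws:kafka:us-east-1:111122223333:cluster/example-cluster/abc-123"],
    }],
    # Add the topic and group statements from the preceding policy for your connector.
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="example-msk-connect-execution-role",
    PolicyName="example-msk-connect-cluster-access",
    PolicyDocument=json.dumps(policy),
)
print("Inline policy attached.")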
"Failed to find AWS IAM Credentials" error
When the IAM role that you used to create the connector doesn't have the required permissions, you receive the following error message:
"ERROR Connection to node -3 (b-1.<cluster>.<region>.amazonaws.com/INTERNAL_IP ) failed authentication due to: An error: (java.security .PrivilegedActionException: javax.security .sasl.SaslException: Failed to find AWS IAM Credentials [Caused by aws_msk_iam_auth_shadow.com.amazonaws.SdkClientException: Unable to load AWS credentials from any ...........Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))"
To troubleshoot the preceding error message, review the access policies and trust relationship of the IAM role for the connector. For more information, see Understand service execution role.
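The following is a minimal sketch in Python (boto3) that prints the trust relationship and the policies of the connector's service execution role; the role name is an example:

import json
import boto3

# Assumption: replace the example role name with the connector's service execution role.
ROLE_NAME = "example-msk-connect-execution-role"

iam = boto3.client("iam")

# Print the trust relationship that allows kafkaconnect.amazonaws.com to assume the role.
role = iam.get_role(RoleName=ROLE_NAME)["Role"]
print(json.dumps(role["AssumeRolePolicyDocument"], indent=2))

# List the managed and inline policies that grant the role its permissions.
for policy in iam.list_attached_role_policies(RoleName=ROLE_NAME)["AttachedPolicies"]:
    print("Attached policy:", policy["PolicyArn"])
for name in iam.list_role_policies(RoleName=ROLE_NAME)["PolicyNames"]:
    print("Inline policy:", name)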
Related information
How do I use the Kafka-Kinesis-Connector to connect to my Amazon MSK cluster?
Understand MSK Connect
Troubleshoot your Amazon MSK cluster