I was trying to follow the "Using SMB CSI Driver on Amazon EKS Windows nodes" guide posted here, but ran into an issue getting the SMB CSI driver to connect.
If I watch the events for the pod from the deployment I am trying to use, I get the following output:
kubectl events --for pod/app-774664b66d-rktdc --watch
LAST SEEN TYPE REASON OBJECT MESSAGE
29s Normal Scheduled Pod/app-774664b66d-rktdc Successfully assigned default/app-774664b66d-rktdc to ip-XX-XX-XX-XX.ec2.internal
29s Normal ResourceAllocated Pod/app-774664b66d-rktdc Allocated Resource vpc.amazonaws.com/PrivateIPv4Address: XX-XX-XX-XX/20 to the pod
0s Warning FailedAttachVolume Pod/app-774664b66d-rktdc AttachVolume.Attach failed for volume "pv-smb-server" : timed out waiting for external-attacher of smb.csi.k8s.io CSI driver to attach volume amznfsxexample.my.ad/share-server##
0s Warning FailedMount Pod/app-774664b66d-rktdc Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data kube-api-access-wnqk2]: timed out waiting for the condition
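In case it helps, these are the checks I ran next. The label and container names below assume the upstream csi-driver-smb manifests deployed into kube-system; adjust them if your install differs:

```shell
# Verify the driver registered its CSIDriver object - the external-attacher
# step the error mentions only applies when attachRequired is true
kubectl get csidriver smb.csi.k8s.io -o yaml

# Confirm the controller and Windows node plugin pods are running
# (labels assume the upstream csi-driver-smb deployment)
kubectl get pods -n kube-system -l app=csi-smb-controller
kubectl get pods -n kube-system -l app=csi-smb-node-win

# Tail the controller's driver container for mount/attach errors
kubectl logs -n kube-system -l app=csi-smb-controller -c smb --tail=50
```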
I ran a reachability analysis and verified that the nodes in the cluster can reach the network interface of the FSx server, so this should work. Taking a step back, I re-created my node group so that it has the right security group for access to the FSx server. After doing this, I can log on to the nodes and manually map a network share to the FSx server. Even so, I still cannot get the cluster to connect through the SMB CSI driver.
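For reference, the PersistentVolume I am using follows the shape of the csi-driver-smb examples; the FSx DNS name, share, and secret name below are placeholders rather than my real values:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-smb-server
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: smb
  csi:
    driver: smb.csi.k8s.io
    # volumeHandle must be unique across the cluster
    volumeHandle: amznfsxexample.my.ad/share-server##
    volumeAttributes:
      source: //amznfsxexample.my.ad/share-server
    # Secret holding the AD username/password used to mount the share
    nodeStageSecretRef:
      name: smbcreds
      namespace: kube-system
```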
Seeing as the security group gave the nodes access, does the cluster itself need to use the same security group as well? Maybe that's what I've been missing?
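If it matters, this is how I was planning to look up the cluster security group to compare it against the node group's (the cluster name here is a placeholder):

```shell
# Print the EKS-managed cluster security group so it can be compared
# with the security groups attached to the node group's instances
aws eks describe-cluster --name my-cluster \
  --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' \
  --output text
```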