Zombie pods are usually caused by containers running processes that leave behind zombie (defunct) child processes. If you've recently seen more of these than usual, I would look at what has changed in the applications/processes you are running in those containers. The --init option is a Docker setting that sets the ENTRYPOINT to tini. This is an init process that runs as PID 1 and manages your application as a child process. It is typically used when applications do not properly handle signals such as SIGTERM (note that SIGKILL cannot be caught by any process). Another option is dumb-init from Yelp.
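For context, `docker run --init` injects tini automatically; you can also bake tini into the image yourself. A hedged Dockerfile sketch of the latter (the base image, package name, and script path here are illustrative):

```dockerfile
FROM alpine:3.19
RUN apk add --no-cache tini
COPY start.sh /app/start.sh
# tini becomes PID 1: it forwards signals to the child process
# and reaps any zombie children on its behalf
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["/app/start.sh"]
```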
It is always a good idea to make sure that processes, especially PID 1, handle signals properly. Most of the time this is a non-issue; however, several things can cause processes to enter the zombie state, such as duplicated calls, improper error handling, or nested calls, especially in bash scripts.
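To make the mechanism concrete, here is a minimal Linux-only sketch of how a zombie appears and why reaping (calling wait()) clears it:

```python
import os
import time

# Fork a child that exits immediately. Until the parent calls wait(),
# the kernel keeps the exited child as a zombie (<defunct>) entry.
pid = os.fork()
if pid == 0:
    os._exit(0)  # child: exit right away

time.sleep(0.2)  # give the child time to exit

# The third field of /proc/<pid>/stat is the process state; 'Z' = zombie.
with open(f"/proc/{pid}/stat") as f:
    state = f.read().split()[2]

# Reap the child: waitpid() collects its exit status, and the zombie
# entry disappears from the process table.
os.waitpid(pid, 0)
```

An init process like tini does exactly this in a loop for any children it inherits, which is why putting it at PID 1 prevents zombies from accumulating.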
In troubleshooting this issue, the first thing I would do is confirm that your application properly handles signals, then decide whether you need to update its signal handling or run a separate init process. Is your application or process creating orphaned processes (processes that have lost their connection to the parent process)?
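To answer that question empirically, you can scan the process table for zombies and note their parent PIDs; the parent is the process that failed to call wait(). A Linux-only sketch (you could get the same information with `ps -eo pid,ppid,stat,comm`):

```python
import os

def find_zombies():
    """Return (pid, ppid) pairs for zombie processes, read from /proc."""
    zombies = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/stat") as f:
                # Format: "pid (comm) state ppid ..."; split after the
                # closing paren since comm itself may contain spaces.
                fields = f.read().rsplit(")", 1)[1].split()
        except FileNotFoundError:
            continue  # process exited while we were scanning
        state, ppid = fields[0], fields[1]
        if state == "Z":
            zombies.append((int(entry), int(ppid)))
    return zombies

print(find_zombies())
```

If the PPID of your zombies is your application's PID, the fix belongs in your application's child handling; if the PPID is 1, the init process (or lack of one) is the problem.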
Thanks for your response. Yes, our application was being orphaned: it was going into this state because we were terminating the parent application first.