This paper discusses the use of Explainable Artificial Intelligence (XAI) techniques, specifically SHapley Additive exPlanations (SHAP), to enhance both the performance and the interpretability of autoencoder-based models for detecting anomalies in computer networks. The authors show how SHAP attributions identify the critical features driving anomalous predictions, and they use these features to develop a refined model (SHAP_model) that achieves 94% accuracy and an Area Under the Curve (AUC) of 0.969 on the CICIDS2017 dataset. By exposing which features influence each decision, this approach addresses the opacity of machine learning models and improves trust in the model's outcomes.
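To make the general idea concrete, the sketch below shows one common way to apply SHAP to an autoencoder used for anomaly detection: the per-sample reconstruction error serves as the anomaly score, and the model-agnostic KernelExplainer attributes that score to individual input features. This is a minimal illustration under stated assumptions, not the authors' actual SHAP_model or their CICIDS2017 preprocessing; the toy data, the stand-in autoencoder (an MLPRegressor trained to reconstruct its input), and the feature count are all hypothetical.

```python
# Hypothetical sketch: attributing an autoencoder's anomaly score with SHAP.
# The data, autoencoder, and feature layout are illustrative stand-ins, not the
# paper's SHAP_model or the CICIDS2017 pipeline.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)

# Toy "benign" traffic: 200 samples, 6 flow-level features.
X_train = rng.normal(size=(200, 6))
scaler = MinMaxScaler().fit(X_train)
X_train = scaler.transform(X_train)

# A small MLP trained to reconstruct its own input acts as the autoencoder
# (hidden layer of 3 units plays the role of the bottleneck).
autoencoder = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
autoencoder.fit(X_train, X_train)

def reconstruction_error(X):
    """Per-sample mean squared reconstruction error, used as the anomaly score."""
    recon = autoencoder.predict(X)
    return np.mean((X - recon) ** 2, axis=1)

# Model-agnostic SHAP explainer over the anomaly score.
background = shap.sample(X_train, 50)  # background distribution for SHAP
explainer = shap.KernelExplainer(reconstruction_error, background)

# Score a suspicious sample and attribute its anomaly score to features.
x_anomalous = scaler.transform(rng.normal(loc=3.0, size=(1, 6)))
shap_values = explainer.shap_values(x_anomalous)

# Features with the largest |SHAP value| are the ones driving the anomaly;
# these are the "critical features" that could seed a refined detector.
ranking = np.argsort(-np.abs(shap_values[0]))
print("anomaly score:", reconstruction_error(x_anomalous)[0])
print("feature importance ranking:", ranking)
```

In this setup the ranking of absolute SHAP values plays the role the paper assigns to SHAP: pointing out which features are responsible for a high anomaly score, which can then guide feature selection or model refinement.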