Machine learning has substantially improved prediction performance; however, estimating the uncertainty of these predictions remains a challenge. This issue is particularly pronounced in Artificial Neural Networks (ANNs), whose predictions often suffer from poor calibration. Although several recalibration methods are available, selecting and implementing an appropriate one can be difficult. To address this issue, we introduce the R package recalibratiNN, which provides a computational implementation of a quantile-based post-processing technique for recalibration. The current version of the package includes functions specifically designed for recalibrating Gaussian models (i.e., where the ANN was trained with the Mean Squared Error (MSE) loss function). The method can be applied at any representation layer of the network. The package is based on the technique presented in the recent study "Model-Free Recalibration of Neural Networks" (
https://arxiv.org/abs/2403.05756) by co-authors Ricardo Torres, Gabriel Reis, and Guilherme Rodrigues, among other authors. The method leverages the cumulative probabilities of the observations under the model's predictive distributions, enabling the generation of Monte Carlo samples from the recalibrated predictive distribution and supporting both local and global recalibration. The recalibratiNN package also features diagnostic functions to help visualize miscalibration issues. It is readily available on both GitHub (
https://github.com/cmusso86/recalibratiNN) and CRAN (
https://cran.r-project.org/web/packages/recalibratiNN/).
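To illustrate the quantile-based idea the package builds on, the sketch below shows one way the recalibration step can work: compute the cumulative probabilities (PIT values) of a calibration set under the model's Gaussian predictive distributions, then resample those empirical PIT values and invert them through the predictive CDF at a new point to obtain Monte Carlo samples from the recalibrated distribution. This is a minimal, hypothetical Python sketch for exposition only; it is not the package's R API, and all variable names here are invented.

```python
# Illustrative sketch (NOT the recalibratiNN API) of quantile-based
# recalibration: empirical PIT values from a calibration set are mapped
# back through a new point's predictive CDF to draw Monte Carlo samples
# from the recalibrated predictive distribution.
import random
import statistics
from statistics import NormalDist

rng = random.Random(0)

# Synthetic calibration set: the "model" predicts mean mu_cal with a
# misspecified (under-dispersed) sd of 1, while the truth has sd 2.
n_cal = 10_000
sigma_model = 1.0
mu_cal = [rng.gauss(0.0, 1.0) for _ in range(n_cal)]
y_cal = [m + rng.gauss(0.0, 2.0) for m in mu_cal]

# Step 1: PIT values of the observations under the model's Gaussian
# predictive distributions.
pit = [NormalDist(m, sigma_model).cdf(y) for m, y in zip(mu_cal, y_cal)]

# Step 2: for a new point, resample empirical PIT values and invert them
# through the model's predictive CDF there, yielding Monte Carlo samples
# from the recalibrated predictive distribution.
mu_new = 0.5
eps = 1e-12  # guard: inv_cdf is undefined at exactly 0 or 1
pred = NormalDist(mu_new, sigma_model)
samples = [pred.inv_cdf(min(max(p, eps), 1 - eps))
           for p in rng.choices(pit, k=10_000)]

# The recalibrated samples recover (approximately) the true dispersion.
print(round(statistics.stdev(samples), 2))  # close to the true sd of 2
```

Because the PIT values from an under-dispersed model pile up near 0 and 1, inverting them through the predictive CDF widens the sampled distribution, which is exactly the correction the recalibration aims for.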