Authors: Başağaoğlu, Hakan; Chakraborty, Debaditya; Lago, Cesar Do; Gutierrez, Lilianna; Şahinli, Mehmet Arif; Giacomoni, Marcio; Furl, Chad; Mirchi, Ali; Moriasi, Daniel N.; Şengör, Sema Sevinç
Date accessioned: 2022-04-21
Date available: 2022-04-21
Date issued: 2022-04-11
Citation: Water 14 (8): 1230 (2022)
URI: https://hdl.handle.net/20.500.12588/848
Abstract: This review focuses on the use of Interpretable Artificial Intelligence (IAI) and eXplainable Artificial Intelligence (XAI) models for data imputation and for numerical or categorical hydroclimatic predictions from nonlinearly combined multidimensional predictors. The AI models considered in this paper are Extreme Gradient Boosting, Light Gradient Boosting, Categorical Boosting, Extremely Randomized Trees, and Random Forest. These AI models become XAI models when coupled with explanatory methods such as Shapley additive explanations and local interpretable model-agnostic explanations. The review highlights that IAI models can unveil the rationale behind their predictions, while XAI models can discover new knowledge and justify AI-based results, which is critical for enhanced accountability of AI-driven predictions. The review also elaborates on the importance of domain knowledge and interventional IAI modeling, the potential advantages and disadvantages of hybrid IAI and non-IAI predictive modeling, the unequivocal importance of balanced data in categorical decisions, and the choice and performance of IAI versus physics-based modeling. The review concludes with a proposed XAI framework to enhance the interpretability and explainability of AI models for hydroclimatic applications.
License: Attribution 4.0 United States, https://creativecommons.org/licenses/by/4.0/
Keywords: explainable artificial intelligence; multidimensional data; nonlinearity; explanatory methods; hydroclimatic applications
Title: A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications
Type: Article
Date: 2022-04-21
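
The abstract notes that tree-ensemble models become explainable when coupled with methods such as Shapley additive explanations. To illustrate the core idea, here is a minimal, self-contained sketch that computes exact Shapley values for a toy nonlinear prediction function; the model, feature names, and baseline are invented for illustration and are not taken from the reviewed paper (real applications would use a library such as `shap` on a fitted ensemble).

```python
from itertools import combinations
from math import factorial

def predict(x):
    # Toy nonlinear "hydroclimatic" model (illustrative only):
    # runoff driven by rainfall (x[0]), temperature (x[1]),
    # and an interaction with soil moisture (x[2]).
    return 2.0 * x[0] + x[0] * x[2] - 0.5 * x[1] ** 2

def shapley_values(f, x, baseline):
    """Exact Shapley values: feature i's marginal contribution,
    averaged over all subsets of the other features, where
    'absent' features are replaced by baseline values."""
    n = len(x)
    phi = [0.0] * n

    def v(subset):
        z = [x[j] if j in subset else baseline[j] for j in range(n)]
        return f(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for s in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (v(set(s) | {i}) - v(set(s)))
    return phi

x = [3.0, 2.0, 1.0]          # hypothetical observed predictors
baseline = [0.0, 0.0, 0.0]   # hypothetical reference state
phi = shapley_values(predict, x, baseline)

# Efficiency property: contributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (predict(x) - predict(baseline))) < 1e-9
```

The per-feature attributions (`phi`) are the quantities SHAP reports; the brute-force enumeration here is exponential in the number of features, which is why practical XAI tooling uses tree-specific or sampling-based approximations.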