In this paper, we present ComplAI, a unique framework to enable, observe, analyze, and quantify explainability, robustness, performance, fairness, and model behavior under drift scenarios, and to provide a single Trust Factor that evaluates different supervised ML models from an overall responsibility perspective. It helps users to (a) connect their models and enable explanations, (b) assess and visualize different aspects of the models, and (c) compare different models from an overall perspective, thereby facilitating actionable recourse for model improvement. ComplAI is model agnostic, works across different supervised machine learning scenarios and frameworks, and integrates seamlessly with any ML life-cycle framework. Thus, this already deployed framework aims to unify critical aspects of Responsible AI systems to regulate the development process of such real-world systems. The theory version of the paper and a demo are available at this link (use the Zip password: Welcome2022!). A detailed version of the paper is available on arXiv. © 2023 Owner/Author(s).