Abstract
Evaluating artificial intelligence (AI) systems is crucial for their successful deployment and safe operation in real-world applications. The assessor, a meta-learning model, was recently introduced to assess AI system behavior from emergent characteristics of the system and its responses on a test set. The original approach does not cover continuous output ranges, such as those arising in regression problems, and produces only a probability of success. In this work, to address these limitations and enhance practical applicability, we propose an assessor feedback mechanism designed to identify and learn from AI system errors, enabling the system to perform the target task more effectively while concurrently correcting its mistakes. Our empirical analysis demonstrates the efficacy of this approach. Specifically, we introduce a transition methodology that converts prediction errors into relative success, which is particularly beneficial for regression tasks. We then apply this framework to both neural network and support vector machine models across regression and classification tasks, evaluating its performance on a comprehensive suite of 30 diverse datasets. Our findings highlight the robustness and adaptability of the assessor feedback mechanism and its potential to improve model accuracy and reliability across varied data contexts.
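As a rough illustration of the ideas summarized above, the sketch below shows one way an error-to-success transition and an assessor could be wired together for a regression task. The exponential error-to-success mapping, the median-based error scale, and the choice of a random-forest assessor are all assumptions made for illustration; the paper's actual transition methodology and feedback mechanism may differ, and the full feedback loop (correcting the system from assessor signals) is not shown.

```python
# Minimal sketch (assumptions noted inline): convert a regression model's
# prediction errors into relative success scores in (0, 1], then train an
# assessor to predict that success on unseen inputs. This is NOT the
# paper's exact formulation, only a plausible instance of the idea.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy regression task; the "AI system" is a deliberately misspecified linear fit.
X = rng.uniform(-3, 3, size=(500, 2))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.1, size=500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

system = LinearRegression().fit(X_train, y_train)

# Transition step (assumed form): map absolute error to relative success.
# exp(-error/scale) is monotone decreasing in error and bounded in (0, 1].
errors = np.abs(system.predict(X_train) - y_train)
scale = np.median(errors) + 1e-8  # robust error scale (an assumption)
success = np.exp(-errors / scale)

# Assessor: a meta-model predicting the system's success from the inputs.
assessor = RandomForestRegressor(random_state=0).fit(X_train, success)

# Sanity check: predicted success should anti-correlate with actual error.
test_errors = np.abs(system.predict(X_test) - y_test)
predicted_success = assessor.predict(X_test)
print("corr(predicted success, actual error):",
      np.corrcoef(predicted_success, test_errors)[0, 1])
```

Under these assumptions, the negative correlation printed at the end indicates the assessor has learned where the underlying system tends to fail, which is the signal a feedback mechanism would exploit.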
Funding
This work was supported by the BK21 Four Project, AI-Driven Convergence Software Education Research Program (41999902143942), and by the National Research Foundation of Korea (2020R1A2C1012196).