Machine learning models are widely used, but their outputs can often be wrong. Users would benefit from a reliable indication of whether a given output from a given model should be trusted, so that a rational decision can be made about whether to use the output. For example, outputs can be associated with a \emph{confidence measure}; if this confidence measure is strongly associated with \emph{likelihood of correctness}, then the model is said to be \emph{well-calibrated}.
In this case, the confidence measure can serve as a basis for rational, graduated decision-making about how much review and care is needed. \emph{Calibration} has so far been studied mostly in non-generative (\emph{e.g.}, classification) settings, especially in Software Engineering. However, generated code can quite often be wrong: given model-generated code, developers must decide whether to use it directly, use it after review of varying intensity, or discard it; calibration is therefore vital in generative settings.
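To make the notion of calibration concrete, the sketch below computes a standard binned reliability estimate (expected calibration error, ECE) from model confidences and observed correctness labels. This is an illustrative example only; the binning scheme, variable names, and data are assumptions, not necessarily the metric or setup used in the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: average |accuracy - confidence| gap, weighted by bin size.

    confidences : confidence scores in [0, 1], one per model output
    correct     : 0/1 flags, 1 if the corresponding output was correct
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences >= lo) & (confidences < hi) if hi < 1.0 \
            else (confidences >= lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        bin_acc = correct[in_bin].mean()        # observed correctness in this bin
        bin_conf = confidences[in_bin].mean()   # average stated confidence in this bin
        ece += in_bin.mean() * abs(bin_acc - bin_conf)
    return ece

# A well-calibrated model's 0.8-confidence outputs should be correct about 80% of the time,
# so its per-bin gaps (and hence ECE) should be close to zero.
```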
In this paper we make several contributions. We develop a framework for evaluating the calibration of code-generating models. We consider several tasks, correctness criteria, datasets, and approaches, and find that, by and large, generative code models are \textbf{\textit{\underline{not}}} well-calibrated out of the box. We then show how calibration can be improved using standard methods, such as Platt scaling. Since Platt scaling relies on the prior availability of correctness data, we evaluate its applicability and generalizability in Software Engineering, and discuss settings where it has good potential for practical use and settings where it does not. Our contributions will lead to better-calibrated decision-making in the current use of code generated by language models, and offer a framework for future research to further improve calibration methods for generative models in Software Engineering.
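As a concrete illustration of the rescaling idea, the sketch below fits Platt scaling as a one-feature logistic regression that maps raw model confidences to the probability of correctness, using held-out correctness labels. The data, variable names, and scikit-learn setup here are assumptions for illustration; they are not the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Held-out calibration data (assumed for illustration):
# raw confidence scores from the model and whether each output was actually correct.
raw_conf = np.array([0.95, 0.80, 0.90, 0.60, 0.99, 0.70, 0.85, 0.40])
is_correct = np.array([1, 1, 0, 0, 1, 1, 0, 0])

# Platt scaling: fit a logistic regression on the single raw-confidence feature.
platt = LogisticRegression()
platt.fit(raw_conf.reshape(-1, 1), is_correct)

# Rescaled (calibrated) confidences for new outputs.
new_conf = np.array([[0.88], [0.55]])
calibrated = platt.predict_proba(new_conf)[:, 1]
print(calibrated)
```

Because the rescaling is learned from labeled correctness data, its usefulness depends on how well that data matches the deployment setting, which is the generalizability question the paper examines.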
Wed 30 Apr, 11:00 - 12:30 (Eastern Time, US & Canada)
11:00 (15m) Talk | Research Track | Calibration and Correctness of Language Models for Code | Claudio Spiess (University of California, Davis), David Gros (University of California, Davis), Kunal Suresh Pai (UC Davis), Michael Pradel (University of Stuttgart), Rafiqul Rabin (UL Research Institutes), Amin Alipour (University of Houston), Susmit Jha (SRI), Prem Devanbu (University of California at Davis), Toufique Ahmed (IBM Research) | Pre-print
11:15 (15m) Talk | Research Track | An Empirical Study on Commit Message Generation using LLMs via In-Context Learning | Yifan Wu (Peking University), Yunpeng Wang (Ant Group), Ying Li (School of Software and Microelectronics, Peking University, Beijing, China), Wei Tao (Independent Researcher), Siyu Yu (The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen)), Haowen Yang (The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen)), Wei Jiang, Jianguo Li (Ant Group)
11:30 (15m) Talk | Research Track | Instruct or Interact? Exploring and Eliciting LLMs' Capability in Code Snippet Adaptation Through Prompt Engineering | Tanghaoran Zhang (National University of Defense Technology), Yue Yu (PengCheng Lab), Xinjun Mao (National University of Defense Technology), Shangwen Wang (National University of Defense Technology), Kang Yang (National University of Defense Technology), Yao Lu (National University of Defense Technology), Zhang Zhang (Key Laboratory of Software Engineering for Complex Systems, National University of Defense Technology), Yuxin Zhao (Key Laboratory of Software Engineering for Complex Systems, National University of Defense Technology)
11:45 (15m) Talk | Research Track | Search-Based LLMs for Code Optimization (Award Winner) | Shuzheng Gao (The Chinese University of Hong Kong), Cuiyun Gao (Harbin Institute of Technology), Wenchao Gu (The Chinese University of Hong Kong), Michael Lyu (The Chinese University of Hong Kong)
12:00 (15m) Talk | Research Track | Towards Better Answers: Automated Stack Overflow Post Updating | Yubo Mai (Zhejiang University), Zhipeng Gao (Shanghai Institute for Advanced Study - Zhejiang University), Haoye Wang (Hangzhou City University), Tingting Bi (The University of Melbourne), Xing Hu (Zhejiang University), Xin Xia (Huawei), JianLing Sun (Zhejiang University)
12:15 (15m) Talk | Research Track | Unseen Horizons: Unveiling the Real Capability of LLM Code Generation Beyond the Familiar (Award Winner) | Yuanliang Zhang (National University of Defense Technology), Yifan Xie, Shanshan Li (National University of Defense Technology), Ke Liu, Chong Wang (National University of Defense Technology), Zhouyang Jia (National University of Defense Technology), Xiangbing Huang (National University of Defense Technology), Jie Song (National University of Defense Technology), Chaopeng Luo (National University of Defense Technology), Zhizheng Zheng (National University of Defense Technology), Rulin Xu (National University of Defense Technology), Yitong Liu (National University of Defense Technology), Si Zheng (National University of Defense Technology), Liao Xiangke (National University of Defense Technology)