Presentation
LLM4VV: Exploring LLM-as-a-Judge for Validation and Verification Testsuites
Description
Large Language Models (LLMs) are evolving rapidly and have significantly reshaped the landscape of software development. Used well, they can substantially accelerate the software development cycle. At the same time, the community remains cautious: models trained on biased or sensitive data can produce biased outputs or inadvertently disclose confidential information. Additionally, the carbon footprint and the lack of explainability of these "black box" models continue to raise questions about the usability of LLMs.
Given the abundance of opportunities LLMs offer, this paper explores the idea of "judging" tests used to evaluate compiler implementations of directive-based programming models, and in doing so probes into the "black box" of LLMs. Based on our results, combining an agent-based prompting approach with a validation pipeline drastically improved the quality of the judgments produced by DeepSeek Coder, the LLM chosen for our evaluation.
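As a rough illustration of what such an LLM-as-a-judge validation pipeline might look like, the sketch below compiles and runs a candidate test, then hands the source and logs to a judge model for a verdict. This is a minimal sketch assuming a generic compile-run-judge flow; the `query_llm` callable, the compiler invocation, and the prompt wording are illustrative assumptions, not details taken from the paper.

```python
import subprocess
from pathlib import Path

# Hypothetical judging prompt; the paper's actual prompts are not reproduced here.
JUDGE_PROMPT = """You are judging a validation test for a directive-based
programming model. Decide whether the test is a correct, well-formed check
of the feature it targets. Respond with VALID or INVALID and one sentence
of rationale.

Test source:
{source}

Compiler output:
{compile_log}

Runtime output:
{run_log}
"""

def compile_and_run(test_file: Path, compiler: str = "gcc") -> tuple[str, str]:
    """Compile the test and, if it builds, execute it; return both logs."""
    exe = test_file.with_suffix("")
    # '-fopenmp' is a placeholder flag for a directive-based model.
    build = subprocess.run(
        [compiler, "-fopenmp", str(test_file), "-o", str(exe)],
        capture_output=True, text=True,
    )
    if build.returncode != 0:
        return build.stderr, ""
    run = subprocess.run([str(exe)], capture_output=True, text=True, timeout=60)
    return build.stderr, run.stdout + run.stderr

def judge_test(test_file: Path, query_llm) -> str:
    """Assemble the evidence and ask the judge model for a verdict."""
    compile_log, run_log = compile_and_run(test_file)
    prompt = JUDGE_PROMPT.format(
        source=test_file.read_text(),
        compile_log=compile_log,
        run_log=run_log,
    )
    return query_llm(prompt)  # e.g., a call into a hosted DeepSeek Coder instance
```

In this sketch, `query_llm` stands in for whatever serving interface hosts the judge model; an agent-based setup would additionally loop, letting the model request recompilation or further evidence before committing to a verdict.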