Presentation
Profiling and Bottleneck Identification for Large Language Model Optimizations
Description
Large language models (LLMs) have shown that they can perform scientific tasks, assisting researchers with data interpretation, instrument operation, knowledge synthesis, and hypothesis generation. However, an LLM must first be trained on a large dataset of scientific tasks and data. Training these models requires substantial time, energy, and computational resources, because updating the model's parameters at every iteration is expensive. Researchers have developed optimizations that speed up training LLMs on new data. In our research, we aim to profile LLMs with these optimizations during fine-tuning to identify bottlenecks and runtime improvements. The optimizations we utilized include Low-Rank Adaptation (LoRA), BitFit, and Adapters. From our visual diagrams and runtime charts, we gain a better understanding of their performance and profile breakdown during training and fine-tuning.
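The abstract does not specify the profiling tooling or model setup, so the following is only a minimal, hypothetical sketch of the general idea: profiling one fine-tuning step of a model in which only LoRA-style low-rank adapter weights are trainable, using PyTorch's built-in profiler to break runtime down by operator. The toy LoRALinear module and all sizes here are illustrative, not the poster's actual configuration.

```python
# Illustrative sketch only: profile a single LoRA fine-tuning step with torch.profiler.
import torch
import torch.nn as nn
from torch.profiler import profile, ProfilerActivity

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (W + B @ A)."""
    def __init__(self, in_features, out_features, rank=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # base weights stay frozen
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

# Toy stand-in for an LLM block; a real run would wrap a pretrained model instead.
model = nn.Sequential(LoRALinear(512, 512), nn.ReLU(), LoRALinear(512, 512))
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
x = torch.randn(8, 128, 512)        # (batch, sequence, hidden) dummy batch
target = torch.randn(8, 128, 512)

# Profile one forward/backward/optimizer step to see where time is spent.
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    loss = nn.functional.mse_loss(model(x), target)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```

The same pattern extends to BitFit (freeze everything except bias terms) or Adapters (insert small trainable bottleneck layers), with the profiler table showing how each optimization shifts the runtime breakdown.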

Event Type
ACM Student Research Competition: Graduate Poster
ACM Student Research Competition: Undergraduate Poster
Doctoral Showcase
Posters
Time
Tuesday, 19 November 2024, 12pm - 5pm EST
Location
B302-B305