Whetting All Your Appetites for Financial Tasks with One Meal from GPT? A Comparison of GPT, FinBERT, and Dictionaries in Evaluating Sentiment Analysis
The emergence of large language models (LLMs), such as Generative Pre-trained Transformers (GPTs), presents new interdisciplinary research opportunities for accounting and finance scholars. Unlike standard supervised machine learning (ML) approaches, which require large volumes of labeled data, LLMs offer few-shot capabilities that enable learning from a small amount of labeled data and generalizing to new, unseen examples. To explore the viability of adopting few-shot learning in the accounting and finance domain, we conduct a series of experiments and empirical analyses to assess the sentiment of management discussion and analysis (MD&A) disclosures using the GPT algorithm (i.e., GPT-3), FinBERT (i.e., a BERT model pre-trained on a financial corpus and further fine-tuned with labeled sentiment data), and a dictionary-based method (i.e., the Loughran and McDonald (L&M) dictionaries). Our results first show that the GPT-3 algorithm outperforms the dictionary-based approach in financial sentiment classification tasks. We further find that FinBERT achieves superior performance over the GPT-3 algorithm, despite having only 110 million parameters compared to GPT-3's 175 billion. Last, we investigate the economic significance of sentiment classified by GPT-3, FinBERT, and the L&M dictionaries. We document that both GPT-3 and FinBERT demonstrate stronger explanatory power than the L&M dictionaries in predicting future stock returns and financial performance, with FinBERT outperforming GPT-3. Overall, our study highlights the potential of adopting few-shot learning based on LLMs in the accounting and finance domain; more importantly, we shed light on the scaling law of LLMs and propose that more is not necessarily better. Our results reveal the importance of domain-specific knowledge for tasks that require high levels of accuracy.
Year of publication: 2023
Authors: Hu, Nan; Liang, Peng; Yang, Xu
Publisher: [S.l.] : SSRN
Availability: freely available
Similar items by person
-
Hu, Nan, (2018)
-
Hu, Nan, (2018)
-
Hu, Nan, (2023)