In the dynamic landscape of Natural Language Processing (NLP), pre-trained language models have rapidly reshaped the boundaries of text understanding and classification. ELECTRA, a language model introduced by Clark et al. in 2020, stands out for its distinctive pre-training approach, which replaces masked-language modeling with replaced-token detection, and shows particular promise for fine-grained text classification tasks. This research rigorously evaluates ELECTRA's performance in fine-grained text classification, focusing on sentiment analysis with the SST-2 dataset. The study also offers practical guidance to researchers and practitioners on effective fine-tuning strategies and configuration settings. The results highlight the significance of gradual fine-tuning: unfreezing more layers positively impacts model accuracy. This underscores ELECTRA's potential for NLP tasks and the importance of a thoughtful fine-tuning process.
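The gradual fine-tuning mentioned above can be illustrated with a minimal sketch of layer-wise unfreezing. This is an assumption-laden toy example, not the paper's actual training code: the `Layer` class, the 12-layer depth (mirroring ELECTRA-base), and the three-layers-per-stage schedule are all hypothetical choices for illustration.

```python
# Hypothetical sketch of gradual unfreezing: all encoder layers start
# frozen, then are unfrozen top-down, one group per fine-tuning stage.
# Layer names and the 12-layer depth mimic ELECTRA-base; the schedule
# (3 layers per stage) is an assumed value, not taken from the paper.

class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = False  # start frozen

def gradual_unfreeze(layers, stage, layers_per_stage=3):
    """Unfreeze the top `stage * layers_per_stage` layers of the stack."""
    n_unfrozen = min(stage * layers_per_stage, len(layers))
    for i, layer in enumerate(layers):
        # Higher-indexed layers sit closer to the classification head,
        # so they are unfrozen first.
        layer.trainable = i >= len(layers) - n_unfrozen

encoder = [Layer(f"encoder.layer.{i}") for i in range(12)]

for stage in range(1, 5):  # four fine-tuning stages
    gradual_unfreeze(encoder, stage)
    n_trainable = sum(l.trainable for l in encoder)
    print(f"stage {stage}: {n_trainable} layers trainable")
```

In a real setup, `layer.trainable = ...` would correspond to setting `requires_grad` on each parameter group of the pre-trained model, so that early stages adapt only the task-adjacent layers before touching the lower, more general representations.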