In recent years, the threat of megathrust earthquakes has intensified concern among scientists and the public, especially in seismically active countries such as Indonesia. As people increasingly turn to social media to express fears and opinions about such disasters, these platforms offer a rich, real-time resource for gauging public sentiment. This study introduces a sentiment-classification system built on IndoBERT, an Indonesian-language adaptation of the BERT architecture. The model was trained on a custom-labeled dataset of social-media posts categorized as positive, negative, or neutral. Preprocessing involved tokenizing the text, truncating or padding inputs to 64 tokens, and converting sentiment labels into PyTorch tensors for efficient training. We fine-tuned IndoBERT using the AdamW optimizer with a learning rate of 1e-5 and a dropout rate of 0.1, training for a maximum of seven epochs with early stopping to guard against overfitting. The classifier achieved 93.33% accuracy on a hold-out set comprising 20% of the data, with this peak reached in the very first epoch. Such rapid convergence likely reflects both the strong pretrained language representations in IndoBERT and the specific characteristics of the dataset; while early stopping guarded against overfitting, the immediate peak suggests the model required minimal additional fine-tuning to adapt to this sentiment-classification task. These findings demonstrate that natural-language-processing tools like IndoBERT can reliably interpret sentiment around sensitive topics and could be integrated into disaster-response frameworks, equipping officials with timely, data-driven insight into public opinion and concerns during emergencies.
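The preprocessing step described above (truncating or padding token sequences to 64 positions and mapping sentiment labels to integer ids) can be sketched in plain Python. This is a minimal illustration of the mechanics only: the padding id, the label-to-id mapping, and the function names are assumptions, and in the actual pipeline the IndoBERT tokenizer and `torch.tensor(...)` would handle these operations.

```python
# Sketch of the abstract's preprocessing: fixed-length inputs of 64 tokens
# plus integer sentiment labels. PAD_ID and LABEL2ID are assumed values;
# the real ones come from the IndoBERT tokenizer and the dataset's labels.

MAX_LEN = 64
PAD_ID = 0  # assumed padding token id

LABEL2ID = {"negative": 0, "neutral": 1, "positive": 2}  # assumed mapping

def pad_or_truncate(token_ids, max_len=MAX_LEN, pad_id=PAD_ID):
    """Clip sequences longer than max_len; right-pad shorter ones.

    Returns (input_ids, attention_mask), both of length max_len, where the
    mask is 1 over real tokens and 0 over padding.
    """
    ids = token_ids[:max_len]
    attention_mask = [1] * len(ids) + [0] * (max_len - len(ids))
    ids = ids + [pad_id] * (max_len - len(ids))
    return ids, attention_mask

def encode_labels(labels, mapping=LABEL2ID):
    """Map string sentiment labels to integer class ids."""
    return [mapping[label] for label in labels]

# A short sequence is padded out to 64; a long one is clipped to 64.
short_ids, short_mask = pad_or_truncate([101, 2023, 102])
long_ids, _ = pad_or_truncate(list(range(100)))
```

In practice each `(input_ids, attention_mask, label)` triple would be wrapped in PyTorch tensors and batched through a `DataLoader` before fine-tuning.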
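The early-stopping behaviour described above (at most seven epochs, halting once validation accuracy stops improving) can likewise be sketched as a small control loop. The `run_epoch` callback and the patience value are hypothetical; in the real pipeline each call would fine-tune IndoBERT for one epoch with AdamW (learning rate 1e-5, dropout 0.1) and then evaluate on the 20% hold-out split.

```python
# Sketch of the early-stopping criterion from the abstract: train for up to
# 7 epochs and stop once validation accuracy fails to improve for `patience`
# consecutive epochs. The run_epoch interface is an assumption.

MAX_EPOCHS = 7

def train_with_early_stopping(run_epoch, patience=2, max_epochs=MAX_EPOCHS):
    """run_epoch(epoch) -> validation accuracy after that epoch.

    Returns (best_accuracy, best_epoch).
    """
    best_acc, best_epoch = 0.0, 0
    epochs_without_improvement = 0
    for epoch in range(1, max_epochs + 1):
        acc = run_epoch(epoch)
        if acc > best_acc:
            best_acc, best_epoch = acc, epoch
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # no improvement for `patience` epochs: stop early
    return best_acc, best_epoch

# Mimic the reported behaviour: accuracy peaks at 93.33% in epoch 1, so
# training halts well before the 7-epoch budget is exhausted.
accuracies = {1: 0.9333, 2: 0.92, 3: 0.91, 4: 0.90, 5: 0.89, 6: 0.88, 7: 0.87}
best_acc, best_epoch = train_with_early_stopping(lambda e: accuracies[e])
```

With the peak in epoch 1 and a patience of 2, this loop stops after epoch 3 and keeps the epoch-1 checkpoint, matching the convergence pattern reported in the study.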