Hoax news, often known as fake news, is a serious concern because it misleads the public and undermines confidence in information obtained from social media. To protect the integrity of information, this study investigates fake news classification using Bidirectional Encoder Representations from Transformers (BERT). Modern natural language processing models such as BERT have proven effective on a variety of linguistic problems. The model was fine-tuned on fake news datasets from social media, leveraging its rich word representations and broad language understanding. This study evaluates the performance of BERT on the classification task under different train-test split ratios. The best performance was obtained with a 90:10 split, yielding a precision of 0.538, recall of 0.515, accuracy of 0.521, and F1-score of 0.519. These results indicate that increasing the proportion of training data in the train-test split contributes to improving overall model performance. The accuracy obtained in this study exceeds the highest accuracy reported in previous studies, where a CNN model achieved 0.260. This study highlights the difficulty of accurately classifying fake news as data complexity increases, emphasizing the need for continued improvement of models and methods for trustworthy fake news detection on social media.
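The four evaluation metrics reported above (precision, recall, accuracy, and F1-score) are all derived from the confusion matrix of a binary classifier. A minimal sketch of how they are computed is shown below; the function name `binary_metrics` and the example labels are illustrative assumptions, not the study's actual data or code.

```python
# Sketch: computing the evaluation metrics reported in the abstract
# (precision, recall, accuracy, F1-score) for a binary classifier.
# The label vectors below are hypothetical, not the study's dataset.

def binary_metrics(y_true, y_pred, positive=1):
    # Tally the confusion-matrix cells for the positive ("hoax") class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(y_true)
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f1": f1}

# Hypothetical ground truth and predictions: 1 = hoax, 0 = genuine.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(binary_metrics(y_true, y_pred))
# → {'precision': 0.75, 'recall': 0.75, 'accuracy': 0.75, 'f1': 0.75}
```

In the study itself these metrics would be computed on the held-out test portion of each train-test split (e.g. the 10% test set of the 90:10 ratio), but the arithmetic is identical.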