25.04.1385
000 - General Works
Scientific Work - Undergraduate Thesis (S1) - Reference
Natural Language Processing (NLP)
The exponential growth of digital content demands advanced tools to efficiently manage and extract meaningful information, with Automatic Text Summarization (ATS) increasingly emerging as a crucial solution. This study presents a comparative evaluation of two ATS architectures: LSTM-based models (LSTM-based Seq2Seq with Attention, and the same architecture augmented with GloVe embeddings) and Transformer-based models (BERT and RoBERTa). Experiments are conducted on the CNN/DailyMail and XSum benchmark datasets using standard ROUGE metrics for performance evaluation. Results indicate that Transformer-based models outperform their LSTM-based counterparts, with BERT achieving a ROUGE-1 score of 98.25% on the complex XSum dataset, demonstrating superior contextual understanding and abstraction capabilities. In contrast, LSTM models exhibit strong sequential processing capabilities but struggle with long-range dependencies and complex data structures. This study also highlights dataset-specific biases, such as topical overrepresentation, and their impact on model performance. While Transformer-based models demonstrate robustness and adaptability, occasional factual inconsistencies remain a challenge. Future research should address these limitations by integrating reinforcement learning, advanced fine-tuning methods for enhanced factual accuracy, and data augmentation strategies to mitigate biases. Additionally, exploring other Transformer variants and conducting ablation studies on LSTM attention mechanisms could provide deeper insights and further advance the effectiveness and reliability of ATS systems.
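The ROUGE-1 scores reported above measure unigram overlap between a generated summary and a reference summary. A minimal illustrative sketch of ROUGE-1 F1 (not the official ROUGE toolkit, and without its stemming or stopword options) could look like the following; the function name `rouge1_f1` is an assumption for illustration:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Illustrative ROUGE-1 F1: harmonic mean of unigram precision
    and recall between a candidate and a reference summary.

    This is a simplified sketch, not the official ROUGE implementation
    (no stemming, sentence splitting, or bootstrap resampling).
    """
    cand = candidate.lower().split()
    ref = reference.lower().split()
    # Clipped unigram overlap via multiset intersection.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    if not cand or not ref or overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# Example: candidate shares 2 of its 2 words with a 5-word reference,
# so precision = 1.0, recall = 0.4, F1 = 4/7 ≈ 0.571.
score = rouge1_f1("the cat", "the cat sat on mat")
```

In practice, evaluations like the one in this study typically rely on established ROUGE packages rather than a hand-rolled score, since implementation details (tokenization, stemming) shift the reported numbers.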
Available: 1 of 1 total in collection
Name | I MADE DENIS MAHARDITHA |
Type | Individual |
Editor | Kemas Muslim Lhaksmana |
Translator | |
Name | Universitas Telkom, S1 Informatika |
City | Bandung |
Year | 2025 |
Rental price | IDR 0.00 |
Daily fine | IDR 0.00 |
Type | Non-Circulation |