Tangent Search Long Short Term Memory with Adaptive Reinforcement Transient Learning based Extractive and Abstractive Document Summarization

Full Text (PDF, 1024KB), PP.56-72


Author(s)

Reshmi P Rajan 1,*, Deepa V. Jose 2, Roopashree Gurumoorthy 3

1. Department of Computer Science, Christ (Deemed to be University), S.G. Palya, Bengaluru, Karnataka 560029, India

2. Department of Computer Science, Christ (Deemed to be University), Bangalore, India

3. NTT DATA Services, Winnipeg, Canada

* Corresponding author.

DOI: https://doi.org/10.5815/ijmecs.2023.06.05

Received: 14 Jul. 2023 / Revised: 13 Aug. 2023 / Accepted: 25 Sep. 2023 / Published: 8 Dec. 2023

Index Terms

Summarization, Knowledge Graph, Feature Selection, Optimization Technique, Improved Reinforcement Learning, ROUGE Scores

Abstract

Text summarization is the process of creating a shorter version of a longer text document while retaining its most important information. A number of methods have been proposed for text summarization, but existing methods do not achieve satisfactory results and struggle with sequence classification. To overcome these limitations, a tangent search long short-term memory with adaptive reinforcement transient learning-based extractive and abstractive document summarization is proposed in this manuscript. In the abstractive phase, the features of the extractive summary are extracted, and the optimal features are then selected by Adaptive Flamingo Optimization (AFO). With these optimal features, the abstractive summary is generated. The proposed method is implemented in Python. For extractive text summarization, the proposed method attains a 42.11% ROUGE-1 score, 23.55% ROUGE-2 score, and 41.05% ROUGE-L score on Gigaword, and a 57.13% ROUGE-1 score, 28.35% ROUGE-2 score, and 52.85% ROUGE-L score on the DUC-2004 dataset. For abstractive text summarization, it attains a 47.05% ROUGE-1 score, 22.02% ROUGE-2 score, and 48.96% ROUGE-L score on Gigaword, and a 35.13% ROUGE-1 score, 20.35% ROUGE-2 score, and 35.25% ROUGE-L score on the DUC-2004 dataset.
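The ROUGE scores reported above measure n-gram and longest-common-subsequence overlap between a generated summary and a reference summary. As a rough illustration only (not the authors' evaluation code, and simplified relative to the official ROUGE toolkit, e.g. no stemming or stopword handling, plain whitespace tokenization), an F1-style ROUGE-N and ROUGE-L computation can be sketched as:

```python
from collections import Counter

def rouge_n(candidate, reference, n=1):
    """F1-style ROUGE-N: n-gram overlap between candidate and reference."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def rouge_l(candidate, reference):
    """F1-style ROUGE-L based on the longest common subsequence of tokens."""
    a, b = candidate.lower().split(), reference.lower().split()
    # classic dynamic-programming LCS table
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[len(a)][len(b)]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(a), lcs / len(b)
    return 2 * precision * recall / (precision + recall)
```

For example, comparing the candidate "a cat on a mat" against the reference "the cat sat on the mat" yields a ROUGE-1 F1 of 6/11 (3 matching unigrams out of 5 candidate and 6 reference tokens).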

Cite This Paper

Reshmi P Rajan, Deepa V. Jose, Roopashree Gurumoorthy, "Tangent Search Long Short Term Memory with Adaptive Reinforcement Transient Learning based Extractive and Abstractive Document Summarization", International Journal of Modern Education and Computer Science (IJMECS), Vol.15, No.6, pp. 56-72, 2023. DOI:10.5815/ijmecs.2023.06.05

Reference

[1]R. Bhargava and Y. Sharma, “Deep Extractive Text summarization,” Proc. Comp. Sci., vol. 167, pp. 138–146, 2020.
[2]A. K. Yadav, A. Singh, M. Dhiman, Vineet, R. Kaundal, A. Verma, and D. Yadav, “Extractive text summarization using Deep Learning Approach,” Int. J. Inf. Technol., vol. 14, no. 5, pp. 2407–2415, 2022.
[3]H. Aliakbarpour, M. T. Manzuri, and A. M. Rahmani, “Improving the readability and saliency of abstractive text summarization using combination of deep neural networks equipped with auxiliary attention mechanism,” J. Supercomput., vol. 78, no. 2, pp. 2528–2555, 2021.
[4]D. Debnath, R. Das, and S. Rafi, “Sentiment-based abstractive text summarization using attention oriented LSTM model,” Intell. Data Eng. Anal., pp. 199–208, 2022.
[5]H. Kumar, G. Kumar, S. Singh, and S. Paul, “Text summarization of articles using LSTM and attention-based LSTM,” Mach. Learn. Auto. Syst., pp. 133–145, 2022.
[6]Q. Wang, P. Liu, Z. Zhu, H. Yin, Q. Zhang, and L. Zhang, “A text abstraction summary model based on Bert word embedding and reinforcement learning,” Appl. Sci., vol. 9, no. 21, p. 4701, 2019.
[7]F. Ertam and G. Aydin, “Abstractive text summarization using Deep Learning with a new Turkish summarization benchmark dataset,” Concurr. Comput.: Pract. Exper., vol. 34, no. 9, 2021.
[8]D. Suleiman and A. Awajan, “Multilayer encoder and single-layer decoder for abstractive Arabic text summarization,” Knowledge-Based Syst. vol. 237, p. 107791, 2022.
[9]A. Kumar, S. Seth, S. Gupta, and S. Maini, “Sentic computing for aspect-based opinion summarization using multi-head attention with feature pooled pointer generator network,” Cogn. Comput., vol. 14, no. 1, pp. 130–148, 2021.
[10]R. C. Belwal, S. Rai, and A. Gupta, “Text summarization using topic-based vector space model and semantic measure,” Inf. Process. Manag., vol. 58, no. 3, p. 102536, 2021.
[11]H. P. Chan and I. King, “A condense-then-select strategy for text summarization,” Knowl. Based Syst., vol. 227, p. 107235, 2021.
[12]M. Sultana, P. Chakraborty, and T. Choudhury, “Bengali abstractive news summarization using Seq2Seq Learning with attention,” Lect. Notes Netw. Syst. pp. 279–289, 2021.
[13]T. Vo, “SE4ExSum: An integrated semantic-aware neural approach with graph convolutional network for Extractive Text summarization,” ACM Trans. Asian Low-Resour. Lang. Inf. Process., vol. 20, no. 6, pp. 1–22, 2021.
[14]W. Liao, Y. Ma, Y. Yin, G. Ye, and D. Zuo, “Improving abstractive summarization based on dynamic residual network with reinforce dependency,” Neurocomp., vol. 448, pp. 228–237, 2021.
[15]B. Mohan Bharath, B. Aravindh Gowtham, and M. Akhil, "Neural Abstractive Text Summarizer for Telugu Language," in Soft Computing and Signal Processing. Advances in Intelligent Systems and Computing, V.S. Reddy, V.K. Prasad, J. Wang, K.T.V. Reddy, Eds. Singapore: Springer, 2022, vol. 1340.
[16]T. Ma, Q. Pan, H. Rong, Y. Qian, Y. Tian, and N. Al-Nabhan, “T-bertsum: Topic-aware text summarization based on bert,” IEEE Trans. Comput. Soc., vol. 9, no. 3, pp. 879–890, 2022.
[17]A. Abdi, S. Hasan, S. M. Shamsuddin, N. Idris, and J. Piran, “A hybrid deep learning architecture for opinion-oriented multi-document summarization based on multi-feature fusion,” Knowl. Based Syst., vol. 213, p. 106658, 2021.
[18]I. Cachola, K. Lo, A. Cohan, and D. S. Weld, “TLDR: Extreme summarization of scientific documents,” 2020. arXiv preprint arXiv:2004.15011.
[19]Ankita, S. Rani, A. K. Bashir, A. Alhudhaif, D. Koundal, and E. S. Gunduz, “An efficient CNN-LSTM model for sentiment detection in #BlackLivesMatter,” Expert Syst. Appl., vol. 193, p. 116256, 2022.
[20]V. Vaissnave, and P. Deepalakshmi, "A Keyword-Based Multi-label Text Categorization in the Indian Legal Domain Using Bi-LSTM," in Soft Computing: Theories and Applications. Advances in Intelligent Systems and Computing, T.K.Sharma, C.W. Ahn, O.P. Verma, B.K. Panigrahi, Eds. Singapore: Springer, vol. 1380, 2022.
[21]T. Shi, Y. Keneshloo, N. Ramakrishnan, and C. K. Reddy, “Neural abstractive text summarization with sequence-to-sequence models,” ACM Transact Data Sci., vol. 2, no. 1, pp. 1-37, 2021.
[22]R. Rani, and D. K. Lobiyal, “An extractive text summarization approach using tagged-LDA based topic modeling,” Multimed. Tools Appl., vol. 80, pp. 3275-3305, 2021.
[23]S. K. Mishra, N. Saini, S. Saha, and P. Bhattacharyya, “Scientific document summarization in multi-objective clustering framework,” Appl. Intell., vol. 52, no. 2, pp. 1520–1543, 2022.
[24]S. Song, H. Huang, and T. Ruan, “Abstractive text summarization using LSTM-CNN based Deep Learning,” Multimed. Tools Appl., vol. 78, no. 1, pp. 857–875, 2018.
[25]Y. M. Wazery, M. E. Saleh, A. Alharbi, and A. A. Ali, “Abstractive Arabic text summarization based on Deep Learning,” Comp. Intell. Neuro., vol. 2022, pp. 1–14, 2022.
[26]C. Yuan, Z. Bao, M. Sanderson, and Y. Tang, “Incorporating word attention with convolutional neural networks for abstractive summarization,” World Wide Web, pp. 267-287, 2020.
[27]W. Wang, Y. Gao, H. Huang, and Y. Zhou, “Concept pointer network for abstractive summarization,” 2019. arXiv preprint arXiv:1910.08486.
[28]Y. Gao, Y. Wang, L. Liu, Y. Guo, and H. Huang, “Neural abstractive summarization fusing by global generative topics,” Neural Comput. Appl., vol. 32, pp. 5049-5058, 2020.
[29]P. Mahalakshmi, and N. S. Fatima, “Summarization of Text and Image Captioning in Information Retrieval Using Deep Learning Techniques,” IEEE Access, vol. 10, pp. 18289-18297, 2022.
[30]K. Yao, L. Zhang, T. Luo, and Y. Wu, “Deep reinforcement learning for extractive document summarization,” Neurocomput., vol. 284, pp. 52-62, 2018.
[31]Z. Jiang, M. Srivastava, S. Krishna, D. Akodes, and R. Schwartz, “Combining word embeddings and n-grams for unsupervised document summarization,” 2020. arXiv preprint arXiv:2004.14119.
[32]D. Y. Sakhare, “A Sequence-to-Sequence Text Summarization Using Long Short-Term Memory Based Neural Approach,” Int. J. Intell. Syst., vol. 16, no. 2, pp. 142-151, 2023.
[33]M. Tomer, and M. Kumar, “Improving text summarization using ensembled approach based on fuzzy with LSTM,” Arab. J. Sci. Eng., vol. 45, pp. 10743-10754, 2020.
[34]C. Hark, and A. Karcı, “Karcı summarization: A simple and effective approach for automatic text summarization using Karcı entropy,” Inf Process & Manag., vol. 57, no. 3, p. 102187, 2020.
[35]https://huggingface.co/datasets/gigaword
[36]https://paperswithcode.com/dataset/duc-2004
[37]A. Y. Muaad, M. A. Al-antari, S. Lee, and H. J. Davanagere, “A novel deep learning ArCAR system for Arabic text recognition with character-level representation,” IOCA 2021, 2021.
[38]J. Jiang, H. Zhang, C. Dai, Q. Zhao, H. Feng, Z. Ji, and I. Ganchev, “Enhancements of attention-based bidirectional LSTM for hybrid automatic text summarization,” IEEE Access, vol. 9, pp. 123660–123671, 2021.
[39]Z. Liu, P. Jiang, J. Wang, and L. Zhang, “Ensemble system for short term carbon dioxide emissions forecasting based on multi-objective tangent search algorithm,” J. Environ. Manag., vol. 302, p. 113951, 2022.
[40]L. Raamesh, S. Radhika, and S. Jothi, “Generating optimal test case generation using shuffled Shepherd Flamingo Search Model,” Neural Process. Lett., vol. 54, no. 6, pp. 5393–5413, 2022.
[41]K. Miao, X. Wang, M. Zhu, S. Yang, X. Pei, and Z. Jiang, “Transient controller design based on Reinforcement Learning for a turbofan engine with Actuator Dynamics,” Sym., vol. 14, no. 4, p. 684, 2022.