Vladimir Y. Mariano

Work place: National University, College of Computing and Information Technologies, Manila, 1008, Philippines

E-mail: vymariano@national-u.edu.ph

Research Interests: Image Processing, Image and Sound Processing, Computer Vision, Graph and Image Processing

Biography

Vladimir Y. Mariano is currently a professor in the College of Computing and Information Technologies at National University in the Philippines. He received the B.S. degree in statistics and the M.S. degree in computer science from the University of the Philippines Los Baños, and the Ph.D. degree in computer science and engineering from The Pennsylvania State University. His research interests include computer vision, digital image processing, and machine learning.

Author Articles
Explainable Fake News Detection Based on BERT and SHAP Applied to COVID-19

By Xiuping Men, Vladimir Y. Mariano

DOI: https://doi.org/10.5815/ijmecs.2024.01.02, Pub. Date: 8 Feb. 2024

Fake news detection has become a significant research topic in natural language processing. Since the outbreak of the COVID-19 epidemic, a large amount of fake news about COVID-19 has spread on social media, making fake news detection a challenging task. Deep learning models can improve prediction accuracy, but their lack of explainability hinders their widespread adoption in practical applications. This work aims to design a deep learning framework for accurate and explainable detection of COVID-19 fake news. First, we choose BiLSTM as the base model and improve its classification performance through BERT-based knowledge distillation. Then, the post-hoc interpretation method SHAP is used to explain the model's classification results, improving the model's transparency and increasing confidence in its practical application. Finally, visual interpretation methods, such as significance plots, are used to analyze the classification results of specific samples and gain insight into the key terms that influence the model's decisions. Ablation experiments demonstrate the reliability of the explainability method.

[...] Read more.
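As a rough illustration of the approach described in the abstract, the sketch below shows a BERT-to-BiLSTM knowledge-distillation step followed by a SHAP text explanation in PyTorch. It is a minimal sketch based only on the abstract: the teacher checkpoint (bert-base-uncased), the label scheme, the student dimensions, the temperature and loss weighting, and the example texts are illustrative assumptions, not the authors' actual configuration.

# Hypothetical sketch of BERT->BiLSTM distillation plus SHAP explanation.
# All model names, hyperparameters, and labels below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import shap
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Stands in for a BERT teacher fine-tuned on COVID-19 fake-news data.
teacher = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
teacher.eval()

class BiLSTMStudent(nn.Module):
    """Small BiLSTM classifier trained against the teacher's soft labels."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, num_labels=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, input_ids):
        embedded = self.embedding(input_ids)
        outputs, _ = self.bilstm(embedded)
        pooled = outputs.mean(dim=1)            # average over token positions
        return self.classifier(pooled)

student = BiLSTMStudent(vocab_size=tokenizer.vocab_size)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Cross-entropy on hard labels plus temperature-scaled KL to the teacher."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    return alpha * hard + (1 - alpha) * soft

# One illustrative training step on a toy batch (assumed labels: 1 = fake, 0 = real).
texts = ["Garlic cures COVID-19 overnight.", "Vaccines reduce severe illness."]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    teacher_logits = teacher(**batch).logits
student_logits = student(batch["input_ids"])
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()

# Post-hoc explanation of the student with SHAP's text masker:
def predict(text_batch):
    enc = tokenizer(list(text_batch), padding=True, truncation=True,
                    return_tensors="pt")
    with torch.no_grad():
        return F.softmax(student(enc["input_ids"]), dim=-1).numpy()

explainer = shap.Explainer(predict, shap.maskers.Text(tokenizer))
shap_values = explainer(texts)   # per-token attributions for the fake/real scores

The distillation loss follows the standard soft-label recipe, and the per-token SHAP attributions are the kind of values that significance plots visualize when inspecting which terms drive an individual prediction.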
Other Articles