DETECTING SARCASM IN SOCIAL MEDIA POSTS USING TRANSFORMER-BASED LANGUAGE MODELS WITH CONTEXTUAL AND SENTIMENT-AWARE FEATURES
Keywords:
Sarcasm Detection, Transformer Models, Sentiment Polarity, Contextual Embedding, Social Media NLP, BERT

Abstract
Sarcasm is a common form of figurative language that depends heavily on context, and its frequent use on social media poses a persistent challenge for NLP systems. Most traditional sentiment analysis methods misclassify sarcastic content because surface-level positive or negative words are taken at face value. This research addresses that gap by developing a sarcasm detection model that integrates sentiment-aware and contextual features into a transformer-based architecture. BERT provides the semantic encoding for more accurate classification, sentiment polarity vectors are drawn from VADER, and conversational context is captured through previous messages and user information. To enable comparison with prior work, we evaluate on two datasets: SARC (Reddit) and Twitter Sarcasm. The proposed model is compared against strong baselines from 2024 onwards: BiLSTM, multichannel CNN, and vanilla BERT classifiers. In our experiments, the proposed model achieves an F1-score of 87.8%, outperforming all baselines on every reported metric. The analyses show that sentiment reversal and dialogue context are key to distinguishing sarcastic statements from genuinely positive ones. Beyond the detection approach itself, this work highlights opportunities and challenges for real-time, multilingual, and multimodal sarcasm understanding on social media.
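The feature-fusion idea described above (concatenating a transformer-derived semantic embedding with a sentiment polarity vector and a context encoding before classification) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `bert_embed` is a stub standing in for a real BERT [CLS] embedding, and `polarity_vector` is a toy lexicon-based stand-in for VADER's (neg, neu, pos, compound) scores.

```python
import numpy as np

def bert_embed(text: str, dim: int = 8) -> np.ndarray:
    """Stub for a BERT [CLS] embedding (a real model would return e.g. 768-d).
    Deterministic per input so the sketch is reproducible within a run."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(dim)

# Toy sentiment lexicon; a real system would call VADER instead.
POS = {"great", "love", "wonderful"}
NEG = {"terrible", "hate", "awful"}

def polarity_vector(text: str) -> np.ndarray:
    """Toy stand-in for VADER's (neg, neu, pos, compound) polarity scores."""
    tokens = text.lower().split()
    pos = sum(t in POS for t in tokens) / max(len(tokens), 1)
    neg = sum(t in NEG for t in tokens) / max(len(tokens), 1)
    neu = 1.0 - pos - neg
    return np.array([neg, neu, pos, pos - neg])

def fuse_features(utterance: str, context: str) -> np.ndarray:
    """Concatenate semantic, sentiment, and context features
    into one vector for a downstream classifier head."""
    return np.concatenate([
        bert_embed(utterance),       # semantic encoding of the post
        polarity_vector(utterance),  # sentiment polarity cues
        bert_embed(context),         # encoding of the prior message
    ])

features = fuse_features("Oh great, another Monday. I love it.",
                         "Weekend is over.")
print(features.shape)  # (8 + 4 + 8,) = (20,)
```

A surface-positive utterance paired with a neutral-to-negative context produces exactly the sentiment-reversal signal the abstract argues is discriminative for sarcasm; the classifier head learns to pick up that mismatch from the fused vector.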