The objective of this study was to improve the performance of language models in identifying political violence by fine-tuning BERT and RoBERTa using prompting techniques. Two types of prompting were employed: tuning-free prompting and fixed-prompt language model tuning. The results showed that fixed-prompt language model tuning was the more effective of the two, achieving an average F1-score of 0.784 across both models on the BBC violence classification dataset. These findings highlight the potential of prompting techniques to enhance the ability of language models to analyze and identify political violence in news articles. The study also applied the same fine-tuning techniques to an Arabic-language dataset for fake news classification and again found fixed-prompt language model tuning to be more effective, yielding an average F1-score of 0.62.
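To make the distinction concrete, the sketch below illustrates the general idea behind fixed-prompt language model tuning for binary violence classification with a masked language model: every article is wrapped in a fixed prompt template containing a [MASK] slot, a verbalizer maps class labels to answer words, and the model weights are fine-tuned so that the correct answer word is predicted at the mask position. This is a minimal illustration assuming a Hugging Face BERT checkpoint; the template wording, verbalizer words, and the in-line examples are hypothetical, not the exact configuration used in the study.

```python
# Minimal sketch of fixed-prompt language model tuning for binary
# violence classification (illustrative template, verbalizer, and data;
# not the study's exact setup).
import torch
from transformers import BertTokenizerFast, BertForMaskedLM

MODEL_NAME = "bert-base-uncased"
tokenizer = BertTokenizerFast.from_pretrained(MODEL_NAME)
model = BertForMaskedLM.from_pretrained(MODEL_NAME)

# Fixed prompt appended to every article; only the model weights are tuned.
TEMPLATE = "{text} This report describes {mask} events."
VERBALIZER = {0: "peaceful", 1: "violent"}  # label id -> answer word
label_token_ids = {
    lbl: tokenizer.convert_tokens_to_ids(word) for lbl, word in VERBALIZER.items()
}

def encode(texts, labels):
    """Wrap each text in the fixed template and set the gold word at [MASK]."""
    prompts = [TEMPLATE.format(text=t, mask=tokenizer.mask_token) for t in texts]
    enc = tokenizer(prompts, padding=True, truncation=True, max_length=256,
                    return_tensors="pt")
    mlm_labels = torch.full_like(enc["input_ids"], -100)  # ignore non-mask tokens
    mask_pos = enc["input_ids"] == tokenizer.mask_token_id
    mlm_labels[mask_pos] = torch.tensor(
        [label_token_ids[l] for l in labels], dtype=torch.long)
    return enc, mlm_labels

# Tiny illustrative batch (hypothetical examples).
texts = ["Protesters clashed with police in the capital.",
         "The city council approved a new park budget."]
labels = [1, 0]

enc, mlm_labels = encode(texts, labels)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One training step: cross-entropy is computed only at the [MASK] positions.
model.train()
out = model(**enc, labels=mlm_labels)
out.loss.backward()
optimizer.step()

# Prediction: compare the verbalizer-word logits at the [MASK] position.
model.eval()
with torch.no_grad():
    logits = model(**enc).logits
mask_index = (enc["input_ids"] == tokenizer.mask_token_id).nonzero()
for (row, col), gold in zip(mask_index.tolist(), labels):
    scores = {lbl: logits[row, col, tid].item()
              for lbl, tid in label_token_ids.items()}
    pred = max(scores, key=scores.get)
    print(f"gold={gold} pred={pred}")
```

Tuning-free prompting, by contrast, would use the same kind of template and verbalizer but keep the pretrained weights frozen, relying only on the prompt to elicit the classification.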