Sara Jerin Prithila

Research Interests:

  • Privacy-Preserving & Secure Machine Learning
  • Natural Language Processing

Graduate Student, University of Alberta

Sara is currently pursuing her MSc in Computing Science at the University of Alberta, under the supervision of Dr. Bailey Kacsmar in the PUPS (Practical Usable Privacy and Security) Lab.

Email  /  LinkedIn  /  Education  /  Research   


Education

Master of Science in Computing Science [Winter 2026 - Present]
University of Alberta

Bachelor of Science in Computer Science [Spring 2021 - Fall 2024]
Brac University

Research

Proposing a New Attack Vector for LLM Poisoning Attacks
2026

Poisoning Attacks, LLMs, Machine Learning, Privacy
This work examines whether large language models can be poisoned to produce recommendations that lead users to install malware, and evaluates whether such attacks can circumvent existing alignment and safety techniques.
Supervisor: Dr. Bailey Kacsmar
Collaborators: Miriam Bakija, Samuel Feldman

Evaluating Theory-Informed Linguistic Features and Large Language Models for Reading Comprehension Question Classification
2026

Learning Analytics, Educational Data Mining, Natural Language Processing
Guided by the Simple View of Reading, which conceptualizes reading comprehension as the interaction between decoding and language comprehension, the analysis focuses on linguistic features related to word recognition and semantic understanding.
Supervisor: Dr. Carrie Demmans Epp

Encrypting Sentiments: A Study on Integrating Encryption Module with NLP Pipeline to Analyze Emotions While Ensuring Security
2024

Natural Language Processing, Privacy, Cryptography
She completed her undergraduate thesis under the supervision of Dr. Farig Yousuf Sadeque and Md Faisal Ahmed. The proposed protocol protects sender-receiver interactions with both symmetric and asymmetric encryption, while LSTM and GRU models assess emotional states. Training the Word2Vec embedding model on encrypted words preserves vital syntactic and semantic relationships while maintaining data privacy. The research examines potential vulnerabilities, and strategies for managing them, at every stage of the protocol.

Automated Detection of Online Comments Using Transformers
2023 IEEE International Conference on Communication, Networks and Satellite (COMNETSAT)

Natural Language Processing, Fine-tuning Transformers
Under the supervision of Annajiat Alim Rasel, she contributed to a research project that developed a dataset focused on gender-biased defamation in Bengali, a low-resource language, by aggregating data from existing datasets. State-of-the-art transformer-based NLP models (BanglaBERT, XLM-RoBERTa, m-BERT, and DistilBERT) were fine-tuned to detect gender-specific defamation on social media.

Automated Image Caption Generation using Deep Learning
2023 26th International Conference on Computer and Information Technology (ICCIT)

Natural Language Processing, Image Captioning, Computer Vision
Under the supervision of Annajiat Alim Rasel, she contributed to a research project on image captioning. The VGG-16 model, a Convolutional Neural Network (CNN), extracts features from images, and a Long Short-Term Memory (LSTM) RNN then generates descriptive captions. The system is trained on the Flickr8k dataset and produces contextually relevant captions for diverse images.