Sara Jerin Prithila
Research Interests:
- Privacy-Preserving & Secure Machine Learning
- Natural Language Processing
Graduate Student, University of Alberta
Sara is currently pursuing her MSc in Computing Science at the University of Alberta, where she is supervised by Dr. Bailey Kacsmar in the PUPS (Practical Usable Privacy and Security) Lab.
Proposing a New Attack Vector for LLM Poisoning Attacks
2026
Poisoning Attacks, LLMs, Machine Learning, Privacy
This work examines whether large language models can be poisoned to produce recommendations that lead users to install malware, and evaluates whether such attacks can circumvent existing alignment and safety techniques.
Supervisor: Dr. Bailey Kacsmar
Collaborators: Miriam Bakija, Samuel Feldman
Evaluating Theory-Informed Linguistic Features and Large Language Models for Reading Comprehension Question Classification
2026
Learning Analytics, Educational Data Mining, Natural Language Processing
Guided by the Simple View of Reading, which conceptualizes reading comprehension as the interaction between decoding and language comprehension, the analysis focuses on linguistic features related to word recognition and semantic understanding.
Supervisor: Dr. Carrie Demmans Epp
Automated Detection of Online Comments Using Transformers
2023 IEEE International Conference on Communication, Networks and Satellite (COMNETSAT)
Natural Language Processing, Fine-tuning Transformers
Under the supervision of Annajiat Alim Rasel, contributed to a research project on detecting gender-biased defamation in Bengali, a low-resource language. Built a dedicated dataset by curating examples from existing datasets, then fine-tuned state-of-the-art transformer-based NLP models, including BanglaBERT, XLM-RoBERTa, mBERT, and DistilBERT, to detect gender-specific defamation on social media.
Automated Image Caption Generation using Deep Learning
2023 26th International Conference on Computer and Information Technology (ICCIT)
Natural Language Processing, Image Captioning, Computer Vision
Under the supervision of Annajiat Alim Rasel, contributed to a research project on automated image captioning. The VGG-16 convolutional neural network (CNN) extracts visual features from images, and a Long Short-Term Memory (LSTM) recurrent network then generates descriptive captions from those features. The system is trained on the Flickr8k dataset and produces contextually relevant captions for diverse images.