2025-26:fall

Differences

This shows you the differences between two versions of the page.

2025-26:fall [2025/08/05 17:51] sallin → 2025-26:fall [2025/08/20 13:28] (current) sallin
Line 8: Line 8:
   * You can **subscribe** to this page in order to receive an email whenever the project listing page updates. Only logged in users have access to the “Manage Subscriptions” page tool.  See [[https://www.dokuwiki.org/subscription]]
   * The Table of Contents can be collapsed/expanded.
 +  * As a general rule, projects from the past are good indicators of what faculty are interested in; [[https://wiki.eecs.yorku.ca/dept/project-courses/projects.|see this link for an archive]].  In addition, [[https://lassonde.yorku.ca/research/lura-and-usra-research-at-lassonde#browse-projects|check out some prior LURA/USRA project descriptions at this link]]; these are also good indicators of the kind of work faculty will want to mentor.
  
 /** DO NOT EDIT ABOVE THIS LINE PLEASE **/
Line 14: Line 15:
  
 ==== Computer Security Projects ====
- 
-  
  
 ** [added 2025-07-21] ** 
Line 32: Line 31:
  
 ** Instructions:** Reach out to security faculty to see if they have the capacity to supervise this term.  For questions about eligible security projects, contact the CSec Coordinator (Yan Shvartzshnaider).
 +
 +----
 +
 +==== Emotion-Aware Analysis of EECS Course Feedback for Instructional Improvement ====
 +
 +** [added 2025-08-08] ** 
 + 
 +** Course:**  {EECS4080} 
 +
 +** Supervisors:**  Pooja Vashith
 + 
 +** Supervisor's email address: ** vashistp@yorku.ca
 +
 +** Project Description: ** This project aims to uncover meaningful insights from EECS course evaluations by applying natural language processing (NLP) techniques to student feedback. While most universities collect large volumes of student comments in course evaluations, these are typically underused, especially when embedded in PDF files. Qualitative feedback is often reviewed manually or averaged superficially, leaving behind rich emotional and experiential data that could inform course improvement.
 +
 +The primary goal is to build a processing pipeline that extracts, cleans, and analyzes this feedback using both basic sentiment analysis tools (e.g., VADER) and advanced emotion classification models (e.g., GoEmotions). The emotional tone expressed in the feedback will be mapped to different course components such as the instructor, teaching assistant, assessments, and course content. NB: These are already separated in the evaluation structure.
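 +
 +As a rough illustration of what the analysis step could look like (a minimal sketch only, not the project's prescribed tooling), the snippet below scores a couple of hypothetical comments with VADER and notes where a fine-grained GoEmotions-style classifier could slot in; the comment strings are placeholders.
 +
 +<code python>
 +# Minimal sentiment-scoring sketch; assumes comments have already been
 +# extracted from the evaluation PDFs into plain-text strings.
 +# pip install vaderSentiment
 +from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
 +
 +comments = [
 +    "The instructor explained concepts clearly and was very approachable.",
 +    "The assessments felt rushed and the weekly workload was frustrating.",
 +]
 +
 +analyzer = SentimentIntensityAnalyzer()
 +for text in comments:
 +    scores = analyzer.polarity_scores(text)   # keys: neg, neu, pos, compound
 +    print(f"{scores['compound']:+.2f}  {text}")
 +
 +# A fine-grained emotion model (e.g., one trained on GoEmotions) could be added
 +# via transformers.pipeline("text-classification", model=...), with the labels
 +# then aggregated per course component (instructor, TA, assessments, content).
 +</code>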
 +
 +By comparing the expressiveness and usefulness of simple versus fine-grained emotional analysis, this research will help determine which approaches are more effective at surfacing actionable insights. These insights will be visualized to highlight recurring patterns of sentiment or emotion across course components, such as whether students consistently express frustration about assessments or admiration for certain instructors.
 +
 +This project is educational in nature as it equips the student with skills in text analytics, NLP tools, and data visualization while contributing to a broader understanding of how data-driven analysis can support evidence-based teaching and curriculum refinement in academic institutions.
 +
 +** Required skills or prerequisites: ** EECS 4412 or EECS 4404
 +
 +Data analysis, report writing, Python programming, web app development, and an appetite for research
 +
 +** Instructions:** Send a CV, transcript, statement of interest, and a summary of relevant skills to the instructor (Pooja).
 +
 +----
 +
 +==== Deep Learning and AI in Incident Management ====
 +
 +** [added 2025-08-20] ** 
 + 
 +** Course:**  {EECS4070 | EECS4080 | EECS4090} 
 +
 +** Supervisors:**  Marios Fokaefs
 + 
 +** Supervisor's email address: ** fokaefs@yorku.ca
 +
 +** Project Description: ** "Large scale complex software systems generate immense amounts of event data. This creates a significant cognitive and work load for reliability engineers and a number of different challenges. First, the detection of problems becomes problematic and delayed due to the sheer amount of data. When problems are finally detected, their analysis and resolution may take even more time, which translates in loss of revenue. After resolution, the whole cycle must be well-documented, otherwise reproducibility is reduced and unnecessary effort may be invested. 
 +
 +
 +** Required skills or prerequisites: **
 +
 +Students must have:
 +
 +  * Excellent programming skills (preferably Python)
 +  * Good software design skills (at least a B+ in EECS3311 or a similar course)
 +  * Some experience using LLMs, both as a user and as a developer
 +
 +
 +** Instructions:** Interested students must submit the following to the instructor (Marios):
 +
 +  * CV
 +  * A statement of interest
 +  * Latest transcript
 +  * Other evidence of skills (e.g., software repositories)
 +
 +----
 +
 +==== Beyond the Mask: Reimagining Facial Recognition with Deep Transfer Learning ====
 +
 +** [added 2025-08-21] ** 
 + 
 +** Course:**  {EECS4480} 
 +
 +** Supervisors:**  Sunila Akbar
 + 
 +** Supervisor's email address: ** sunila@yorku.ca
 + 
 +** Project Description: ** "The project involves adapting a state-of-the-art, pretrained deep learning model for facial recognition to accurately identify individuals wearing masks. The student will utilize publicly available datasets and apply data augmentation techniques to simulate mask-wearing scenarios. Transfer learning will be employed to fine-tune the model for this specific task. The performance of the resulting model will be rigorously evaluated against established benchmarks.
 +
 +Application Domain: The proposed solution has relevance in environments where mask-wearing is mandatory, such as healthcare facilities, long-term care homes, food service industries, and chemical or pharmaceutical plants. Accurate masked facial recognition can enhance access control, attendance tracking, and safety compliance in these critical settings."
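 +
 +As a rough illustration of the fine-tuning step (a minimal sketch only, not the project's prescribed model or configuration), the snippet below freezes an ImageNet-pretrained ResNet-50 and retrains only the classification head; the dataset path and hyperparameters are placeholder assumptions, and images are assumed to be arranged in one folder per identity.
 +
 +<code python>
 +# Minimal transfer-learning sketch: freeze a pretrained backbone, retrain the head.
 +import torch
 +import torch.nn as nn
 +from torch.utils.data import DataLoader
 +from torchvision import datasets, models, transforms
 +
 +tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
 +train_ds = datasets.ImageFolder("data/train", transform=tfm)   # hypothetical path
 +train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
 +
 +# ImageNet-pretrained backbone with frozen weights.
 +model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
 +for p in model.parameters():
 +    p.requires_grad = False
 +
 +# New classification head sized to the identities present in the dataset.
 +model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))
 +
 +opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
 +loss_fn = nn.CrossEntropyLoss()
 +
 +model.train()
 +for images, labels in train_dl:   # a single epoch shown for brevity
 +    opt.zero_grad()
 +    loss = loss_fn(model(images), labels)
 +    loss.backward()
 +    opt.step()
 +</code>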
 +
 +** Required skills or prerequisites: ** 
 +
 +  * Python, PyTorch, NumPy, Scikit-learn, OpenCV
 +  * Knowledge of any deep learning model is a plus
 +  * Hyperparameter tuning and optimization
 +  * Understanding of image processing techniques and object detection evaluation metrics
 +  * General interest in computer vision algorithms and applications
 +
 +** Instructions:** Send your CV and transcript to the instructor (Sunila).
 +
 +----
 +
 +
 +==== Smart Tools for Smarter Brain Scans: Motion Correction in fMRI  ==== 
 +
 +**[added 2025-08-08]**
 +
 +**Course:** {EECS4080 | EECS4088}
 +
 +**Supervisor:** Sima Soltanpour
 +
 +** Supervisor's email address:** simasp@yorku.ca
 +
 +** Project Description: ** Functional Magnetic Resonance Imaging (fMRI) is a widely used technique for studying brain function, but its accuracy is often limited by artifacts caused by head movement during scanning. These artifacts can distort signal measurements and reduce the reliability of data analysis. This project aims to investigate and implement motion correction techniques for fMRI data using both traditional preprocessing pipelines and emerging AI-based approaches. Students will explore how image quality and signal stability can be improved through algorithmic correction. This research-focused project provides an opportunity to gain experience in neuroimaging, signal processing, and the application of machine learning to real-world biomedical data.
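 +
 +As a toy illustration of what algorithmic correction means here (translation-only alignment, a major simplification of the full rigid-body correction performed by standard fMRI pipelines), the sketch below realigns each volume of a 4D scan to the first volume; the file name is a hypothetical placeholder.
 +
 +<code python>
 +# pip install nibabel scikit-image scipy numpy
 +import numpy as np
 +import nibabel as nib
 +from skimage.registration import phase_cross_correlation
 +from scipy.ndimage import shift
 +
 +img = nib.load("func_bold.nii.gz")      # 4D fMRI data, shape (x, y, z, time)
 +data = img.get_fdata()
 +reference = data[..., 0]                # align every volume to the first one
 +
 +corrected = np.empty_like(data)
 +for t in range(data.shape[-1]):
 +    est_shift, _, _ = phase_cross_correlation(reference, data[..., t])
 +    corrected[..., t] = shift(data[..., t], est_shift, order=1)
 +
 +nib.save(nib.Nifti1Image(corrected, img.affine), "func_bold_mc.nii.gz")
 +</code>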
 +
 +** Recommended skills or prerequisites: **   
 +
 +  * Python programming
 +  * Interest in AI and machine learning for biomedical applications
 +
 +** Instructions: ** Please email your CV and unofficial transcript to the professor (Sima).
 +
 +----
 +
 +==== Fairness and Prediction for Online Algorithms  ==== 
 +
 +**[added 2025-08-05]**
 +
 +**Course:** {EECS4080}
 +
 +**Supervisor:** Shahin Kamali
 +
 +** Supervisor's email address:** kamalis@yorku.ca
 +
 +** Lab Link: ** [[https://sites.google.com/view/shahinkamali/home|here]]
 +
 +** Project Description: ** In this course, we will explore recent advances in algorithm design that incorporate fairness considerations. Achieving fairness often requires tools such as randomization and prediction. A typical setting involves scenarios where different groups or agents provide parts of the input, and the goal is to design algorithms that produce solutions that are fair across these groups. Typical applications include data structures (where different groups issue queries) and scheduling and packing problems. 
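 +
 +As a small, self-contained illustration of the "prediction" ingredient mentioned above (this is the classic learning-augmented ski-rental rule, not the fairness settings the project will actually study), the sketch below shows how a trust parameter trades off following a prediction against worst-case robustness.
 +
 +<code python>
 +import math
 +
 +def buy_day(buy_cost: int, predicted_days: int, lam: float) -> int:
 +    """Day on which to buy (rent on all earlier days).
 +
 +    lam in (0, 1]: small lam trusts the prediction (better performance when it
 +    is accurate); lam = 1 recovers the classic 2-competitive break-even rule.
 +    """
 +    if predicted_days >= buy_cost:            # prediction says the season is long
 +        return max(1, math.ceil(lam * buy_cost))
 +    return max(1, math.ceil(buy_cost / lam))  # prediction says it is short
 +
 +# Buying costs the same as 10 days of rentals; compare two predictions.
 +print(buy_day(buy_cost=10, predicted_days=30, lam=0.5))  # buy early, on day 5
 +print(buy_day(buy_cost=10, predicted_days=3,  lam=0.5))  # delay buying, to day 20
 +</code>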
 +
 +** Recommended skills or prerequisites: **   
 +
 +  * Familiarity with the textbook //Online Computation and Competitive Analysis// by Allan Borodin and Ran El-Yaniv
 +
 +** Instructions: ** Please email your CV and unofficial transcript to the supervisor (Shahin).
 +
 +----
 +
 +====  Seeing Code: Image Processing for Software Engineering  ====
 +
 +**[added 2025-08-09]**
 +
 +
 +**Course:**  {EECS4088/4080}
 +
 +**Supervisor:**  Maleknaz Nayebi (Research Faculty/Associate Director of CIFAL York)
 +
 +**Supervisor's email address:**  mnayebi@yorku.ca
 +
 +**Required skills or prerequisites:**  
 +  * Proficient in Python programming
 +
 +**Recommended skills or prerequisites:**
 +Understanding of Machine Learning and Image Processing
 +
 +** Project Description: ** Software development is no longer just about text-based code. Developers increasingly share screenshots, diagrams, whiteboard sketches, and UI mockups in forums, documentation, and collaborative tools. But while humans can glance at an image and instantly understand what’s there, most software engineering tools ignore this visual goldmine. This project will explore how image processing and computer vision can be applied to help developers work smarter. Imagine tools that can:
 +(i) Automatically read and interpret code snippets from screenshots on Stack Overflow or GitHub issues
 +(ii) Detect UI elements and workflows from mobile app screenshots for automated testing
 +(iii) Extract architecture diagrams from PDFs and turn them into editable models
 +(iv) Identify errors, warnings, or environment details from IDE screenshots to improve bug reports
 +You’ll work with a small dataset of real-world images from developer communities, apply OCR (Optical Character Recognition), object detection, and layout analysis, and experiment with AI techniques to transform images into structured, machine-readable insights.
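 +
 +As a taste of point (i), here is a minimal OCR sketch (an illustration only; the screenshot path is a hypothetical placeholder, and a real pipeline would add layout analysis and post-processing on top).
 +
 +<code python>
 +# Requires the Tesseract binary plus: pip install pytesseract opencv-python
 +import cv2
 +import pytesseract
 +
 +image = cv2.imread("stackoverflow_screenshot.png")
 +gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
 +
 +# Light preprocessing (Otsu binarization) usually helps OCR on UI screenshots.
 +_, binarized = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
 +
 +text = pytesseract.image_to_string(binarized)
 +print(text)   # downstream steps could then detect code blocks, error messages, etc.
 +</code>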
 +
 +**Why This is Cool:**
 +(a) You’ll be working at the intersection of computer vision and software engineering — an emerging research frontier.
 +(b) You will work alongside MSc and PhD students who started exactly where you are right now, as my undergrad students in 4080/4088
 +(c) The project is grounded in real developer problems and could lead to tools that people actually use, and you may get to work with some of our industry partners. 
 +(d) You’ll gain experience with image processing libraries (like OpenCV, Tesseract), Python-based pipelines, and possibly even fine-tuning vision-language models.
 +(e) There’s potential for research publication or open-source release if results are promising. 
 +
 +**Instructions:**
 +Please email your CV and Transcripts to the professor (Maleknaz).
 +
 +----
 +
 +==== Using Generative AI for Compliance Analysis in Health Care ====
 +
 +**[added 2025-08-09]**
 +
 +
 +**Course:**  {EECS4080/4088}
 +
 +**Supervisor:**  Maleknaz Nayebi (Research Faculty/Associate Director of CIFAL York)
 +
 +**Supervisor's email address:**  mnayebi@yorku.ca
 +
 +**Required skills or prerequisites:**  
 +  * Proficient in Python programming
 +
 +**Recommended skills or prerequisites:**
 +Understanding of Machine Learning, prompt engineering, and GenAI
 +
 +**Project Description:** Health care is one of the most highly regulated industries in the world. Every new medical device, digital health tool, or clinical process must comply with complex rules and standards — from privacy laws like HIPAA to advertising regulations and medical ethics guidelines. The challenge? These rules are buried in long, dense, and ever-changing documents that are hard for humans to keep up with. This project will explore how Generative AI can act as an intelligent assistant for compliance analysis. Imagine a system that can:
 +(i) Read hundreds of pages of regulatory text and highlight the exact rules relevant to a given health care product or service
 +(ii) Compare a draft document or ad campaign against regulatory requirements to spot potential violations
 +(iii) Provide plain-language summaries of compliance risks for non-experts in health care teams
 +(iv) Learn from feedback to improve over time
 +
 +You’ll work with real-world health care regulations and guidance documents, build AI pipelines that integrate text extraction, retrieval-augmented generation (RAG), and natural language understanding, and evaluate how well AI can assist compliance officers and health care innovators.
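 +
 +As a small sketch of the retrieval step in such a RAG pipeline (an illustration only: the regulation snippets and the embedding model are placeholder assumptions, and a real system would add chunking, a vector store, and an LLM call to draft the actual compliance assessment):
 +
 +<code python>
 +# pip install sentence-transformers numpy
 +import numpy as np
 +from sentence_transformers import SentenceTransformer
 +
 +regulation_chunks = [
 +    "Patient identifiers must not be disclosed without written consent.",
 +    "Marketing claims for medical devices must be supported by clinical evidence.",
 +    "Access to health records must be logged and auditable.",
 +]
 +query = "Can we use patient testimonials with names in our ad campaign?"
 +
 +model = SentenceTransformer("all-MiniLM-L6-v2")    # assumed available on the HF Hub
 +doc_emb = model.encode(regulation_chunks, normalize_embeddings=True)
 +q_emb = model.encode([query], normalize_embeddings=True)
 +
 +scores = doc_emb @ q_emb.T      # cosine similarity, since embeddings are normalized
 +best = int(np.argmax(scores))
 +print("Most relevant rule:", regulation_chunks[best])
 +
 +prompt = (
 +    "You are a compliance assistant. Using only the rule below, flag potential "
 +    f"violations in the request.\nRule: {regulation_chunks[best]}\nRequest: {query}"
 +)
 +# `prompt` would then be sent to a generative model (OpenAI, Hugging Face, etc.).
 +</code>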
 +
 +
 +**Why This is Cool:**
 +(a) You’ll be applying AI to a real-world, high-impact domain where mistakes can affect patient safety and legal outcomes
 +(b) You’ll learn to work with state-of-the-art Generative AI tools (like OpenAI, Hugging Face models) for specialized, high-stakes tasks
 +(c) The project bridges machine learning, information retrieval, and domain-specific knowledge — skills that are highly sought after in industry
 +(d) Your work could inform research papers, prototypes, and real tools that help make health care safer and more efficient
 +
 +**Instructions:**
 +Please email your CV and Transcripts to the professor (Maleknaz).
 +
 +----
 +
 +
 +==== The impact of quantity and quality of feedback on RLHF  ==== 
 +
 +**[added 2025-08-08]**
 +
 +**Course:** {EECS4080}
 +
 +**Supervisor:** Ines Arous
 +
 +** Supervisor's email address:** inesar@yorku.ca
 +
 +** Lab Link:** [[https://inesarous.github.io/|here]]
 +
 +** Project Description: ** Reinforcement learning with human feedback (RLHF) has become widely used to enhance the performance of large language models. These methods rely heavily on the availability of large amounts of high-quality human feedback. Yet, it is unclear how the quantity and quality of feedback influence the performance of language models. This project aims to address these gaps by analyzing the relationship between the properties of human feedback and the framework of RLHF, with a particular focus on its core component—the reward model. The student will conduct an empirical evaluation on a summarization task, exploring how different quantities and qualities of feedback impact the effectiveness of the reward model in RLHF. The student will also investigate various sampling strategies to identify the minimum feedback needed for comparable performance with a reward model trained on a large dataset. To examine the impact of feedback quality, the student will simulate scenarios where the feedback is noisy and evaluate the reward model's accuracy as the quality of annotations is varied. 
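 +
 +To make the reward-model component concrete, here is a toy training loop for the standard pairwise (Bradley-Terry) objective behind RLHF reward models; it is an illustration only, with random vectors standing in for real summary embeddings and arbitrary sizes. Varying the number of pairs and injecting label noise is the kind of experiment described above.
 +
 +<code python>
 +import torch
 +import torch.nn as nn
 +import torch.nn.functional as F
 +
 +torch.manual_seed(0)
 +dim, pairs = 32, 64
 +emb_chosen = torch.randn(pairs, dim)     # embeddings of the preferred summaries
 +emb_rejected = torch.randn(pairs, dim)   # embeddings of the rejected summaries
 +
 +reward_model = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, 1))
 +opt = torch.optim.Adam(reward_model.parameters(), lr=1e-2)
 +
 +for step in range(100):
 +    r_chosen = reward_model(emb_chosen)
 +    r_rejected = reward_model(emb_rejected)
 +    # Maximize the probability that the preferred summary gets the higher reward.
 +    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
 +    opt.zero_grad()
 +    loss.backward()
 +    opt.step()
 +
 +print(f"final pairwise loss: {loss.item():.3f}")
 +</code>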
 +
 +** Required skills or prerequisites: **   
 +
 +  * Major in Computer Science/Software Engineering/Computer Engineering
 +  * Third year and up
 +  * You must have completed a Machine Learning/ Artificial Intelligence course. 
 +  * Total GPA over B+ (Preferably A/A+)
 +
 +** Instructions: ** Please email your CV and Transcripts to the professor (Ines).
 +
 +----
 +
 +==== Guidelines for Human Evaluation of Generated Answers by LLMs  ==== 
 +
 +**[added 2025-08-05]**
 +
 +**Course:** {EECS4080}
 +
 +**Supervisor:** Ines Arous
 +
 +** Supervisor's email address:** inesar@yorku.ca
 +
 +** Lab Link:** [[https://inesarous.github.io/|here]]
 +
 +** Project Description: ** The project will use theories from behavioral science and psychology to derive guidelines for human evaluation of answers generated by LLMs. The goal is to leverage techniques such as power analysis to quantify the required number of participants. Other notions, such as construct validity (measuring intended personalization traits), content validity (ensuring coverage of relevant personalization dimensions), and ecological validity (reflecting real-world use cases), will also be explored.
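 +
 +As a concrete example of the power-analysis step (a minimal sketch using conventional placeholder values for effect size, alpha, and power, not numbers specific to this project):
 +
 +<code python>
 +# pip install statsmodels
 +from statsmodels.stats.power import TTestIndPower
 +
 +n_per_group = TTestIndPower().solve_power(
 +    effect_size=0.5,        # anticipated medium effect (Cohen's d)
 +    alpha=0.05,             # significance level
 +    power=0.8,              # desired statistical power
 +    alternative="two-sided",
 +)
 +print(f"participants needed per group: {n_per_group:.0f}")   # roughly 64
 +</code>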
 +
 +** Required skills or prerequisites: **   
 +
 +  * You must have completed a Machine Learning/NLP course. 
 +  * Total GPA over B+ (Preferably A/A+)
 +
 +** Instructions: ** Please send your CV, transcript and statement of interest to the professor (Ines).
 +
 +----
 +
 +==== Comparison of LLM personalization techniques on domain specific applications  ==== 
 +
 +**[added 2025-08-05]**
 +
 +**Course:** {EECS4088}
 +
 +**Supervisor:** Ines Arous
 +
 +** Supervisor's email address:** inesar@yorku.ca
 +
 +** Lab Link:** [[https://inesarous.github.io/|here]]
 +
 +** Project Description: ** The project will compare current LLM personalization techniques, such as chain-of-thought prompting, retrieval-augmented generation (RAG), and reinforcement learning with human feedback (RLHF), on domain-specific tasks using existing datasets.
 +
 +** Required skills or prerequisites: **   
 +
 +  * You must have completed a Machine Learning or a deep learning course. 
 +  * Total GPA over B+ (Preferably A/A+)
 +
 +** Instructions: ** Send your CV, transcripts, and previous ML-related code to the professor (Ines).
  
 ----
Line 509: Line 783:
  
 ----
- 
-==== Image Processing for Software Engineering ==== 
- 
-**[added 2025-07-15]** 
- 
- 
-**Course:**  {EECS4088/4080} 
- 
-**Supervisor:**  Maleknaz Nayebi 
- 
-**Supervisor's email address:**  mnayebi@yorku.ca 
- 
-**Required skills or prerequisites:**   
-  * Proficient in Python programming 
- 
-**Recommended skills or prerequisites:** 
-Understanding of Machine Learning and Image Processing 
- 
- 
-**Instructions:** 
-Please email your CV and Transcripts to the professor. 
- 
----- 
- 
-==== Using Generative AI for Compliance Analysis in Health Care ==== 
- 
-**[added 2025-07-15]** 
- 
- 
-**Course:**  {EECS4080/4088} 
- 
-**Supervisor:**  Maleknaz Nayebi 
- 
-**Supervisor's email address:**  mnayebi@yorku.ca 
- 
-**Required skills or prerequisites:**   
-  * Proficient in Python programming 
- 
-**Recommended skills or prerequisites:** 
-Understanding of Machine Learning and Image Processing 
- 
- 
-**Instructions:** 
-Please email your CV and Transcripts to the professor. 
- 
----- 
- 
  
 ==== LLM-augmented Software Quality Assurance Techniques ====