===Component Deadlines===
  
    * White Paper - September 23rd
    * Proposal - October 9th
    * Site Visit - November 4th/6th (in lab)
    * Demo - November 20th, 25th, 27th, December 2nd (in class)
    * Final Report - original deadline: December 2nd, revised deadline: December 13th
  
===Engineering Stream===
    * Site Visit: Each student will make a brief verbal report of their progress; students are encouraged to provide preliminary demonstrations of any software that has been developed at this time.
    * Demo: Each student will be expected to provide a demonstration of their project results in front of the class. This should describe to the class the goal and function of the implemented algorithm, as well as provide an example of its execution.
    * Final Report: A technical report of the software (approximately 3-5 pages) should explain the purpose of the work, describe the student's implementation, note any decisions the student made which may lead to differences from the original work, and present the results of any tests or evaluations performed. Additionally, the student must provide the instructor with access to a version control repository (see details on the Resources link in the sidebar of this page) containing the project code. A portion of the grade will be assigned based on the quality of the code (in terms of both functionality and ease of use) and its user documentation. The final written report will also be required to include an annotated bibliography which includes the original paper being re-implemented. For students enrolled in EECS 5323 only, this bibliography must also include additional references to the primary literature related to the project.
  
===Scientific Stream===
Note that if a student would like to claim one of these models for their project, they are encouraged to speak to the instructor early to avoid duplication of work with another student.
  
  * Discriminant Saliency - Gao et al., 2008, [[https://jov.arvojournals.org/article.aspx?articleid=2193585|Paper Link]]. Code was released as a binary, and does not appear to run anymore. **Project selected by a student.**
  * Discriminative Correlation Filter with Channel and Spatial Reliability - Lukežič et al., 2017, [[http://openaccess.thecvf.com/content_cvpr_2017/html/Lukezic_Discriminative_Correlation_Filter_CVPR_2017_paper.html|Paper Link]]. Code was released for MATLAB, and could be converted to Python or C++ (see the generic correlation-filter sketch after this list). [[https://github.com/alanlukezic/csr-dcf|GitHub Repository of MATLAB Code]] **Project selected by a student.**
  * Learning Background-Aware Correlation Filters for Visual Tracking - Galoogahi et al., 2017, [[http://openaccess.thecvf.com/content_iccv_2017/html/Galoogahi_Learning_Background-Aware_Correlation_ICCV_2017_paper.html|Paper Link]]. Code was released for MATLAB, and could be converted to Python or C++. [[http://www.hamedkiani.com/bacf.html|Project page with link to code]] **Project selected by a student.**
  * Remote Sensing Image Scene Classification Using Multi-Scale Completed Local Binary Patterns and Fisher Vectors - Huang et al., 2016, [[https://www.mdpi.com/2072-4292/8/6/483|Paper Link]]. Model appears to have been released without code. **Project selected by a student.**
  * Compositional Model Based Fisher Vector Coding for Image Classification - Liu et al., 2017, [[https://ieeexplore.ieee.org/abstract/document/7812753|Paper Link]]. Model appears to have been released without code.
  * Person Following Robot Using Selected Online Ada-Boosting with Stereo Camera - Chen et al., 2017, [[http://jtl.lassonde.yorku.ca/wp-content/uploads/2017/02/pfr_paper_crv2017.pdf|Paper Link]]. Model was released without code, but the dataset is publicly available: [[http://jtl.lassonde.yorku.ca/2017/02/person-following/|Project page]]. Note that this is a robotics paper; only the vision component would be expected to be completed, and evaluation on existing data would be sufficient.
  * Early Recurrence Improves Edge Detection - Shi et al., 2013, [[http://www.bmva.org/bmvc/2013/Papers/paper0022/paper0022.pdf|Paper Link]]. Model appears to have been released without code. **Project selected by a student.**
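
For the two correlation-filter tracking papers above (CSR-DCF and BACF), the following is a minimal, generic sketch of a correlation-filter tracker in the spirit of MOSSE, written in Python with NumPy. It is meant only to illustrate the kind of Fourier-domain machinery a Python or C++ re-implementation would involve; it is not the CSR-DCF or BACF method itself, and all function names and parameter values are illustrative.

<code python>
import numpy as np

def preprocess(patch):
    # Log transform, normalization, and a cosine window to reduce boundary effects.
    p = np.log(patch.astype(np.float64) + 1.0)
    p = (p - p.mean()) / (p.std() + 1e-5)
    window = np.outer(np.hanning(p.shape[0]), np.hanning(p.shape[1]))
    return p * window

def train_filter(patch, sigma=2.0, lam=1e-3):
    # Ridge-regression solution for a single training patch in the Fourier domain:
    # the filter maps the patch to a Gaussian response peaked at the patch centre.
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma ** 2))
    G = np.fft.fft2(g)
    F = np.fft.fft2(preprocess(patch))
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H_conj, patch):
    # Correlate a new patch with the learned filter; the response peak
    # (relative to the patch centre, modulo wrap-around) gives the target shift.
    F = np.fft.fft2(preprocess(patch))
    response = np.real(np.fft.ifft2(H_conj * F))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return dy, dx, response
</code>

A full tracker would additionally update the filter over time, handle scale changes, and incorporate the channel/spatial reliability or background-aware terms that are the actual contributions of these papers.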
  
==A Prior Example from the Literature==
  * Adaptive stereo vision: In our study of stereopsis, we will learn that a useful strategy is to begin our estimation procedures with coarse image data (e.g., imagery with low spatial resolution) and subsequently refine our solution through systematic incorporation of more refined image data (e.g., imagery with higher spatial resolution). We will refer to this paradigm as coarse-to-fine refinement. An interesting question that arises in this paradigm is how to decide on the level of refinement that is appropriate for a given image or even a given image region. For this project, the student will explore methods for automatically adapting the coarse-to-fine refinement of stereo estimates based on the input binocular image data and implement as well as test at least one such procedure.
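
As one possible starting point, the sketch below shows the basic coarse-to-fine machinery using image pyramids: disparities estimated at a coarse level are upsampled and used to restrict the search range at the next finer level. It assumes Python with NumPy and OpenCV; the SSD block matcher is deliberately naive (and slow) rather than any particular published algorithm, and all function names and parameters are illustrative.

<code python>
import numpy as np
import cv2

def block_match(left, right, max_disp, prior=None, radius=2, win=5):
    # Brute-force SSD block matching over grayscale images. When a per-pixel
    # prior disparity is given, the search is restricted to prior +/- radius.
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), np.float32)
    pad_l = cv2.copyMakeBorder(left, half, half, half, half, cv2.BORDER_REFLECT)
    pad_r = cv2.copyMakeBorder(right, half, half, half, half, cv2.BORDER_REFLECT)
    for y in range(h):
        for x in range(w):
            lo, hi = 0, max_disp
            if prior is not None:
                lo = max(0, int(prior[y, x]) - radius)
                hi = min(max_disp, int(prior[y, x]) + radius)
            patch_l = pad_l[y:y + win, x:x + win].astype(np.float32)
            best_cost, best_d = np.inf, 0
            for d in range(lo, hi + 1):
                if x - d < 0:
                    break
                patch_r = pad_r[y:y + win, x - d:x - d + win].astype(np.float32)
                cost = np.sum((patch_l - patch_r) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def coarse_to_fine(left, right, levels=3, max_disp=64):
    # Build Gaussian pyramids (index 0 is the full-resolution image).
    pyr_l, pyr_r = [left], [right]
    for _ in range(levels - 1):
        pyr_l.append(cv2.pyrDown(pyr_l[-1]))
        pyr_r.append(cv2.pyrDown(pyr_r[-1]))
    disp = None
    for lvl in reversed(range(levels)):  # coarsest level first
        prior = None
        if disp is not None:
            # Upsample the coarser estimate and double the disparity values.
            size = (pyr_l[lvl].shape[1], pyr_l[lvl].shape[0])
            prior = cv2.resize(disp, size) * 2
        disp = block_match(pyr_l[lvl], pyr_r[lvl], max_disp // (2 ** lvl), prior)
    return disp
</code>

A natural extension, in line with the project description, is to make the per-pixel search radius adaptive to the image content rather than fixed.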
  
  * Primitive event recognition: The combinatorics of some approaches make them challenging to run in real time or on modest hardware. For timely and efficient processing, it is therefore critical that early operations begin to recognize significant patterns in the input data. For example, when processing a video sequence, it might be desirable to distinguish those portions of the imagery that correspond to moving objects so that subsequent processing might attend to such regions. For this project, the student will suggest and investigate a set of simple patterns or events that it might be advantageous to support through early processing (e.g., for videos, what is moving, what is not, what is noise, etc.). The student will develop an analysis that shows how the patterns can be distinguished on the basis of simple vision operations, as well as implement and test corresponding algorithms.
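
As an illustration of the kind of early operation meant here, the sketch below uses simple frame differencing plus connected components to separate large moving regions from isolated changed pixels that are more likely noise. It assumes Python with OpenCV and NumPy and BGR input frames; the thresholds and function names are illustrative only.

<code python>
import numpy as np
import cv2

def label_motion(frames, diff_thresh=25, min_area=50):
    # Classify changed pixels in each frame as moving (large blobs) or
    # likely noise (tiny blobs) using frame differencing.
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    labels = []
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        # Connected components: small blobs are treated as noise.
        n, cc, stats, _ = cv2.connectedComponentsWithStats(mask)
        moving = np.zeros_like(mask)
        for i in range(1, n):  # label 0 is the background
            if stats[i, cv2.CC_STAT_AREA] >= min_area:
                moving[cc == i] = 255
        noise = cv2.subtract(mask, moving)  # changed pixels outside large blobs
        labels.append((moving, noise))
        prev = gray
    return labels
</code>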
  
  * Module combination: Typically, a solitary computer vision module provides incomplete information about the world; however, taken in tandem, two or more modules might combine forces to provide a better understanding of the environment. For example, binocular stereo typically performs best in regions with well-defined features or texture patterns, but poorly in smoothly shaded regions; in contrast, shape from shading can perform reasonably well in smoothly shaded regions, but less well in highly textured regions. For this project, students will select a pair of complementary vision modules (e.g., stereo and shading, feature-based and area-based image correspondence/matching for stereo, binocular stereo and motion, etc.) and study how they can be combined in an advantageous fashion. The student will develop an analysis that shows how the modules can be combined as well as implement and test corresponding algorithms.
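
By way of illustration only, the sketch below blends two hypothetical depth maps using local image variance as a confidence measure, trusting stereo in textured regions and shape-from-shading in smooth regions. It assumes Python with OpenCV and NumPy; the inputs, weighting scheme, and function name are illustrative placeholders rather than a prescribed combination method.

<code python>
import numpy as np
import cv2

def fuse_depths(depth_stereo, depth_shading, image, win=15):
    # Weight the stereo estimate by local image variance: textured regions
    # favour stereo, smooth regions favour shape from shading.
    gray = image.astype(np.float32)
    mean = cv2.blur(gray, (win, win))
    var = np.maximum(cv2.blur(gray * gray, (win, win)) - mean ** 2, 0)
    w_stereo = var / (var + var.mean() + 1e-6)  # soft confidence in [0, 1)
    return w_stereo * depth_stereo + (1.0 - w_stereo) * depth_shading
</code>
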
  * Algorithm comparisons: For any given research or application topic in computer vision there is more than one possible approach. For example, many different approaches to optical flow estimation have been developed. For this project, students will consider a particular topic (e.g., binocular stereo correspondence, optical flow estimation, shape from shading, etc.) and select at least two algorithms that have been developed for this topic. The student will compare the selected algorithms both analytically (to develop a theoretical understanding of their relationships) and empirically (to develop a practical understanding of their relationships). A restriction on this topic is that comparison of algorithms for edge detection is not an allowable topic; there has been a great deal of research on this topic, which will make it too difficult for students to make a novel contribution.
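
A minimal empirical harness for such a comparison might look like the sketch below, which scores two dense optical flow methods against ground-truth flow using average endpoint error. It assumes Python with OpenCV and NumPy; the DIS optical flow constructor is only present in recent OpenCV releases, and any other pair of methods (or parameter settings) could be substituted.

<code python>
import numpy as np
import cv2

def endpoint_error(flow, gt_flow):
    # Average Euclidean distance between estimated and ground-truth flow vectors.
    return float(np.mean(np.linalg.norm(flow - gt_flow, axis=2)))

def compare_flows(prev_gray, next_gray, gt_flow):
    # Method 1: Farneback dense optical flow (standard OpenCV call).
    fb = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                      0.5, 3, 15, 3, 5, 1.2, 0)
    # Method 2: DIS optical flow (assumed available in this OpenCV build).
    dis = cv2.DISOpticalFlow_create(cv2.DISOPTICAL_FLOW_PRESET_MEDIUM)
    dis_flow = dis.calc(prev_gray, next_gray, None)
    return {"farneback": endpoint_error(fb, gt_flow),
            "dis": endpoint_error(dis_flow, gt_flow)}
</code>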
  
  * Principled probing of algorithm behaviour: It is often instructive to examine algorithm failure cases or scenarios which are particularly challenging within a given problem domain. For this project, students will consider how to design or select a dataset which will test the behaviour of a model or set of models in a new way. The student must be able to clearly explain what their chosen dataset is designed to reveal about the model, and draw conclusions from the empirical results obtained.
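
As a concrete example of such a probe, the sketch below generates synthetic step-edge images over a grid of contrast and noise levels and measures how often the Canny detector recovers the known edge. It assumes Python with OpenCV and NumPy; the informal recall measure and all parameter values are illustrative only.

<code python>
import numpy as np
import cv2

def make_probe_set(size=128, contrasts=(10, 30, 80), noise_sigmas=(0, 5, 15)):
    # Synthetic vertical step edges spanning a grid of contrast and noise levels.
    probes = []
    for c in contrasts:
        for s in noise_sigmas:
            img = np.full((size, size), 100.0)
            img[:, size // 2:] += c                    # step edge of known contrast
            img += np.random.normal(0, s, img.shape)   # additive Gaussian noise
            probes.append((c, s, np.clip(img, 0, 255).astype(np.uint8)))
    return probes

def probe_canny(probes, lo=50, hi=150):
    # Informal recall: fraction of rows where Canny marks an edge pixel
    # in the two columns adjacent to the true step location.
    results = []
    for c, s, img in probes:
        edges = cv2.Canny(img, lo, hi)
        mid = img.shape[1] // 2
        near_edge = edges[:, mid - 1:mid + 1]
        recall = float((near_edge > 0).any(axis=1).mean())
        results.append({"contrast": c, "noise": s, "recall": recall})
    return results
</code>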