====== Available projects ======
  
The following projects are presented in alphabetical order by the supervisor's last name:
  
====== Simulation for Forest Fire Detection ======

**Supervisor**: Rob Allison

**Required Background**: General CSE408x prerequisites

**Recommended Background**: CSE3431 or CSE4471 or equivalent

__Description__

Detection of forest fires is a challenging task that requires considerable training. The objective of this project is to implement a virtual reality simulation that incorporates key aspects of this task and then to perform an evaluation with a small user study.

====== Study of self-motion perception in microgravity ======

**Supervisor**: Rob Allison

**Required Background**: General CSE408x prerequisites

**Recommended Background**: CSE3431 or CSE4471 or equivalent

__Description__

This is a computer graphics project to present visual motion stimuli to an observer. The software will experimentally control scene content, collect user responses, and control the camera trajectory to simulate the desired self-motion profile.
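As a rough illustration of what the trajectory-control component might involve, the sketch below is a minimal example in Python (the function name and the trapezoidal speed profile are illustrative assumptions, not part of the project specification); it integrates a per-frame forward-speed profile into camera positions:

```python
def camera_trajectory(duration_s, fps, peak_speed, ramp_s):
    """Per-frame camera positions (metres along the motion axis) for a
    trapezoidal forward self-motion profile: speed ramps up over ramp_s
    seconds, holds at peak_speed, then ramps back down."""
    dt = 1.0 / fps
    pos, positions = 0.0, []
    for i in range(int(duration_s * fps)):
        t = i * dt
        if t < ramp_s:                     # accelerate
            v = peak_speed * t / ramp_s
        elif t > duration_s - ramp_s:      # decelerate
            v = peak_speed * (duration_s - t) / ramp_s
        else:                              # constant-velocity plateau
            v = peak_speed
        pos += v * dt                      # integrate speed into position
        positions.append(pos)
    return positions

# 10 s of simulated forward self-motion at 60 frames/s, peaking at 1.5 m/s
traj = camera_trajectory(duration_s=10, fps=60, peak_speed=1.5, ramp_s=2)
```

An experiment would render the scene from each successive position; the actual motion profiles would of course be defined by the experimenter.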
====== Stereoscopic cinema calculator ======

**Supervisor**: Rob Allison

**Required Background**: General CSE408x prerequisites

**Recommended Background**: CSE3431 or CSE4471 or equivalent

__Description__

Directors of three-dimensional movies sometimes use 'stereo calculators' to compute the simulated depth of objects in the film as shown to the viewer, in order to maximize the stereoscopic effect and maintain comfortable viewing. However, current calculators have limited ability to visualize the results of the calculations. This project will combine stereo calculations with visualization software to assist the director in artistic and technical decisions.
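For background, the depth calculation at the heart of such a calculator follows from similar-triangle geometry: a point displayed with on-screen parallax p to a viewer at distance V whose eyes are separated by e is fused at distance D = eV/(e - p). A minimal sketch (the function name and default parameter values are illustrative, not recommendations):

```python
def perceived_distance(parallax_m, viewing_distance_m=2.0, eye_sep_m=0.065):
    """Distance from viewer to the fused point, given on-screen parallax
    in metres.  Positive (uncrossed) parallax places the point behind
    the screen; negative (crossed) parallax places it in front.
    Similar triangles give D = e*V / (e - p)."""
    e, v, p = eye_sep_m, viewing_distance_m, parallax_m
    if p >= e:
        raise ValueError("parallax >= eye separation: point at or beyond infinity")
    return e * v / (e - p)

# Zero parallax: the point appears on the screen plane itself
d_screen = perceived_distance(0.0)
# Parallax of half the eye separation: the point appears at twice the
# viewing distance
d_far = perceived_distance(0.0325)
```

A visualization front end would evaluate this mapping across a shot's depth budget and display the resulting simulated depths to the director.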
  
  
  
More specifically, the deliverables of this project include a digital signage system for Bethune College. Some of the technologies that you will be expected to learn/use include JavaScript, jQuery, HTML, CSS, and iCal/CalDAV. We expect to go open source with this software so that others can use it as well. The deliverables will also include an analysis of what it takes to scale this type of signage campus-wide, including provisions for campus alerts/emergency announcements.
====== Three-Dimensional Context from Linear Perspective for Video Surveillance Systems ======

**Supervisor**: James Elder

**Requirements**: Good facility with applied mathematics

__Description__

To provide visual surveillance over a large environment, many surveillance cameras are typically deployed at widely dispersed locations. Making sense of activities within the monitored space requires security personnel to map multiple events observed on two-dimensional security monitors to the three-dimensional scene under surveillance. The cognitive load entailed rises quickly as the number of cameras, the complexity of the scene, and the amount of traffic increase.

This problem can be addressed by automatically pre-mapping two-dimensional surveillance video data into three-dimensional coordinates. Rendering the data directly in three dimensions can potentially lighten the cognitive load of security personnel and make human activities more immediately interpretable.

Mapping surveillance video to three-dimensional coordinates requires construction of a virtual model of the three-dimensional scene. Such a model could be obtained by survey (e.g., using LIDAR), but the cost and time required for each site would severely limit deployment. Wide-baseline uncalibrated stereo methods are developing and have potential utility, but they require careful sensor placement, and the difficulty of the correspondence problem limits reliability.

This project will investigate a monocular method for inferring three-dimensional context for video surveillance. The method will make use of the fact that most urban scenes obey the so-called "Manhattan-world" assumption, viz., a large proportion of the major surfaces in the scene are rectangles aligned with a three-dimensional Cartesian grid (Coughlan & Yuille, 2003). This regularity provides strong linear perspective cues that can potentially be used to automatically infer three-dimensional models of the major surfaces in the scene (up to a scale factor). These models can then be used to construct a virtual environment in which to render models of human activities in the scene.

Although the Manhattan-world assumption provides powerful constraints, there are many technical challenges that must be overcome before a working prototype can be demonstrated. The prototype requires six stages of processing:

  - The major lines in each video frame are detected.
  - These lines are grouped into quadrilaterals projecting from the major surface rectangles of the scene.
  - The geometry of linear perspective and the Manhattan-world constraint are exploited to estimate the three-dimensional attitude of the rectangles from which these quadrilaterals project.
  - Trihedral junctions are used to infer three-dimensional surface contact and ordinal depth relationships between these surfaces.
  - The estimated surfaces are rendered in three dimensions.
  - Human activities are tracked and rendered within this virtual three-dimensional world.
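To make the attitude-estimation stage concrete, here is a minimal sketch (the intrinsic parameters and function names are illustrative assumptions, not the project's actual code): under the Manhattan-world assumption, the two vanishing points of a rectangle's edge families back-project to the 3D directions of its sides, and the cross product of those directions gives the rectangle's surface normal.

```python
import numpy as np

def direction_from_vp(K, vp_pixels):
    """Back-project a vanishing point (pixel coordinates) through the
    intrinsic matrix K to a unit 3D direction in camera coordinates."""
    v = np.array([vp_pixels[0], vp_pixels[1], 1.0])
    d = np.linalg.inv(K) @ v
    return d / np.linalg.norm(d)

def rectangle_normal(K, vp1, vp2):
    """The two vanishing points of a rectangle's edge families give the
    3D directions of its sides; their cross product is the rectangle's
    surface normal (up to sign)."""
    n = np.cross(direction_from_vp(K, vp1), direction_from_vp(K, vp2))
    return n / np.linalg.norm(n)

# Illustrative intrinsics: focal length 800 px, principal point (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
```

Note that a vanishing point at the principal point back-projects to the optical axis, and the recovered normal is orthogonal to both edge directions, which is a useful sanity check for an implementation.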

The student will work closely with graduate students and postdoctoral fellows at York University, as well as researchers at other institutions involved in the project. The student will develop skills in using MATLAB, a very useful mathematical programming environment, and develop an understanding of basic topics in image processing and vision.

For more information on the laboratory: [[http://www.elderlab.yorku.ca]]

====== Estimating Pedestrian and Vehicle Flows from Surveillance Video ======

**Supervisor**: James Elder

**Requirements**: Good facility with applied mathematics

__Description__

Facilities planning at both city (e.g., Toronto) and institutional (e.g., York University) scales requires accurate data on the flow of people and vehicles throughout the environment. Acquiring these data can require the costly deployment of specialized equipment and people, and this effort must be renewed at regular intervals for the data to remain relevant.

The density of permanent urban video surveillance camera installations has increased dramatically over the last several years. These systems provide a potential source of low-cost data from which flows can be estimated for planning purposes.

This project will explore the use of computer vision algorithms for the automatic estimation of pedestrian and vehicle flows from video surveillance data. The ultimate goal is to provide planners with accurate, continuous, up-to-date information on facility usage to help guide planning.
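The project's tools are expected to be MATLAB-based; purely for illustration, the sketch below (in Python, with hypothetical names) shows roughly the simplest possible flow proxy, frame differencing, on which more sophisticated detection and tracking methods would build:

```python
import numpy as np

def motion_fraction(prev_frame, frame, threshold=25):
    """Fraction of pixels whose grey level changed by more than
    `threshold` between consecutive frames -- a crude proxy for the
    amount of pedestrian/vehicle traffic currently in view."""
    # Widen to a signed type so the subtraction cannot wrap around
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(np.mean(diff > threshold))

# Synthetic example: an empty 100x100 scene, then one with a 10x10
# bright object that has appeared
empty = np.zeros((100, 100), dtype=np.uint8)
busy = empty.copy()
busy[10:20, 10:20] = 255
```

Accumulating such per-frame measures over time (and, in a real system, replacing differencing with proper background subtraction and object tracking) yields the time series of activity from which flows would be estimated.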

The student will work closely with graduate students and postdoctoral fellows at York University, as well as researchers at other institutions involved in the project. The student will develop skills in using MATLAB, a very useful mathematical programming environment, and develop an understanding of basic topics in image processing and vision.

For more information on the laboratory: [[http://www.elderlab.yorku.ca]]
  
  
  
  
====== The Algorithmics Animation Workshop ======

**Supervisor**: Andy Mirzaian

**Required background**: General prerequisites

**Recommended background**: CSE 3101

__Description__

The URL for the Algorithmics Animation Workshop (AAW) is [[http://www.cs.yorku.ca/~aaw]]. The main purpose of AAW is to serve as a pedagogical tool by providing animations of important algorithms and data structures in computer science, especially those studied in the courses CSE 3101, 4101, 5101, 6114, and 6111. This is an open-ended project in the sense that more animations can be added to the site over time.
  
  
  
====== Robotic tangible user interface for large tabletops ======
  
Many graphics programs implement snapping to facilitate drawing. Snapping ensures that the endpoints of lines meet, that the endpoint of one line correctly "touches" another, that objects align side-to-side, etc. One problem with simple snapping techniques is that one cannot position objects arbitrarily close together - otherwise the snapping technique interferes. A novel snapping technique, "Snap-and-Go", circumvents this problem by slowing the cursor over the line instead of snapping it to the line. The objective of this project is to implement several snapping techniques for two-dimensional drawing systems and then to perform an evaluation with a small user study.
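A one-dimensional sketch of the idea (the `snap_and_go` function and its parameter values are hypothetical, for illustration only): inside a band around the snap target the control-display gain is reduced, so the cursor lingers near the target yet can still pass through it, whereas a hard snap would make positions near the target unreachable.

```python
def snap_and_go(raw_x, target_x, snap_width=8.0, gain=0.25):
    """Map a raw (motor-space) cursor position to a displayed position.
    Within +/- snap_width of the target the cursor moves at reduced
    control-display gain, so it dwells near the target; outside the
    band, positions are shifted so the mapping stays continuous."""
    d = raw_x - target_x
    if abs(d) >= snap_width:
        # Outside the band: subtract the motor distance absorbed by the
        # slowed region so the mapping is continuous at the band edges
        return raw_x - (1.0 - gain) * snap_width * (1 if d > 0 else -1)
    return target_x + gain * d  # inside: slowed motion across the target
```

Here a gain of 0.25 means the cursor crosses the 8-pixel band at a quarter of its normal speed; a plain snap would instead map the whole band onto the target itself.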
  
  
  
  
projects · Last modified: 2010/08/24 15:46 by bil