====== Available projects ======
The following projects are presented in alphabetical order on the supervisor's name.

====== Simulation for Forest Fire Detection ======

**Supervisor**: Rob Allison

**Required Background**:

**Recommended Background**:

__Description__
Detection of forest fires is a challenging activity that requires considerable training. The objective of this project

====== Study of self-motion perception in microgravity ======

**Supervisor**: Rob Allison

**Required Background**:

**Recommended Background**: CSE3431 or CSE4471 or equivalent

__Description__
This is a computer graphics project to present visual motion stimuli to an observer. The software will experimentally control scene content, collect user responses and control the camera trajectory to simulate the desired self-motion profile.
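
As a rough illustration only (not code from the project), the sketch below shows one way a camera trajectory could be generated from a desired self-motion profile. The raised-cosine speed profile, sampling rate, and function names are assumptions made for the example.

<code python>
# A minimal sketch of generating camera positions from a self-motion profile.
# The profile, time step, and pose format below are illustrative assumptions.
import numpy as np

def camera_trajectory(duration_s, dt, peak_speed, axis=np.array([0.0, 0.0, 1.0])):
    """Return an array of camera positions simulating smooth forward self-motion.

    The speed profile is a raised cosine: it ramps up to peak_speed and back
    to zero, avoiding abrupt motion onsets that observers find jarring.
    """
    t = np.arange(0.0, duration_s, dt)
    speed = peak_speed * 0.5 * (1.0 - np.cos(2.0 * np.pi * t / duration_s))
    # Integrate speed over time to obtain displacement along the motion axis.
    displacement = np.cumsum(speed) * dt
    return np.outer(displacement, axis)          # shape (N, 3) camera positions

# Example: 10 s of simulated forward motion sampled at 60 Hz.
poses = camera_trajectory(duration_s=10.0, dt=1.0 / 60.0, peak_speed=1.5)
print(poses.shape, poses[-1])                    # final camera position
</code>
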
====== Stereoscopic cinema calculator ======

**Supervisor**: Rob Allison

**Required Background**:

**Recommended Background**:

__Description__
Directors of three-dimensional movies sometimes use
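
The description above is truncated, so purely as an illustrative sketch of the kind of computation a stereoscopic cinema calculator typically performs (and not this project's actual specification), the example below predicts on-screen parallax for a parallel camera rig converged by horizontal image translation. All parameter names and numbers are assumptions.

<code python>
# A minimal sketch, under stated assumptions, of a stereoscopic parallax
# calculation: predicted screen parallax for a parallel camera rig with
# horizontal image translation (HIT) convergence.

def screen_parallax_mm(object_dist_m, convergence_dist_m, interaxial_mm,
                       focal_length_mm, sensor_width_mm, screen_width_mm):
    """Parallax on the cinema screen (mm) for an object at object_dist_m.

    Positive values place the object behind the screen plane, negative values
    in front of it.  Parallax exceeding the human interocular distance
    (roughly 65 mm) forces the eyes to diverge and is normally avoided.
    """
    # Sensor-plane parallax for a parallel rig converged (via HIT) at
    # convergence_dist_m:  p = f * b * (1/Zc - 1/Z)
    p_sensor = focal_length_mm * interaxial_mm * (
        1.0 / (convergence_dist_m * 1000.0) - 1.0 / (object_dist_m * 1000.0))
    # Scale from sensor to screen by the magnification factor.
    return p_sensor * (screen_width_mm / sensor_width_mm)

# Example (illustrative numbers): 65 mm interaxial, 35 mm lens,
# Super-35-sized sensor, 10 m wide screen, converged at 5 m.
for z in (2.0, 5.0, 20.0, 100.0):
    p = screen_parallax_mm(z, convergence_dist_m=5.0, interaxial_mm=65.0,
                           focal_length_mm=35.0, sensor_width_mm=24.9,
                           screen_width_mm=10000.0)
    print(f"object at {z:5.1f} m -> screen parallax {p:7.1f} mm")
</code>
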
====== Three-Dimensional Context from Linear Perspective for Video Surveillance Systems ======

**Supervisor**:

**Requirements**:

__Description__

To provide visual surveillance over a large environment,

This problem can be addressed by automatically pre-mapping two-dimensional surveillance video data into three-dimensional coordinates.

Mapping surveillance video to three-dimensional coordinates requires construction of a virtual model of the three-dimensional scene.

This project will investigate a monocular method for inferring three-dimensional context for video surveillance.
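
As a hedged illustration of what mapping video into three-dimensional coordinates can involve (not necessarily the method this project will use), the sketch below intersects a pixel's viewing ray with an assumed ground plane, given assumed camera intrinsics and extrinsics.

<code python>
# A minimal sketch, not the project's method: one common way to map a 2-D
# image point from a fixed surveillance camera into 3-D world coordinates is
# to intersect its viewing ray with the ground plane (z = 0).  The camera
# parameters below are illustrative assumptions.
import numpy as np

def backproject_to_ground(u, v, K, R, t):
    """Intersect the viewing ray of pixel (u, v) with the plane z = 0.

    K: 3x3 intrinsic matrix; R, t: world-to-camera rotation and translation,
    so that x_cam = R @ x_world + t.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera frame
    ray_world = R.T @ ray_cam                             # ray in world frame
    cam_center = -R.T @ t                                 # camera centre in world frame
    # Solve cam_center.z + s * ray_world.z = 0 for the scale s along the ray.
    s = -cam_center[2] / ray_world[2]
    return cam_center + s * ray_world                     # 3-D point on the ground

# Example with an assumed camera mounted 4 m above the ground, pitched 30
# degrees downward and looking along the world +y direction.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
phi = np.deg2rad(30.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, -np.sin(phi), -np.cos(phi)],
              [0.0,  np.cos(phi), -np.sin(phi)]])
t = -R @ np.array([0.0, 0.0, 4.0])                        # camera centre at (0, 0, 4 m)
print(backproject_to_ground(320.0, 400.0, K, R, t))       # point on the ground plane
</code>
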
Although the Manhattan world assumption provides powerful constraints,

The student will work closely with graduate students and postdoctoral fellows at York University, as well as researchers at other institutions involved in the project.

For more information on the laboratory: [[http://
====== Estimating Pedestrian and Vehicle Flows from Surveillance Video ======

**Supervisor**:

**Requirements**:

__Description__

Facilities planning at both city (e.g., Toronto) and institutional (e.g., York University) scales requires accurate data on the flow of people and vehicles throughout the environment.

The density of permanent urban video surveillance camera installations has increased dramatically over the last several years.

This project will explore the use of computer vision algorithms for the automatic estimation of pedestrian and vehicle flows from video surveillance data. The ultimate goal is to provide planners with accurate, continuous, up-to-date information on facility usage to help guide planning.
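
As a minimal sketch of one possible first step (not the project's prescribed approach), the example below uses OpenCV background subtraction to detect moving objects in each frame. The video file name and area threshold are assumptions, and a real system would add tracking to turn per-frame detections into directional pedestrian and vehicle counts.

<code python>
# A minimal sketch: per-frame moving-object detection with background
# subtraction as a precursor to flow estimation.  Uses the OpenCV 4 API.
import cv2

def count_moving_objects(video_path, min_area_px=500):
    """Yield (frame_index, number_of_moving_blobs) for each frame."""
    cap = cv2.VideoCapture(video_path)
    backsub = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = backsub.apply(frame)
        # Suppress shadows (marked as 127 by MOG2) and small noise.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        blobs = [c for c in contours if cv2.contourArea(c) >= min_area_px]
        yield frame_idx, len(blobs)
        frame_idx += 1
    cap.release()

# Example usage (the file name is hypothetical):
# for i, n in count_moving_objects("campus_entrance.avi"):
#     print(f"frame {i}: {n} moving objects")
</code>
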
The student will work closely with graduate students and postdoctoral fellows at York University, as well as researchers at other institutions involved in the project.

For more information on the laboratory: [[http://
====== The Algorithmics Animation Workshop ======

**Supervisor**: Andy Mirzaian

**Required background**:

**Recommended background**:

__Description__
The URL for Algorithmics Animation Workshop (AAW) is [[http://
====== Robotic tangible user interface for large tabletops ======
Many graphics programs implement snapping to facilitate drawing. Snapping ensures that end-points of lines meet, that the endpoint of one line correctly
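
As a small illustration of the idea (not tied to any particular project codebase), the sketch below snaps a newly drawn point to the nearest existing line endpoint within a tolerance; the data structures and tolerance value are assumptions.

<code python>
# A minimal sketch of endpoint snapping in a 2-D drawing program: if a newly
# drawn point lies within a small tolerance of an existing line endpoint, it
# is replaced by that endpoint so the two lines meet exactly.
import math

def snap_point(p, existing_endpoints, tolerance=5.0):
    """Return the nearest existing endpoint within `tolerance` pixels of p,
    or p unchanged if none is close enough."""
    best, best_dist = p, tolerance
    for q in existing_endpoints:
        d = math.hypot(p[0] - q[0], p[1] - q[1])
        if d <= best_dist:
            best, best_dist = q, d
    return best

# Example: the user releases the mouse at (101.8, 49.3); the endpoint of a
# previously drawn line at (100, 50) is within tolerance, so we snap to it.
endpoints = [(100.0, 50.0), (200.0, 75.0)]
print(snap_point((101.8, 49.3), endpoints))   # -> (100.0, 50.0)
</code>
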