====== Available projects ======

The following projects are presented in alphabetical order by the supervisor's last name:

====== Three-Dimensional Context from Linear Perspective for Video Surveillance Systems ======

**Supervisor**:  James Elder

**Requirements**:  Good facility with applied mathematics

__Description__

To provide visual surveillance over a large environment, many surveillance cameras are typically deployed at widely dispersed locations.  Making sense of activities within the monitored space requires security personnel to map multiple events observed on two-dimensional security monitors to the three-dimensional scene under surveillance.  The cognitive load entailed rises quickly as the number of cameras, the complexity of the scene and the amount of traffic increase.

This problem can be addressed by automatically pre-mapping two-dimensional surveillance video data into three-dimensional coordinates.  Rendering the data directly in three dimensions can potentially lighten the cognitive load of security personnel and make human activities more immediately interpretable.

Mapping surveillance video to three-dimensional coordinates requires construction of a virtual model of the three-dimensional scene.  Such a model could be obtained by survey (e.g., using LIDAR), but the cost and time required for each site would severely limit deployment.  Wide-baseline uncalibrated stereo methods are developing and have potential utility, but they require careful sensor placement, and the difficulty of the correspondence problem limits reliability.

This project will investigate a monocular method for inferring three-dimensional context for video surveillance.  The method will make use of the fact that most urban scenes obey the so-called “Manhattan-world” assumption, viz., a large proportion of the major surfaces in the scene are rectangles aligned with a three-dimensional Cartesian grid (Coughlan & Yuille, 2003).  This regularity provides strong linear perspective cues that can potentially be used to automatically infer three-dimensional models of the major surfaces in the scene (up to a scale factor).  These models can then be used to construct a virtual environment in which to render models of human activities in the scene.

Although the Manhattan-world assumption provides powerful constraints, there are many technical challenges that must be overcome before a working prototype can be demonstrated.  The prototype requires six stages of processing:

  - The major lines in each video frame are detected.
  - These lines are grouped into quadrilaterals projecting from the major surface rectangles of the scene.
  - The geometry of linear perspective and the Manhattan-world constraint are exploited to estimate the three-dimensional attitude of the rectangles from which these quadrilaterals project.
  - Trihedral junctions are used to infer three-dimensional surface contact and ordinal depth relationships between these surfaces.
  - The estimated surfaces are rendered in three dimensions.
  - Human activities are tracked and rendered within this virtual three-dimensional world.
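
To make stage 3 concrete, the following Python/NumPy sketch shows how the vanishing points of a rectangle's two edge directions determine the rectangle's three-dimensional attitude once the camera intrinsics are known.  The intrinsic matrix and vanishing-point coordinates below are hypothetical values chosen only for illustration; in the actual pipeline the vanishing points would come from the line segments detected and grouped in stages 1 and 2.

<code python>
import numpy as np

# Hypothetical camera intrinsics; in practice these come from calibration.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def direction_from_vanishing_point(vp, K):
    """A vanishing point v (homogeneous pixel coordinates) of a 3D direction d
    satisfies v ~ K d, so d is recovered (up to sign) as normalize(K^-1 v)."""
    d = np.linalg.solve(K, vp)
    return d / np.linalg.norm(d)

def rectangle_attitude(vp_u, vp_v, K):
    """Given the vanishing points of a rectangle's two edge directions, return
    the unit edge directions and the surface normal (their cross product)."""
    du = direction_from_vanishing_point(vp_u, K)
    dv = direction_from_vanishing_point(vp_v, K)
    n = np.cross(du, dv)
    return du, dv, n / np.linalg.norm(n)

# Example vanishing points (made up) for the horizontal and vertical edges
# of one building face in the image.
vp_u = np.array([1500.0,  260.0, 1.0])
vp_v = np.array([ 310.0, -4000.0, 1.0])
du, dv, n = rectangle_attitude(vp_u, vp_v, K)
print("edge directions:", du, dv)
print("surface normal:", n)
</code>
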
The student will work closely with graduate students and postdoctoral fellows at York University, as well as researchers at other institutions involved in the project.  The student will develop skills in using MATLAB, a very useful mathematical programming environment, and develop an understanding of basic topics in image processing and vision.

For more information on the laboratory: [[http://www.elderlab.yorku.ca]]

====== Estimating Pedestrian and Vehicle Flows from Surveillance Video ======

**Supervisor**:  James Elder

**Requirements**:  Good facility with applied mathematics

__Description__

Facilities planning at both city (e.g., Toronto) and institutional (e.g., York University) scales requires accurate data on the flow of people and vehicles throughout the environment.  Acquiring these data can require the costly deployment of specialized equipment and people, and this effort must be renewed at regular intervals for the data to remain relevant.

The density of permanent urban video surveillance camera installations has increased dramatically over the last several years.  These systems provide a potential source of low-cost data from which flows can be estimated for planning purposes.

This project will explore the use of computer vision algorithms for the automatic estimation of pedestrian and vehicle flows from video surveillance data.  The ultimate goal is to provide planners with accurate, continuous, up-to-date information on facility usage to help guide planning.
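
To give a flavour of the kind of baseline algorithm involved, here is a deliberately simple Python/NumPy sketch that turns a grayscale video (a sequence of 2D arrays) into a crude count of objects crossing a virtual counting line.  It is only an illustration of the idea, not the method the project will use, and it ignores direction, occlusion and lighting changes that a real system must handle.

<code python>
import numpy as np

def count_line_crossings(frames, line_row, alpha=0.05, thresh=30.0, min_pixels=20):
    """Crude flow estimate: maintain a running-average background, threshold the
    difference on one image row (a virtual counting line), and count the number
    of times that row switches from 'empty' to 'occupied'."""
    background = frames[0].astype(np.float64)
    occupied_prev = False
    crossings = 0
    for frame in frames[1:]:
        frame = frame.astype(np.float64)
        foreground = np.abs(frame - background) > thresh       # moving pixels
        occupied = foreground[line_row].sum() >= min_pixels    # line activated?
        if occupied and not occupied_prev:
            crossings += 1                                     # a new object reached the line
        occupied_prev = occupied
        background = (1 - alpha) * background + alpha * frame  # slow background update
    return crossings
</code>
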
The student will work closely with graduate students and postdoctoral fellows at York University, as well as researchers at other institutions involved in the project.  The student will develop skills in using MATLAB, a very useful mathematical programming environment, and develop an understanding of basic topics in image processing and vision.

For more information on the laboratory: [[http://www.elderlab.yorku.ca]]

====== Low-Cost Three-Dimensional Face Scanning System ======

**Supervisor**:  James Elder

**Requirements**:  Interest in both hardware and software design at the systems level.

__Description__

Low-cost three-dimensional face-scanning systems have a large range of potential applications in security and retail markets.  Our laboratory at York University has recently developed a prototype face-scanning system that has the potential for very low-cost mass production.  This project involves the development of a second-stage prototype that is one step closer to commercialization.

The project will involve systems design and development of a specialized real-time 3D face scanner.  A combination of hardware and software design will be required.  The student will work closely with graduate students and postdoctoral fellows at York University, as well as researchers at other institutions involved in the project.  The student will develop skills in both hardware and software design, as well as computer-vision techniques.

For more information on the laboratory: [[http://www.elderlab.yorku.ca]]

====== Programming Multi-Core GPUs with CUDA ======

**Supervisor**: Franck van Breugel

**Required background**: General prerequisites

**Recommended background**: N/A

__Description__

CUDA stands for "compute unified device architecture."  It is an architecture for programming multi-core graphics processing units (GPUs for short).  In the past, these GPUs were used only for graphics.  However, CUDA allows us to use them for other types of computation as well.  Since today's GPUs have hundreds of cores, algorithms can be parallelized and, hence, often run much faster.

The aim of this project is to get familiar with GPUs and to study how to program them.
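
The project itself would most likely use NVIDIA's CUDA C toolchain; purely as an illustration of the programming model (a grid of threads, each handling one array element), here is a minimal sketch written in Python with the Numba CUDA backend, which mirrors how a CUDA C kernel is indexed and launched.  It assumes a CUDA-capable GPU and the numba package; the kernel (a SAXPY-style vector operation) is a generic example, not part of the project description.

<code python>
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)            # global thread index; one thread per element
    if i < x.shape[0]:          # guard: the grid may be larger than the array
        out[i] = a * x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)   # launch on the GPU
print(out[:4], 2.0 * x[:4] + y[:4])                            # should agree
</code>
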
More details can be found at: [[http://www.cse.yorku.ca/~franck/projects/cuda.html]]
(this link is only accessible from machines within the yorku.ca domain).

====== The Algorithmics Animation Workshop ======

**Supervisor**: Andy Mirzaian

**Required background**: General prerequisites

**Recommended background**: CSE 3101

__Description__

The URL for the Algorithmics Animation Workshop (AAW) is [[http://www.cs.yorku.ca/~aaw]].  The main purpose of AAW is to serve as a pedagogical tool by providing animations of important algorithms and data structures in computer science, especially those studied in the courses CSE 3101, 4101, 5101, 6114 and 6111.  This is an open-ended project in the sense that more animations can be added to the site over time.

====== Web-based digital signage ======

**Supervisor**: John Amanatides

**Required background**: General prerequisites

**Recommended background**: CSE 3221, CSE 3214

__Description__

Digital signs are increasingly used in many modern buildings to direct people to the appropriate rooms for meetings, services, etc.  Unfortunately, "programming" them is non-trivial, especially for non-technical people such as administrative staff.  The goal of this project is to make using digital signs much easier for such people.

One way to do this is to utilize what administrative staff are really good at: dealing with calendars.  By assigning calendars to individual rooms/organizations/events, and having the digital signage software interpret this calendar data to display the day's events, an easier-to-use signage system can be developed.
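
As a minimal sketch of that idea, the Python snippet below reads the day's events out of iCalendar text and prints what a sign might display.  It handles only the simple single-line property form of DTSTART and SUMMARY and uses made-up sample data; a real signage system would fetch calendars over CalDAV and use a full iCalendar parser, and would render the result in the browser with JavaScript/jQuery.

<code python>
from datetime import datetime, date

def todays_events(ics_text, today=None):
    """Collect (start, summary) pairs for VEVENTs whose DTSTART falls on `today`.
    Only simple, single-line, timed DTSTART values are handled in this sketch."""
    today = today or date.today()
    events, current = [], {}
    for raw in ics_text.splitlines():
        line = raw.strip()
        if line == "BEGIN:VEVENT":
            current = {}
        elif line == "END:VEVENT":
            start = current.get("start")
            if start and start.date() == today:
                events.append((start, current.get("summary", "(untitled)")))
        elif line.startswith("DTSTART"):
            value = line.split(":", 1)[1]
            if "T" in value:  # skip all-day events in this sketch
                current["start"] = datetime.strptime(value[:15], "%Y%m%dT%H%M%S")
        elif line.startswith("SUMMARY:"):
            current["summary"] = line.split(":", 1)[1]
    return sorted(events)

sample = """BEGIN:VCALENDAR
BEGIN:VEVENT
DTSTART:20110427T153000
SUMMARY:Department council meeting
END:VEVENT
END:VCALENDAR"""

for start, summary in todays_events(sample, today=date(2011, 4, 27)):
    print(start.strftime("%H:%M"), summary)   # what the sign would display
</code>
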
More specifically, the deliverables of this project include a digital signage system for Bethune College.  Some of the technologies that you will be expected to learn/use include JavaScript, jQuery, HTML, CSS and iCal/CalDAV.  We expect to release this software as open source so that others can use it as well.  The deliverables will also include an analysis of what it takes to scale this type of signage campus-wide, including provisions for campus alerts/emergency announcements.

====== Computer pointing devices and the speed-accuracy tradeoff ======

**Supervisor**: Scott MacKenzie

**Required Background**: General 4080 prerequisites, CSE3461, and (preferably) CSE4441

**Recommended Background**: Interest in user interfaces and human-computer interaction (HCI).  Understanding of experiment design.  Experience in doing user studies.

Please click [[http://www.cse.yorku.ca/~mack/4080/ComputerPointingDevices.pdf|here]] for the full description.


====== One key text entry ======

**Supervisor**: Scott MacKenzie

**Required Background**: General 4080 prerequisites, CSE3461, and (preferably) CSE4441

**Recommended Background**: Interest in user interfaces and human-computer interaction (HCI).  Understanding of experiment design.  Experience in doing user studies.

Please click [[http://www.cse.yorku.ca/~mack/4080/OneKeyTextEntry.pdf|here]] for the full description.

====== Estimating Registration Error ======

**Supervisor**: Burton Ma

**Required background**: General prerequisites

**Recommended background**: N/A

__Description__

A fundamental step in computer-assisted surgery is registration, where the anatomy of the patient is matched to an image or model of the anatomy.  For some types of orthopaedic procedures, registration is performed by digitizing the locations of points on the surface of a bone and matching the point locations to the surface of a model of the bone.  Here, a surgeon uses a pointer that is tracked by an optical tracking system to measure registration point locations on a patient.  A registration algorithm is used to compute the transformation that best matches the points to a model of the anatomy.
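
As a minimal illustration of that registration step, the Python/NumPy sketch below computes the rigid transformation (rotation and translation) that best aligns digitized points with corresponding model points in the least-squares sense, using the standard SVD-based (Kabsch/Procrustes) solution.  In practice the point-to-surface correspondences are unknown, so a surface-based method such as ICP would alternate between estimating correspondences and re-running exactly this step; the example data here are made up.

<code python>
import numpy as np

def rigid_register(P, Q):
    """Return R (3x3 rotation) and t (3-vector) minimizing sum ||R p_i + t - q_i||^2
    for corresponding point sets P and Q (each an N x 3 array)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Toy check: recover a known transformation from noisy "digitized" points.
rng = np.random.default_rng(1)
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
true_R *= np.sign(np.linalg.det(true_R))            # make it a proper rotation
true_t = np.array([10.0, -5.0, 30.0])

P = rng.uniform(-50.0, 50.0, size=(20, 3))          # digitized points (patient space)
noise = rng.normal(0.0, 0.5, size=P.shape)          # measurement noise
Q = P @ true_R.T + true_t + noise                   # corresponding model points
R, t = rigid_register(P, Q)
print("rotation recovered:", np.allclose(R, true_R, atol=0.05), " t =", np.round(t, 1))
</code>

Estimating how errors in the measured points propagate into errors of this estimated transformation, and hence into the navigational information, is the focus of the project.
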

Virtual navigational information (such as where to drill or cut the bone) can be provided to the surgeon after the registration transformation has been established.  Here, a surgeon uses a tracked surgical drill to drill a hole along a pre-operatively defined path.  Notice that the surgeon looks at the virtual navigational information instead of the patient when performing this task.

Computer-assisted surgical navigation depends on having an accurate registration.  If the estimated registration is inaccurate then the navigational information will also be inaccurate, which may lead to errors in the surgical procedure.  It is therefore of great interest to know the accuracy of the estimated registration.

Further details on the project can be found [[http://www.cse.yorku.ca/~burton/4080/4080.html|here]].

====== Robotic tangible user interface for large tabletops ======

**Supervisor**: Wolfgang Stuerzlinger

**Required Background**: General CSE4080 prerequisites

**Recommended Background**: CSE3431 or equivalent

__Description__

Tangible user interfaces provide the user with objects that they can touch and use as input devices.  One example is the use of (tracked) toy houses to perform a city planning task on a large surface.  This project implements a new form of tracking/identification scheme for tangible objects via LED arrays mounted on them.  Furthermore, using robotic components, the tangible objects will have the ability to move around autonomously, which enables important functionality such as undo and replay.

====== Different "snapping" techniques in drawing systems ======

**Supervisor**: Wolfgang Stuerzlinger

**Required Background**: General CSE4080 prerequisites

**Recommended Background**: CSE3461

__Description__

Many graphics programs implement snapping to facilitate drawing.  Snapping ensures that end-points of lines meet, that the endpoint of one line correctly "touches" another, that objects align side-to-side, etc.  One problem with simple snapping techniques is that one cannot position objects arbitrarily close together - otherwise the snapping technique interferes.  A novel snapping technique, "Snap-and-Go", circumvents this problem by slowing the cursor over the line, instead of snapping it close to the line.  The objective of this project is to implement several snapping techniques for two-dimensional drawing systems and then to perform an evaluation with a small user study.
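
The contrast between the two approaches can be sketched in a few lines of Python (a deliberately simplified 1D rendering, not part of the project description).  Classic snapping warps the coordinate onto a nearby target, which makes positions just next to the target unreachable; the published Snap-and-Go technique instead inserts extra motor space at the target, which the reduced-gain band below only approximates.

<code python>
def snap(value, targets, radius=5.0):
    """Classic snapping: warp the coordinate onto the nearest snap target if it
    lies within `radius`; otherwise leave it untouched.  Positions closer to a
    target than `radius` (but not exactly on it) become unreachable."""
    nearest = min(targets, key=lambda t: abs(t - value), default=None)
    if nearest is not None and abs(nearest - value) <= radius:
        return nearest
    return value

def snap_and_go_step(position, delta, targets, band=5.0, gain_inside=0.25):
    """Snap-and-Go-style behaviour: instead of warping the cursor, reduce its
    speed while it moves inside a small band around a target, so the target is
    easy to acquire but every nearby position remains reachable."""
    near_target = any(abs(position - t) <= band for t in targets)
    return position + delta * (gain_inside if near_target else 1.0)
</code>
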

====== Simulation of a 6DOF virtual reality tracker ======

**Supervisor**: Wolfgang Stuerzlinger

**Required Background**: General CSE4080 prerequisites

**Recommended Background**: CSE3431 or equivalent

__Description__

Previous work by the supervisor resulted in a novel and highly accurate Virtual Reality tracking system that matches or exceeds the specifications of all competing systems.  However, this system works only in 5- or 6-sided immersive display environments.

This project is the first step towards an adaptation of the technology for more general environments.  In particular, we target normal rooms and immersive displays with fewer than 5 screens.  The technical work involves adapting the simulation software for the previous device to simulate a new design, and iteratively optimizing that design based on the results obtained.


====== Localizing nodes and tracking targets in wireless ad hoc networks securely ======

**Supervisor**: Suprakash Datta

**Required Background**: CSE4480 prerequisites

__Description__

A key infrastructural problem in wireless networks is localization (or the determination of the geographical locations) of nodes.  A related problem is the tracking of mobile targets as they move through the radio ranges of the wireless nodes.

If security is not a concern, then any of numerous existing algorithms can be implemented to obtain reasonably accurate location estimates of nodes or targets.  These algorithms typically involve nodes sharing locations and assume that there are no malicious nodes and no privacy issues in sharing locations.  However, localization or target tracking in the presence of malicious nodes, or of nodes that do not wish to disclose their locations, is much more difficult.

This project will look at current research on localization algorithms.  The student will read papers to learn about existing work and then implement a few algorithms to compare their performance.  Then, with assistance from the supervisor, the student will attempt to propose improvements and/or combinations of ideas from the papers in a Java/C/C++/MATLAB simulator.
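
As a flavour of the simplest (non-secure) setting, the Python/NumPy sketch below estimates a node's position from noisy range measurements to anchor nodes of known position, using the standard linearized least-squares multilateration step.  The anchor layout and noise level are made up for illustration; a secure variant would additionally have to cope with lying or silent nodes.

<code python>
import numpy as np

def multilaterate(anchors, ranges):
    """Least-squares position estimate from anchors (N x 2 array of known
    positions) and measured distances to them (length-N array).  Subtracting
    the first range equation from the others linearizes the problem."""
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
    return estimate

# Toy example: four anchors, one unknown node, ranges corrupted by noise.
rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_position = np.array([37.0, 62.0])
ranges = np.linalg.norm(anchors - true_position, axis=1) + rng.normal(0.0, 1.0, 4)
print("estimate:", multilaterate(anchors, ranges), " truth:", true_position)
</code>
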
  
Expected learning outcomes: Apart from familiarity with the current literature, the project will provide the student with an introduction to scientific research and to the analysis of experimental data.

Skills required: Proficiency with one of Java, C, C++ or MATLAB; an interest in developing algorithms for distributed systems; an interest in experimental approaches to problems.
  
References:

1. Matthew Roughan and Jon Arnold, "Multiple target localisation in sensor networks with location privacy", Proceedings of the 4th European Conference on Security and Privacy in Ad-hoc and Sensor Networks (ESAS '07), Springer-Verlag, 2007.

2. Neelanjana Dutta, Abhinav Saxena and Sriram Chellappan, "Defending Wireless Sensor Networks against Adversarial Localization", Proceedings of the 2010 Eleventh International Conference on Mobile Data Management (MDM '10).
  