  
  
====== Project Video ======

https://www.youtube.com/watch?v=PLyIsMSes5U&feature=youtu.be
  
====== Team Members ======
We created a simulation environment with two cameras: Camera 1 and Camera 2. The cameras were calibrated to create an overlapping field of view, ensuring that the object appears in both cameras. To find the maximum number of point correspondences for reconstructing the global scene, we inserted patterned objects into the monitored area. As soon as the target is detected by a camera, it is continuously analysed using a blob-analysis technique coupled with background subtraction, in which the object appears as a white region and the rest of the frame is black. The object's trajectory is calculated simultaneously in each view and eventually fused into the global reference frame.
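The per-camera detection step described above can be sketched as follows. This is a minimal illustration rather than the project's actual code: frames are modelled as small grayscale grids, the difference threshold is an arbitrary assumption, and blobs are found with a simple 4-connected flood fill.

```python
def subtract_background(frame, background, threshold=30):
    """Background subtraction: binary mask with 1 (white) where the
    pixel differs from the background model, 0 (black) elsewhere."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def find_blobs(mask):
    """Blob analysis: label 4-connected white regions in the mask and
    return the centroid (row, col) of each blob."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill one connected component.
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                blobs.append((cy, cx))
    return blobs
```

Feeding each frame's blob centroids into a tracker gives the per-camera trajectory that is later fused into the global frame.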
  
====== Offline Simulation ======
  
After performing the controlled experiment, in which most aspects were known to us (a small coverage area, the speed of the object, maximum overlap between camera nodes, and minimal shadows), we turned to a more realistic environment. The conditions were drastically different: the coverage area was widened to account for the height of a human, which in turn constrained the installation height of the cameras. Unlike a can, we now had no control over the velocity of the target in question. The following figures attest to our ability to overcome many changes in a scene while maintaining our objective.
  
====== Achieved Results ======

Our results and findings closely matched our expectations:

  * Successfully identified and tracked a moving target in a single camera view with limited influence from external noise such as shadows.
  * Kept track of the chosen target even when two or more targets were close to one another.
  * Extended the algorithm to handle multiple camera views without losing important information.
  * Created a global reference frame that merges the two camera views, giving the end user added flexibility.
  * Made transitions of targets between views smooth and lag-free.
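Keeping the chosen target attached to the correct detection across frames can be sketched with nearest-neighbour data association; the function names and the gating threshold below are illustrative assumptions, not taken from the project:

```python
import math

def associate(tracks, detections, gate=50.0):
    """Greedily assign each track to its nearest unclaimed detection.

    tracks:     {track_id: (x, y)} last known positions
    detections: [(x, y), ...] blob centroids in the current frame
    Returns {track_id: detection_index or None if nothing is in the gate}.
    """
    claimed = set()
    assignment = {}
    for tid, (tx, ty) in tracks.items():
        best, best_d = None, gate
        for i, (dx, dy) in enumerate(detections):
            if i in claimed:
                continue
            d = math.hypot(dx - tx, dy - ty)
            if d < best_d:
                best, best_d = i, d
        assignment[tid] = best
        if best is not None:
            claimed.add(best)
    return assignment
```

A greedy scheme like this is reliable while targets stay well separated; when two targets come within roughly the same distance of a track, assignments can swap, which mirrors the accuracy drop observed for targets in close proximity.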

We also found that the greater the spatial distance between two targets, the higher our accuracy in detecting and tracking a given target. When two targets are in close proximity to each other, the accuracy with which we distinguish them from one another decreases. The following graph illustrates these observations:

{{:projects:g1:graph_results.png|}}
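Merging the two camera views into the global reference frame can be sketched as below, assuming each camera's mapping into the global frame is a known 2D affine transform obtained from calibration; the averaging rule and all names here are illustrative, not the project's implementation:

```python
def to_global(point, transform):
    """Map an (x, y) camera-frame point into the global frame using a
    2D affine transform given as ((a, b, tx), (c, d, ty))."""
    (a, b, tx), (c, d, ty) = transform
    x, y = point
    return (a * x + b * y + tx, c * x + d * y + ty)

def fuse_tracks(track1, t1, track2, t2):
    """Fuse two per-camera trajectories (lists of per-frame positions)
    by averaging the cameras' estimates in the global frame."""
    fused = []
    for p1, p2 in zip(track1, track2):
        g1, g2 = to_global(p1, t1), to_global(p2, t2)
        fused.append(((g1[0] + g2[0]) / 2, (g1[1] + g2[1]) / 2))
    return fused
```

Averaging is the simplest fusion rule; a weighted average (e.g. by each camera's detection confidence) would be a natural refinement under the same assumptions.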
  
====== Controlled Experiment and Results ======
====== Industrial Partners ======
  
There are no industrial partners for our project; however, the following companies may have similar interests:
  
  - AXIS Communications (http://www.axis.com)
  - Cobra Integrated Systems (http://www.cobraintegratedsystems.com)
  - Genetec (http://www.genetec.com)
  
====== Funding ======
{{:projects:g1:index.jpg|}}
  
projects/g1/start.1398341038.txt.gz · Last modified: 2014/04/24 12:03 by cse83125