  
  
{{http://i.imgur.com/kuJ7QLg.png?700x500|Flow chart voice recognition}} \\
  
The above flow chart shows how we get the speech data (from the mic) and what we do with it. We capture speech from the mic, compare it against our acoustic, language, and sentence models, and then send the result to our common data bus, which is received by all the other modules. Depending on the source and destination, the appropriate module consumes this data. As you can see, this is the entry point into our system. The images below show how voice recognition works in general. \\
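To make the source/destination routing on the common data bus concrete, here is a minimal sketch of that idea; the module names and message format are illustrative assumptions, not our actual implementation:

<code python>
import queue
from dataclasses import dataclass

@dataclass
class BusMessage:
    source: str       # e.g. "voice" (illustrative module name)
    destination: str  # a module name, or "*" to broadcast to everyone
    payload: object   # e.g. the decoded command text

class CommonDataBus:
    """Each module registers its own queue; a published message is
    delivered only to the module(s) named in `destination`."""
    def __init__(self):
        self._queues = {}

    def register(self, name):
        q = queue.Queue()
        self._queues[name] = q
        return q

    def publish(self, msg):
        if msg.destination == "*":
            for name, q in self._queues.items():
                if name != msg.source:  # don't echo back to the sender
                    q.put(msg)
        elif msg.destination in self._queues:
            self._queues[msg.destination].put(msg)

bus = CommonDataBus()
arm_inbox = bus.register("arm")
bus.register("vision")
bus.publish(BusMessage("voice", "arm", "pick up the red cube"))
print(arm_inbox.get().payload)  # -> pick up the red cube
</code>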
Shape tracking with only one camera produces problematic errors with 3D objects. For instance, a cube viewed from the top looks like a square, but viewed from one of its corners it looks like a hexagon. The good news is that we only aim to process simple shapes, and most of the time the robotic arm will view the object from the top. The bad news is that the design decision to use only one camera prevents us from using computer vision to estimate depth. So why did we use only one camera if depth is needed to reach an object? The answer is that the computer vision only gives the (x,y) coordinates, and the depth is given by a laser sensor. Therefore, no image processing is done when the robot wants to find how far away the object is. The image below shows how our vision module works.\\
  
{{http://i.imgur.com/M1Kcpd0.png?700x500|How our Vision module works}} \\
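As an illustration of how the camera's (x,y) pixel and the laser's range can be combined, here is a minimal sketch that back-projects a pixel to a 3D point under a standard pinhole camera model; the intrinsics (fx, fy, cx, cy) and example values are assumptions, not our calibration:

<code python>
def pixel_to_camera_frame(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) to a 3D point in the camera frame.
    depth comes from the laser sensor; fx, fy are the focal lengths in
    pixels and (cx, cy) is the principal point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# e.g. with made-up intrinsics for a 640x480 camera:
print(pixel_to_camera_frame(400, 300, 0.8, fx=525.0, fy=525.0, cx=320.0, cy=240.0))
</code>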
  
A flow chart below shows how CV (computer vision) algorithms work in general. \\
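As a rough sketch of this kind of pipeline (not our exact module), a colour-threshold-and-contour approach in OpenCV that yields the (x,y) target pixel might look like this; the HSV bounds are illustrative:

<code python>
import cv2
import numpy as np

def find_target(frame_bgr, lower_hsv, upper_hsv):
    """Return the (x, y) pixel centroid of the largest blob inside the
    given HSV colour range, or None if nothing matches. Depth is
    deliberately not computed here; it comes from the laser sensor."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))

# e.g. a red-ish range (illustrative bounds):
# xy = find_target(frame, np.array([0, 120, 70]), np.array([10, 255, 255]))
</code>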
As you can see, the last figure shows the solution to the inverse kinematics.
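For reference, the closed-form solution for the simplest (two-link planar) case shown in the figures is sketched below; our actual 6-DOF arm needs a more general solver, so this is only the textbook building block, with assumed link lengths:

<code python>
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a 2-link planar arm
    (elbow-down branch). Returns (theta1, theta2) in radians,
    or None if (x, y) is outside the workspace."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target is unreachable with these link lengths
    theta2 = math.acos(c2)
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

print(two_link_ik(0.5, 0.5, 0.65, 0.65))  # link lengths in metres
</code>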
  
=== Arm Design and Hardware Integration (Motion Control) ===
  
Our robotic arm is different from other arms in that it offers more flexibility thanks to a greater number of links. The total length of the arm is 1.3 meters! Most robotic arms are half that length and contain only two links (three joints). Our arm offers 6 DOF and uses AX-18 servos for better performance. The flow diagram of the ICCE system (motion control) is shown below: \\

{{http://i.imgur.com/hcRPxt0.png?700x500|ICCE}}
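On the servo side, a minimal sketch of commanding an AX-18 goal position with the ROBOTIS dynamixel_sdk is shown below. The serial port, baud rate, and servo ID are assumptions; AX-series servos speak Dynamixel protocol 1.0, where register 30 holds the 2-byte goal position:

<code python>
from dynamixel_sdk import PortHandler, PacketHandler, COMM_SUCCESS

ADDR_GOAL_POSITION = 30   # AX-series control-table address (protocol 1.0)
DXL_ID = 1                # assumed servo ID
PORT = '/dev/ttyUSB0'     # assumed serial port

port = PortHandler(PORT)
packet = PacketHandler(1.0)   # AX-18 uses protocol version 1.0

if port.openPort() and port.setBaudRate(1000000):
    # 0..1023 maps to roughly 0..300 degrees on AX-series servos
    goal = 512  # centre position
    result, error = packet.write2ByteTxRx(port, DXL_ID, ADDR_GOAL_POSITION, goal)
    if result != COMM_SUCCESS:
        print(packet.getTxRxResult(result))
    port.closePort()
</code>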
  
  
  
{{http://i.imgur.com/1pbuSzL.jpg?700x500|Voice recognition, improved}} \\
  
The figure below shows communication between two blocks (the vision module and the voice recognition module) and the inverse kinematics calculation. \\
{{http://i.imgur.com/E83MV9D.jpg?700x500|Solved kinematics model}} \\
  
In the figure we can see the range of the arm (the positions it can reach) for different numbers of links. As we increase the number of links (and hence joints), the arm can reach more positions, meaning it has more flexibility. A superimposed image of the three combinations is also shown in the fourth panel of the figure above. The figure below shows the two trajectories for the robotic arm with and without the pathfinding algorithm. The pathfinding algorithm allows the robot to avoid obstructions in its way by following a different route.

{{http://i.imgur.com/olKYBh8.png?700x500|Pathfinding route}} \\

From the figure we can see that without the pathfinding algorithm the arm follows a straight path, which might result in a collision with obstacles that lie on that path. With the pathfinding algorithm we are able to detect those obstacles and thus avoid collisions.
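This page does not name the exact algorithm, so as an illustrative sketch, here is a grid-based A* search of the kind commonly used for this sort of obstacle-avoiding routing; the occupancy grid and coordinates are assumptions:

<code python>
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid.
    grid[r][c] == 1 marks an obstacle; returns a list of cells from
    start to goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    heur = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(heur(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float('inf'))):
                best_g[nxt] = g + 1
                heapq.heappush(frontier,
                               (g + 1 + heur(nxt), g + 1, nxt, path + [nxt]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall blocking the straight path
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the wall
</code>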
  
The figures below show the design of our final robotic arm. \\
We see this as a possible issue, and the way we are going to handle it is by using opaque objects in the environment instead of transparent ones.
  
The figure below shows the point cloud that we were able to construct from the Hokuyo laser sensor. The image shows the 3D map of the scanned room and the objects in it. This point cloud is used in conjunction with the computer vision to get the coordinates of the target in order to interact with it. \\

{{http://i.imgur.com/GLTflAR.png?700x500|Point cloud}} \\
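As a sketch of how such a cloud can be built, each 2D laser scan (a list of ranges over a fan of angles) is converted to Cartesian points and the scans are stacked across tilt angles; the rotation convention and parameter values below are simplifying assumptions:

<code python>
import math

def scan_to_points(ranges, angle_min, angle_inc, tilt=0.0):
    """Convert one laser scan to 3D points in the sensor frame.
    ranges: distances in metres, one per beam
    angle_min, angle_inc: start angle and per-beam increment (radians)
    tilt: pitch of the scan plane; sweeping tilt stacks 2D scans into 3D."""
    points = []
    for i, r in enumerate(ranges):
        if r <= 0.0:
            continue  # skip invalid / out-of-range readings
        theta = angle_min + i * angle_inc
        x0, y0 = r * math.cos(theta), r * math.sin(theta)
        # rotate the scan plane about the y-axis by the tilt angle
        points.append((x0 * math.cos(tilt), y0, x0 * math.sin(tilt)))
    return points

cloud = []
for tilt in (-0.2, 0.0, 0.2):  # three illustrative tilt steps
    cloud += scan_to_points([1.0, 1.2, 0.0, 1.1], -0.3, 0.2, tilt)
print(len(cloud), "points")
</code>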
  
== Old Simulations & results ==
===== Funding =====
  
  * Lassonde School of Engineering ($1000)
    
{{:projects:g1:index.jpg|}}

  * York University Robotic Society (YURS) (approximately $2500)
    - Resources for building the arm
    - Machinery for fabrication of the arm
    - Sensors and other materials
  
    