projects:g3:start
{{http://
The figure below shows the communication between two blocks (the vision module and the voice recognition module) and the inverse kinematics calculation.
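The flow described above can be sketched as a minimal blackboard-style exchange feeding a closed-form two-link inverse kinematics step. All names, the two-link simplification, and the target coordinates here are illustrative assumptions, not the project's actual interfaces.

```python
import math

class Blackboard:
    """Tiny stand-in for inter-module communication: each module
    posts its output under a key; other modules read what they need."""
    def __init__(self):
        self._data = {}

    def publish(self, key, value):
        self._data[key] = value

    def read(self, key):
        return self._data.get(key)

def ik_two_link(x, y, l1=1.0, l2=1.0):
    """Closed-form inverse kinematics for a planar 2-link arm:
    joint angles (t1, t2) that place the end effector at (x, y)."""
    cos_t2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    t2 = math.acos(max(-1.0, min(1.0, cos_t2)))  # clamp against rounding
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

bb = Blackboard()
bb.publish("vision/target", (1.2, 0.5))  # vision module: target position
bb.publish("voice/command", "grab")      # voice module: recognized command
if bb.read("voice/command") == "grab":
    t1, t2 = ik_two_link(*bb.read("vision/target"))
```

Substituting the resulting angles back into the forward kinematics reproduces the target position, which is a quick way to sanity-check the solver.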
{{http://
In the figure we can see the range of the arm (the positions it can reach) for different numbers of links. As we increase the number of links (and hence joints) of the arm, it can reach more positions, meaning it has more flexibility. A superimposed image of the three combinations is also shown in the fourth panel of the figure above.
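The effect of adding links can be reproduced with a small forward-kinematics sweep: sample every joint angle over a full revolution and collect the distinct end-effector positions. The link lengths and angular resolution below are arbitrary choices for illustration.

```python
import math

def reachable_points(link_lengths, steps=12):
    """Sample the reachable workspace of a planar arm by sweeping
    every joint angle over a full revolution (forward kinematics)."""
    points = set()

    def sweep(i, x, y, heading):
        if i == len(link_lengths):
            points.add((round(x, 3), round(y, 3)))
            return
        for k in range(steps):
            theta = heading + 2 * math.pi * k / steps
            sweep(i + 1,
                  x + link_lengths[i] * math.cos(theta),
                  y + link_lengths[i] * math.sin(theta),
                  theta)

    sweep(0, 0.0, 0.0, 0.0)
    return points

# More links (hence joints) -> more distinct reachable positions.
two_link = reachable_points([1.0, 1.0])
three_link = reachable_points([1.0, 1.0, 0.5])
```

Comparing the two sets shows the three-link arm reaching many more positions, which matches the trend visible in the figure.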
{{http://
From the figure we can see that without the pathfinding algorithm the robot follows a straight path, which might result in a collision with obstacles that lie on that path. With the pathfinding algorithm we are able to detect those obstacles and thus avoid the collision.
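A minimal illustration of this idea uses breadth-first search on an occupancy grid; the grid, the obstacle layout, and 4-connected movement are assumptions made up for the example, not our actual planner. The straight line from start to goal crosses the obstacle block, while the search routes around it.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on a grid, treating '#' cells as obstacles."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

# An obstacle block sits on the straight line from start to goal.
grid = ["....",
        ".##.",
        ".##.",
        "...."]
path = bfs_path(grid, (0, 0), (3, 3))
```

The returned path skirts the obstacle along free cells instead of cutting straight through it.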
The figures below show the design of our final robotic arm. \\
We see this as a possible issue, and we are going to handle it by using opaque objects in the environment instead of transparent ones.
The figure below shows the point cloud that we were able to construct from the Hokuyo laser sensor. The image shows the 3D map of the scanned room and the objects in it. This point cloud is used in conjunction with the computer vision to obtain the coordinates of the target in order to interact with it. \\
{{http://
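How such a point cloud is assembled can be sketched as a polar-to-Cartesian conversion of individual scans: a planar laser scanner returns ranges at known bearings, and tilting the scan plane between sweeps fills out the third dimension. The 240° field of view, the tilt-scanning setup, and the sample ranges below are illustrative assumptions rather than the actual Hokuyo configuration used.

```python
import math

def scan_to_points(ranges, fov=math.radians(240), tilt=0.0):
    """Convert one planar laser scan (ranges at evenly spaced bearings
    across the field of view) into 3D points, given the tilt of the
    scan plane. Varying the tilt between scans sweeps out a 3D map."""
    n = len(ranges)
    points = []
    for i, r in enumerate(ranges):
        bearing = -fov / 2 + fov * i / (n - 1)
        # Project the range reading from the tilted scan plane into 3D.
        x = r * math.cos(bearing) * math.cos(tilt)
        y = r * math.sin(bearing)
        z = r * math.cos(bearing) * math.sin(tilt)
        points.append((x, y, z))
    return points

# A level scan (tilt 0) stays entirely in the z = 0 plane.
cloud = scan_to_points([1.0, 1.2, 1.1, 1.0])
```

Querying a region of the accumulated cloud is what lets the vision side hand the arm a 3D target coordinate.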
== Old Simulations & results ==
{{:
  - York University Robotic Society (YURS)
    - Resources for building the arm
    - Machinery for fabrication of the arm
    - Sensors and other materials
projects/g3/start.txt · Last modified: 2014/05/19 03:08 by cse93178