{{http://
The above flow chart shows how we get the speech data (from the mic) and what we do with it. We capture the speech from the mic, compare it against our acoustic, language, and sentence models, and then send the result back to our common data bus, which is received by all the other modules. Depending on the source and destination, each module then decides whether the message is meant for it.
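As a rough illustration of the broadcast idea above, the sketch below models a common data bus in Python. All names here (`CommonDataBus`, the module handlers, the message fields) are hypothetical, not our actual code: every registered module sees each published message and acts only when it is the destination.

```python
# Hypothetical sketch of a broadcast-style common data bus: messages carry a
# source and a destination, every module receives them, and each module acts
# only on messages addressed to it (or to "all").

class CommonDataBus:
    def __init__(self):
        self.modules = {}              # module name -> handler callable

    def register(self, name, handler):
        self.modules[name] = handler

    def publish(self, source, destination, payload):
        # Broadcast: every registered module sees the message and decides,
        # from the destination field, whether to act on it.
        for name, handler in self.modules.items():
            if destination in (name, "all"):
                handler(source, payload)

received = []
bus = CommonDataBus()
bus.register("arm", lambda src, data: received.append(("arm", src, data)))
bus.register("vision", lambda src, data: received.append(("vision", src, data)))

# The speech module recognises a command and addresses it to the arm module.
bus.publish("speech", "arm", "pick up the cube")
print(received)   # only the arm module acted on the message
```

The design choice this illustrates is that the bus itself stays dumb (pure broadcast); all routing intelligence lives in the source/destination fields of the message.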
Shape tracking with only one camera produces problematic errors with 3D objects. For instance, a cube viewed from the top looks like a square, but viewed from one of its corners it looks like a hexagon. The good news is that we only aim to process simple shapes and, most of the time, the robotic arm will view the object from the top. The bad news is that the design decision to use only one camera prevents us from recovering depth through computer vision. So why did we use only one camera if depth is needed to reach an object? The answer is that the computer vision only gives the (x,y) coordinates, and the depth is given by a laser sensor. Therefore, no image processing is done when the robot wants to find how far away the object is. The image below shows how our vision module works.\\
{{http://
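Since the vision module supplies (x,y) pixel coordinates and the laser supplies depth, the two can be fused with a standard pinhole back-projection. The sketch below is a generic illustration, not our actual code; the intrinsic parameters (fx, fy, cx, cy) are made-up calibration values.

```python
# Sketch: combine a pixel position from the camera with a depth reading from
# the laser to get a 3D point in the camera frame, using the pinhole model.

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) at the laser-measured depth (metres).

    fx, fy are the focal lengths in pixels, (cx, cy) the principal point;
    these are illustrative values, not a real calibration.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the image centre maps to a point straight ahead of the camera.
print(pixel_to_3d(320, 240, 0.5, 525.0, 525.0, 320.0, 240.0))  # (0.0, 0.0, 0.5)
```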
A flow chart below shows how the CV (color vision) algorithms work in general. \\
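As a minimal sketch of such a colour-based pass (threshold the pixels matching the target colour, then take the centroid of the mask as the object's (x,y) position), the code below runs on a tiny synthetic single-channel image. The threshold band and the image are illustrative only, not values from our module.

```python
# Sketch of a colour/intensity-threshold detection step: mark pixels inside
# an assumed target band, then report the centroid of the marked region.

TARGET_MIN, TARGET_MAX = 200, 255   # assumed band for the target colour

def find_centroid(image):
    """Return the (x, y) centroid of in-band pixels, or None if none match."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if TARGET_MIN <= value <= TARGET_MAX:    # threshold step
                xs.append(x)
                ys.append(y)
    if not xs:
        return None                                  # no object detected
    return (sum(xs) / len(xs), sum(ys) / len(ys))    # centroid step

image = [
    [0,   0,   0,   0],
    [0, 250, 250,   0],
    [0, 250, 250,   0],
    [0,   0,   0,   0],
]
print(find_centroid(image))   # (1.5, 1.5): centre of the bright 2x2 patch
```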
As you can see, the last figure shows the solution to the inverse kinematics.
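For the simplest instance of such a calculation, the closed-form inverse kinematics of a two-link planar arm can be written down with the law of cosines. This is only a generic sketch (our arm has more links, so its full solution is more involved); the link lengths below are made up, chosen so the two links sum to the arm's 1.3 m length.

```python
import math

# Closed-form IK for a two-link planar arm (elbow-down branch):
#   cos(t2) = (x^2 + y^2 - l1^2 - l2^2) / (2*l1*l2)
#   t1 = atan2(y, x) - atan2(l2*sin(t2), l1 + l2*cos(t2))

def two_link_ik(x, y, l1, l2):
    """Return joint angles (t1, t2) in radians that reach (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(c2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2

def forward(t1, t2, l1, l2):
    """Forward kinematics, used here only to check the IK answer."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

t1, t2 = two_link_ik(0.8, 0.4, 0.65, 0.65)   # illustrative 0.65 m links
x, y = forward(t1, t2, 0.65, 0.65)
print(round(x, 6), round(y, 6))   # recovers the target: 0.8 0.4
```

Running the forward kinematics on the IK output is a cheap self-check that the angle derivation is consistent.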
=== Arm Design and Hardware Integration ===
Our robotic arm is different from other arms in that it has more flexibility due to a greater number of links. The total length of the arm is 1.3 meters! Most robotic arms are half that length and contain only two links (3 joints). Our arm offers 6 DOF and uses AX-18 servos for better performance.
+ | |||
{{http://
{{http://
+ | |||
The figure below shows the communication between two blocks (the vision module and the voice recognition module) and the inverse kinematics calculation.
{{http://
In the figure we can see the range of the arm (the positions it can reach) for different numbers of links. As we increase the number of links (and hence joints) of the arm, the arm can reach more positions, meaning it has more flexibility. A superimposed image of the three combinations is also shown in the fourth panel of the figure above.
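A figure like this can be reproduced by brute-force sampling: sweep every joint of a planar arm over its range and collect the distinct end-effector positions. The sketch below is illustrative only (the link lengths and the angular resolution are arbitrary), but it shows the same trend: more links yield more distinct reachable positions.

```python
import math

# Sample every joint of a planar arm over [0, 2*pi) at a coarse resolution
# and count the distinct end-effector positions (rounded to millimetres).

def reachable_points(link_lengths, steps=12):
    """Return the set of distinct (x, y) end points over sampled joint angles."""
    points = set()

    def recurse(index, x, y, heading):
        if index == len(link_lengths):
            points.add((round(x, 3), round(y, 3)))
            return
        for k in range(steps):
            a = heading + 2 * math.pi * k / steps   # relative joint angle
            recurse(index + 1,
                    x + link_lengths[index] * math.cos(a),
                    y + link_lengths[index] * math.sin(a),
                    a)

    recurse(0, 0.0, 0.0, 0.0)
    return points

# More links -> more distinct reachable positions (more flexibility).
two = len(reachable_points([0.65, 0.65]))
three = len(reachable_points([0.45, 0.45, 0.4]))
print(two, three)
```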
+ | |||
{{http://
+ | |||
From the figure we can see that without the pathfinding algorithm the robot follows a straight path, which might result in a collision with obstacles that lie on that path. With the pathfinding algorithm we are able to detect those obstacles and thus avoid collisions.
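The page does not state which pathfinding algorithm is used, so the sketch below illustrates the general idea with a breadth-first search on a small occupancy grid (an assumption, not our implementation): the straight path from start to goal crosses the obstacle cells, and the search routes around them.

```python
from collections import deque

# Shortest 4-connected path on a grid where 1 marks an obstacle cell.

def bfs_path(grid, start, goal):
    """Return the shortest obstacle-free path as a list of cells, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}              # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []                   # walk parents back to the start
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],   # obstacle blocking the straight path
    [0, 0, 0, 0],
]
path = bfs_path(grid, (1, 0), (1, 3))
print(path)   # detours through row 0 or row 2 instead of crossing the 1s
```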
The figures below show the design of our final robotic arm. \\
We see this as a possible issue, and the way we are going to handle it is by using opaque objects in the environment instead of transparent ones.
The figure below shows the point cloud that we were able to construct from the Hokuyo laser sensor. The image shows the 3D map of the scanned room and the objects in it. This point cloud is used in conjunction with the computer vision to get the coordinates of the target in order to interact with it. \\
+ | |||
{{http://
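The conversion from laser readings to points follows directly from the scan geometry: each range reading at a known beam angle becomes a Cartesian point, and tilting or rotating the scanner sweeps such scan planes into a 3D cloud. The sketch below uses made-up ranges and angles, not actual Hokuyo data.

```python
import math

# Convert one planar laser scan (a fan of range readings at evenly spaced
# beam angles) into Cartesian (x, y) points in the sensor frame.

def scan_to_points(ranges, angle_min, angle_step):
    """Return a list of (x, y) points, one per range reading."""
    points = []
    for i, r in enumerate(ranges):
        a = angle_min + i * angle_step          # beam angle of reading i
        points.append((r * math.cos(a), r * math.sin(a)))
    return points

# Three beams at -45, 0 and +45 degrees, all seeing a wall 2 m away.
pts = scan_to_points([2.0, 2.0, 2.0], -math.pi / 4, math.pi / 4)
for x, y in pts:
    print(round(x, 3), round(y, 3))
```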
== Old Simulations & results ==
===== Funding =====
{{:
+ | |||
  * York University Robotic Society (YURS) (approximately $2500)
    * Resources for building the arm
    * Machinery for fabrication of the arm
    * Sensors and other materials
+ | |||
+ | |||
+ | |||
projects/g3/start.txt · Last modified: 2014/05/19 03:08 by cse93178