In the above figure we can see the different configurations of the modules. For example, the ICCE (motion control) module and the Veesion (image processing) module could be used to control a robot. On the other hand, with that configuration,

The graph below shows the current trend in the robotics industry. It shows why we have chosen the right field at the right time: sales of service robots are rising rapidly and are projected to grow every year, with revenues in the billions. \\

{{http://

==== VRobotics : Design Complexity ====
What sets our point cloud block apart from the rest is that it is ready to go right out of the box. It uses a serial interface to transmit 3D maps. The brain and all internal processes are hidden out of sight and out of mind…so you can focus on your project. \\
We are happy to announce that we have achieved our goals. The point cloud system is able to obtain a point cloud, process it, and send it via serial.
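As a rough sketch of how such a serial hand-off could work (the frame layout and the `pack_point_cloud`/`unpack_point_cloud` names are our illustration here, not the actual VeeCloud protocol), the cloud can be framed as a point count followed by packed 32-bit float triples:

```python
import struct

def pack_point_cloud(points):
    """Pack (x, y, z) float triples into a framed byte payload.

    Hypothetical frame layout: a 4-byte little-endian point count,
    then 12 bytes (three float32 values) per point.
    """
    payload = struct.pack("<I", len(points))
    for x, y, z in points:
        payload += struct.pack("<fff", x, y, z)
    return payload

def unpack_point_cloud(payload):
    """Inverse of pack_point_cloud: recover the list of (x, y, z) tuples."""
    (count,) = struct.unpack_from("<I", payload, 0)
    return [struct.unpack_from("<fff", payload, 4 + 12 * i)
            for i in range(count)]
```

On the real hardware the resulting bytes would simply be written to the serial port (e.g. with pyserial's `Serial.write`), and the receiving module would run the inverse routine.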
+ | |||
+ | A flow chart of our VeeCloud system is shown below: | ||
+ | |||
+ | {{http:// | ||
=== Inverse Kinematics and Movement ===
This specific subsystem deals with planning the motion of the arm, not only so that the arm reaches its desired destination but also so that it does not find itself in compromising situations where its movement would be hindered.

{{http://
\\
Any robotic platform that requires any form of automated response to changes in its environment must have an inverse kinematics system. Our arm poses a twofold challenge: because of its number of links, there are several configurations to take into account, and because it is a modular robot, a separate kinematic model must be considered for each number of modules connected.
Therefore there will be 2-link and 3-link configurations; using the available encoders we can determine which configuration the arm is in, so the arm knows which set of inverse kinematic solutions to use. Collision detection is another aspect currently being worked on, and progress has been made in handling motion within the arm's own reach, since there is a region close to the origin that the arm cannot reach. One example being tackled is when the arm gets too close to the origin of the coordinate frame. When a path would have to pass through this inaccessible region to reach an accessible point, the waypoints whose inverse kinematic solutions are imaginary are instead made to go through the same motion on the surface of an artificial sphere modelled to get around this problem. \\
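For the 2-link configuration the inverse kinematics has a well-known closed-form solution. The sketch below illustrates the idea for a planar 2-link arm (the link lengths used are placeholders, not our actual module dimensions):

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar 2-link arm.

    Returns (theta1, theta2) in radians for the elbow-down solution,
    or None when the target lies outside the reachable workspace
    (the case where the solution would be imaginary).
    """
    d2 = x * x + y * y
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_t2) > 1.0:          # unreachable: acos argument out of range
        return None
    t2 = math.acos(cos_t2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2
```

Plugging the returned angles back into the forward kinematics reproduces the requested target, which is a convenient self-check for each configuration's solution set.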
If we switch the whole frame into a spherical coordinate system, the end effector goes through the same azimuth and zenith angles, while the radius changes to match that of the artificially created sphere. \\
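That detour can be sketched directly in code: a waypoint whose radius falls inside the unreachable region keeps its direction (azimuth and zenith) and only has its radius pushed out to the artificial sphere (the inner-radius value is a placeholder, not the arm's measured limit):

```python
import math

def project_to_sphere(point, r_min):
    """Move a waypoint that lies inside the unreachable inner region
    onto the surface of an artificial sphere of radius r_min,
    preserving its azimuth and zenith angles."""
    x, y, z = point
    r = math.sqrt(x * x + y * y + z * z)
    if r >= r_min or r == 0.0:
        return point                  # already reachable (or at the origin)
    scale = r_min / r                 # same direction, radius pushed to r_min
    return (x * scale, y * scale, z * scale)
```

Scaling the Cartesian vector by `r_min / r` is equivalent to replacing the radius in spherical coordinates while leaving both angles untouched.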
As you can see, the last figure shows the solution to the inverse kinematics.
=== Arm Design and Hardware Integration ===
Our robotic arm is different from other arms in that it has more flexibility due to its greater number of links. The total length of the arm is 1.3 meters! Most robotic arms are half that length and contain only two links (3 joints). Our arm offers 6 DOF and uses AX-18 servos for better performance.

{{http://
{{http://

The figure below shows communication between two blocks (the vision module and the voice recognition module) and inverse kinematics calculation.
{{http://
In the figure we can see the range of the arm (the positions it can reach) for different numbers of links. As we increase the number of links (and hence joints) of the arm, it can reach more positions, meaning it has more flexibility. A superimposed image of the three combinations is also shown in the fourth panel of the figure.
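The growth in reach with the number of links can be illustrated by sampling random joint angles and running the forward kinematics (a planar simplification with placeholder link lengths, not our actual 3D arm):

```python
import math
import random

def reachable_points(link_lengths, samples=2000, seed=0):
    """Monte-Carlo sample of end-effector positions for a planar arm:
    draw random relative joint angles and accumulate the forward
    kinematics link by link."""
    rng = random.Random(seed)
    pts = []
    for _ in range(samples):
        x = y = heading = 0.0
        for length in link_lengths:
            heading += rng.uniform(-math.pi, math.pi)  # relative joint angle
            x += length * math.cos(heading)
            y += length * math.sin(heading)
        pts.append((x, y))
    return pts

def coverage_cells(pts, cell=0.1):
    """Count distinct coarse grid cells touched — a crude proxy for
    how much of the plane the sampled configurations cover."""
    return len({(round(x / cell), round(y / cell)) for x, y in pts})

# Same 1.3 m total length split over two vs. three links.
two_link = reachable_points([0.65, 0.65])
three_link = reachable_points([0.45, 0.45, 0.4])
```

Plotting or grid-counting the two samples gives exactly the kind of superimposed reachability picture shown in the figure.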
+ | |||
+ | {{http:// | ||
+ | |||
From the figure we can see that without the pathfinding algorithm the robot follows a straight path, which may result in a collision with obstacles lying on that path. With the pathfinding algorithm we are able to detect those obstacles and thus avoid the collision.
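A minimal sketch of the collision test behind this idea (2D, with a made-up circular obstacle; our actual planner works against the 3D point cloud):

```python
def segment_hits_obstacle(p, q, center, radius, steps=100):
    """Sample points along the straight segment p -> q and report
    whether any falls inside a circular obstacle."""
    for i in range(steps + 1):
        t = i / steps
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        if (x - center[0]) ** 2 + (y - center[1]) ** 2 < radius ** 2:
            return True
    return False

# Illustrative scene: an obstacle sits on the direct start->goal line,
# and the planner routes through a waypoint above it instead.
start, goal = (0.0, 0.0), (1.0, 0.0)
obstacle_center, obstacle_radius = (0.5, 0.0), 0.1
waypoint = (0.5, 0.3)
```

The direct segment fails the test while both legs of the detour pass it, which is precisely the difference between the two paths in the figure.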
The figures below show the design of our final robotic arm. \\
We see this as a possible issue, and the way we are going to handle it is by using opaque objects in the environment instead of transparent ones.
The figure below shows the point cloud that we were able to construct from the Hokuyo laser sensor. The image shows the 3D map of the scanned room and its objects. This point cloud is used in conjunction with the computer vision to get the coordinates of the target in order to interact with it. \\

{{http://
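A sketch of how a single 2D laser scan line could be turned into 3D points when the scanner is tilted to sweep out the room (the mounting geometry assumed here is illustrative, not our actual rig):

```python
import math

def scan_to_points(ranges, start_angle, angle_step, tilt):
    """Convert one 2D laser scan into 3D points, assuming the scan
    plane is rotated about the x-axis by `tilt` radians.

    ranges: list of range readings (meters); non-positive = no return.
    start_angle, angle_step: bearing of the first beam and the
    angular increment between beams (radians).
    """
    points = []
    for i, r in enumerate(ranges):
        if r <= 0:                      # skip invalid / no-return readings
            continue
        bearing = start_angle + i * angle_step
        # Point in the scan plane, then rotated about x by the tilt.
        x = r * math.cos(bearing)
        y = r * math.sin(bearing) * math.cos(tilt)
        z = r * math.sin(bearing) * math.sin(tilt)
        points.append((x, y, z))
    return points
```

Accumulating these points over many tilt angles yields the kind of room-scale 3D map shown above.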
== Old Simulations & results ==
{{:

===== Potential Clients =====

{{http://
===== Video Demo =====
projects/g3/start.txt · Last modified: 2014/05/19 03:08 by cse93178