  
In the figure above we can see the different configurations of the modules. For example, the ICCE (motion control) module and the Veesion (image processing) module could be used together to control a robot, and the VeeCloud (sensors and mapping) module can then be added to that configuration for more functionality. The design is therefore scalable: two copies of the same module can even be used to control different parts of a robot. This flexibility comes from the modular design of botBlocks, and it is what makes our design unique and more user friendly.

The graph below shows the current trend in the robotics industry: sales of service robots are rising rapidly and are projected to grow every year, with revenues in the billions of dollars. This is why we believe we have chosen the right field at the right time. \\

{{http://i.imgur.com/lXMjuKT.png?700x500|Growth in sales of service robots}}
  
==== VRobotics : Design Complexity ====
What sets our point cloud block apart from the rest is that it is ready to go right out of the box. It uses a serial interface to transmit 3D maps. The brain and all internal processes are hidden out of sight and out of mind, so you can focus on your project. \\
We are happy to announce that we have achieved our goals. The point cloud system is able to obtain a point cloud, process it, and send it via serial. We are very pleased with the results and believe the system holds great potential in the world of robotics. \\

A flow chart of our VeeCloud system is shown below:

{{http://i.imgur.com/fCsXVGJ.png?700x600|VeeCloud}}
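The wiki does not document the byte format used on the serial link, so the sketch below is purely illustrative: a hypothetical frame with a magic header, a point count, packed 32-bit floats, and a one-byte XOR checksum. None of these field names or sizes come from the actual VeeCloud firmware.

```python
import struct

# Hypothetical frame layout (not the real VeeCloud protocol):
#   2-byte magic, 2-byte little-endian point count,
#   then x/y/z as 32-bit floats, then one XOR checksum byte.
MAGIC = b"VC"

def pack_cloud(points):
    """Serialize a list of (x, y, z) tuples into one serial frame."""
    body = struct.pack("<H", len(points))
    for x, y, z in points:
        body += struct.pack("<fff", x, y, z)
    checksum = 0
    for b in MAGIC + body:
        checksum ^= b
    return MAGIC + body + bytes([checksum])

def unpack_cloud(frame):
    """Parse a frame produced by pack_cloud; raise on corruption."""
    if frame[:2] != MAGIC:
        raise ValueError("bad magic")
    checksum = 0
    for b in frame[:-1]:
        checksum ^= b
    if checksum != frame[-1]:
        raise ValueError("checksum mismatch")
    (count,) = struct.unpack_from("<H", frame, 2)
    return [struct.unpack_from("<fff", frame, 4 + 12 * i) for i in range(count)]
```

The checksum lets the receiving block reject frames corrupted on the wire; the sender (e.g. via pyserial) would simply write the returned bytes to the port.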
  
=== Inverse Kinematics and Movement ===
  
This subsystem plans the motion of the arm, not only so that the arm reaches its desired destination but also so that it never ends up in configurations where its movement would be hindered. The flow chart for this system is shown below: \\

{{http://i.imgur.com/bLewt6Z.png?700x600|VeeInverseKinematics}}
\\
Any robotic platform that requires any form of automated response to a change in its environment must have an inverse kinematic system. Our arm presents two challenges: because of the number of links it has, there are several configurations to take into account, and because it is modular, a separate kinematic model is needed for every number of modules that may be connected. There are therefore 2-link and 3-link configurations; using the available encoders, the arm can determine which configuration it is in and select the corresponding set of inverse kinematic solutions.
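For the 2-link planar configuration, the inverse kinematics has a well-known closed-form solution. The sketch below uses placeholder link lengths of 0.65 m each (the real arm's individual link lengths are not given in the text); an out-of-range cosine corresponds to an unreachable target.

```python
import math

def two_link_ik(x, y, l1=0.65, l2=0.65):
    """Closed-form IK for a 2-link planar arm (one elbow solution).
    l1, l2 are placeholder link lengths in meters, not the real arm's.
    Returns (shoulder, elbow) joint angles in radians, or None if the
    target lies outside the reachable annulus."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1.0:          # cos(theta2) out of [-1, 1]: unreachable
        return None
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def two_link_fk(theta1, theta2, l1=0.65, l2=0.65):
    """Forward kinematics, used here to verify the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Mirroring the sign of the elbow angle gives the second (elbow-up) solution; the 3-link configuration adds a redundant degree of freedom and needs an extra constraint to pick one of infinitely many solutions.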
Collision detection is another aspect currently being worked on, and progress has been made on motion within the arm's own range: there is a region close to the origin of the coordinate frame that the arm cannot reach. When a planned trajectory would pass through this inaccessible region, and the inverse kinematic solution yields imaginary values, the affected points are instead moved along the surface of an artificial sphere modelled around the origin, so the end effector travels around the region rather than through it. \\
If we switch the whole frame into a spherical coordinate system, the end effector goes through the same azimuth and zenith angles while the radius changes to match that of the artificially created sphere. \\
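The remapping described above can be sketched directly in Cartesian form: keeping a point's direction while scaling its distance from the origin is the same as keeping its azimuth and zenith and clamping its radius. The sphere radius below is a placeholder, not the real arm's exclusion zone.

```python
import math

def project_to_sphere(x, y, z, r_min=0.2):
    """If a waypoint falls inside the unreachable region near the origin,
    keep its azimuth and zenith but push its radius out to the surface of
    an artificial sphere of radius r_min (a placeholder value).
    Points already outside the sphere are returned unchanged."""
    r = math.sqrt(x * x + y * y + z * z)
    if r >= r_min or r == 0.0:
        return x, y, z
    scale = r_min / r           # same direction, radius clamped to r_min
    return x * scale, y * scale, z * scale
```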
As you can see, the last figure shows the solution to the inverse kinematics.
  
=== Arm Design and Hardware Integration (Motion Control) ===
  
Our robotic arm differs from other arms in that it has more flexibility due to a larger number of links. The total length of the arm is 1.3 meters; most robotic arms are half that length and contain only two links (three joints). Our arm offers 6 DOF and uses AX-18 servos for better performance. The flow diagram of the ICCE system (motion control) is shown below: \\

{{http://i.imgur.com/hcRPxt0.png?700x500|ICCE}}
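The AX-18 servos speak Dynamixel Protocol 1.0 over a half-duplex serial bus. As a sketch of how a position command is framed (the register address is from the AX-series control table as we understand it; verify against the ROBOTIS documentation before driving real hardware):

```python
# Dynamixel Protocol 1.0 WRITE packet for an AX-series servo.
# Register 30 is the 2-byte goal position on AX-series servos;
# confirm with the ROBOTIS control-table docs before use.
WRITE_INSTR = 0x03
GOAL_POSITION = 0x1E  # register 30

def goal_position_packet(servo_id, position):
    """Build the byte frame commanding one servo to `position` (0-1023)."""
    params = [GOAL_POSITION, position & 0xFF, (position >> 8) & 0xFF]
    length = len(params) + 2                  # instruction + checksum
    core = [servo_id, length, WRITE_INSTR] + params
    checksum = (~sum(core)) & 0xFF            # one's-complement checksum
    return bytes([0xFF, 0xFF] + core + [checksum])
```

The resulting bytes would be written to the bus with something like pyserial at the servo's configured baud rate.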
  
  
  
{{http://i.imgur.com/1pbuSzL.jpg?700x500|Voice recognition, improved}} \\
  
The figure below shows communication between two blocks (the vision module and the voice recognition module) and the inverse kinematics calculation. \\
{{http://i.imgur.com/E83MV9D.jpg?700x500|Solved kinematics model}} \\
  
In the figure we can see the range of the arm (the positions it can reach) for different numbers of links. As we increase the number of links (and hence joints), the arm can reach more positions, meaning it has more flexibility. A superimposed image of the three combinations is shown in the fourth panel of the figure. The figure below shows two trajectories for the robotic arm, with and without the pathfinding algorithm. The pathfinding algorithm allows the robot to avoid obstructions in its way by following a different route.

{{http://i.imgur.com/olKYBh8.png?700x500|Pathfinding route}} \\

From the figure we can see that without the pathfinding algorithm the robot follows a straight path, which might result in a collision with obstacles that lie on that path. With the pathfinding algorithm we are able to detect those obstacles and avoid the collision.
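The wiki does not say which pathfinding algorithm is used, so the sketch below substitutes a plain breadth-first search over a 2D occupancy grid; it illustrates the idea of routing around obstacles rather than the project's actual planner.

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search on a 2D occupancy grid (True = obstacle).
    Returns a shortest list of (row, col) cells from start to goal
    using 4-connected moves, or None if no route exists."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                    # walk back to recover the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and not grid[nr][nc] and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

Without obstacles, BFS returns a staircase approximation of the straight line; with obstacles marked in the grid, the returned route detours around them, which is exactly the behavior contrasted in the two trajectories above.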
  
The figures below show the design of our final robotic arm. \\
We see this as a possible issue, and the way we are going to handle it is by using opaque objects in the environment instead of transparent ones.
  
The figure below shows the point cloud that we were able to construct from the Hokuyo laser sensor. The image shows the 3D map of the scanned room and the objects in it. This point cloud is used in conjunction with computer vision to get the coordinates of the target so the arm can interact with it. \\

{{http://i.imgur.com/GLTflAR.png?700x500|Point cloud}} \\
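A 3D point cloud from a planar laser scanner is typically built by converting each (range, angle) reading to Cartesian coordinates and accounting for the orientation of the scan plane (e.g. a tilting servo). The angle conventions below are illustrative assumptions, not the Hokuyo driver's actual API:

```python
import math

def scan_to_points(ranges, start_angle, angle_step, tilt=0.0):
    """Convert one planar laser scan into 3D points.
    `ranges` are distances in meters; beam i fires at
    start_angle + i * angle_step (radians) in the scan plane, and the
    whole plane is pitched about the y-axis by `tilt`."""
    points = []
    for i, r in enumerate(ranges):
        if r <= 0.0:                     # invalid / no-return reading
            continue
        a = start_angle + i * angle_step
        x, y = r * math.cos(a), r * math.sin(a)   # within the scan plane
        # rotate the scan plane by the tilt angle to place points in 3D
        points.append((x * math.cos(tilt), y, x * math.sin(tilt)))
    return points
```

Accumulating the points from many tilt angles yields a full 3D map like the one shown in the figure above.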
  
== Old Simulations & Results ==
  
{{:projects:g3:sensorresult.png?700x700|Sensor data plots}} \\

===== Potential Clients =====

{{http://i.imgur.com/oPXIOnm.png?700x500|Our potential clients}} \\
  
===== Video Demo =====
===== Funding =====
  
  * Lassonde School of Engineering ($1000)
    
 {{:projects:g1:index.jpg|}} {{:projects:g1:index.jpg|}}

  * York University Robotic Society (YURS) (approximately $2500)
     - Resources for building the arm
     - Machinery for fabrication of the arm
     - Sensors and other materials
  
    
projects/g3/start.1398827358.txt.gz · Last modified: 2014/04/30 03:09 by cse93178