  
======   VRobotics: botBlocks - A Robotic Development Platform   ======
{{http://i.imgur.com/99mYeP3.png?500X200|VRobotics}}

{{http://i.imgur.com/vMmYKPn.png?500X200|botBlocks}}

\\
Students:
Our project has 5 components:
  
  * Voice recognition (Audio Processing): VeeVoice
  * Sensors and Mapping: VeeCloud
  * Computer Vision (Image Processing): Veesion
  * Design and Hardware Integration (Motion Control): ICCE
  * Kinematics Model: VeeNverseKinematics

Design and Hardware Integration involves designing the arm and the hardware components required to control it. The other four components are involved with the operation of the arm. The chart below shows all our modules and how they can be integrated in different ways for different kinds of applications. \\

{{http://i.imgur.com/wa52PmU.png?700x500|The 5 modules that make up botBlocks}}

As we can see, these five modules could be used to control a robot. \\

{{http://i.imgur.com/ITmi5Oh.png?700x600|How the modules could be used}}

In the figure above we can see the different configurations of the modules. For example, the ICCE (Motion Control) module and the Veesion (Image Processing) module could be used together to control a robot. Starting from that configuration, we can simply add the VeeCloud (Sensors and Mapping) module to give the robot more functionality; hence, the design is scalable. We can even use two of the same module, if needed, to control different parts of a robot. This is made possible by the modular design of botBlocks, which makes our design unique and more user friendly.
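The modular composition described above can be sketched as follows; the class and method names here are illustrative stand-ins, not the actual botBlocks API:

```python
# Hypothetical sketch of botBlocks-style composition: each block consumes a
# message and enriches it, so adding a module adds capability without
# changing the others.

class Module:
    """A block that consumes a message dict and returns an enriched one."""
    def process(self, msg):
        raise NotImplementedError

class Veesion(Module):
    def process(self, msg):
        # Pretend image processing found the target at pixel (120, 80).
        msg["target_xy"] = (120, 80)
        return msg

class VeeCloud(Module):
    def process(self, msg):
        # Pretend the laser map reports the target 0.9 m away.
        msg["depth_m"] = 0.9
        return msg

class ICCE(Module):
    def process(self, msg):
        # Motion control consumes whatever upstream modules provided.
        msg["command"] = "move_to(%s, depth=%s)" % (
            msg.get("target_xy"), msg.get("depth_m"))
        return msg

def run_pipeline(modules, msg=None):
    """Chain any combination of modules in order."""
    msg = msg or {}
    for m in modules:
        msg = m.process(msg)
    return msg

# Vision + motion control alone:
basic = run_pipeline([Veesion(), ICCE()])
# Drop in VeeCloud for depth -- no other changes needed:
extended = run_pipeline([Veesion(), VeeCloud(), ICCE()])
```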

The graph below shows the current trend in the robotics industry. It shows why we have chosen the right field at the right time: sales of service robots are increasing rapidly and are projected to grow every year, with revenues topping billions. \\

{{http://i.imgur.com/lXMjuKT.png?700x500|Growth in sales of service robots}}

  
  
==== VRobotics : Design Complexity ====
  
Our project has 5 different components, including both hardware and software solutions. This makes our project one of the most complex designs in the ENG4000 course. We designed an arm, 1.3 meters in length, from scratch. We used development boards that do not yet have a large community and lack support and software, which added complexity to software installation and integration. We implemented serial communications and devised our own protocol to make the most of this communication and to reduce and handle errors. Hence, a lot of time has been spent on each sub-project to make it work. Making our systems reliable was also a challenge, since most of these technologies (voice recognition, computer vision, mapping) are still active fields of research. We have followed our own approaches to make these systems more accurate and reliable, so that systems built using our modules are also reliable.
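As a sketch of the kind of framed serial protocol with error handling described above (the start byte, frame layout, and checksum below are assumptions for illustration, not our actual protocol):

```python
# Hypothetical framed serial packet: start byte, length, payload, checksum.
# A corrupted frame fails the checksum and is rejected instead of being
# acted on -- the core idea behind error handling on a serial link.

START = 0x7E  # assumed frame-start marker

def checksum(payload):
    # Simple additive checksum over the payload bytes.
    return sum(payload) & 0xFF

def encode(payload):
    """Wrap a payload in a frame: [START, length, payload..., checksum]."""
    return bytes([START, len(payload)]) + payload + bytes([checksum(payload)])

def decode(frame):
    """Return the payload, or None if the frame is malformed or corrupted."""
    if len(frame) < 3 or frame[0] != START:
        return None
    length = frame[1]
    if len(frame) != length + 3:
        return None
    payload = frame[2:2 + length]
    if frame[-1] != checksum(payload):
        return None
    return payload

pkt = encode(b"MOVE 30 45")
assert decode(pkt) == b"MOVE 30 45"
# Flip one byte in transit: the checksum catches it.
corrupted = pkt[:-1] + bytes([pkt[-1] ^ 0xFF])
assert decode(corrupted) is None
```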
  
===== Brief Description of the Blocks =====
\\
  
Shape tracking with only one camera produces problematic errors with 3D objects. For instance, a cube viewed from the top looks like a square, but viewed from one of its corners it looks like a hexagon. The good news is that we only aim to process simple shapes, and most of the time the robotic arm will view the object from the top. The bad news is that the design decision to use only one camera prevents us from using computer vision to process depth. So why did we use only one camera if depth is needed to reach an object? The answer is that computer vision only gives the (x,y) coordinates, and the depth is given by a laser sensor. Therefore, no image processing is done when the robot wants to find how far away the object is. The image below shows how our vision module works.\\
  
{{http://i.imgur.com/M1Kcpd0.png?700x500|How our Veesion module works}} \\
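Combining the vision module's (x,y) pixel output with the laser depth reading can be illustrated with a standard pinhole back-projection; the focal lengths and image center below are made-up values, not our camera's calibration:

```python
# Sketch: turn a pixel coordinate plus a laser depth into a 3D position in
# the camera frame via the pinhole model. The intrinsics (fx, fy, cx, cy)
# are illustrative placeholders, not measured calibration values.

def pixel_to_xyz(u, v, depth_m, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Back-project pixel (u, v) at the laser-measured depth (meters)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Target detected at pixel (470, 240); the laser reports 0.8 m:
x, y, z = pixel_to_xyz(470, 240, 0.8)
# x = (470-320)*0.8/600 = 0.2 m to the right of the optical axis
```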
  
A flow chart below shows how CV (computer vision) colour-tracking algorithms generally work. \\
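A minimal pure-Python version of such a colour-tracking step (threshold pixels within a colour range, then take the centroid of the surviving pixels) might look like this; a real implementation would run on camera frames with a library such as OpenCV:

```python
# Toy colour tracker on a synthetic RGB image: keep pixels whose channels
# fall inside [lo, hi], then average their coordinates to locate the blob.

def track_color(image, lo, hi):
    """image: 2D grid of (r, g, b); returns centroid of in-range pixels."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            if all(l <= c <= h for c, l, h in zip((r, g, b), lo, hi)):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # nothing matched the colour range
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# 3x3 image with a red blob in the right column:
RED, BLACK = (255, 0, 0), (0, 0, 0)
img = [[BLACK, BLACK, RED],
       [BLACK, BLACK, RED],
       [BLACK, BLACK, BLACK]]
centroid = track_color(img, lo=(200, 0, 0), hi=(255, 60, 60))  # → (2.0, 0.5)
```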
What sets our point cloud block apart from the rest is that it is ready to go right out of the box. It uses a serial interface to transmit 3D maps. The brain and all internal processes are hidden out of sight and out of mind…so you can focus on your project. \\
We are happy to announce that we have achieved our goals. The point cloud system is able to obtain a point cloud, process it, and send it via serial. We are very happy with the results and believe it holds great potential in the world of robotics. \\

A flow chart of our VeeCloud system is shown below:

{{http://i.imgur.com/fCsXVGJ.png?700x600|VeeCloud}}
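The first step of a system like VeeCloud, converting a planar laser scan into Cartesian points, can be sketched as follows (the angles and ranges here are synthetic, not real Hokuyo output):

```python
# Sketch: a planar laser scanner reports one range per beam angle; each
# reading converts to a Cartesian point, and stacking scans at different
# tilt angles would build a 3D point cloud.
import math

def scan_to_points(ranges, start_angle, step):
    """ranges: list of distances (m); beam i is at start_angle + i*step rad."""
    pts = []
    for i, r in enumerate(ranges):
        a = start_angle + i * step
        pts.append((r * math.cos(a), r * math.sin(a)))
    return pts

# Three synthetic beams at 0, 90, and 180 degrees, each seeing 1.0 m:
pts = scan_to_points([1.0, 1.0, 1.0], start_angle=0.0, step=math.pi / 2)
# → roughly (1, 0), (0, 1), (-1, 0)
```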
  
=== Inverse Kinematics and Movement ===
  
This specific subsystem deals with planning the motion of the arm, not only to make the arm reach its desired destination but also to make sure that the arm does not find itself in compromising situations where its movements would be hindered. The flow chart for this system is shown below:\\

{{http://i.imgur.com/bLewt6Z.png?700x600|VeeNverseKinematics}}
\\
Any robotic platform that requires any form of automated response to a change in its environment has to have an inverse kinematics system. The challenge posed by our robotic arm is not only that, due to the number of links it has, there are several configurations to take into account, but also that, since it is a modular robot, a separate model has to be considered for any given number of modules connected. There are therefore 2-link and 3-link configurations, and using the available encoders we can determine which configuration the arm is in, so the arm knows which inverse kinematics set to use.
Collision detection is another aspect currently being worked on, and progress is being made on motion within the arm's own range, since there is a region close to the origin that the arm cannot reach. One example being tackled is when the arm gets too close to the origin of the coordinate frame. Points that would pass through this inaccessible region (points whose solutions are imaginary) are instead made to go through the same motion on the surface of an artificial sphere modelled to get around this problem. \\
If we switch the whole frame into a spherical coordinate system, the end effector goes through the same azimuth and zenith angles, though the radius changes to match that of the artificially created sphere. \\
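The artificial-sphere workaround can be illustrated with a small sketch: when a waypoint falls inside the unreachable region near the origin, keep its direction (azimuth and zenith) and push its radius out to the sphere's surface. The minimum radius below is an assumed value:

```python
# Sketch of the artificial-sphere idea: scaling a point along its own
# direction vector preserves its azimuth and zenith angles while changing
# only the radius. r_min is an illustrative, not measured, value.
import math

def clamp_to_sphere(x, y, z, r_min=0.3):
    """Move a point inside radius r_min out to the sphere's surface."""
    r = math.sqrt(x * x + y * y + z * z)
    if r >= r_min or r == 0.0:
        return (x, y, z)        # already reachable (or at the origin)
    scale = r_min / r           # same direction, radius r_min
    return (x * scale, y * scale, z * scale)

# A waypoint 0.1 m from the origin is pushed out to the sphere's surface:
p = clamp_to_sphere(0.1, 0.0, 0.0)
```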
 As you can see, the last figure shows the solution to the inverse kinematics.  As you can see, the last figure shows the solution to the inverse kinematics. 
  
=== Arm Design and Hardware Integration (Motion Control) ===
  
Our robotic arm is different from other arms in that it has more flexibility due to its greater number of links. The total length of the arm is 1.3 meters! Most robotic arms are half that length and contain only two links (3 joints). Our arm offers 6 DOF and uses AX-18 servos for better performance. The flow diagram of the ICCE system (Motion Control) is shown below: \\

{{http://i.imgur.com/hcRPxt0.png?700x500|ICCE}}
  
  
  
{{http://i.imgur.com/1pbuSzL.jpg?700x500|Voice recognition, improved}} \\
  
The figure below shows communication between two blocks (the vision module and the voice recognition module) and inverse kinematics calculation. \\
{{http://i.imgur.com/E83MV9D.jpg?700x500|Solved kinematics model}} \\
  
In the figure we can see the range of the arm (the possible positions it can reach) for different numbers of links. As we increase the number of links (and hence joints), the arm can reach more positions, meaning it has more flexibility. A superimposed image of the three combinations is shown in the fourth panel of the figure. The figure below shows the two trajectories for the robotic arm with and without the pathfinding algorithm. The pathfinding algorithm allows the robot to avoid obstructions in its way by following a different route.

{{http://i.imgur.com/olKYBh8.png?700x500|Pathfinding route}} \\

From the figure we can see that without the pathfinding algorithm the robot follows a straight path, which might result in a collision with obstacles that lie on that path. With the pathfinding algorithm we are able to detect those obstacles and thus avoid collisions.
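As a toy illustration of the pathfinding idea: on a small grid, a straight path from start to goal crosses an obstacle, while a breadth-first search routes around it (the real system plans in the arm's workspace, not on a grid):

```python
# Toy grid planner: BFS finds a shortest route from start to goal that
# never enters an obstacle cell, whereas the naive straight line would.
from collections import deque

def bfs_path(start, goal, obstacles, size=5):
    """Shortest 4-connected path on a size x size grid, or None."""
    prev = {start: None}           # visited set + back-pointers
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:            # reconstruct the route
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in obstacles and nxt not in prev):
                prev[nxt] = cur
                queue.append(nxt)
    return None                    # goal unreachable

obstacles = {(2, 0)}               # sits on the straight line (0,0)->(4,0)
path = bfs_path((0, 0), (4, 0), obstacles)
assert (2, 0) not in path          # the planned route detours around it
```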
  
The figures below show the design of our final robotic arm. \\
We see this as a possible issue, and we will handle it by using opaque objects in the environment instead of transparent ones.
  
The figure below shows the point cloud that we were able to construct from the Hokuyo laser sensor. The image shows the 3D map of the scanned room and the objects in it. This point cloud is used in conjunction with the computer vision to get the coordinates of the target in order to interact with it. \\

{{http://i.imgur.com/GLTflAR.png?700x500|Point cloud}} \\
  
== Old Simulations & Results ==
  
{{:projects:g3:sensorresult.png?700x700|Sensor data plots}} \\

=====  Potential Clients =====

{{http://i.imgur.com/oPXIOnm.png?700x500|Our potential clients}} \\
  
=====  Video Demo =====
===== Funding =====
  
  * Lassonde School of Engineering ($1000)
    
{{:projects:g1:index.jpg|}}

  * York University Robotic Society (YURS) (approximately $2500)
     - Resources for building the arm
     - Machinery for fabrication of the arm
     - Sensors and other materials
  
    
projects/g3/start.1398825959.txt.gz · Last modified: 2014/04/30 02:45 by cse93178