Aysar Khalid, Computer Engineer,
Project lead. Oversees the software engineering aspects of the project, such as device communication between the circuit, the Arduino, iOS, and 3rd party devices. In addition, develops the algorithms needed to process and analyze the sensing data.
Hassan Chehaitli, Computer Engineer,
Oversees the hardware engineering aspects of the project, such as circuit design, part research, and hardware development. In addition, aids in building the system apparatus.
Mohammad Aryanpour, Space Engineer,
Oversees the materials science engineering aspects of the project, such as the sleeve materials and electrodes. In addition, handles system apparatus building/shielding and industrial design.
Project Adviser & Course Director: Professor Ebrahim Ghafar-Zadeh
Mentors: Mourad Amara, Giancarlo Ayala-Charca
As we live in the digital age, we are constantly using more and more digital devices. Nowadays, people routinely own at least three devices, such as a phone, a tablet, and a computer. Yet you typically interact with only one device at a time, a limitation imposed by the fact that you need direct contact with the device.
With the increased adoption of wearable technology, we sought to overcome this limitation by developing a new type of medium: the Flexus.
The Flexus uses electromyography (EMG) to sense the small electrical signals your muscles produce when you flex. We believe detecting muscle activity will help simplify human gesture recognition.
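As a rough illustration of how EMG samples might be turned into a "flex" event, the sketch below rectifies and smooths raw readings and applies a simple threshold. The window size and threshold are illustrative assumptions, not the calibration values used in the Flexus prototype.

```java
// Minimal sketch: detecting a "flex" from raw EMG samples with a
// rectify -> moving-average -> threshold pipeline. Window size and
// threshold are illustrative assumptions, not Flexus calibration values.
import java.util.ArrayDeque;
import java.util.Deque;

public class FlexDetector {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;
    private final double threshold;

    public FlexDetector(int windowSize, double threshold) {
        this.windowSize = windowSize;
        this.threshold = threshold;
    }

    /** Feed one raw EMG sample (centered around 0); returns true while a flex is detected. */
    public boolean addSample(double rawSample) {
        window.addLast(Math.abs(rawSample));      // rectify
        if (window.size() > windowSize) {
            window.removeFirst();                 // keep a fixed-size smoothing window
        }
        double envelope = window.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        return envelope > threshold;              // smoothed envelope above threshold => muscle is flexing
    }
}
```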
The Flexus changes the way we interact with devices. It allows you to wave your hand or snap your fingers to interact with your devices. For example, when you are cooking, you can control your TV or laptop with a simple wave, without having to stop what you are doing.
One way we demonstrated the versatility and accuracy of the Flexus in real time was with an AR Drone. We controlled a quadcopter drone not with a remote but simply by flexing and waving our hands, making for a seamless, natural experience. Other applications lie in the automotive industry: with the Flexus we can move toward truly hands-free driving, with no more reaching down for your phone.
Controlling a drone via Flexus.
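To give a sense of how recognized gestures could drive the drone, the sketch below dispatches gesture events to a drone client. The gesture names and the DroneClient interface are hypothetical placeholders for illustration, not the actual AR Drone command API used in the demo.

```java
// Minimal sketch: mapping recognized gestures to drone actions.
// Gesture names and the DroneClient interface are hypothetical placeholders.
public class GestureDroneController {

    public enum Gesture { FLEX, WAVE_LEFT, WAVE_RIGHT, SNAP }

    /** Hypothetical abstraction over whatever drone SDK/protocol is in use. */
    public interface DroneClient {
        void takeOff();
        void land();
        void roll(float amount);   // -1.0 (left) .. 1.0 (right)
    }

    private final DroneClient drone;
    private boolean airborne = false;

    public GestureDroneController(DroneClient drone) {
        this.drone = drone;
    }

    public void onGesture(Gesture gesture) {
        switch (gesture) {
            case FLEX:                       // a flex toggles take-off / landing
                if (airborne) { drone.land(); } else { drone.takeOff(); }
                airborne = !airborne;
                break;
            case WAVE_LEFT:  drone.roll(-0.3f); break;
            case WAVE_RIGHT: drone.roll(0.3f);  break;
            case SNAP:       drone.land(); airborne = false; break; // quick stop
        }
    }
}
```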
The Flexus Alpha version is made possible by:
SOFTWARE
- A Java client application that interfaces between the Arduino and Pusher (a minimal sketch of this bridge follows the component list).
- An iOS app that acts as a server, aggregates all data from the sensors (EMG and motion), and processes it before sending it on.
HARDWARE
- An Arduino Uno reads input from the EMG sensor.
- An iPhone provides connectivity with target devices (Wi-Fi, BLE).
SENSORS
- EMG sensing prototype
- iPhone CoreMotion Framework: three-axis gyroscope, accelerometer
POWER
- 9 V battery (2)
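Below is a minimal sketch of the Arduino-to-Pusher bridge mentioned under SOFTWARE, assuming the pusher-http-java REST library. The credentials, channel and event names, sample rate, and the readEmgSample() serial helper are illustrative placeholders, not the actual Flexus configuration.

```java
// Minimal sketch of the Arduino -> Java -> Pusher bridge, assuming the
// pusher-http-java library. Credentials, channel/event names, and the
// readEmgSample() serial helper are illustrative placeholders.
import com.pusher.rest.Pusher;
import java.util.Collections;

public class EmgBridge {

    public static void main(String[] args) throws Exception {
        Pusher pusher = new Pusher("APP_ID", "APP_KEY", "APP_SECRET"); // placeholder credentials

        while (true) {
            int sample = readEmgSample();                 // one value from the Arduino's serial port
            pusher.trigger("flexus-channel", "emg-sample",
                    Collections.singletonMap("value", sample));
            Thread.sleep(20);                             // ~50 Hz; illustrative rate only
        }
    }

    /** Placeholder for the serial read (e.g. via a serial library such as jSSC). */
    private static int readEmgSample() {
        return 0; // stub
    }
}
```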
Competitive Edge
Key to its design is its modularity. It can take inputs from multiple sensors and act on them, thanks to its client/server multicast architecture. This allows for rapid development of add-on sensing and input devices without redeveloping the entire sleeve. In addition, it can control multiple devices at the same time (where the application allows this functionality).
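A minimal sketch of this multicast idea follows: one hub fans out every incoming sensor event to all registered device controllers, so new sensors or target devices can be added without touching the rest of the pipeline. The interface and class names are illustrative, not the actual Flexus code.

```java
// Minimal sketch of the client/server multicast idea: one hub fans out
// incoming sensor events to every registered device controller.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SensorHub {

    /** A sensor event from any source (EMG circuit, CoreMotion, future add-ons). */
    public record SensorEvent(String source, double value) { }

    /** Anything that can act on sensor events: drone controller, media player, etc. */
    public interface DeviceController {
        void onSensorEvent(SensorEvent event);
    }

    private final List<DeviceController> controllers = new CopyOnWriteArrayList<>();

    public void register(DeviceController controller) {
        controllers.add(controller);
    }

    /** Multicast: every registered device receives every event. */
    public void publish(SensorEvent event) {
        for (DeviceController controller : controllers) {
            controller.onSensorEvent(event);
        }
    }
}
```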
The Flexus's sleeve form factor allows it to be applied to other muscle groups and body parts with recognizable gestures, rather than being limited to a wristband.
Market
The market for business-to-consumer (B2C) wearable technology is emerging and maturing toward becoming as large as today's smartphone market. As people interact with an ever-increasing number of devices, their capacity to interact with them all at once is limited. We believe the Flexus will further increase user-device interactivity by allowing fluid, natural gestures to be recognized and sent in real time to 3rd party devices.
We estimate the Flexus will retail at $150 once it has been finalized and is ready for public release. Our cost to produce it would be about $60, taking into account a mass production line and administrative costs.
Future
The future of the Flexus lies in its ability to interface with more 3rd party devices, permitting our user base to interact with more of their everyday devices. Media players, gaming consoles, and automotive systems are all commonplace potential targets for a device such as the Flexus.
Interacting with more devices simultaneously also means we need to be able to recognize a larger set of gestures.
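One hedged sketch of how a larger gesture set might be distinguished is to combine the smoothed EMG level with the motion data already collected. The thresholds and gesture names below are illustrative assumptions, not trained Flexus parameters.

```java
// Minimal sketch: distinguishing more than one gesture by combining the
// smoothed EMG envelope with the gyroscope rate. Thresholds and gesture
// names are illustrative assumptions, not trained Flexus parameters.
public class MultiGestureClassifier {

    public enum Gesture { NONE, FLEX, WAVE, FLEX_AND_WAVE }

    private final double emgThreshold;
    private final double gyroThreshold;

    public MultiGestureClassifier(double emgThreshold, double gyroThreshold) {
        this.emgThreshold = emgThreshold;
        this.gyroThreshold = gyroThreshold;
    }

    /** Classify one time step from the smoothed EMG envelope and the wrist rotation rate. */
    public Gesture classify(double emgEnvelope, double gyroRate) {
        boolean flexing = emgEnvelope > emgThreshold;
        boolean waving  = Math.abs(gyroRate) > gyroThreshold;
        if (flexing && waving) return Gesture.FLEX_AND_WAVE;
        if (flexing)           return Gesture.FLEX;
        if (waving)            return Gesture.WAVE;
        return Gesture.NONE;
    }
}
```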
Conclusion
The Flexus sleeve has been demonstrated to recognize gestures in real time, allowing for fluid device interaction, using a prototyped EMG sensing circuit for muscle activity data together with the iPhone's three-axis accelerometer and gyroscope. The Flexus communicates with 3rd party devices, such as the AR Drone and other media player devices, via Wi-Fi.
As people interact with an ever-increasing number of devices, their capacity to interact with them all at once is limited. We believe the Flexus will further increase user-device interactivity by allowing fluid, natural gestures to be recognized and sent in real time to 3rd party devices. In essence, it simplifies the interactions between people and devices.