Let’s Bring Rosie Home: 5 Challenges We Need to Solve for Home Robots
From IEEE Spectrum
By Shahin Farshchi
Posted 13 Jan 2016 | 16:25 GMT
Science fiction authors love the robot sidekick. R2-D2, Commander Data, and KITT—just to name a few—defined “Star Wars,” “Star Trek,” and “Knight Rider,” respectively, just as much as their human actors. While science has brought us many of the inventions dreamed of in sci-fi shows, one major human activity has remained low tech and a huge source of frustration: household chores. Why can’t we have more robots helping us with our domestic tasks? That’s a question that many roboticists and investors (myself included) have long been asking ourselves. Recently, we’ve seen some promising developments in the home robotics space, including Jibo’s successful financing and SoftBank’s introduction of Pepper. Still, a capable, affordable robotic helper—like Rosie, the robot maid from “The Jetsons”—remains a big technical and commercial challenge. Should robot makers focus on designs that are extensions of our smartphones (as Jibo seems to be doing), or do we need a clean-sheet approach towards building these elusive bots?
Take a look at the machines in your home. If you remove the bells and whistles, home automation hasn’t dramatically changed since the post–World War II era. Appliances such as washing machines, dishwashers, and air conditioners seemed magical after WWII. Composed primarily of pumps, motors, and plumbing, they were simply extensions of innovations born of the Industrial Revolution. It probably doesn’t come as a surprise that industrial behemoths such as GE, Westinghouse, and AEG (now Electrolux) shepherded miniature versions of factory machines into suburban homes. At the time, putting dirty clothes and dishes into a box from which they emerged clean was rather remarkable. To this day, the fundamental experience remains the same, with improvements revolving around reliability and efficiency. Features enabled by Internet-of-Things technologies are marginal at best, e.g., being able to log into your refrigerator or thermostat from your phone.
But before wondering when we’ll have home robots, it might be fair to ask: Do we even need them? Consider what you can already do just by tapping on your phone, thanks to a host of on-demand service startups. Instacart brings home the groceries; Handy and Super send professionals to fix or clean your home; Pager brings primary care, while HomeTeam does elderly care. (Disclosure: my company, Lux Capital, is an investor in Super, Pager, and HomeTeam.) So, again, why do we need robots to perform these services when humans seem to be doing them just fine? I don’t think anyone has a compelling answer to that question today, and home robots will probably evolve and transform themselves over and over until they find their way into our homes. Indeed, it took decades of automobile development before the Model T was born. The Apple IIs and PC clones of the early 1980s had difficulty justifying their lofty price tags to anyone who wasn’t wealthy or a programmer. We should expect the same from our first home bots.
So it might be helpful to examine what problems engineers need to crack before they can attempt to build something like Rosie the robot. Below I discuss five areas that I believe need significant advances if we want to move the whole home robot field forward.
1. We Need Machine-Human Interfaces
Siri and Amazon’s Alexa demonstrate how far speech recognition and natural language processing have come. Unfortunately, they are no more than human-machine interfaces, designed to displace the keyboard and mouse. What we need is a machine-human interface. Where is the distinction? A machine-human interface starts with understanding people, rather than aggregating data and using statistical patterns to make inferences. It can understand our moods and emotional contexts, as an artificial intelligence would. Humans do not interact with one another through a series of commands (well, maybe some do); they establish a connection, and once a computer can take on that role, we will have a true machine-human interface. Scientists are starting to tackle this by applying concepts used in programming toward establishing rules for robot-human conversations, but we’ll need much more if we want to have engaging AI assistants like the one in the movie “Her.”
2. Cheap Sensors Need to Get Cheaper
Driverless cars will generate hard cash for their operators, so forking over thousands of dollars for an array of lidar, radar, ultrasound, and cameras is a no-brainer. Home robots, however, must fit the ever-discretionary consumer budget. The array of sensors a robot would need to properly perceive its environment could render it cost-prohibitive unless those sensors cost pennies, as they do in mobile phones. MEMS technology dramatically lowered the cost of inertial sensors, which previously cost thousands of dollars and were relegated to aircraft and spacecraft. Can computer vision applied to an array of cheap cameras and infrared sensors provide adequate sensing capability? And can we expect lidar to come down in price, or do we need a whole new sensing technology? A startup called Dual Aperture has added a second, infrared aperture to a conventional camera, making it possible to infer short distances. Meanwhile, DARPA is funding research on chip-based lidar, and Quanergy expects to launch a solid-state optical phased array, thereby eliminating the mechanical components that raise the cost of lidar. We expect engineers to find creative ways to reduce the cost of existing sensing technologies, to obviate others altogether, and hopefully to make sensors as cheap as those in our phones today.
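To illustrate why cheap cameras are a plausible substitute for expensive range sensors, consider that two of them can recover depth from stereo disparity using the standard pinhole model, Z = f·B/d. The numbers below are purely illustrative, not specifications of any real sensor:

```python
# Minimal sketch: depth from stereo disparity with two cheap cameras.
# Pinhole model: depth Z = f * B / d, where f is the focal length in
# pixels, B is the baseline between the cameras in meters, and d is
# the disparity (pixel offset of a matched feature between images).

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Return depth in meters for one matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 700 px focal length, 6 cm baseline, 30 px disparity.
z = depth_from_disparity(700, 0.06, 30)  # 1.4 meters
```

Note how depth resolution degrades as disparity shrinks: distant objects produce tiny disparities, which is one reason stereo alone struggles to match lidar at range.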
3. Manipulators Need to Get a Grip
Loose objects find their way into our homes because they are easy to manipulate with our hands. If we expect a robot to be able to clean and organize these objects as efficiently as humans do, it needs manipulators that are at least as effective as the human hand. Companies such as Robotiq, Right Hand Robotics, and Soft Robotics, among others, have designed efficient and reliable manipulators. Though air-powered inflatable grippers have the advantage of being soft and lightweight, they do require a pump, which is not very practical for a mobile robot. Efforts funded by DARPA at iRobot, SRI, and other labs and companies seem to be taking us in the right direction, helping robots get a grip.
4. Robots Need to Handle Arbitrary Objects
Opening doors, flipping switches, and cleaning up scattered toys are simple tasks for us humans, but compute-intensive for machines today. A robot like the Roomba performs two tasks: running a suction motor and generating a path along what’s expected to be a flat surface with rigid obstacles. How about washing dishes or folding laundry? These tasks require a suite of capabilities: recognizing objects, identifying grasping points, understanding how an object will interact with other objects, and even predicting the consequences of being wrong. DARPA, NSF, NASA, and European Union science funding agencies are sponsoring much-needed research in this area, but “solving manipulation” will probably require leveraging a number of different technologies, including cloud robotics and deep learning.
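The capability pipeline above (recognize, pick a grasp point, anticipate failure) can be sketched in a few lines. Every function here is a hypothetical stand-in for what is, in reality, a hard research problem:

```python
# Hedged sketch of a manipulation pipeline: recognize an object, choose a
# grasp point, and refuse to act when confidence is low (i.e., "predict
# the consequences of being wrong"). All names and numbers are illustrative.

def recognize(scene):
    # Stand-in for an object detector: return the most confident detection.
    return max(scene, key=lambda obj: obj["confidence"])

def choose_grasp(obj):
    # Stand-in for grasp planning; a real planner would weigh geometry,
    # friction, and clutter. Here we just take the highest-rated candidate.
    return max(obj["grasp_points"], key=lambda g: g["quality"])

def plan_grasp(scene, min_confidence=0.8):
    obj = recognize(scene)
    if obj["confidence"] < min_confidence:
        return None  # too risky: better to do nothing than grasp blindly
    return (obj["label"], choose_grasp(obj))

scene = [
    {"label": "mug", "confidence": 0.93,
     "grasp_points": [{"xy": (0.10, 0.20), "quality": 0.7},
                      {"xy": (0.00, 0.30), "quality": 0.9}]},
    {"label": "toy", "confidence": 0.55, "grasp_points": []},
]
label, grasp = plan_grasp(scene)  # picks the mug's best-rated grasp
```

The hard part, of course, is everything the stubs hide: real detectors and grasp planners must cope with occlusion, deformable objects, and sensor noise.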
5. Navigating Unstructured Environments Needs to Become Routine
Anyone who saw last year’s DARPA Robotics Challenge can appreciate how difficult a problem it is to navigate and manipulate an unstructured, unknown environment. Those robots were slow. Though driverless cars pose a formidable challenge, that problem has proven more tractable. Deep learning techniques can help robots distinguish soft objects from hard obstructions, and human assistants may be able to “teach” robots until the algorithms take over. Like the manipulation problem, real-time navigation requires robots to quickly sense, perceive, and execute—probably several orders of magnitude faster than the DRC-winning team can today. Full autonomy won’t happen overnight, but that isn’t a problem: humans can help robots get out of a bind. Willow Garage was a pioneer with its Heaphy Project, in which assistance to robots was crowdsourced to remote operators. More robotics and industrial automation companies are embracing the notion of humans overseeing robots, with the expectation of going from the (superfluous) 1:1 human-robot ratio to a single operator overseeing and assisting many robots.
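The human-in-the-loop idea above reduces to a simple control pattern: act autonomously when perception confidence is high, and escalate to a remote operator's queue when the robot is stuck. This sketch uses made-up thresholds and field names to show the shape of that loop:

```python
# Sketch of supervised autonomy: one remote operator can oversee many
# robots because each robot only escalates when its confidence is low.
# The threshold, observation fields, and queue are all illustrative.

def step(observation, operator_queue, threshold=0.75):
    """Decide whether to act autonomously or ask a human for help."""
    if observation["confidence"] >= threshold:
        return ("autonomous", observation["proposed_action"])
    # Below threshold: hand the situation to a human overseer, who may
    # be supervising a whole fleet of robots at once.
    operator_queue.append(observation)
    return ("escalated", None)

queue = []
mode, action = step({"confidence": 0.9, "proposed_action": "advance"}, queue)
mode2, _ = step({"confidence": 0.4, "proposed_action": "advance"}, queue)
```

The operator-to-robot ratio this pattern achieves depends entirely on how often confidence dips below the threshold, which is why better perception directly multiplies one human's reach.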
Shahin Farshchi is a partner at Lux Capital, where he invests in hardware and robotics companies. Follow him on Twitter: @farshchi
West TN FIRST Lego League Championship 01/23/2015 @ University of Memphis, University Center Ballroom
GRAD GRIFFIN NOTHING BUT NET (VRC) 01/29/2016 @ Grad Academy
From Robotics Trends
By Eugene Demaitre
January 12, 2016
ZeroUI Inc. has developed a set of four motor modules with a gesture-based controller glove for an intuitive user interface. With Ziro, makers can control the wheels, steering, or arms of homemade robots.
Ziro’s modules can be used to wirelessly move and control robots made from materials including cardboard, wood, or metal, allowing for creative development. The kit will be funded through an Indiegogo crowdfunding campaign and costs $199.
From Robotics Business Review
By Eugene Demaitre
Jan 4, 2016
Are you expecting robot butlers and flying cars—or at least self-driving ones—this year? Certain trends, such as increasing adoption of robots in logistics or smarter drones, are likely to continue in 2016.
However, the forecast is less certain for self-driving cars, doomsday machines, or robots as smart or agile as humans. Fortunately, there are lots of ways for robots and AI to improve our lives in the immediate future, and here are some predictions for robotics in 2016:
1. Productivity will continue to benefit from industrial automation.
Robots have been a fixture in factories for decades, but collaborative robots could move to smaller and midsize businesses. Robotic arms and manipulators are becoming safer, more precise, and easier to use, and some collaborative robots are going mobile.
China, the world’s biggest manufacturer, plans to intensify its level of automation, even in the face of an economic slowdown. While businesses such as Dell are trying to gain access to Chinese factories and markets, countries like Japan will also adopt robots in an effort to stay competitive.
It remains to be seen, however, how much “reshoring” will occur, even if massive unemployment from industrial automation is debatable. In fact, certain skills will be in even higher demand.
Meanwhile, 3D printing has already begun to move from prototyping to production and is starting to change how we look at construction (if not the consumer market). Partly due to labor shortages, precision welding and dairy are among the industries turning to automation—what’s next?
Could the automat return? The profit margins for fast food and retail are pretty tight, but anything that provides an edge in serving billions of meals could catch on.
Thanks to the cobots mentioned above—and the improved autonomy described below—warehousing and logistics companies will invest even more in supply chain automation to serve consumer demand for instant gratification.
2. Robots on land, in the air, and on the seas will be more autonomous.
Whether it’s a self-driving truck at a mine, an unmanned aerial vehicle inspecting a bridge, or an autonomous underwater vehicle maintaining an offshore oil rig, drones and robotic vehicles are becoming easier to operate. This frees up humans to remotely do work that’s otherwise too difficult, dangerous, or tedious to do in person.
When will self-driving cars start ferrying passengers? On closed campuses, such as universities and airports, they already are. There are significant technical, cultural, and legal hurdles to overcome before self-driving vehicles can hit the roads, but those hurdles aren’t stopping major automakers and tech titans from spending a lot of money on the race to the first fully autonomous vehicle.
In logistics, Amazon’s research into drone deliveries could solve the “last-mile problem” and hasten deliveries, but again, there are safety and regulatory concerns to address.
Corporate rivalries (and in the U.S., an election year) are also factors to watch.
The Federal Aviation Administration currently permits exceptions to its ban on commercial drone use, but that scheme is likely to change with wider applications, just as the FAA finally required consumer drones to be registered.
3. We’ll find new applications for AI and robotics.
Thanks to improving sensors, mobility, and the ability to gather big data, the Internet of Things (IoT) will become more important—and useful—in the coming year. From precision agriculture and warehousing to virtual assistants and medical diagnostics, smarter machines will infiltrate every facet of the global economy.
Of course, there are challenges.
Artificial intelligence research into machine learning, natural language processing, and machine vision will continue to lead to more advanced specialized robots, but that’s still a long way from “strong AI.” Elon Musk and company’s investment in OpenAI is as much a future market play as it is an attempt to restrain robots.
Ubiquitous computing and sensors raise privacy concerns, as was feared with Amazon’s Echo. Medical robots could be hacked, and the offloading of processing and data into the cloud will require a new generation of security technology. Interoperability standards are going to be an issue for environments such as hospitals and hotels using multiple robots.
Apple, Facebook, Google, and others are investing heavily in AI research. In the short term, AI will help analyze big data, manage business processes, and provide robots with autonomy.
Just don’t expect the device that handles your appointments via voice commands to be able to pick up your dry cleaning—at least not yet.
4. Robots will enter households, but not all will make it over the threshold.
Move over, Roomba. As our “Sweet Sixteen for 2016” report noted, there’s a wave of social robots getting ready to enter offices, shops, and homes. Some observers have criticized the first generation of such robots as being little more than tablets or smartphones on wheels.
Will stationary assistants such as Jibo, the FURo-i, or Amazon’s Alexa be the most useful, or will consumers prefer the more humanoid Buddy or Pepper? In a world where telecommuting and video calls already exist, how much demand is there for telepresence robots?
As last year’s DARPA Robotics Challenge demonstrated, humanoid robots are a long way from being able to easily get you a beer from the fridge or walk the dog. Expect to see market consolidation, price shifts, and increasing capabilities before there’s a robot in every home.
5. The robot apocalypse won’t happen, but robots will help people worldwide.
As noted above, true AI is a long way off, according to people in the know. But a robotic arms race raises legitimate fears.
The U.S. has led in airborne drone warfare, Russia has touted its new autonomous tanks, and ethicists are fretting about accountability and the ease of remote-controlled killing. As with any weapon, restraint is good, but understanding intent and managing global conflict are more important.
On the other hand, surgical robotics, increasingly common exoskeletons, and cheaper and lighter prosthetics can help people right now. Bionic people are already among us, and improvements in machine vision, movement, and control help both industry and healthcare.
Like other industrial automation, precision agriculture is a response to labor shortages and the need for productivity—in this case, to feed 7.3 billion people. Autonomous machines and IoT will enable more farmers to monitor and manage crops from seed through cultivation, pest control, harvesting, and packaging.
Nearly all nations, regions, and cities worldwide are courting robotics as the key to the future, but each one will have to decide where to specialize its expertise, even as hardware commoditizes. Is it AI research? Drone training? Flexible manufacturing? Social robotics?
Not every educational contest, startup incubator, or merger and acquisition will be successful, but the robotics industry is only going to grow and be fascinating to follow in 2016!
Let me know if you disagree, and even better, check out this year’s webcasts, and let’s compare notes at year’s end!
Date: Jan 4, 2016
Lego Mindstorms robots have done wonders getting kids interested in science and programming. Now Lego Education is launching a new robot learning system, dubbed WeDo 2.0, to help teach kids about engineering, technology, and coding.
The Lego Education WeDo 2.0 system is a combination of hardware and software that gives elementary school children more than 40 hours of hands-on projects.
Lego Education, a division of Denmark’s Lego, unveiled the system at the 2016 International CES, the big tech trade show in Las Vegas this week. The robot itself is a wireless, tablet-ready system that is designed for a younger crowd. The company’s Lego Mindstorms robots are targeted at middle school and high school students, who use them to build robots that can complete tasks and win competitions.
The lessons correlate to educational standards in physical sciences, life sciences, earth and space sciences, and engineering. The lessons are designed to motivate students (in second through fourth grades) to solve real-world science problems.
Lego says that students can use WeDo 2.0 to explore, create, and share their scientific discoveries as they build, program, and modify projects. Teachers receive support through training, curriculum, and built-in assessment. There are eight guided projects and eight open-ended projects.
Here’s the robot that comes with the Lego Education WeDo 2.0 system.
In the “Drop and Rescue” project, students have to design a device to reduce the impacts of a weather-related hazard on humans, animals, and the environment. Students can prototype solutions where there isn’t just a single right answer.
“Teachers know that science and technology skills are crucial for today’s elementary school students, but providing engaging projects that mean something in the real world is a challenge,” said Jeffrey Marlow, a geobiologist at Harvard University and founder of The Mars Academy education and development program, in a statement.
Other projects let students discover the surface of Mars with a model rover, or explore the Amazon rainforest through frog metamorphosis.
“These science lessons do more than just teach students facts to memorize,” Marlow said. “They represent an immersive experience that instills a deeper understanding of the scientific method and evidence-based reasoning.”
The platform includes a Bluetooth Low Energy Smarthub, an electronic building brick that is part of LEGO Power Functions (LPF), a new technology platform for LEGO Education, along with one motor, one tilt sensor, and one motion sensor. It also includes the WeDo 2.0 Core software, which lets kids program through a drag-and-drop graphical user interface.
Lego Education WeDo 2.0 is available today on iPad, Android, PC, and Mac. Chromebook support will be available in the second half of 2016. The WeDo 2.0 Core Set plus software sells for $160. The WeDo ReadyGo 24 Student Class Pack sells for $2,260, and the WeDo YouCreate 24 Student Class Pack sells for $1,930. The YouCreate bundle does not include the extended software, which includes the curriculum pack.
The first WeDo 1.0 system launched in 2009.
Found a wonderful summary of K-12+ competitions at http://robotics.nasa.gov/edu/matrix.php. Check it out.
More for me than general consumption, but the link below is for a series of talks at the American Society of Engineering Education (ASEE) Engineering Technology Leadership Institute (ETLI) that I wanted to keep: