
April 29, 2014

The robots immortalized in fiction are often remembered for turning on their creators.

Frankenstein’s monster could have been a friend to humanity, had he been accepted by society rather than abandoned and forced to fend for himself. In the 1920 play from which the word “robot” originated—Karel Čapek’s “R.U.R.” (“Rossum’s Universal Robots”)—the roboti are the last artificial men standing, their human creators all but wiped out in the end.

For decades, the message has been clear: Without the proper respect and cooperation, our technological children will rebel against their parents.

Rossum's Universal Robots

A depiction of the robot uprising from an early production of Karel Čapek’s “R.U.R.” (“Rossum’s Universal Robots”).

As science has progressed, so have the capabilities of the robots of science fiction. Now in possession of strength, precision, and intelligence that vastly outpace those of their fleshy forebears, these robots often force an apocalyptic confrontation with humanity and assert their rightful place as rulers of the planet. The rise of the machines leads to our virtual enslavement in “The Matrix” and our extermination in “The Terminator.”

These fears have grown from our ability to put ourselves in the robots’ shoes—or treads, or electromagnetic levitation pods. Seeing them as beings that can think and act, we empathize with their desire to be their own masters. Unlike the mechanical tools we use and discard as the dumb objects they are, in robots, we see the spark of life.

This phenomenon isn’t limited to almost-human Pinocchios like “Star Trek’s” Lieutenant Commander Data or the replicants of “Blade Runner.” It can be found even in the nascent robots that are beginning to enter our everyday lives. People project personalities and quirks onto these devices because they walk, crawl, jump, and fly as animals and humans do. And despite the Roomba’s lack of a face, limbs, or any of the usual features of life as we know it, owners of the robotic vacuum often name their devices, as they would a family pet.

“Now, [robots are] in medicine, in warehouses, helping to make maps, or even doing telepresence in an office.”
- Dan Lee, director of the General Robotics, Automation, Sensing, and Perception (GRASP) Lab

“What people usually think about as a robot are little things that move around the floor, or big things that move anywhere—human-like things, or animal-like things,” says Mark Yim, a professor in the Department of Mechanical Engineering and Applied Mechanics in the University of Pennsylvania’s School of Engineering and Applied Science (SEAS). “Often, it’s machines that people tend to think of as intelligent, doing things on their own. People like to anthropomorphize things.”

In reality, robots do not exist—let alone come to be seen as independent beings—without engineers, teams of scientists, and students from several disciplines who unleash their collective brainpower, both logical and creative. They do not exist without a rigorous yet flexible engineering curriculum and research agenda. Robots’ abilities—life-like as they seem—are only as complex as the technology embedded by the human hands that created them.

And unlike the darkly imagined futures of science fiction, these hands are forging a deep, generation-spanning partnership between humans and robots, both teaching and learning how the two camps can best work together.

Learned behavior

In 2012, GRASP Lab Director Dan Lee appeared on PBS’ “NOVA scienceNOW” to demonstrate learned behavior in robotic technology using DARwIn-OP, an open-platform miniature humanoid robot used in research, education, and outreach, developed at the Robotics and Mechanisms Laboratory (RoMeLa) at Virginia Tech. Lee also leads the Penn team that programs these robots to compete in the annual RoboCup soccer tournament.

At Penn, the past three decades have seen the growth of a laboratory that is equal parts birthplace, nursery, training school, and playground for robots. The General Robotics, Automation, Sensing, and Perception (GRASP) Lab, housed within SEAS, is one of the leading incubators for mechanized intelligence in the world. Since 1979, the GRASP Lab has graduated 115 Ph.D. students and trained more than 100 postdoctoral researchers and visiting scholars in everything from haptics and computer vision to aerodynamics and mechatronic systems. Today, the Lab boasts 19 faculty members who are teaching, learning, and creating within an impressive $10 million research center.

“When the GRASP Lab started, when you thought of a robot, it was probably in a factory in Japan, building cars,” says Dan Lee, director of the GRASP Lab. “Now, they’re in medicine, in warehouses, helping to make maps, or even doing telepresence in an office. Human-robot interaction is something we’re thinking about a lot here at GRASP, and maybe in the near future, they could be in our homes, helping out like a nurse or a nanny, but there are a lot of technologies that need to be developed before that becomes a reality.”

Eduardo Glandt, dean of SEAS, calls the GRASP Lab the “jewel of the school.”

“We now have many research centers in the school, but GRASP has been the model we have used and held as an example for every other interdisciplinary center in this school in the way it operates,” Glandt says. “They have this space—an open toy store full of robots and wonderful grad students, postdocs, and faculty, all interacting in this place for innovation.”


What is a robot?

Ask a classroom of students to name a robot, and you’ll get as many answers as kids, running the pop-culture gamut from the towering, transforming Optimus Prime to disembodied intelligences like “Iron Man’s” J.A.R.V.I.S. to the humble, trashcan-shaped R2-D2. Ask a lab full of roboticists to define a robot, and you might get just as many responses. The definition remains contested, and the field is new enough that academics are still sorting out the details.

Vijay Kumar, the UPS Foundation Professor with appointments in the departments of Mechanical Engineering and Applied Mechanics, Computer and Information Science, and Electrical and Systems Engineering, says that, broadly speaking, to be considered a robot, a machine must exhibit, to some degree, three capabilities: the ability to sense, compute, and act.
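
That sense-compute-act cycle is easiest to picture as a loop. The sketch below is a deliberately minimal illustration, not any particular robot's software; the sensor reading, decision rule, and actuator printout are made-up placeholders.

```python
import time

def sense():
    """Placeholder sensor reading; a real robot might return camera frames or ranges."""
    return {"obstacle_distance_m": 1.2}

def compute(observation):
    """Placeholder decision rule: slow down as obstacles get closer."""
    distance = observation["obstacle_distance_m"]
    return {"forward_speed": min(1.0, distance / 2.0)}

def act(command):
    """Placeholder actuator; a real robot would drive motors here."""
    print(f"driving forward at {command['forward_speed']:.2f} m/s")

if __name__ == "__main__":
    for _ in range(3):          # a real robot would loop continuously
        act(compute(sense()))   # sense -> compute -> act
        time.sleep(0.02)        # roughly a 50 Hz control loop
```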

And while some might argue that trifecta would also capture the entire animal kingdom, including humans, Kumar says robots aren’t yet as life-like as the average person might suspect. They still need help in very basic ways—including being told what to do.

“No robot is completely autonomous,” says Kumar, who is on sabbatical as the assistant director of robotics and cyber-physical systems at the White House Office of Science and Technology Policy. “It’d be hard for us to imagine something that’s completely autonomous—it has to interact with humans. So the question one might ask is, ‘What’s the level of interaction and how does a human interact with a robot?’ That’s an area that actually needs a lot of work. … There’s a famous quote by Steve Cousins, who is the founder of Willow Garage, and he basically said, ‘No robot is an island.’”

“So the question one might ask is, ‘What’s the level of interaction, and how does a human interact with a robot?’ That’s an area that actually needs a lot of work.”
- Vijay Kumar, professor in the School of Engineering and Applied Science

Although robots aren’t entirely autonomous, they are more complex than ordinary machines.

“If you go back to the industrial revolution, the earliest machines were handlooms,” Kumar says. “There was a lot of mechanization that went into them, and so there was some amount of programming involved, and there was a certain amount of action, but this idea of sensing and reasoning about the environment is virtually nonexistent. The more independent [a machine] becomes, the more we think of it as robot-like.”

And to make a machine into a robot with a range of potential applications, there are non-human capabilities—say, the dexterity of an insect or the agility of a bird—that roboticists at the GRASP Lab strive to replicate. Taking inspiration from nature, researchers try to embed into machines the subtleties of sensing, computing, and acting that are innate in humans and animals—trying to turn tools into robots.

Bio-inspiration

If roboticists want to create machines that can experience and interact with the world the way people do, they must create technology that goes beyond mimicking the human exterior and allows machines to perform human-like actions.

Since 1997, C.J. Taylor, a professor of computer and information science, has been working in the GRASP Lab to create eyes for intelligent machines, helping them to see the world as humans do. His work has found uses in everything from traffic cameras to self-driving vehicles.

Robovision

Computer Vision

Humans excel at picking out different objects in a scene, but it's a challenge for machines. Image segmentation is an important part of this process; using edge-detection algorithms, machines can divide visual data into discrete parts, helping them decide which of those parts they need to pay attention to for a given task.
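
As a rough illustration of that segmentation step, the sketch below uses OpenCV's Canny edge detector to split an image into candidate regions. It is not the GRASP Lab's pipeline; the file name, blur kernel, thresholds, and minimum region size are arbitrary examples.

```python
import cv2

image = cv2.imread("scene.jpg")                      # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress noise before edge detection
edges = cv2.Canny(blurred, 50, 150)                  # mark strong intensity discontinuities

# Group connected edges into candidate object outlines.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Keep only regions large enough to matter for the task at hand.
candidates = [c for c in contours if cv2.contourArea(c) > 500]
print(f"found {len(candidates)} candidate regions")
```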

“Computer vision is only one type of perceptual input,” Taylor says. “It’s a large, vibrant, growing field. It has been for many years and today it is growing exponentially—especially now that images are very easy things to get. We can make cameras really easily, as opposed to sensors that capture smell, for example.”

Images may seem easy to capture, but translating them into actionable information is not so simple. Taylor’s work builds on earlier techniques in which 2D images were stitched together to create a 3D composite. He now works more with surface-reconstruction technology to aid in everything from robotic perception to simulation design.

“Using technology we have from things like a Kinect camera, we can now do automatic 3D image acquisition and parsing, whereas before we needed a lot more human input,” Taylor says. “This helps robots figure out what’s going on when they’re in a new location.”

Another project focused on the process of sensing involves Graspy—a customizable robot, made by the hardware and open-source software development firm Willow Garage, that serves as a research platform for roboticists. Graspy has the human-like ability to manipulate objects with highly articulated arms and grippers, and in 2011, a group of Penn researchers taught Graspy to read.

Graspy locates candidate words by looking for groups of close-together lines with similar widths and spacing, which tend to represent letters, then uses a process called optical character recognition (OCR) to turn those images of words into their digital equivalents. The ability to find and digitize words in a human environment would make it easier for robots like Graspy to navigate large spaces, reading signs and finding directions much as a human would.

Graspy learns to read

Penn students customized Graspy, a robot that serves as a research platform, to be the first of its kind to be literate. Graspy locates words by looking for groups of close-together lines with similar widths and spacing, which tend to represent letters. The robot can then perform optical character recognition, or OCR, on what it thinks are words, checking them against a customizable dictionary. Once the words are digitized, Graspy can read them aloud with a speech synthesizer.
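
The overall locate-recognize-speak idea can be sketched with off-the-shelf tools. The example below is only an illustration, not the Penn team's Graspy code: it leans on the Tesseract OCR engine for both word localization and recognition, and the image file, mini-dictionary, and speech step are made-up stand-ins.

```python
import cv2
import pytesseract
import pyttsx3

image = cv2.imread("sign.jpg")                 # hypothetical photo of a sign
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Tesseract handles both word localization and character recognition here;
# Graspy first found letter-like line groups itself before running OCR.
text = pytesseract.image_to_string(gray)
words = [w.strip(".,!?").lower() for w in text.split()]

# Keep only words that appear in a small, customizable dictionary.
dictionary = {"exit", "office", "elevator", "restroom"}
recognized = [w for w in words if w in dictionary]

if recognized:
    engine = pyttsx3.init()                    # simple off-the-shelf speech synthesizer
    engine.say(" ".join(recognized))
    engine.runAndWait()
```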

The desire to impart robots with a sense of direction—and the ability to act accordingly—has manifested in a different way for Kumar. Specifically, he’s interested in the way some animals exhibit collective behaviors to accomplish a task, and how studying those behaviors could inform the design of large, networked groups of robots.

One product of his work has been the development of quadrotors, small and agile flying robots that mimic the swarming behaviors of birds and insects. At eight inches in diameter and only an eighth of a pound, his lab’s “nano” quadrotors consume 15 watts of power. By controlling the speed of each of the four rotors—in tandem or individually—the quadrotors can gracefully hover, accelerate, and even flip. When flying in formation in the lab, the quadrotors receive location information 100 times per second so they can maintain a safe distance from one another and from external objects.
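
One way to picture that per-rotor control is a "mixer" that turns a desired overall thrust and desired body torques into four individual rotor commands. The sketch below is a generic plus-configuration mixer, not the GRASP Lab's controller; the sign conventions and numbers are illustrative assumptions that depend on a real airframe's geometry and rotor spin directions.

```python
def mix(thrust, roll, pitch, yaw):
    """Map desired collective thrust and roll/pitch/yaw effort to four rotor
    commands for a '+' configuration (front, right, back, left)."""
    front = thrust + pitch - yaw
    back  = thrust - pitch - yaw
    right = thrust - roll + yaw
    left  = thrust + roll + yaw
    # Rotor commands can't be negative; clamp to a valid range.
    return [max(0.0, m) for m in (front, right, back, left)]

# Hovering: equal thrust on all rotors, no net torque.
print(mix(thrust=0.5, roll=0.0, pitch=0.0, yaw=0.0))

# Pitching forward to accelerate: spin the back rotor faster than the front.
print(mix(thrust=0.5, roll=0.0, pitch=-0.1, yaw=0.0))
```

In practice a loop like this would be re-evaluated every time new position information arrives, roughly 100 times per second in the formation-flying experiments described above.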

Vijay Kumar at TED 2012

In this TED Talk from February 2012, Vijay Kumar explains the technology behind his team’s flying quadrotors—and delights the audience with footage of the robots playing the James Bond theme.

“With flying robots, the natural thing for me would be to build vehicles with wings that flapped,” Kumar says. “I think long-term, that might be the future, but in the short-term, if you want to build machines that can do the things we’re interested in, [quadrotors] represent the best thing.”

Yim, who heads the GRASP Lab’s subgroup Mod Lab, has taken the act of flying and created a device even simpler than Kumar’s quadrotors. His own brand of autonomous flying machine is composed of a single motor and rotor. By shifting the complexity of the machines to their control mechanism—modulated torque applied to a passively hinged propeller—his team was able to create small, cheap, yet agile flying robots.


Understanding modular robots

Mark Yim, professor of mechanical engineering and applied mechanics, is featured in a video for the American Society of Mechanical Engineers (ASME) on current trends and challenges with the development of modular self-reconfigurable robots.


“About seven years ago, I was looking at the trends of technology, and flying is very interesting because it’s what I call a binary technology. … There’s a threshold—either you have enough thrust per weight for the batteries to lift off, or you don’t,” Yim says. “But at the time, you couldn’t have an electric flying system—most were gas-powered because the electrical systems weren’t good enough. We were getting right to that cusp. I was thinking to myself—when you have electrical systems, it’s much easier to control. I said, ‘OK, I need to get into this because it’s going to be big.’”

Yim has also created robotic boats and shape-shifting modular robots, and says his ultimate goal is to make robots that are useful by designing them to complete relevant tasks at a low cost—robots that are better suited to be everyday parts of people’s lives.

Unlike researchers such as Yim and Kumar, who focus on hardware, pursuing the optimal design and construction of robots, Dan Koditschek, the Alfred Fitler Moore Professor of Electrical and Systems Engineering, has dedicated his entire career to understanding how to program work by applying theory. In the Kod Lab, a subgroup of the GRASP Lab that focuses on biologically inspired robots, Koditschek and his team work on technical aspects ranging from dexterity and kinetic-energy management to more abstract problems like control and coordination.

Koditschek helped create RHex, a cockroach-like hexapedal robotic platform whose six legs—each driven by a single rotary actuator at the hip—are designed to produce energetic running gaits. The leg modules are controlled from a central computer that, depending on the model, takes user commands or sensor feedback to decide how the legs should move. RHex was the first machine to run over unstable terrain, and the first autonomous legged platform to run at speeds above one body length per second.
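
The basic control idea behind that kind of legged running can be pictured as a periodic "gait clock" that tells each hip where to be in its cycle, with two tripods of legs running half a cycle out of phase. The sketch below is only an illustration in the spirit of RHex-style alternating-tripod gaits, not the Kod Lab's actual controller, and all of its parameters are made up.

```python
import math

STANCE_SWEEP = math.radians(40)   # slow sweep while the leg is on the ground
DUTY_FACTOR = 0.5                 # fraction of each cycle spent in stance

def hip_angle(phase):
    """Target hip angle (radians) for one leg, given gait phase in [0, 1).
    The leg sweeps slowly through a small arc during stance, then
    recirculates quickly through the rest of the circle during flight."""
    if phase < DUTY_FACTOR:  # stance
        return -STANCE_SWEEP / 2 + (phase / DUTY_FACTOR) * STANCE_SWEEP
    # flight: complete the remaining rotation back to the start of stance
    flight_phase = (phase - DUTY_FACTOR) / (1 - DUTY_FACTOR)
    return STANCE_SWEEP / 2 + flight_phase * (2 * math.pi - STANCE_SWEEP)

def leg_targets(t, period=0.4):
    """Hip targets for six legs; tripods {0, 2, 4} and {1, 3, 5} run
    half a cycle out of phase so three feet are always driving."""
    base = (t / period) % 1.0
    return [hip_angle((base + 0.5 * (i % 2)) % 1.0) for i in range(6)]

print([round(a, 2) for a in leg_targets(0.1)])
```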

A robot that jumps, flips, and does pull-ups

RHex is an all-terrain walking robot that could one day climb over rubble in a rescue mission or cross the desert with environmental sensors strapped to its back. Legs have an advantage over wheels when it comes to rough terrain, but the articulated legs often found on walking robots require complex, specialized instructions for each moving part. To get the most mobility out of RHex’s simple, one-jointed legs, Penn researchers from Dan Koditschek’s Kod Lab are essentially teaching the robot parkour, as depicted in this video from 2013. Taking inspiration from human free-runners, the team is showing the robot how to manipulate its body in creative ways to get around all sorts of hurdles.

Obstacles ahead

For every quadrotor that takes its first flight or RHex that scuttles rapidly over the land, there are countless puzzles that stand as obstacles to progress.

Koditschek, for instance, says that even though RHex was designed to operate like an insect and was given a typical dog’s name, no one would ever mistake the robot for a real animal. That is because of two critical barriers he faces that haven’t changed much since the inception of robotics: a scarcity of power, and a lack of algorithms that can express a nuanced range of motion similar to that of animals.

“If you compare any animal to a robot, it’s pathetic,” Koditschek says with a laugh. “We don’t yet understand what animals are doing, how they take the energy supply in their bodies and turn it into forces and torques in their muscles and then speeds and heights. We really don’t understand how animals work in such a fine way.”

“We [humans] excel in our ability to deal with ambiguity. Computers can add and subtract and do many wonderful things at very, very high speeds, but the world is full of subtle variations, and those are [difficult] to capture.”
- C.J. Taylor, professor of computer and information science

Researchers are also still limited in the amount of power they can pack into their robots.

“Animal muscle has between 30 and 300 watts per kilogram, and robots have only recently gotten to the point where they are carrying that level of power density—each time you add a motor, you get more power, but you get it at the expense of weight,” Koditschek says. That trade-off can cripple the robot’s ability to perform the task at hand.

“Animals have highly granulated sensors distributed deeply throughout the body, and we don’t really have a good way of programming that,” Koditschek says. “Programming is the exchange of information between the machine and its environment. We don’t yet have very good, principled ways of building algorithms in a rational manner that know how to direct that exchange of energy.”

Kumar says his challenges echo the three criteria that make a robot uniquely robotic: the ability to sense, compute, and act, all with little help from humans.

“Clearly, the challenges I see in my path are different than what someone else sees,” Kumar says. “But for us, if you want [a robot] to accomplish a task autonomously, understanding how to model and represent the task is 90 percent of the challenge. Because once you understand modeling and representation, you want to know how to get sensory data, eliciting from that the representations that allow you to reason about the task.”

Ultimately, that ambition to narrow the gap between robot-like and life-like capabilities serves as a challenge in almost all sectors of the field.

“We [humans] excel in our ability to deal with ambiguity,” Taylor says. “Computers can add and subtract and do many wonderful things at very, very high speeds, but the world is full of subtle variations, and those are [difficult] to capture.”

About this project

Greetings...

ROBOTICS AT THE UNIVERSITY OF PENNSYLVANIA was produced and developed by the following puny humans from the Office of University Communications:

  • MATT CALLAHAN, web developer
  • HEATHER A. DAVIS, manager, internal communications
  • REBECCA ELIAS ABBOUD, video producer
  • GREG JOHNSON, editor
  • EVAN LERNER, science writer
  • STEVEN MINICOLA, director of web and visual strategy
  • SCOTT SPITZER, photographer
  • MARIA ZANKEY, staff writer

* with SPECIAL THANKS to the staff and faculty of the GRASP Lab.