Robotics

Robots and the Future

August 15, 2010 by Steve Meyer


In the field of robotics, where is the line between remote control, software control, and autonomous control? (No, I'm not going after the consciousness question; it's way too complicated.)

Part of the problem may have to do with our use of the word "intelligence". We talk about the increasing "intelligence" of processors, and particularly about the cost of "intelligent" control dropping to the point where it is suddenly economical to pair a microcontroller with a motor to achieve new levels of performance in energy management or some other critical parameter. That, in turn, opens new performance capabilities in robot design.

Increasingly, industrial robotics involves the use of vision systems to acquire the location and orientation of parts so that the robot system can interface smoothly with the "real world". If you have ever been to an industrial trade show and watched the delta robots making cookies, it is an impressive sight to behold: incredible throughput and accuracy. And that's what it's all about in industry: higher productivity and improved product quality.
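
To make that vision step concrete, here is a minimal sketch of how a detected part might be handed to a robot, assuming a camera that has been calibrated to the robot's XY plane with a simple affine transform. All the names and numbers below are illustrative assumptions, not taken from any particular system; real cells use full camera calibration.

```python
from dataclasses import dataclass

@dataclass
class PartDetection:
    px: float          # pixel x of the part centroid
    py: float          # pixel y of the part centroid
    angle_deg: float   # part orientation measured in the image

# Affine calibration found once by jogging the robot to a few known
# pixel locations: robot_xy = A * pixel_xy + b.
A = [[0.5, 0.0],
     [0.0, -0.5]]      # mm per pixel; image y axis is flipped
b = [100.0, 250.0]     # mm offset of the camera frame in the robot frame

def pixel_to_robot(det: PartDetection):
    """Map a camera detection to a robot-frame pick pose (x mm, y mm, deg)."""
    x = A[0][0] * det.px + A[0][1] * det.py + b[0]
    y = A[1][0] * det.px + A[1][1] * det.py + b[1]
    return x, y, det.angle_deg

det = PartDetection(px=320, py=240, angle_deg=30.0)
x, y, theta = pixel_to_robot(det)
print(f"move gripper to ({x:.1f}, {y:.1f}) mm, rotate wrist to {theta:.1f} deg")
```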

But where is the line between remote control and automatic control? A remote manipulator for the nuclear industry, the big application that drove early robots, is a remote servo loop: a series of servo motors and controls powering mechanical systems so that work dangerous to humans can be done from a safe distance. The da Vinci medical robot is a phenomenally improved version of the same thing: a remote-controlled robot, guided by direct inputs from a surgeon and offering very sophisticated haptic feedback, whose end effectors operate a variety of surgical instruments and actually increase the precision and speed with which doctors can perform certain procedures.

Is this a robot? Sure!

When we watch welding and painting robots making cars, we are watching decades of technology development in action. There has been significant effort to improve the actuator hardware, and probably many man-years of software development to better describe each task and its safety and performance constraints, in order to create machines that are not only reliable but increasingly efficient at jobs where humans cannot compete on productivity. These are very sophisticated automatic applications, but certainly not autonomous. The boundaries of the application and its programming are narrowly defined. Again, it's about repetition, speed, and accuracy.

And, yes, we call these robots, too.

But increasingly there is discussion about the next frontier of robotics. Where are the next big applications coming from? Most of the big robotics companies in Japan and Europe are talking about personal service robots. You can let your imagination run wild here; anything is possible. Certainly the service robot for NASA is interesting because it, again, follows the pattern of doing tasks in places where it is difficult for humans to operate.

Is a Jeep that can be programmed to find a path and drive from one place to another autonomously a robot? Yes, but we may be pushing the boundaries here just a bit. These applications fall into the realm of artificial intelligence, whose programming techniques and software languages were first being described only about 30 years ago. And at this point we are forced into the debate about what intelligence is. Are these systems capable of "learning", and what exactly is learning? More importantly, as all good science fiction movie watchers will ask, can a machine exceed its programming? (See? I still haven't started on consciousness.)

Robotics researchers have been pushing the envelope for the last 30 years, since the inception of "artificial intelligence". The basis of artificial intelligence programming is the modeling of human expertise and the mimicking of human behavior in a variety of circumstances.

One aspect of artificial intelligence gave rise to expert systems. Complex systems like diesel locomotives are very difficult to repair because of the large number of parts operating together. The experience humans accumulate over years of working with these locomotives needed to be captured, so that each new generation of workers would not have to apprentice for long periods just to learn how to troubleshoot them. So programmers in the early days of AI were employed to learn, and then encode, the diagnostic procedures that skilled workmen had developed over many years.
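
A toy sketch of the idea, in the spirit of those early diagnostic expert systems: the captured knowledge becomes symptom-to-fault rules, and the program fires whichever rules match. The symptoms and faults below are invented for illustration, not taken from any real locomotive manual.

```python
# Each rule pairs a set of required symptoms with a suspected fault.
RULES = [
    ({"low_power", "black_exhaust"}, "clogged air filter"),
    ({"low_power", "fuel_smell"}, "leaking injector"),
    ({"no_crank", "dim_lights"}, "weak battery"),
]

def diagnose(symptoms):
    """Return faults whose required symptoms are all present, most specific first."""
    matches = [(len(required), fault) for required, fault in RULES
               if required <= symptoms]
    return [fault for _, fault in sorted(matches, reverse=True)]

print(diagnose({"low_power", "black_exhaust", "rough_idle"}))
# -> ['clogged air filter']
```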

These programs were very successful, but in no way do they replace human intelligence and insight. They are simply an example of subtlety in programming a specific area of human experience. Speech recognition, by contrast, remains a challenge after decades of effort, limited to transcription applications and simple material-handling instructions.

Another early application was large-scale logistical mapping, another home for expert systems. What is the most economical way to use airplanes to transport people around the US? When you consider a large air carrier and the number of airplanes, flights, and destinations, and how they might be mapped together to get the best use out of the fleet, the problem is too large and complex for a single human to work with. Enter the expert system programmer.
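
A miniature illustration of why such problems outgrow a single planner: even a toy fleet assignment means weighing every way to pair aircraft with routes, and the number of pairings grows factorially with fleet size. The aircraft, routes, and costs below are made up for the example.

```python
from itertools import permutations

aircraft = ["A320", "B737", "E190"]
routes = ["DFW-ORD", "DFW-LAX", "DFW-ATL"]
# cost[i][j] = operating cost of aircraft i flying route j ($k per day)
cost = [[40, 55, 35],
        [42, 50, 38],
        [30, 65, 28]]

# Brute force: try every assignment and keep the cheapest.
# 3 aircraft -> 6 permutations; 20 aircraft -> ~2.4e18, hence real solvers.
best = min(permutations(range(3)),
           key=lambda p: sum(cost[i][p[i]] for i in range(3)))

for i, j in enumerate(best):
    print(f"{aircraft[i]} -> {routes[j]}")
print("daily cost: $%dk" % sum(cost[i][best[i]] for i in range(3)))
```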

But in none of these cases can a computer program exceed the boundaries of its programming. Can the autonomous Jeep get from its starting point to its destination? Yes, given many man-years of programming, a vast array of computing power, a proper deployment of sensors and actuators, and a lot of stored energy.

Can the autonomous Jeep perform any other task? No. Regardless of its sophistication, the machine cannot exceed the boundaries of its programming.

Can we teach machines to learn? So far, only in the most crude and rudimentary way. But the course of the learning is again bounded by the programming.

And again, I will defer discussion of true intelligence or consciousness.

But what robotics can do to expand its usefulness is mimic simple human tasks where it is cost effective and where the robot can "outproduce" or exceed the precision of a human. Robotic welding, for example, has reached the point where a basic robot welding cell costs less than $50,000, so the cost of entry, the learning curve, and the complexity of implementing a welding cell in a small production facility are all very reasonable.

Will robots be used in "human service" applications? Sure. "Robot, vacuum my living room." No sweat. We can already do that with a Roomba, though it doesn't have voice recognition yet. We have robots that can mow the grass in the front yard while avoiding shrubs and trees. Very cool.

Will we have robot servants like C-3PO in Star Wars? Hopefully more intelligent ones; C-3PO was kind of dumb. Simple tasks like serving a drink at a bar? Yes, that's been done too, although the bartending robot doesn't hold philosophical conversations with its customers.

Will robots be able to provide basic care in hospitals and for the elderly? Anything is possible. It will come down to how far we can push the envelope of programming, safety, and return on cost. Certainly we can get a robot to fetch a cold beer from the fridge. But if the fridge is empty, can it run out to the store and get us a six-pack?


Not anytime soon.





Iranian Robot Walks, Stands On One Leg

Researchers at the University of Tehran, in Iran, last month unveiled an adult-size humanoid robot called Surena 2.

The initial press reports in Iran's official news media didn't include many details, saying only that it could "walk like a human being but at a slower pace" and perform some other tasks, and there were questions about the robot's real capabilities.

IEEE Spectrum obtained more information about Surena, as well as images and videos showing that the robot can indeed walk — and even stand on one leg.

Aghil Yousefi-Koma, a professor of engineering at the University of Tehran who led the Surena project, tells me that the goal is to explore "both theoretical and experimental aspects of bipedal locomotion."

The humanoid relies on gyroscopes and accelerometers to keep its balance and move its legs, still very slowly, but Yousefi-Koma says his team is developing a "feedback control system that provides dynamic balance, yielding a much more human-like motion."
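
For a sense of what such a feedback loop involves, here is a minimal sketch of one common ingredient: a complementary filter that blends a gyroscope's fast-but-drifting rate with an accelerometer's noisy-but-absolute tilt reading, followed by a proportional correction. This is a generic illustration under assumed sensor values, not Surena 2's actual controller, which has not been published.

```python
import math

def complementary_filter(gyro_rate, accel_x, accel_z, angle, dt, alpha=0.98):
    """Update the tilt-angle estimate (radians) from one sensor sample."""
    gyro_angle = angle + gyro_rate * dt          # integrate angular rate
    accel_angle = math.atan2(accel_x, accel_z)   # tilt from the gravity vector
    return alpha * gyro_angle + (1 - alpha) * accel_angle

angle, dt = 0.0, 0.01  # start upright, 100 Hz control loop
for _ in range(100):
    # fake sensor readings: slight forward lean plus a small gyro drift
    angle = complementary_filter(gyro_rate=0.002, accel_x=0.17,
                                 accel_z=9.8, angle=angle, dt=dt)

corrective_torque = -20.0 * angle  # proportional balance correction (gain assumed)
print(f"estimated lean: {math.degrees(angle):.2f} deg, "
      f"torque command: {corrective_torque:.2f} N*m")
```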

Surena 2, which weighs in at 45 kilograms and stands 1.45 meters tall, has a total of 22 degrees of freedom: each leg has 6 DOF, each arm 4 DOF, and the head 2 DOF. An operator uses a remote control to make the robot walk and move its arms and head. The robot can also bow. Watch:
http://www.youtube.com/watch?v=4bE9LyELTRs&feature=player_embedded

Surena doesn't have the agile arms of Hubo, the powerful legs of PETMAN, or the charisma of ASIMO. But hey, this is only the robot's second generation, built by a team of 20 engineers and students in less than two years. A first, much simpler version of the robot, with only 8 DOF, was demonstrated in late 2008.

Yousefi-Koma, who is director of both the Center for Advanced Vehicles (CAV) and the Advanced Dynamic and Control Systems Laboratory (ADCSL) at the University of Tehran, says another goal of the project is "to demonstrate to students and to the public the excitement of a career in engineering."

Next the researchers plan to develop speech and vision capabilities and improve the robot’s mobility and dexterity. They also plan to give Surena “a higher level of machine intelligence,” he says, “suitable for various industrial, medical, and household applications.”

The robot was unveiled by Iranian President Mahmoud Ahmadinejad on July 3rd in Tehran as part of the country’s celebration of “Industry and Mine Day.” The robot is a joint project between the Center for Advanced Vehicles and the R&D Society of Iranian Industries and Mines.




Robotic Arm's Big Flaw: Patients in Wheelchairs Say It's 'Too Easy'

ScienceDaily (Sep. 24, 2010) — One touch directs a robotic arm to grab objects in a new computer program designed to give people in wheelchairs more independence. University of Central Florida researchers thought the ease of using the program's automatic mode would be a huge hit. But they were wrong: many participants in a pilot study didn't like it because it was "too easy."

Most participants preferred the manual mode, which requires them to think several steps ahead and either physically type in instructions or verbally direct the arm with a series of precise commands. They favored the manual mode even though they did not perform tasks as well with it.

"We focused so much on getting the technology right," said Assistant Professor Aman Behal. "We didn't expect this."

John Bricout, Behal's collaborator and the associate dean for Research and Community Outreach at the University of Texas at Arlington School of Social Work, said the study demonstrates how people want to be engaged, but not overwhelmed, by technology. The psychological theory of flow describes this need for a balance between challenge and capacity in life.

"If we're too challenged, we get angry and frustrated. But if we aren't challenged enough, we get bored," said Bricout, who has conducted extensive research on adapting technology for users with disabilities. "We all experience that. People with disabilities are no different."

The computer program is based on how the human eye sees. A touch screen, computer mouse, joystick or voice command sends the arm into action. Then sensors mounted on the arm see an object, gather information and relay it to the computer, which completes the calculations necessary to move the arm and retrieve the object.
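
As a rough sketch of that pipeline, the automatic mode can be pictured as a sense-then-move loop: one user input selects a target, the arm's sensors report where the object is, and the controller steps the gripper toward it. Everything below (the names, the canned sensor value, the straight-line motion model) is an assumption for illustration; the UCF software itself is not public.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def sense_object(touch_point):
    """Stand-in for the arm-mounted sensors: map a screen touch to a 3D
    object position. Here the touch is ignored and a value is canned."""
    return Vec3(0.40, 0.10, 0.25)  # meters, in the arm's base frame

def move_toward(gripper, target, step=0.05):
    """Move the gripper one fixed step along the straight line to the target."""
    d = Vec3(target.x - gripper.x, target.y - gripper.y, target.z - gripper.z)
    dist = (d.x**2 + d.y**2 + d.z**2) ** 0.5
    if dist <= step:
        return target, True                      # close enough: snap and grasp
    s = step / dist
    return Vec3(gripper.x + d.x*s, gripper.y + d.y*s, gripper.z + d.z*s), False

target = sense_object(touch_point=(312, 188))    # automatic mode: one touch
gripper, done = Vec3(0.0, 0.0, 0.3), False
while not done:
    gripper, done = move_toward(gripper, target)
print(f"grasp at ({gripper.x:.2f}, {gripper.y:.2f}, {gripper.z:.2f}) m")
```

The manual mode the participants preferred would replace the `while` loop with individual user commands, which is exactly the extra engagement the study found people wanted.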

Behal is seeking grants to translate the study's findings into a smoother "hybrid" mode that is more interactive and challenging for users and features a more accurate robotic arm. Laser, ultrasound and infrared technology coupled with an adaptive interface will help him achieve his goals.

The key is to design technology that can be individualized with ease, Behal said. Some patients will have more mobility than others, and they may prefer a design closer to the manual mode. Though the automatic mode wasn't popular in the pilot study, it may be the best option for patients with more advanced disease and less mobility.

Bob Melia, a quadriplegic who advised the UCF team, says the new technology will make life easier for thousands of people who are dependent on others because of physical limitations.

"You have no idea what it is like to want to do something as simple as scratching your nose and have to rely on someone else to do it for you," Melia said. "I see this device as someday giving people more freedom to do a lot more things, from getting their own bowl of cereal in the morning to scratching their nose anytime they want."

Behal's initial research was funded with a grant from the National Science Foundation and through a pilot grant from the National Multiple Sclerosis Society. Behal presented his findings at the 2010 International Conference on Robotics and Automation in Anchorage, Alaska.

Behal is collaborating with Bricout, who previously worked in the College of Health and Public Affairs at UCF, to apply for another grant in the area of assistive technology.

The research team includes Dae-Jin Kim, Zhao Wang, and Rebekah Hazlett from UCF, John Bricout from UT Arlington, and Heather Godfrey, Greta Rucks, David Portee and Tara Cunningham from Orlando Health Rehabilitation Institute. The institute helped recruit patients for the study.




Giant Dallas Robot Cited as Best Public Art 


By now most residents of the Dallas/Fort Worth area are aware of the giant, 35,000-pound steel robot that towers over DART's Deep Ellum rail station. Robot builders may also know it from coverage in Robot Magazine. Now the rest of the world is taking notice: the prominent arts organization Americans for the Arts has included the Dallas robot, known officially as Traveling Man, on its list of the 40 best public art works in the US and Canada. Read on to learn more about Traveling Man and to see more photos of the big robot and its little chrome friends.


So what's the story behind this giant robot? A combination of opportunities and influences led to its creation. Dallas Area Rapid Transit, or DART as it's known locally, was expanding into the Deep Ellum area with a new rail line and a Deep Ellum rail station. Deep Ellum is the historic Dallas arts district that has produced a long list of musical and visual artists. The area is also well known for its many public art pieces, many improvised in local do-it-yourself fashion. Painters and sculptors often create art on the exteriors of their own or other buildings in the area.


Traveling Man Walking Tall sculpture seen (from left to right) as represented in paint on the Deep Ellum news wall, as a steel superstructure during construction, and in its finished form.

The DART rail plans called for the destruction of a favorite landmark, the Deep Ellum tunnel, whose sides were covered with art murals. Since DART sets aside a small budget for public art at each of its stations, it was decided that DART would commission local Deep Ellum artists to create a public art piece around the new station. The main pieces were to be giant sculptures designed by local artists Brad Oldham and Brandon Oldenburg, who created a set of three large metal works known collectively as Traveling Man.


Traveling Man - The Awakening

The first sculpture, titled Awakening, shows the robot's head emerging from the ground. The backstory, imagined by the artists, explains that songbirds inhabited an elm tree that grew above a buried steam locomotive. When a Deep Ellum musician spilled his gin on the spot, the Traveling Man formed underground. The robot awakened and emerged along with his songbirds.


Traveling Man - Waiting on a Train

In the second sculpture, Waiting on a Train, our giant robot has walked a block down the street and is now seated on a large concrete slab salvaged from the historic Deep Ellum tunnel. The robot is playing a guitar as the cars and trains pass by. In the words of Brad Oldham, "he reminds us that life can slow down a bit to hear the music".


Traveling Man - Walking Tall

The third and largest sculpture, Walking Tall, depicts the Traveling Man strolling past the new DART station with a smile on his face and one of the songbirds perched on his arm. This sculpture links the surrounding neighborhood with the station, welcoming visitors and residents. Walking Tall stands nearly 40 feet tall, weighs 35,000 pounds, and is supported by concrete piers sunk 32 feet into the ground. Each sculpture is composed of a steel skeleton covered with a stainless steel skin attached by monobolt rivets.


The chrome bird/chairs offer endless possibilities to photographers

Aside from a few cranks who were outraged that city funds were spent to beautify the city, there has been nearly unanimous support for DART's Deep Ellum art project. It has been pointed out that the cost of the installation is roughly equivalent to about 57 feet of DART rail, and the success of the installation has pretty much silenced the critics. Almost as soon as they were completed, the sculptures began attracting visitors and photographers. At almost any time of day or night, you can spot people gawking at Traveling Man or photographing their friends sitting on the surrounding chrome birds, which double as chairs.



So for Dallas residents, the recognition of Traveling Man by Americans for the Arts just confirms what they already knew: everybody loves giant robots!