
Robot dog can walk after ONE hour of training as scientists hope it can play fetch in the future

Scientists have developed a robot dog that can teach itself to walk in just one hour.

In a video released by the researchers, the four-legged robot is first seen with its legs in the air, flailing – but after only 10 minutes it can take steps, and by the one-hour mark it walks fairly easily, rolls over from its back and even recovers after being knocked down with a stick by one of the researchers.

Unlike many robots, this one was not shown what to do beforehand in a computer simulation.

Danijar Hafner, an artificial intelligence researcher at the University of California, Berkeley, worked with his colleagues to train the robot with reinforcement learning.


A robotic dog can be trained to walk, roll over and navigate obstacles in about an hour, researchers at the University of California, Berkeley have shown. Pictured above, the robot at the five-minute mark

“Typically, robots learn through a large amount of trial and error within computer simulations that are much faster than real-time,” Hafner explains to DailyMail.com via email.

‘After solving a task such as getting up and running in simulation, the learned behavior is then performed on a physical robot.

‘But simulations cannot capture the complexity of the real world, so a behavior that works well in simulation may fail to solve the task in the real world.’

This type of machine learning involves training algorithms by rewarding them for taking certain actions in their environment.
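The reward-driven loop described above can be sketched with a deliberately tiny example. This is illustrative only, not the researchers' code: the one-dimensional track, the goal state and the reward are all made up, and the algorithm shown is standard tabular Q-learning rather than Dreamer.

```python
import random

random.seed(0)  # for reproducibility of this toy run

# Toy illustration: an agent on a 1-D track earns a reward only when it
# reaches the goal, and learns a value for each (state, action) pair
# purely by trial and error.
GOAL = 4
ACTIONS = [-1, +1]  # step left or step right

def step(state, action):
    """Apply an action; reward +1 only when the goal is reached."""
    new_state = max(0, min(GOAL, state + action))
    reward = 1.0 if new_state == GOAL else 0.0
    return new_state, reward

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        state = 0
        while state != GOAL:
            if random.random() < epsilon:
                # Explore: try a random action occasionally.
                action = random.choice(ACTIONS)
            else:
                # Exploit: pick the best-known action (random tie-break).
                action = max(ACTIONS, key=lambda a: (q[(state, a)], random.random()))
            new_state, reward = step(state, action)
            # Q-learning update: move the estimate toward the reward
            # plus the discounted value of the next state.
            best_next = max(q[(new_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = new_state
    return q

q = train()
# After training, the greedy policy steps right (+1) from every state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
print(policy)
```

The key point the article makes is visible here: nobody tells the agent how to reach the goal; the reward signal alone shapes the behavior.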

Hafner and his collaborators – Philipp Wu and Alejandro Escontrela – used an algorithm called Dreamer, which builds a model of the real world from past experience and lets the robot carry out trial-and-error calculations within that model.

Researchers used an algorithm called Dreamer that uses past experiences to build a real-world model for the robot to learn from. Image above is the robot at 30 minutes


“The Dreamer algorithm has recently shown great promise for learning from small amounts of interaction by planning within a learned world model,” the researchers say in their paper, which has not yet been peer-reviewed.

‘Learning a world model to predict the outcomes of potential actions enables planning in imagination, thus reducing the amount of trial and error required in the real environment.’
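The quoted idea – planning in imagination inside a learned world model – can be illustrated with a minimal sketch. This is not Dreamer itself: the linear dynamics, the goal and the least-squares model fit below are invented purely to show the pattern of learning a model from a few real transitions, then evaluating actions inside the model instead of the real world.

```python
# Toy illustration of 'planning in imagination' (not the Dreamer code).

def real_dynamics(state, action):
    """The true environment: used only while collecting experience."""
    return state + 2 * action

# 1. Collect a small amount of real experience.
experience = [(s, a, real_dynamics(s, a)) for s in range(3) for a in (-1, 0, 1)]

# 2. Learn a world model: estimate the action coefficient by least squares.
num = sum(a * (s2 - s) for s, a, s2 in experience)
den = sum(a * a for s, a, s2 in experience)
coef = num / den  # recovers the true coefficient, 2.0

def imagined_dynamics(state, action):
    """The learned model: predicts outcomes without touching the real world."""
    return state + coef * action

# 3. Plan in imagination: pick the two-step action sequence whose
# *imagined* outcome lands closest to the goal state.
goal = 7
candidates = [(a1, a2) for a1 in (-1, 0, 1) for a2 in (-1, 0, 1)]
best = min(
    candidates,
    key=lambda seq: abs(imagined_dynamics(imagined_dynamics(0, seq[0]), seq[1]) - goal),
)
print(coef, best)
```

The point of the quote is captured in step 3: every candidate plan is evaluated inside the learned model, so only the chosen plan ever costs real-world trial and error.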

'Reinforcement learning will be a cornerstone tool in the future of robotic control,' said one scientist who was not involved in the study. Image above is the robot at 40 minutes


By the one-hour mark, the robot dog, pictured above, can navigate its surroundings fairly well, roll over and more


After the robot learned to walk, it could also learn to adapt to other, less predictable situations – such as being prodded with a stick by the researchers.

Even with reinforcement learning, which has proved adept at beating humans at board and video games, teaching robots to perform well in the real world is extremely challenging – because engineers have to program whether each action earns a reward, based on whether it is the behavior the scientists want.

“Applying reinforcement learning to physical robots is a big challenge because we cannot speed up time in the real world, and robotic simulators may not capture the real world well enough,” Hafner and his colleagues told DailyMail.com.

“While Dreamer shows promising results, learning on hardware over many hours causes wear and tear on robots that may require human intervention or repair,” the researchers say in the study. Pictured above, the robot navigates an obstacle

‘Our project has shown that learning world models can drastically accelerate robot learning in the physical world.

‘This brings reinforcement learning closer to solving complex automation tasks, such as manufacturing and assembly, and even self-driving cars.’

‘A roboticist will have to do this for every task [or] problem that they want the robot to solve,’ Lerrel Pinto, an assistant professor of computer science at New York University who specializes in robotics and machine learning, explained to MIT Technology Review.

That would come with a voluminous amount of code and a variety of situations that just can’t be predicted.

The research team mentions other obstacles to this type of technology:

“While Dreamer shows promising results, learning on hardware over many hours causes wear and tear on robots that may require human intervention or repair,” they say in the study.

‘In addition, more work is needed to explore the boundaries of Dreamer and our baselines by training longer.

‘Ultimately, we see tackling more challenging tasks, possibly by combining the benefits of real-world rapid learning with those of simulators, as an influential future research direction.’

Hafner says he hopes to teach the robot to follow spoken commands, and to connect cameras to the dog to give it vision – all of which would allow it to do more typical dog activities, such as playing fetch.

In a separate study, researchers from Germany's Max Planck Institute for Intelligent Systems (MPI-IS) revealed that their robot dog, named Morti, can quickly learn to walk by using a complex algorithm that incorporates sensors in its feet.

“As engineers and roboticists, we sought the answer by building a robot that has reflexes like an animal and learns from its mistakes,” says Felix Ruppert, a former doctoral student in the Dynamic Locomotion research group at MPI-IS, in a statement.

‘If an animal stumbles, is that a mistake? Not if it happens once. But if it stumbles frequently, it gives us a measure of how well the robot walks.’

The robot dog works by using a complex algorithm that guides how it learns.

Information from the foot sensors is matched against data from the machine's virtual spinal cord, which runs as a program inside the robot's computer.

The robot dog learns to walk by constantly comparing sent and expected sensor information, running reflex loops and adjusting the way it controls its motors.
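That compare-and-adjust reflex loop can be sketched in a few lines. This is a hypothetical stand-in, not the MPI-IS controller: the sensor model, target reading and gain below are invented to show the pattern of shrinking the gap between expected and actual sensor data.

```python
# Toy illustration (assumed, not the MPI-IS code): the controller predicts
# what the foot sensor should read at each step, compares it with the
# actual reading, and nudges its motor command to shrink the mismatch.

def foot_sensor(command):
    """Stand-in for the real foot sensor: the leg responds more weakly
    than the naive controller expects."""
    return 0.6 * command

def walk(steps=50, gain=0.5, target=1.0):
    command = 1.0
    for _ in range(steps):
        expected = target          # what a clean footfall should read
        actual = foot_sensor(command)
        error = expected - actual  # reflex loop: measure the mismatch
        command += gain * error    # adjust the motor command accordingly
    return command, foot_sensor(command)

command, reading = walk()
print(round(reading, 3))
```

After a few dozen iterations the sensor reading converges to the expected value, which is the sense in which the robot 'learns from its mistakes' at each step.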

Scientists from the Max Planck Institute for Intelligent Systems in Germany trained a robot dog known as Morti to walk with algorithms


WHAT IS BOSTON DYNAMICS’ SPOT MINI ROBO-DOG?

Boston Dynamics first showed off Spot, the most advanced robot dog ever made, in a video posted in November 2017.

The company, best known for Atlas, its 5 foot 9 (1.7 meter) humanoid robot, has unveiled a new ‘lightweight’ version of its Spot robot.

The robotic dog was shown running around a yard, with the famously secretive company promising more information ‘coming soon’.

‘Spot is a small four-legged robot that fits snugly in an office or home,’ says the company on its website.

It weighs 25 kg (55 lb), or 30 kg (66 lb) with the robotic arm attached.

Spot is fully electric and can run for about 90 minutes on a charge, depending on what it is doing, the company says, adding: ‘Spot is the quietest robot we’ve built.’

Spot was first unveiled in 2016, and an earlier version of the SpotMini, with an extended neck, was previously shown helping around the house.

In the firm’s previous video, the robot is shown running out of the firm’s HQ and into what appears to be a house.

There, it helps in loading a dishwasher and carries a can to the trash.

It also occasionally encounters a fallen banana skin and falls dramatically – but uses its extended neck to push itself back up.

‘Spot is one of the quietest robots we’ve ever built,’ says the company, because of its electric motors.

‘It has a variety of sensors, including depth cameras, a solid state gyro (IMU) and proprioception sensors in the limbs.

‘These sensors help with navigation and mobile manipulation.

‘Spot performs some tasks autonomously, but often uses a person for high-level guidance.’
