
# Product Trends

Researchers Launch 26K+ Object Dataset to Help Robots Learn Shapes

PartNet dataset includes more than 573,000 fine-grained part annotations for better robot task completion.

If you want to have a robot arm open a microwave oven door, the robot needs to know how to identify the parts of the microwave oven and buttons that will open the door. To that end, a group of researchers has launched a large-scale dataset with fine-grained, hierarchical and instance-level part annotations.

At the 2019 Conference on Computer Vision and Pattern Recognition, authors Kaichun Mo of Stanford University and Hao Su, an assistant professor at UC San Diego, partnered with Intel AI and Simon Fraser University to introduce PartNet. The dataset consists of 573,585 fine-grained part annotations (visually and semantically identified subcomponents) for 26,271 shapes (3D point clouds of objects) across 24 object categories (lamp, door, table, chair, etc.).
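To make "hierarchical, instance-level part annotations" concrete, here is a minimal illustrative sketch of the idea in Python. This is not PartNet's actual on-disk format (the real dataset ships JSON hierarchies tied to point-cloud segmentations); the `Part` class and the microwave example below are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    """One node in a shape's part hierarchy (illustrative, not PartNet's schema)."""
    name: str                                   # semantic label, e.g. "door" or "button"
    children: list = field(default_factory=list)  # sub-parts; empty for fine-grained leaves

def count_leaf_parts(part):
    """Count the fine-grained (leaf) parts under a node."""
    if not part.children:
        return 1
    return sum(count_leaf_parts(c) for c in part.children)

# A hypothetical microwave annotated at several levels of granularity,
# echoing the article's example of identifying the door and its buttons.
microwave = Part("microwave", [
    Part("body", [Part("frame"), Part("control_panel", [Part("button")])]),
    Part("door", [Part("handle"), Part("glass")]),
])

print(count_leaf_parts(microwave))  # fine-grained parts: frame, button, handle, glass
```

A robot planner can walk such a tree top-down (microwave → door → handle) to find the specific sub-part an action should target.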

“If we want AI to be able to make us a cup of tea, large new datasets are needed to better support the training of visual AI applications to parse and understand objects with many small details or with important components,” the researchers stated in a blog post. “Existing 3D shape datasets provide part annotations only on a relatively small number of object instances or on coarse, yet non-hierarchical, part annotations, making these data sets unsuitable for applications involving part-level object understanding.”

The researchers said that with the dataset, people can start building a large-scale simulated environment full of objects and all their parts, with the goal of using the virtual world to teach robots about objects, their parts, and how to interact with them.

“For example, a robot [can learn] that pushing a button on a microwave will open the microwave door,” the researchers said. “This will allow us to train robots to complete daily behaviors as humans do, by understanding all of the parts and steps involved.”

The researchers have provided a sample dataset, sample results, and a summary video for those interested in trying PartNet. You can read the full paper here.

Collaborative reinforcement learning

At the conference, Intel AI also released a paper discussing a concept called Collaborative Evolutionary Reinforcement Learning (CERL), which “combines policy gradient and evolution methods to optimize the exploit/explore challenge” of traditional reinforcement learning techniques.

Policy gradient-based RL methods, commonly used by AI researchers today, can exploit rewards for learning, but “they suffer from limited exploration and costly gradient computations,” the researchers said. The Evolutionary Algorithm approach addresses some of the shortcomings of policy gradients, but it takes significant processing time because candidates “are only evaluated at the end of a complete episode.”
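The evolutionary half of this trade-off can be sketched in a few lines. The toy loop below is not Intel's CERL implementation; it is a minimal evolutionary algorithm on a stand-in fitness function, showing both why the method explores well (random mutations) and why it is slow (every candidate must be scored on a full episode before any selection happens).

```python
import random

def evaluate(params):
    """Stand-in 'episode return' for a 1-D policy parameter; peaks at params == 3.0."""
    return -(params - 3.0) ** 2

def evolve(pop_size=20, generations=50, seed=0):
    """Toy evolutionary loop: full-episode evaluation, then select-and-mutate."""
    rng = random.Random(seed)
    population = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Every candidate is evaluated end-to-end before selection -- the
        # per-generation cost the article attributes to evolutionary methods.
        scored = sorted(population, key=evaluate, reverse=True)
        elites = scored[: pop_size // 4]              # exploit: keep the best performers
        population = elites + [
            rng.choice(elites) + rng.gauss(0.0, 0.5)  # explore: mutate around elites
            for _ in range(pop_size - len(elites))
        ]
    return max(population, key=evaluate)

print(evolve())  # converges near the optimum at 3.0
```

CERL's contribution, per the paper, is to run gradient-based learners alongside such a population so that cheap per-step gradient updates and broad population-level exploration reinforce each other.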

“The core ML dichotomy is revealed again: the choice to either explore the world to get more information while sacrificing short-term gains or to exploit the current state of knowledge towards improving performance,” the researchers said.

To test their new approach, the researchers applied CERL to the OpenAI Gym Humanoid benchmark, which requires a 3D humanoid model to learn to walk forward as fast as possible without falling. Until recently, the researchers said, the Humanoid benchmark was unsolved: robots could learn to walk, but they couldn’t keep up a sustained walk. The authors said they solved it using the CERL approach; another team from UC Berkeley solved it with a complementary approach, and the two teams are working to combine their methods.


Details

  • Santa Clara, CA, USA
  • Intel AI