BERKELEY, Calif., Sept. 19, 2017 (GLOBE NEWSWIRE) -- Bonsai, provider of an AI platform that empowers enterprises to build and deploy intelligent systems, today announced that its AI Platform established a new benchmark for programming industrial control systems. Using a robotics task to demonstrate the achievement, the platform successfully trained a simulated robotic arm to grasp blocks and stack them on top of one another by breaking the task down into simpler sub-concepts. Bonsai’s concept-networking technique trains each sub-concept separately, then learns the overall “grasp and stack” task 45x faster than a comparable approach from Google’s DeepMind. This feat demonstrates the flexibility and efficiency of the platform for programming intelligent control into a range of robotics, manufacturing, HVAC, and other industrial control systems.

“Building off the foundation established by DeepMind, we were able to achieve these results by combining state-of-the-art reinforcement learning techniques with innovative features that are unique to the Bonsai Platform,” said Marcos Campos, Head of AI, Bonsai. “Using Bonsai, enterprises now have access to the tools and technology to program control systems more efficiently than any other commercially available reinforcement learning platform.”

Key features of the Bonsai Platform enabling the achievement of this benchmark include:

Concept Networks allow developers to decompose complex tasks into smaller, individual actions. In this robotic control demonstration, the task was decomposed into a concept network of five sub-concepts: reach for the object, orient the hand for grasping, grasp the object, move the object, and stack the object. The Platform first trained the system to learn the grasp, orient, and stack concepts using reinforcement learning. It then learned a meta-controller, or selector, concept that combines the newly trained concepts with existing classical controllers for move and reach into a complete stacking solution. Bonsai’s method of assembling the sub-concepts solved the entire task, and was 45x faster than DeepMind’s approach to leveraging sub-tasks in a similar setting.
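
The decomposition above can be sketched in a few lines of Python. This is a hypothetical illustration, not Bonsai's actual API: each sub-concept is modeled as a simple function, and a rule-based `selector` stands in for the learned meta-controller that decides which concept acts at each step.

```python
# Toy sketch of a concept network for the grasp-and-stack task.
# The five concept names mirror the press release; the state encoding
# and the rule-based selector are invented stand-ins for the learned
# meta-controller.

def reach(state):   # classical controller: move gripper toward the block
    return "moving toward block"

def orient(state):  # learned concept: align gripper for grasping
    return "aligning gripper"

def grasp(state):   # learned concept: close gripper on the block
    return "closing gripper"

def move(state):    # classical controller: carry block to the stack site
    return "carrying block"

def stack(state):   # learned concept: place block on the stack
    return "placing block"

CONCEPTS = {"reach": reach, "orient": orient, "grasp": grasp,
            "move": move, "stack": stack}

def selector(state):
    """Stand-in for the learned meta-controller: pick which sub-concept
    should act, given the current phase of the task."""
    if not state["near_block"]:
        return "reach"
    if not state["gripper_aligned"]:
        return "orient"
    if not state["holding_block"]:
        return "grasp"
    if not state["at_stack_site"]:
        return "move"
    return "stack"

def step(state):
    """One control step: choose a concept, then let it act."""
    concept = selector(state)
    return concept, CONCEPTS[concept](state)
```

Because every concept sits behind the same interface, each one can be trained (or hand-written) in isolation and only the selector needs to learn how to sequence them.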

Gears is a new interoperability feature, released this past June, that allows developers to leverage existing knowledge by integrating existing models into the Bonsai Platform. The robotic arm used in this task was pre-programmed with classical controllers for two concepts, move and reach. Gears enabled the blending of these classical robotics controllers with the Bonsai-trained neural networks for grasp, orient, and stack. The ability to combine these models gives an organization far greater flexibility in programming robotic control systems.
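
The kind of blending described above can be illustrated with a small adapter. Everything here is a hypothetical sketch (a toy PID controller and an `act(state)` interface invented for illustration, not the Gears API): the point is that a pre-existing classical controller and a learned concept can be wrapped behind the same interface and mixed in one solution.

```python
class PIDController:
    """Stand-in for a pre-existing classical controller (e.g. 'move')."""
    def __init__(self, kp, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


class ClassicalConcept:
    """Adapter: exposes a classical controller through the same
    act(state) interface a trained concept would use."""
    def __init__(self, controller, error_fn):
        self.controller = controller
        self.error_fn = error_fn  # maps state -> tracking error

    def act(self, state):
        return self.controller.update(self.error_fn(state), dt=0.01)


class LearnedConcept:
    """Stand-in for a trained neural-network concept."""
    def __init__(self, policy_fn):
        self.policy_fn = policy_fn

    def act(self, state):
        return self.policy_fn(state)


# A classical 'move' and a learned 'grasp' side by side, behind one interface.
concepts = {
    "move": ClassicalConcept(PIDController(kp=1.0),
                             error_fn=lambda s: s["target"] - s["position"]),
    "grasp": LearnedConcept(lambda s: "close_gripper"),
}
```

A selector that only sees the `act(state)` interface cannot tell the two kinds apart, which is what makes the mix-and-match composition possible.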

Reinforcement Learning trains an AI model by rewarding it for the actions it takes in an environment. In this demonstration, the Bonsai Platform employed Hierarchical Reinforcement Learning (HRL) to benefit from multiple levels of decision making. HRL allows the Platform to train each individual concept to solve a single task, and then train the system to combine the concepts into an end-to-end solution. By combining concept networks with HRL, the system solved the ultimate task orders of magnitude faster than alternative approaches.
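
The two-level training described above can be sketched as a toy options-style setup: the low-level concepts are assumed already trained (stubbed as a lookup here), and a tabular Q-learning selector learns which concept to invoke in each phase of the task. The phase names, dynamics, and hyperparameters are all invented for illustration; this is not Bonsai's implementation.

```python
import random

# High-level phases of the toy stacking task and the available options
# (sub-concepts the selector can invoke).
PHASES = ["far", "unaligned", "empty_hand", "away", "done"]
OPTIONS = ["reach", "orient", "grasp", "move", "stack"]

# Toy environment: invoking the right option for a phase advances the
# task and yields reward; a wrong option wastes a step.
CORRECT = {"far": "reach", "unaligned": "orient",
           "empty_hand": "grasp", "away": "move"}

def run_option(phase, option):
    """Run one (pre-trained) option to completion; return next phase, reward."""
    if CORRECT.get(phase) == option:
        return PHASES[PHASES.index(phase) + 1], 1.0
    return phase, -0.1

def train_selector(episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning over options: the high level learns which
    sub-concept to call in each phase."""
    q = {(p, o): 0.0 for p in PHASES for o in OPTIONS}
    for _ in range(episodes):
        phase = "far"
        while phase != "done":
            if random.random() < eps:                       # explore
                option = random.choice(OPTIONS)
            else:                                           # exploit
                option = max(OPTIONS, key=lambda o: q[(phase, o)])
            nxt, r = run_option(phase, option)
            best_next = (max(q[(nxt, o)] for o in OPTIONS)
                         if nxt != "done" else 0.0)
            q[(phase, option)] += alpha * (r + gamma * best_next
                                           - q[(phase, option)])
            phase = nxt
    return q
```

Because each option is already competent at its own sub-task, the high-level selector only has to search over a handful of discrete choices per phase rather than over raw joint torques, which is the source of the speedup that hierarchical decomposition provides.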

Bonsai is the only commercially available solution that packages together these techniques within a single platform. To apply for Bonsai’s Early Access Program, visit https://bons.ai/getting-started.

Resources

Learn how the Bonsai AI Platform achieved state-of-the-art robotic control: https://bons.ai/blog/robotics-blog

Watch a video of the “grasp and stack” robotics task, enabled by Bonsai: http://bns.ai/robotics_video

Read our research paper about concept networks and hierarchical deep reinforcement learning: http://bns.ai/drl_paper

About Bonsai

Bonsai offers an AI platform that empowers enterprises to build and deploy intelligent systems. By completely automating the management of complex machine learning libraries and algorithms, Bonsai enables enterprises to program AI models that improve system control and enhance real-time decision support. Businesses use these models today to increase automation and improve operational efficiency of industrial systems including robotics, manufacturing, supply chain, logistics, energy and utilities. Based in Berkeley, CA, Bonsai is backed by leading investors including NEA, Microsoft Ventures, ABB, Samsung NEXT and Siemens. To learn more, please visit: https://bons.ai/ or follow on Twitter @BonsaiAI.

Media and Analyst Contact:

Bridget Hickey

bridget.hickey@bons.ai

+1 (628) 400-9072
