TL;DR

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent, passing the 100 billion U.S. dollar mark in 2020 and reaching just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Research articles

by Gal Gorjup and Minas Liarokapis, New Dexterity research group, Department of Mechanical Engineering, University of Auckland, New Zealand

University of Auckland engineers build 3D-printed robotic airship for education and research

The proposed low-cost, open-source, indoor robotic airship. The airship consists of a gondola containing all the electronics and rotors and a Qualatex Microfoil balloon (metallised PET).

Miniature indoor robotic airship platforms offer high mobility, safety, and extended flight times. The paper focuses on the feasibility, design, development, and evaluation of such a platform for robotics education and research. Selected commercially available envelope materials were considered and tested in terms of their helium retention capability and mechanical properties. The obtained envelope properties were used in a feasibility study, demonstrating that indoor airships are environmentally and financially viable, given an appropriate material choice. The platform’s mechanical design was studied in terms of gondola placement and rotor angle positioning, resulting in an unconventional, asymmetric arrangement. The developed system was finally tested in a simple path-following experiment for proof-of-concept purposes, demonstrating its ability to attain the desired heading and altitude configuration. The proposed robotic airship platform can be used for a variety of education- and research-oriented applications. Its design is open-source, facilitating replication by others.
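
To give a flavor of what the path-following experiment involves, here is a minimal sketch of a heading-and-altitude proportional controller for such an airship. The gains and the state/command interface are illustrative assumptions, not the authors' implementation (their actual open-source design, with its asymmetric rotor arrangement, is on GitHub).

```python
import math

# Illustrative proportional controller for an indoor airship's heading and
# altitude, in the spirit of the paper's path-following experiment.
# Gains and interfaces are assumptions, not the authors' code.

KP_YAW = 0.8      # proportional gain on heading error (assumed)
KP_ALT = 1.2      # proportional gain on altitude error (assumed)

def wrap_angle(a):
    """Wrap an angle to [-pi, pi] so the airship turns the short way."""
    return math.atan2(math.sin(a), math.cos(a))

def control_step(heading, altitude, heading_ref, altitude_ref):
    """Return (differential_thrust, vertical_thrust) commands."""
    yaw_err = wrap_angle(heading_ref - heading)
    alt_err = altitude_ref - altitude
    differential_thrust = KP_YAW * yaw_err   # differential rotor command turns the gondola
    vertical_thrust = KP_ALT * alt_err       # net vertical rotor thrust adjusts altitude
    return differential_thrust, vertical_thrust

# Example: airship at heading 0.1 rad and 1.5 m, target 0.5 rad and 2.0 m
print(control_step(0.1, 1.5, 0.5, 2.0))
```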

Check out their GitHub page to see more.

by Sang-Min Baek, Sojung Yim, Soo-Hwan Chae, Dae-Young Lee, Kyu-Jin Cho in Science Robotics

Ladybird beetle–inspired origami uses the deformation and geometry of its facet to enable unique energy storage and self-locking. Seoul National University researchers developed a ladybird beetle–inspired compliant origami structure and a deployable jump-gliding robot based on it.

Origami can enable structures that are compact and lightweight. In traditional designs, however, the facets of an origami structure are essentially nondeformable rigid plates, so implementing energy storage and robust self-locking in these structures can be challenging. The scientists note that the intricately folded wings of a ladybird beetle can be deployed rapidly and effectively sustain aerodynamic forces during flight; these abilities originate from the geometry and deformation of a specialized vein in the wing of this insect. They report compliant origami inspired by the wing vein in ladybird beetles. The deformation and geometry of the compliant facet enable both large energy storage and self-locking in a single origami joint. On the basis of this compliant origami, the researchers developed a deployable glider module for a multimodal robot. The glider module is compactly foldable, is rapidly deployable, and can effectively sustain aerodynamic forces. They also applied their compliant origami to enhance the energy storage capacity of the jumping mechanism in a jumping robot.
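
To make the energy-storage idea concrete, a compliant fold can be modeled to first order as a torsional spring, so the stored elastic energy grows quadratically with the fold angle. The stiffness value below is a placeholder assumption, not a measurement from the paper.

```python
import math

# Back-of-envelope model: treat a compliant origami fold as a torsional
# spring, so deflecting it by an angle theta stores E = 0.5 * k * theta^2.
# The stiffness k is a placeholder, not a value from the paper.

k = 0.05  # torsional stiffness in N*m/rad (assumed)

def stored_energy(theta_rad):
    return 0.5 * k * theta_rad ** 2

for deg in (30, 90, 150):
    theta = math.radians(deg)
    print(f"fold {deg:3d} deg -> {stored_energy(theta) * 1000:.1f} mJ stored")
```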

by Mike Allenspach, Karen Bodie, Maximilian Brunner, Luca Rinsoz, Zachary Taylor, Mina Kamel, Roland Siegwart, Juan Nieto

Scientists present the design and optimal control of a novel omnidirectional vehicle that can exert a wrench in any orientation while maintaining efficient flight configurations.

Omnidirectional micro aerial vehicles are a growing field of research, with demonstrated advantages for aerial interaction and uninhibited observation. While systems with complete pose omnidirectionality and high hover efficiency have been developed independently, a robust system that combines the two has not been demonstrated to date. This paper presents the design and optimal control of a novel omnidirectional vehicle that can exert a wrench in any orientation while maintaining efficient flight configurations. The system design is motivated by the result of a morphology design optimization. A six-degree-of-freedom optimal controller is derived, with an actuator allocation approach that implements task prioritization and is robust to singularities. Flight experiments demonstrate and verify the system’s capabilities.
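
The allocation idea can be sketched in a few lines: a desired six-axis wrench is mapped to actuator commands with a damped least-squares solve, which stays well-behaved near singular configurations. The allocation matrix below is random stand-in data, not the tiltrotor's real geometry, and the paper's full controller additionally handles task prioritization.

```python
import numpy as np

# Minimal sketch of wrench-to-actuator allocation via damped least squares.
# The 6x12 allocation matrix A is random stand-in data for illustration.

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 12))       # maps 12 actuator commands to a 6-DOF wrench
w_des = np.array([0.0, 0.0, 9.81, 0.0, 0.0, 0.1])  # desired force + torque

lam = 1e-2                         # damping term trades accuracy for robustness
u = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(6), w_des)

print("actuator commands:", np.round(u, 3))
print("achieved wrench:  ", np.round(A @ u, 3))
```

A secondary objective could be layered in by projecting it into the null space of A, which is one common way to realize the task prioritization the abstract mentions.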

This video shows content related to the paper submission “Design and optimal control of a tiltrotor micro aerial vehicle for efficient omnidirectional flight”.

by Seyed M. Mirvakili, Douglas Sim, Ian W. Hunter, Robert Langer in Science Robotics

Controlled volumetric expansion using magnetic induction enables actuation of pneumatic artificial muscles without valves or pumps. Engineers developed Terminator-like muscles powered by lithium batteries. These artificial muscles achieved strength comparable to that of human muscle.

Pneumatic artificial muscles have been widely used in industry because of their simple and relatively high-performance design. The emerging field of soft robotics has also been using pneumatic actuation mechanisms since its formation. However, these actuators/soft robots often require bulky peripheral components to operate. Scientists report a simple mechanism and design for actuating pneumatic artificial muscles and soft robotic grippers without the use of compressors, valves, or pressurized gas tanks. The actuation mechanism involves a magnetically induced liquid-to-gas phase transition of a liquid that assists the formation of pressure inside the artificial muscle. The volumetric expansion in the liquid-to-gas phase transition develops sufficient pressure inside the muscle for mechanical operations. They integrated this actuation mechanism into a McKibben-type artificial muscle and soft robotic arms. The untethered McKibben artificial muscle generated actuation strains of up to 20% (in 10 seconds) with associated work density of 40 kJ/m³, which favorably compares with the peak strain and peak energy density of skeletal muscle. The untethered soft robotic arms demonstrated lifting objects with an input energy supply from only two Li-ion batteries.
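
A rough feasibility check helps see why a liquid-to-gas transition can pressurize a muscle: vaporizing even a fraction of a gram of fluid inside a small fixed volume yields several atmospheres by the ideal gas law. Every number below is an assumed round value for illustration, not a parameter from the paper.

```python
# Sanity-check sketch of phase-transition actuation: vaporize a small mass
# of liquid in a fixed muscle volume and estimate the pressure with the
# ideal gas law. All values are illustrative assumptions.

R = 8.314          # J/(mol*K)
T = 350.0          # K, assumed working temperature after induction heating
V = 20e-6          # m^3, assumed internal muscle volume (20 mL)
M = 0.072          # kg/mol, assumed molar mass of the working fluid
m = 0.5e-3         # kg, assumed mass of liquid vaporized (0.5 g)

n = m / M                  # moles of gas produced
P = n * R * T / V          # ideal-gas pressure in Pa
print(f"estimated pressure: {P / 1e3:.0f} kPa ({P / 101325:.1f} atm)")
```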

by Can Cui, Otitoaleke Gideon Akinola, Naimul Hassan, Christopher Bennett, Matthew Marinella, Joseph Friedman, Jean Anne Currivan Incorvia in Nanotechnology

New research finds that magnetic wires, spaced a certain way, can lead to a 20–30x reduction in the amount of energy needed to run neural network training algorithms.

The rapid progression of technology has led to a huge increase in energy usage to process the massive troves of data generated by devices. But researchers in the Cockrell School of Engineering at The University of Texas at Austin have found a way to make the new generation of smart computers more energy efficient.

Lateral inhibition is an important functionality in neuromorphic computing, modeled after the biological neuron behavior in which a firing neuron deactivates its neighbors belonging to the same layer and prevents them from firing. In most neuromorphic hardware platforms, lateral inhibition is implemented by external circuitry, thereby decreasing the energy efficiency and increasing the area overhead of such systems. Recently, the domain wall–magnetic tunnel junction (DW-MTJ) artificial neuron was demonstrated in modeling to be intrinsically inhibitory. Without peripheral circuitry, lateral inhibition in DW-MTJ neurons results from magnetostatic interaction between neighboring neuron cells. However, the lateral inhibition mechanism in DW-MTJ neurons has not been studied thoroughly, leading to weak inhibition only in very closely spaced devices. This work approaches these problems by modeling current- and field-driven DW motion in a pair of adjacent DW-MTJ neurons. The researchers maximize the magnitude of lateral inhibition by tuning the magnetic interaction between the neurons. The results are explained by current-driven DW velocity characteristics in response to an external magnetic field and quantified by an analytical model. Dependence of lateral inhibition strength on device parameters is also studied. Finally, lateral inhibition behavior in an array of 1,000 DW-MTJ neurons is demonstrated. Their results provide a guideline for the optimization of lateral inhibition implementation in DW-MTJ neurons. With strong lateral inhibition achieved, a path towards competitive learning algorithms such as winner-take-all is made possible on such neuromorphic devices.
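
The winner-take-all behavior that strong lateral inhibition enables can be sketched in software: each neuron is suppressed in proportion to the total activity of the others until only the strongest remains active. This is a generic WTA iteration for intuition, not the magnetostatic device model from the paper.

```python
import numpy as np

# Software sketch of winner-take-all via lateral inhibition: every neuron
# is pushed down by the summed activity of its neighbors, so differences
# between units grow until a single winner remains positive.

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=8)   # initial activations of 8 neurons
eps = 1.0 / (len(x) - 1)            # inhibition strength; <= 1/(n-1) keeps the max alive

while np.count_nonzero(x) > 1:
    inhibition = eps * (x.sum() - x)       # each neuron feels all the others
    x = np.maximum(0.0, x - inhibition)    # suppressed units clamp to zero

print("winning neuron:", int(np.argmax(x)))
```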

by Mingsong Jiang, Ziyi Zhou, and Nicholas Gravish in Soft Robotics

Engineers have developed a new method that doesn’t require any special equipment and works in just minutes to create soft, flexible, 3D-printed robots. The structures were inspired by insect exoskeletons, which have both soft and rigid parts — the researchers called their creations ‘flexoskeletons.’

One of the many secrets to the success and prevalence of insects is their versatile, robust, and complex exoskeleton morphology. A fundamental challenge in insect-inspired robotics has been the fabrication of robotic exoskeletons that can match the complexity of exoskeleton structural mechanics. Hybrid robots composed of rigid and soft elements have previously required access to expensive multi-material three-dimensional (3D) printers, multistep casting and machining processes, or limited material choice when using consumer-grade fabrication methods. In this study, researchers introduce a new design and fabrication process, called “flexoskeleton printing,” to rapidly construct flexible exoskeleton-inspired robots. They modify a consumer-grade fused deposition modeling (FDM) 3D printer to deposit filament directly onto a heated thermoplastic base layer, which provides an extremely strong bond between the deposited material and the inextensible, flexible base layer. This process significantly improves the fatigue resistance of printed components and enables a new class of insect-inspired robot morphologies. They demonstrate these capabilities through the design and testing of a wide library of canonical flexoskeleton elements, ultimately leading to the integration of these elements into a flexoskeleton walking legged robot.

by Xue Bin (Jason) Peng, Student Researcher and Sehoon Ha, Research Scientist, Robotics at Google

A team of researchers at Google’s AI lab is seeing results in its effort to develop a dog-like quadruped robot that learns dog behaviors by studying how real dogs move. The team has posted an outline of the work they are doing on the Google AI blog.

Whether it’s a dog chasing after a ball or a horse jumping over obstacles, animals can effortlessly perform an incredibly rich repertoire of agile skills. Developing robots that are able to replicate these agile behaviors can open opportunities to deploy robots for sophisticated tasks in the real world. But designing controllers that enable legged robots to perform these agile behaviors can be a very challenging task. While reinforcement learning (RL) is an approach often used for automating development of robotic skills, a number of technical hurdles remain and, in practice, there is still substantial manual overhead. Designing reward functions that lead to effective skills can itself require a great deal of expert insight, and often involves a lengthy reward tuning process for each desired skill. Furthermore, applying RL to legged robots requires not only efficient algorithms, but also mechanisms to enable the robots to remain safe and recover after falling, without frequent human assistance.

In this post, scientists discuss two of their recent projects aimed at addressing these challenges. First, they describe how robots can learn agile behaviors by imitating motions from real animals, producing fast and fluent movements like trotting and hopping. Then, they discuss a system for automating the training of locomotion skills in the real world, which allows robots to learn to walk on their own, with minimal human assistance.

In “Learning Agile Robotic Locomotion Skills by Imitating Animals”, they present a framework that takes a reference motion clip recorded from an animal (a dog, in this case) and uses RL to train a control policy that enables a robot to imitate the motion in the real world. By providing the system with different reference motions, they are able to train a quadruped robot to perform a diverse set of agile behaviors, ranging from fast walking gaits to dynamic hops and turns. The policies are trained primarily in simulation, and then transferred to the real world using a latent space adaptation technique that can efficiently adapt a policy using only a few minutes of data from the real robot.
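
The heart of this kind of imitation pipeline is a reward that scores how closely the robot's pose tracks the reference clip at each timestep. Below is a minimal sketch of such a pose-tracking reward with an exponential kernel; the weight and the joint-angle vectors are illustrative assumptions, not the paper's exact reward terms.

```python
import numpy as np

# Sketch of a pose-imitation reward: the robot is rewarded for matching the
# reference motion's joint angles, with an exponential kernel so the reward
# stays in (0, 1]. The weight w is an illustrative choice.

def imitation_reward(robot_joints, ref_joints, w=2.0):
    err = np.sum((np.asarray(robot_joints) - np.asarray(ref_joints)) ** 2)
    return np.exp(-w * err)

ref = np.array([0.3, -0.5, 0.8, 0.1])    # reference dog pose (joint angles, rad)
print(imitation_reward(ref + 0.05, ref))  # close tracking -> reward near 1
print(imitation_reward(ref + 0.60, ref))  # poor tracking  -> reward near 0
```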

Read more>>>

by Nikhil Churamani, Francisco Cruz, Sascha Griffiths, Pablo Barros

Researchers at the University of Hamburg in Germany have recently developed a machine learning-based method to teach robots how to convey what have previously been defined as the seven universal emotions, namely anger, disgust, fear, happiness, sadness, surprise and a neutral state. In their paper they applied and tested their technique on a humanoid robot called iCub.

The new approach proposed by the researchers draws inspiration from a previously developed framework called TAMER. TAMER is an algorithm that can be used to train multilayer perceptrons (MLPs), a class of artificial neural networks (ANNs).

The purpose of the present study is to learn emotion expression representations for artificial agents using reward shaping mechanisms. The approach takes inspiration from the TAMER framework for training a Multilayer Perceptron (MLP) to learn to express different emotions on the iCub robot in a human-robot interaction scenario. The robot uses a combination of a Convolutional Neural Network (CNN) and a Self-Organising Map (SOM) to recognise an emotion and then learns to express it using the MLP. The objective is to teach a robot to respond adequately to the user’s perception of emotions and learn how to express different emotions.
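
In the TAMER setting, the network's target signal is the human trainer's scalar feedback rather than an environment reward: a model regresses the predicted human reward for each candidate action, and the agent picks the action it expects the trainer to approve of. Below is a toy sketch of that loop, with a simulated trainer standing in for the human and a 1-D state; none of this is the authors' iCub code.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy TAMER-style loop: an MLP learns to predict scalar human feedback for
# (state, action) pairs; the agent mostly picks the action with the highest
# predicted feedback. The "human" here is a stand-in function.

actions = [0, 1, 2]                      # e.g., three candidate expressions
model = MLPRegressor(hidden_layer_sizes=(16,), random_state=0)

def features(state, action):
    return np.array([[state, float(action)]])

model.partial_fit(features(0.0, 0), [0.0])   # bootstrap so predict() works

def human_feedback(state, action):           # stand-in trainer: prefers action 1
    return 1.0 if action == 1 else -0.5

for step in range(200):
    state = np.random.uniform(-1, 1)
    if np.random.random() < 0.1:             # small amount of exploration
        a = int(np.random.choice(actions))
    else:
        a = int(np.argmax([model.predict(features(state, c))[0] for c in actions]))
    model.partial_fit(features(state, a), [human_feedback(state, a)])

print("preferred action at state 0.0:",
      int(np.argmax([model.predict(features(0.0, a))[0] for a in actions])))
```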

by Chao Wang, Weiyu Guo, Hang Zhang, Linlin Guo, Changcheng Huang, Chuang Lin in Biomedical Signal Processing and Control

Researchers from the Shenzhen Institutes of Advanced Technology (SIAT) of the Chinese Academy of Sciences proposed a continuous estimation method for six daily grasp movements using a long short-term memory (LSTM) network.

Summary of the NRMSE of LSTM, RBF and SPGP for 6 movements.

Surface electromyography (sEMG) is a non-invasive, computer-based technique that can record electrical impulses. The present pattern-recognition-based control strategy can realize some myoelectric control, but it is not as smooth as a human hand.

According to a study published in Biomedical Signal Processing and Control, the team designed an experiment on six daily grasp movements selected in the light of different shapes and diameters of the objects. Twenty-two sensors were spaced around a CyberGlove for recording sEMG signals.

To estimate the six grasp movements, the researchers evaluated their method using three criteria: the Pearson Correlation Coefficient (CC), the Root Mean Square Error (RMSE) and the Normalized Root Mean Square Error (NRMSE).
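
For reference, the three criteria can be computed as below for a toy joint-angle trace. These are the standard definitions; the paper may normalize the NRMSE differently (by range here, by mean in some works).

```python
import numpy as np

# The study's three evaluation criteria, computed on a synthetic trace.

def cc(y_true, y_pred):
    return np.corrcoef(y_true, y_pred)[0, 1]            # Pearson CC

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def nrmse(y_true, y_pred):
    return rmse(y_true, y_pred) / (y_true.max() - y_true.min())

t = np.linspace(0, 2 * np.pi, 200)
true_angle = 30 * np.sin(t)                              # "measured" joint angle, degrees
pred_angle = true_angle + np.random.normal(0, 2, t.size) # model estimate

print(f"CC={cc(true_angle, pred_angle):.3f}  "
      f"RMSE={rmse(true_angle, pred_angle):.2f} deg  "
      f"NRMSE={nrmse(true_angle, pred_angle):.3f}")
```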

Then they compared LSTM with two other algorithms, SPGP (Sparse Gaussian Processes using Pseudo-inputs) and RBF (Radial Basis Function Neural Network). The results showed that LSTM performed both better and faster across all six movements.

The chain structure with repetitive modules of LSTM

Although SPGP or RBF performed better than LSTM in some joints, the statistical analysis showed that LSTM performed better overall in continuous estimation of 20 finger joint angles than SPGP and RBF.

“Our results show a bright prospect of LSTM. It can be used in bioelectrical signals processing and human-machine-interaction,” said Dr. LIN Chuang, corresponding author of the study. “It should be noted that the method should be personalized and optimized based on different applications.”

by Prannay Kaul, Daniele De Martini, Matthew Gadd, Paul Newman, Oxford Robotics Institute, Dept. Engineering Science, University of Oxford

Paper on weakly-supervised multiclass semantic segmentation with FMCW scanning radar

This paper presents an efficient annotation procedure and an application thereof to end-to-end, rich semantic segmentation of the sensed environment using Frequency-Modulated Continuous-Wave scanning radar. Researchers advocate radar over the traditional sensors used for this task as it operates at longer ranges and is substantially more robust to adverse weather and illumination conditions. They avoid laborious manual labelling by exploiting the largest radar-focused urban autonomy dataset collected to date, correlating radar scans with RGB cameras and LiDAR sensors, for which semantic segmentation is an already consolidated procedure. The training procedure leverages a state-of-the-art natural image segmentation system which is publicly available and as such, in contrast to previous approaches, allows for the production of copious labels for the radar stream by incorporating four camera and two LiDAR streams. Additionally, the losses are computed taking into account labels to the radar sensor horizon by accumulating LiDAR returns along a pose chain ahead of and behind the current vehicle position. Finally, scientists present the network with multi-channel radar scan inputs in order to deal with ephemeral and dynamic scene objects.

An overview of the pipeline implemented to generate labelled training data for radar segmentation. The section within the blue box is completed before training such that segmented RGB streams are available on disk, as is the pose chain described in Section III-C. During training, a radar scan is selected from the training set. The temporally nearest RGB images and corresponding LiDAR scans are then used to form the labelled radar image as described in Section III. The resulting data are therefore formed on the fly during the training/testing process.
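
One way to picture the label-generation step: labelled LiDAR returns (their classes inherited from the image segmentation network) are binned into the radar's bird's-eye polar grid, so each radar cell gets a majority-vote class. The grid dimensions and the point data below are assumptions for illustration, not the paper's parameters or code.

```python
import numpy as np

# Sketch of cross-modal labelling: drop labelled LiDAR points into the
# radar's polar (azimuth x range) grid and take a per-cell majority vote.
# Grid size, max range, and the random points are illustrative assumptions.

N_AZIMUTH, N_RANGE = 400, 576      # assumed radar scan dimensions
MAX_RANGE = 163.0                  # metres, assumed sensor horizon

def label_radar_grid(points_xy, classes, n_classes):
    """points_xy: (N,2) LiDAR returns in the radar frame; classes: (N,) ints."""
    votes = np.zeros((N_AZIMUTH, N_RANGE, n_classes), dtype=np.int32)
    rng_m = np.hypot(points_xy[:, 0], points_xy[:, 1])
    az = np.arctan2(points_xy[:, 1], points_xy[:, 0])
    a_bin = ((az + np.pi) / (2 * np.pi) * N_AZIMUTH).astype(int) % N_AZIMUTH
    r_bin = np.clip((rng_m / MAX_RANGE * N_RANGE).astype(int), 0, N_RANGE - 1)
    np.add.at(votes, (a_bin, r_bin, classes), 1)       # accumulate class votes
    return votes.argmax(axis=-1)                       # majority class per cell

pts = np.random.uniform(-100, 100, size=(5000, 2))
cls = np.random.randint(0, 4, size=5000)
print(label_radar_grid(pts, cls, n_classes=4).shape)   # (400, 576) label map
```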

COVID-19

Researchers from Nanyang Technological University, Singapore (NTU Singapore) have developed a semi-autonomous robot that can disinfect large surfaces quickly. The researchers are planning to have public trials to support Singapore’s fight against COVID-19.

Named eXtreme Disinfection roBOT (XDBOT), it can be wirelessly controlled via a laptop or tablet, removing the need for cleaners to be in contact with surfaces, thereby reducing the risk of picking up the virus from potentially contaminated areas.

In the current COVID-19 outbreak, there is a national demand for deep cleaning and disinfection services. According to news reports, working hours for cleaners have doubled to 16 hours a day due to the manpower crunch.

The new robot differs from other disinfection robots currently on the market that are primarily intended to clean and vacuum floor surfaces and are unable to disinfect odd-shaped surfaces or anything above ground level.

Comprising a semi-autonomous control unit with motorised wheels, XDBOT has a 6-axis robotic arm that can mimic human movement to reach awkward locations such as under tables and beds, as well as doorknobs, tabletops and light switches.

And instead of a conventional pressure-spray nozzle, it uses an electrostatic-charged nozzle to ensure a wider and further spread of the disinfectant, behind and over hidden surfaces.

Unlike typical nozzles, XDBOT’s nozzle discharges chemicals with a positive electrical charge. These disinfectants will then be attracted to all negatively-charged surfaces. Surfaces already covered with the disinfectant will then repel the spray, making this method very efficient. This concept of charge attraction is similar to how positive and negative poles of magnets are drawn to each other.

“To stop the transmission of a virus means we need a way to quickly disinfect surfaces, which is a labour-intensive and repetitive activity,” Prof Chen explained. “Using our new robot from a distance, a human operator can precisely control the disinfection process, increasing surface area cleaned by up to four times, with zero contact with surfaces.”

Aertos 120-UVC Flight: Daycare Drone Flight

By using its patented technologies, the Aertos 120-UVC can fly stably inside buildings contaminated by the COVID-19 virus, allowing humans to stay safely away from infected areas. Digital Aerolus’ industrial drones do not use GPS or external sensors, enabling them to operate stably in places other drones cannot go, including small and confined spaces. When the drone flies at 6 feet above a surface for 5 minutes, it provides a greater than 99% disinfection rate across a surface of more than 2 × 2 meters.
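
UV-C disinfection claims like this boil down to dose, which is irradiance multiplied by exposure time. The sketch below works that arithmetic through; both the lamp irradiance at 6 feet and the dose needed for 99% inactivation are assumed round numbers, not Digital Aerolus specifications.

```python
# Dose = irradiance x time sanity check for a hovering UV-C drone.
# Both input values are illustrative assumptions, not vendor specs.

irradiance_uW_cm2 = 100.0      # assumed UV-C irradiance at ~6 ft (uW/cm^2)
exposure_s = 5 * 60            # 5 minutes of hover, per the description

dose_mJ_cm2 = irradiance_uW_cm2 * exposure_s / 1000.0  # uW*s/cm^2 -> mJ/cm^2
required_mJ_cm2 = 10.0         # assumed dose for ~99% inactivation

print(f"delivered dose: {dose_mJ_cm2:.0f} mJ/cm^2 "
      f"({dose_mJ_cm2 / required_mJ_cm2:.0f}x the assumed 99% dose)")
```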

The shiny new robots gently check the pulses of highly infectious patients on life support in the Italian epicentre of COVID-19. The doctors and nurses love them because they also help save their own lives.

The Varese hospital has received six of the sleek and slightly human-looking machines on wheels. The readings from the machines allow medics to stay out of the intensive care units and monitor patients’ vital signs on computer screens in separate rooms.

One of the six robots at the Circolo di Varese hospital in northern Italy checks up on a patient in the intensive care unit, helping medical staff reduce the risk of direct contact

Robots to the Rescue During COVID-19 Lockdown:

MISC

by Sarah Wild, Horizon: The EU Research & Innovation Magazine

A robot ‘recognises’ itself in the bathroom mirror

Robots passing cognitive tests such as recognising themselves in a mirror and being programmed with a human sense of time are showing how machines are being shaped to become a bigger part of our everyday lives.

In 2016, for the first time ever, the number of robots in homes, the military … shops and hospitals surpassed the number used in industry. Instead of being concentrated in factories, robots are a growing presence in people’s homes and lives — a trend that is likely to increase as they become more sophisticated and ‘sentient’.

“If we take out the robot from a factory and into a house, we want safety,” said Dr. Pablo Lanillos, an assistant professor at Radboud University in the Netherlands.

And for machines to safely interact with people, they need to be more like humans, experts like Dr. Lanillos say. He has designed an algorithm that enables robots to recognise themselves, in a similar way to humans.

A major distinction between humans and robots is that our senses are faulty, feeding misleading information into our brains. “We have really imprecise proprioception (awareness of our body’s position and movement). For example, our muscles have sensors that are not precise versus robots, which have very precise sensors,” he said.

The human brain takes this imprecise information to guide our movements and understanding of the world. Robots are not used to dealing with uncertainty in the same way.

“In real situations, there are errors, differences between the world and the model of the world that the robot has,” Dr. Lanillos said. “The problem we have in robots is that when you change any condition, the robot starts to fail.”

At age two, humans can tell the difference between their bodies and other objects in the world. But this computation that a two-year-old human brain can do is very complicated for a machine and makes it difficult for them to navigate the world.

The algorithm that Dr. Lanillos and colleagues developed in a project called SELFCEPTION enables three different robots to distinguish their ‘bodies’ from other objects.

Their test robots included one composed of arms covered with tactile skin, another with known sensory inaccuracies, and a commercial model. They wanted to see how the robots would respond, given their different ways of collecting ‘sensory’ information.

One test the algorithm-aided robots passed was the rubber hand illusion, originally used on humans. “We put a plastic hand in front of you, cover your real hand, and then start to stimulate your covered hand and the fake hand that you can see,” Dr. Lanillos said.

Within minutes, people begin to think that the fake hand is their hand.

The goal was to deceive a robot with the same illusion that confuses humans. This is a measure of how well multiple sensors are integrated and how the robot is able to adapt to situations. Dr. Lanillos and his colleagues made a robot experience the fake hand as its hand, similar to the way a human brain would.

The second test was the mirror test, which was originally proposed by primatologists. In this exercise, a red dot is put on an animal or person’s forehead, then they look at themselves in a mirror. Humans, and some animal subjects like monkeys, try to rub the red dot off of their face rather than off the mirror. The test is a way to determine how self-aware an animal or person is. Human children are usually able to pass the test by their second birthday.

The team trained a robot to ‘recognise’ itself in the mirror by connecting the movement of limbs in the reflection with its own limbs. Now they are trying to get a robot to rub off the red dot.

The next step in this research is to integrate more sensors in the robot — and increase the information it computes — to improve its perception of the world. A human has about 130 million receptors in their retina alone, and 3,000 touch receptors in each fingertip, says Dr. Lanillos. Dealing with large quantities of data is one of the crucial challenges in robotics. “Solving how to combine all this information in a meaningful way will improve body awareness and world understanding,” he said.

Improving the way robots perceive time can also help them operate in a more human way, allowing them to integrate more easily into people’s lives. This is particularly important for assistance robots, which will interact with people and have to co-operate with them to achieve tasks. These include service robots which care for the elderly.

“(Humans’) behaviour, our interaction with the world, depends on our perception of time,” said Anil Seth, co-director of the Sackler Centre for Consciousness Science at the University of Sussex, UK. “Having a good sense of time is important for any complex behaviour.”

Just like humans, when robots have a decision to make there are often many options and hundreds of potential outcomes. Robots have been able to simulate a handful of these outcomes to figure out which course of action will be the most likely to lead to success. But what if one of the other options were equally likely to succeed — and safer?

The Office of Naval Research has awarded Brendan Englot, an MIT-trained mechanical engineer at Stevens Institute of Technology, a 2020 Young Investigator Award of $508,693 to leverage a new variant of a classic artificial intelligence tool to allow robots to predict the many possible outcomes of their actions, and how likely they are to occur. The framework will allow robots to figure out which option is the best way to achieve a goal, by understanding which options are the safest, most efficient — and least likely to fail.

“If the fastest way for a robot to complete a task is by walking on the edge of a cliff, that’s sacrificing safety for speed,” said Englot, who will be among the first to use the tool, distributional reinforcement learning, to train robots. “We don’t want the robot falling off the edge of that cliff, so we are giving them the tools to predict and manage the risks involved in completing the desired task.”

For years, reinforcement learning has been used to train robots to navigate autonomously in water, on land and in the air. But that AI tool has limitations, because it makes decisions based on a single expected outcome for each available action, when in fact there are often many other possible outcomes that may occur. Englot is using distributional reinforcement learning, an AI algorithm that a robot can use to evaluate all possible outcomes, predict the probability of each action succeeding and choose the most expedient option likely to succeed while keeping a robot safe.
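
The gap between classic and distributional RL is easy to see in a toy example: two actions can have nearly identical expected returns while one hides a catastrophic tail. An agent that models the full return distribution can trade a little expected value for a lot of safety. The numbers below are toy data, not from Englot's project.

```python
import numpy as np

# Toy comparison: expected value alone cannot distinguish a risky action
# from a safe one, but the return distribution's worst-case tail can.

rng = np.random.default_rng(0)

# sampled returns for two actions (e.g., "cliff edge" vs "detour")
cliff_edge = rng.choice([50.0, -100.0], p=[0.9, 0.1], size=10_000)
detour = rng.normal(34.0, 3.0, size=10_000)

for name, returns in [("cliff edge", cliff_edge), ("detour", detour)]:
    worst5 = np.sort(returns)[: len(returns) // 20].mean()  # mean of worst 5%
    print(f"{name:10s} expected={returns.mean():6.1f}  worst-5%={worst5:7.1f}")

# Expected value slightly favors the cliff edge (~35 vs ~34), but the
# worst-case tail (-100 vs ~28) shows the detour is far safer.
```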

Before putting his algorithm to use in an actual robot, Englot’s first mission is to perfect the algorithm. Englot and his team create a number of decision-making situations in which to test their algorithm. And they often turn to one of the field’s favorite playing grounds: Atari games.

For example, when you play Pacman, you are the algorithm that is deciding how Pacman behaves. Your objective is to get all of the dots in the maze and if you can, get some fruit. But there are ghosts floating around that can kill you. Every second, you are forced to make a decision. Do you go straight, left or right? Which path gets you the most dots — and points — while also keeping you away from the ghosts?

Englot’s AI algorithm, using distributional reinforcement learning, will take the place of a human player, simulating every possible move to safely navigate its landscape.

So how do you reward a robot? Englot and his team will be assigning points to different outcomes, i.e., if it falls off a cliff, the robot gets -100 points. If it takes a slower, but safer option, it may receive -1 point for every step along the detour. But if it successfully reaches the goal, it may get +50.
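
Written as code, that reward scheme might look like the following. The outcome encoding and the way the goal bonus combines with the per-step penalty are assumptions; the point values (-100 for the cliff, -1 per detour step, +50 for the goal) come straight from the article.

```python
# The article's reward scheme as a function. The outcome labels are an
# assumed encoding; only the point values are taken from the text.

def reward(outcome: str, detour_steps: int = 0) -> float:
    if outcome == "fell_off_cliff":
        return -100.0
    if outcome == "reached_goal":
        return 50.0 - detour_steps       # goal bonus minus per-step detour cost
    return -1.0 * detour_steps           # still en route on the slower, safer path

print(reward("fell_off_cliff"))                  # -100.0
print(reward("reached_goal", detour_steps=12))   # 38.0
```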

“One of our secondary goals is to see how reward signals can be designed to positively impact how a robot makes decisions and can be trained,” said Englot. “We hope the techniques developed in this project could ultimately be used for even more complex AI, such as training underwater robots to navigate safely amidst varying tides, currents, and other complex environmental factors.”

by Alex Hern in The Guardian

A robotic delivery service in Milton Keynes could prove to be the future of locked-down Britain, as miniature autonomous vehicles bring food deliveries to almost 200,000 residents of the town.

Starship Technologies, an autonomous delivery startup created in 2014 by two Skype cofounders, has been testing its beer cooler-sized robots in public since 2015. The small, white, six-wheeled vehicles trundle along pavements to bring small deliveries to residents and workers of the neighbourhoods in which they operate, without the need for a human driver or delivery person.

The Milton Keynes operation is the first commercial deployment in the UK, and started in mid-March, just as the country was implementing widespread social distancing in an effort to tackle the spread of coronavirus. Residents can download the Deliveroo-style Starship Deliveries app to buy cooked food and small orders from supermarkets, which gets loaded into the robots and driven to them.

Sam Crooks, the mayor of Milton Keynes, said: “I’ve got a fairly young demographic in my ward, and they love it. There was obviously a burst of use at the beginning, because of the novelty, but already it’s just a part of people’s routines.

“People are taking seriously the guidance about not going out, so something like the robot deliveries are absolutely ideal, because people can order and obtain something without going out. Particularly as their first relationship was with Tesco and the Co-op.”

Andy Curtis, the head of UK operations at Starship, said:

“We’ve seen huge surges in demand since we started operating in Milton Keynes two years ago. We’re excited that both residents and workers can now enjoy this low cost and convenient benefit in the centre of Milton Keynes, and we hope that it will make the town an even more attractive place to work in the future.”

A Starship robot in Milton Keynes. This picture was taken before the government’s new guidance on social distancing.

The launch of Starship’s Milton Keynes offering comes as conventional delivery services are under increasing strain. Royal Mail delivery workers have complained that they are being asked to risk their health for non-essential deliveries, and the service has had to deal with a growing number of workers taking sick leave. Gig economy delivery companies, such as Deliveroo, have come under pressure too, with their riders, who are not normal employees, worrying that they may have to continue deliveries if they get sick or lose their income entirely.

The robotics firm says demand has been high in the last week, and that it plans to expand further across the UK and US. A spokesperson said: “We’ve had grocery stores, restaurants and other delivery companies get in touch to ask for assistance from our robots. To date the robots have completed over 100,000 autonomous deliveries, travelled over 500,000 miles and completed over 5m road crossings around the world.”

Videos

Autonomous robot could help in the search for signs of life in space

A new autonomous robot developed by engineers at NASA and tested in Antarctica by a team of researchers, including an engineer from The University of Western Australia, is destined for a trip into outer space and could, in the future, search for signs of life in ocean worlds beyond Earth.

Highlight videos from both the systems and virtual tracks of the DARPA SubT Urban Circuit

The ability to traverse vertical spaces proved key to success in the Subterranean (SubT) Challenge Urban Circuit. CoSTAR took the top spot in the Systems competition. Extending their performance from the Tunnel Circuit, CTU-CRAS-NORLAB earned the highest score among self-funded teams and the $500,000 prize. In the Systems competition of the Urban Circuit, 10 teams navigated two courses winding through an unfinished nuclear power plant in Elma, Washington, Feb. 18–27, 2020. DARPA designed the courses to represent complex urban underground infrastructure. Teams from eleven countries participated across the Virtual and Systems competitions in the Urban Circuit.

BARCS topped the leaderboard in the Virtual competition. Repeating success from the Tunnel Circuit, Coordinated Robotics earned the self-funded first prize in the Virtual competition, taking home $250,000. The Virtual competition, with eight teams, took place Jan. 23–30, with results announced Feb. 27.

Go Fetch: Mobile Manipulation in Unstructured Environments

With humankind facing new and increasingly large-scale challenges in the medical and domestic spheres, automation of the service sector carries a tremendous potential for improved efficiency, quality, and safety of operations. Mobile robotics can offer solutions with a high degree of mobility and dexterity; however, these complex systems require a multitude of heterogeneous components to be carefully integrated into one consistent framework. This work presents a mobile manipulation system that combines perception, localization, navigation, motion planning and grasping skills into one common workflow for fetch-and-carry applications in unstructured indoor environments. The tight integration across the various modules is experimentally demonstrated on the task of finding a commonly available object in an office environment, grasping it, and delivering it to a desired drop-off location. arXiv: https://arxiv.org/abs/2004.00899

ABB Robotics mini snack packaging factory

Watch a food and beverage factory of the future in action. Customized snack boxes are packed on an integrated APCOStrack system.

Working from home as a robotics engineer

Robotics engineer trying to automate things in the kitchen

Subscribe to Paradigm!

Medium. Twitter. Telegram. Reddit.