TL;DR

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent, passing the 100 billion U.S. dollar mark in 2020 and reaching just under 210 billion U.S. dollars by 2025.
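The compound-growth arithmetic behind such forecasts is easy to reproduce. The sketch below (plain Python, illustrative figures only) computes both a projected value and the rate implied by two endpoints; note that the rate implied by the 2020 and 2025 marks alone is lower than the 26 percent headline CAGR, which is quoted over a different base window.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

def project(start_value, rate, years):
    """Value after compounding `rate` annually for `years` years."""
    return start_value * (1.0 + rate) ** years

# Implied rate between the ~100B (2020) and ~210B (2025) marks:
print(f"{cagr(100, 210, 5):.1%}")  # 16.0%
```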

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

COVID-19

Can robots be effective tools in combating the COVID-19 pandemic? A group of leaders in the field of robotics, including Henrik Christensen, director of UC San Diego’s Contextual Robotics Institute, say yes, and outline a number of examples in an editorial in the March 25 issue of Science Robotics. They say robots can be used for clinical care such as telemedicine and decontamination; logistics such as delivery and handling of contaminated waste; and reconnaissance such as monitoring compliance with voluntary quarantines.

“Already, we have seen robots being deployed for disinfection, delivering medications and food, measuring vital signs, and assisting border controls,” the researchers write.

Christensen, who is a professor in the Department of Computer Science and Engineering at UC San Diego, particularly highlighted the role that robots can play in disinfection, cleaning and telepresence.

Other co-authors include Marcia McNutt, president of the National Academy of Sciences, as well as a number of other robotics experts from U.S. and international universities.

“For disease prevention, robot-controlled noncontact ultraviolet (UV) surface disinfection has already been used because COVID-19 spreads not only from person to person via close contact respiratory droplet transfer but also via contaminated surfaces,” the researchers write. “Opportunities lie in intelligent navigation and detection of high-risk, high-touch areas, combined with other preventative measures,” the researchers add. “New generations of large, small, micro-, and swarm robots that are able to continuously work and clean (i.e., not only removing dust but also truly sanitizing/sterilizing all surfaces) could be developed.”

In terms of telepresence, “the deployment of social robots can present unique opportunities for continued social interactions and adherence to treatment regimes without fear of spreading more disease,” researchers write. “However, this is a challenging area of development because social interactions require building and maintaining complex models of people, including their knowledge, beliefs, emotions, as well as the context and environment of interaction.”

“COVID-19 may become the tipping point of how future organizations operate,” researchers add. “Rather than cancelling large international exhibitions and conferences, new forms of gathering — virtual rather than in-person attendance — may increase. Virtual attendees may become accustomed to remote engagement via a variety of local robotic avatars and controls.”

“Overall, the impact of COVID-19 may drive sustained research in robotics to address risks of infectious diseases,” researchers go on. “Without a sustainable approach to research and evaluation, history will repeat itself, and robots will not be ready to assist for the next incident.”

Even as infections in China reportedly subside, the rest of the world is still bracing for the full impact of the COVID-19 pandemic. China is closing its borders to reduce the possibility of a second wave, and the U.S. has surpassed Italy in confirmed cases. People in India and much of the world have been asked to maintain “social distancing.” At the same time, UVD Robots ApS has been scaling up production to meet demand.

Odense, Denmark-based UVD Robots makes an autonomous mobile robot that can enter a room and disinfect it with UV-C light, without exposing staffers to potentially harmful radiation. It is one of several companies offering disinfection systems in response to the novel coronavirus, but robotics suppliers have a long way to go to meet shifting and growing global needs.

Said Claus Risager, co-founder and CEO of Blue Ocean Robotics, which owns UVD Robots:

“We’ve been working for six years to create a hospital disinfection system, and this crisis comes two years after we launched the UVD robot. Inquiries from resellers and new customers have been overwhelming, and our production people are the only ones in the office. We’ve sold as many robots in three days as we did in our first year, and we’re moving to a larger building by the end of summer.”

While existing competitors such as Xenex Disinfection Services LLC have also reported growth in demand for their systems, many others have jumped on the COVID-19 bandwagon.

UVD disinfection robots are designed for clinical use. Source: UVD Robots

Artificial intelligence and robotics experts in Edinburgh are working to create what they hope will be the first healthcare robots to hold a conversation with more than one person at a time.

It is a project designed to help older people, but it could one day be used to help handle virus outbreaks like the coronavirus pandemic.

Says Heriot-Watt’s professor of computer science Oliver Lemon:

“It’s not something we had actually considered while designing the project. But as it turns out it’s quite relevant to what’s going on today. You can imagine in the future that when you walk into a hospital waiting room, instead of encountering a human you encounter a robot who’s able to help you. That kind of hands-free, touch-free speech interface is really going to be in more demand.”

The new project, funded by the EU’s Horizon 2020, is called Socially Pertinent Robots in Gerontological Healthcare (SPRING). Robots are already working in some hospitals but are mostly confined to menial tasks such as shifting supplies or patient records. You can also have a chat with a digital helper like Alexa or Siri, but such conversations are typically simple, short and one-on-one. SPRING will develop new robots which can deal with multiple people in social situations.

The drive to create what are called socially assistive robots (SARs) is the first project to be announced by the National Robotarium.

It is a partnership between Heriot-Watt University and the University of Edinburgh that aims to create a world-leading centre for robotics and AI.

Its new building is due to open on the Heriot-Watt campus in 2021.

by Jess Hohenstein and Malte Jung

AI as mediator: ‘Smart’ replies help humans communicate during pandemic

AI-generated smart replies in messaging affect interpersonal relationships.

Increased levels of trust in the human communicator are seen in the presence of smart replies.

When conversations go awry, some of the responsibility that would have been assigned to the human communicator is assigned to the AI instead.

In AI-mediated messaging, the AI is considered to have agency only when things go awry.

Smart replies could be used to improve relationships between communicators.

Daily life during a pandemic means social distancing and finding new ways to remotely connect with friends, family and co-workers. And as we communicate online and by text, artificial intelligence could play a role in keeping our conversations on track, according to new Cornell University research.

Humans having difficult conversations said they trusted artificially intelligent systems — the “smart” reply suggestions in texts — more than the people they were talking to, according to a new study, “AI as a Moral Crumple Zone: The Effects of Mediated AI Communication on Attribution and Trust,” published online in the journal Computers in Human Behavior.

“We find that when things go wrong, people take the responsibility that would otherwise have been designated to their human partner and designate some of that to the artificial intelligence system,” said Jess Hohenstein, a doctoral student in the field of information science and the paper’s first author. “This introduces a potential to take AI and use it as a mediator in our conversations.”

For example, the algorithm could notice things are going downhill by analyzing the language used, and then suggest conflict-resolution strategies, Hohenstein said.

The study was an attempt to explore the myriad ways — both subtle and significant — that AI systems such as smart replies are altering how humans interact. Choosing a suggested reply that’s not quite what you intended to say, but saves you some typing, might be fundamentally altering the course of your conversations — and your relationships, the researchers said.

In addition to shedding light on how people perceive and interact with computers, the study offers possibilities for improving human communication — with subtle guidance and reminders from AI.

Hohenstein and Jung said they sought to explore whether AI could function as a “moral crumple zone” — the technological equivalent of a car’s crumple zone, designed to deform in order to absorb the crash’s impact.

in The Guardian

Companies the world over are directing their ingenuity at the fight against the coronavirus. Here are the front-runners, from sanitising robots to a 3D-printed hospital ward.

Research articles

by Amanda Sutrisno and David J. Braun in Science Advances

Hypothetical spring-loaded human exoskeleton could double running speed

A pair of researchers at Vanderbilt University has proposed a method to create a device that would allow human beings to run nearly twice as fast as is possible naturally. In their paper published in the journal Science Advances, Amanda Sutrisno and David Braun describe their idea for such a device and what is required to make it a reality.

Prior research has shown that the average person can run approximately 24 kilometers per hour; world record holder Usain Bolt has been clocked at 12.3 meters per second (about 44 kilometers per hour). But such speeds are not very impressive when compared to other animals — of course, most of them run on four legs. Humans have found a way to improve their speed using only human power: bicycles. But there might be another way. In this new effort, Sutrisno and Braun show that it might be possible for a human being to run as fast as a person riding a bicycle by wearing a device that takes advantage of air time. When a person runs, their feet take turns hovering in the air for brief moments — a time when the foot is not doing anything to advance running.

The idea presented by Sutrisno and Braun is to build a device that attaches to the body to serve as an assist. The device would have springs, one for each leg. The springs would be pulled by leg action during air time. The knee joint serves as a hinge that extends the leg — it is during that extension that the spring would pull, storing energy that could be expended once the foot comes back to the ground. That energy would then be combined with normal muscle energy, allowing the foot to push back harder than normal against the ground, propelling the person forward faster than they would normally be able to achieve on their own. When the researchers ran simulations with such a device, they found that it could help people run nearly twice as fast as normal.
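To get a feel for the magnitudes involved, the toy calculation below applies the ideal-spring energy formula E = ½kx². The stiffness and extension values are assumptions chosen for illustration, not parameters from the paper.

```python
def spring_energy(k, x):
    """Elastic energy (J) stored in an ideal spring: E = 0.5 * k * x**2."""
    return 0.5 * k * x ** 2

k = 20_000.0  # assumed spring stiffness in N/m (illustrative, not from the paper)
x = 0.15      # assumed extension pulled during leg swing, in m (illustrative)
print(spring_energy(k, x))  # roughly 225 J stored per swing under these assumptions
```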

Unfortunately, there is a hitch with the idea: Materials such as carbon fiber lack the energy-holding capacity that would be needed to realize the device the team has envisioned. Something new will have to be developed before they are able to test their idea in the real world.

Top speeds of human-powered locomotion.

World records in natural running (12.3 m/s) (1), running with a spring blade prosthesis (11 m/s) (13), ice-skating (15 m/s) (52), and cycling (21.4 m/s) (fig. S7) (2), and the top speed predicted for augmented running (20.9 m/s). There is a linear empirical relation v_max ∝ Δt_E/T between the world record speeds and the relative time available for each leg to supply energy in running, ice-skating, and cycling. The air resistance limit is given by a cube-root relation v_max ∝ (Δt_E/T)^(1/3) (see Materials and Methods). This relation is calculated assuming that the energy supply rate of each leg is 18 W/kg (25), which is close to what has been measured for world-class cyclists (26).

by Kirby A. Witte, Pieter Fiers, Alison L. Sheets-Singer and Steven H. Collins in Science Robotics

Stanford Engineers find ankle exoskeleton aids running

Running is great exercise but not everyone feels great doing it. In hopes of boosting physical activity — and possibly creating a new mode of transportation — engineers at Stanford University are studying devices that people could strap to their legs to make running easier.

In experiments with motor-powered systems that mimic such devices — called exoskeleton emulators — the researchers investigated two different modes of running assistance: motor-powered assistance and spring-based assistance. The results, published in Science Robotics, were surprising.

The mere act of wearing an exoskeleton rig that was switched off increased the energy cost of running, making it 13 percent harder than running without the exoskeleton. However, the experiments indicated that, if appropriately powered by a motor, the exoskeleton reduced the energy cost of running, making it 15 percent easier than running without the exoskeleton and 25 percent easier than running with the exoskeleton switched off.

In contrast, the study suggested that if the exoskeleton was powered to mimic a spring there was still an increase in energy demand, making it 11 percent harder than running exoskeleton-free and only 2 percent easier than the non-powered exoskeleton.

“When people run, their legs behave a lot like a spring, so we were very surprised that spring-like assistance was not effective,” said Steve Collins, associate professor of mechanical engineering at Stanford and senior author of the paper. “We all have an intuition about how we run or walk but even leading scientists are still discovering how the human body allows us to move efficiently. That’s why experiments like these are so important.”

If future designs could reduce the energy cost of wearing the exoskeleton, runners may get a small benefit from spring-like assistance at the ankle, which is expected to be cheaper than motor-powered alternatives.

Experimental setup

(A) Exoskeleton emulator testbed. A participant runs on a treadmill while wearing bilateral ankle exoskeletons actuated by motors located off-board, with mechanical power transmitted through flexible Bowden cables. (B) Ankle exoskeleton. The ankle exoskeleton attaches to the user by a strap above the calf, a rope through the heel of the shoe, and a carbon fiber plate embedded in the toe of the shoe. The inner Bowden cable terminates on a 3D-printed titanium heel spur that is instrumented with strain gauges for direct measurement of applied torque. A magnetic encoder measures ankle angle. (C) Participant running on the treadmill with bilateral ankle exoskeletons. Metabolic data are collected through a respiratory system by measuring the oxygen and carbon dioxide content of the participant’s expired gases.

by Yousef Emam, Siddharth Mayya, Gennaro Notomista, Addison Bohannon, Magnus Egerstedt

Researchers recently developed a framework for adaptive task allocation during missions that are to be completed by a team of robots. Their framework can assign tasks to robots based on their unique capabilities and characteristics.

In recent years, robots have become increasingly sophisticated and are now able to complete a wide variety of tasks. While some robots are designed to work individually, for instance providing basic assistance in people’s homes, others might be more efficient when deployed in teams.

During search and rescue missions after natural disasters, for instance, robots might be more effective as a team, as they could deliver supplies or search for survivors faster, covering larger geographical regions. To complete missions as a team most efficiently, however, robots should be able to cooperate well and distribute tasks among each other effectively.

With this in mind, researchers at Georgia Institute of Technology (Georgia Tech) recently developed a framework for adaptive task allocation during missions that are to be completed by a team of robots.

“Robot teams are envisioned to operate in dynamic environments, and this paper proposes an update rule that allows robots to gauge on the fly how fit they are for each of the various tasks they are assigned,” says Yousef A. Emam, one of the researchers who carried out the study.

The framework developed by the researchers is based on a task allocation technique for heterogeneous multi-robot systems that they introduced in a previous paper. This previously devised strategy entails the use of an algorithm that accounts for differences in individual robot capabilities and allocates tasks accordingly. The allocation and execution of these tasks take place simultaneously.

“Our framework solves optimization problems online, telling individual robots how to prioritize their contributions to the various tasks they are to complete (i.e., task allocation), and how to do so (i.e., task execution),” Emam said.

In their study, Emam and his colleagues built on the task allocation strategy they previously developed, making it more responsive to changes in the robots’ surrounding environment. In contrast with its previous version, their new framework does not require an explicit model of the environment or of robot capabilities that are unknown. Instead, it primarily considers the collective progress that the team of robots made on a given mission and each robot’s performance on individual tasks.
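The authors’ framework solves these optimization problems online, with allocation and execution interleaved. As a much simpler stand-in, the sketch below brute-forces a one-shot capability-aware assignment over a hypothetical robot-task suitability matrix; all names and numbers are illustrative, not from the paper.

```python
from itertools import permutations

def allocate(suitability):
    """Exhaustively search one-robot-per-task assignments and
    return the permutation maximizing total suitability."""
    n = len(suitability)
    best = max(permutations(range(n)),
               key=lambda p: sum(suitability[i][p[i]] for i in range(n)))
    return list(best)

# Rows: robots, columns: tasks; entries: hypothetical fitness scores.
suitability = [
    [0.9, 0.1, 0.3],  # robot 0 is best at task 0
    [0.2, 0.8, 0.4],  # robot 1 is best at task 1
    [0.3, 0.5, 0.7],  # robot 2 is best at task 2
]
print(allocate(suitability))  # [0, 1, 2]
```

Real frameworks replace the exhaustive search with an online optimization that also reacts to robots degrading or failing mid-mission.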

by Nathan S. Usevitch, Zachary M. Hammond, Mac Schwager, Allison M. Okamura, Elliot W. Hawkes, Sean Follmer in Science Robotics

Stanford engineers create shape-changing, free-roaming soft robot. A new type of robot combines traditional and soft robotics, making it safe but sturdy. Once inflated, it can change shape and move without being attached to a source of energy or air.

Advances in soft robotics could someday allow robots to work alongside humans, helping them lift heavy objects or carry them out of danger. As a step toward that future, Stanford University researchers have developed a new kind of soft robot that, by borrowing features from traditional robotics, is safe while still retaining the ability to move and change shape.

“A significant limitation of most soft robots is that they have to be attached to a bulky air compressor or plugged into a wall, which prevents them from moving,” said Nathan Usevitch, a graduate student in mechanical engineering at Stanford. “So, we wondered: What if we kept the same amount of air within the robot all the time?”

From that starting point, the researchers ended up with a human-scale soft robot that can change its shape, allowing it to grab and handle objects and roll in controllable directions.

“The casual description of this robot that I give to people is Baymax from the movie Big Hero 6 mixed with Transformers. In other words, a soft, human-safe robot mixed with robots that can dramatically change their shape,” said Usevitch.

The simplest version of this squishy robot is an inflated tube that runs through three small machines that pinch it into a triangle shape. One machine holds the two ends of the tube together; the other two drive along the tube, changing the overall shape of the robot by moving its corners. The researchers call it an “isoperimetric robot” because, although the shape changes dramatically, the total length of the edges — and the amount of air inside — remains the same.
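The isoperimetric idea can be illustrated numerically: redistributing a fixed perimeter among a triangle’s three sides changes the shape and enclosed area while the total edge length stays constant. A minimal sketch (a geometric illustration, not the authors’ model):

```python
import math

def heron_area(a, b, c):
    """Triangle area from side lengths via Heron's formula."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

P = 3.0  # fixed total edge length ("perimeter"), arbitrary units
# Sliding the corner machines along the tube re-divides the same perimeter:
for sides in [(1.0, 1.0, 1.0), (1.2, 1.2, 0.6), (1.4, 1.4, 0.2)]:
    assert math.isclose(sum(sides), P)  # the perimeter never changes
    print(sides, round(heron_area(*sides), 3))
```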

The isoperimetric robot is a descendant of three types of robots: soft robots, truss robots and collective robots. Soft robots are lightweight and compliant, truss robots have geometric forms that can change shape and collective robots are small robots that work together, making them particularly strong in the face of single-part failures.

Demonstration and characterization of the robot’s compliant behavior

(A) Overloading the robot causes the robot to collapse. After being restored to its initial configuration, the robot is again able to support the initial load. (B) Load displacement behavior of a single triangle in three different configurations. In all cases, there is a moderate initial stiffness until a critical load is reached and the beam buckles, at which point the force required to maintain a given level of deflection is much lower than the peak value, demonstrating a mechanical fuse–type behavior of the robot. (C) The robot moves a 6.8-kg load over a trajectory.

by Jiazheng Chai and Mitsuhiro Hayashibe in IEEE Robotics and Automation Letters

Researchers have observed the emergence of motor synergy in robotic agents trained with deep reinforcement learning (DRL) algorithms

Human motor control executes complex movements naturally, efficiently, and without much conscious thought. This is because of the existence of motor synergy in the central nervous system (CNS). Motor synergy allows the CNS to use a smaller set of variables to control a large group of muscles, thereby simplifying the control of coordinated and complex movements.

DRL allows robotic agents to learn the best action possible in their virtual environment. It allows complex robotic tasks to be solved whilst minimizing manual operations and achieving peak performance. Classical algorithms, on the other hand, require manual intervention to find specific solutions for every new task that appears.

However, applying motor synergy from the human world to the robotic world is no small task. Even though many studies support the employment of motor synergy in human and animal motor control, the background process is still largely unknown.

In the current study, researchers from Tohoku University utilized two DRL algorithms on walking robotic agents known as HalfCheetah and FullCheetah. The two algorithms were TD3, a classical DRL, and SAC, a high-performing DRL.

The two robotic agents were tasked with running forward as far as possible within a given time, completing 3 million training steps in total. No synergy information was supplied to the DRL algorithms, yet the robotic agents demonstrated the emergence of motor synergy in their movements.
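Motor synergies are commonly quantified with principal component analysis of joint actuation signals, counting how many components are needed to explain most of the variance. The sketch below is an assumption-laden illustration of that idea, not the paper’s analysis: it recovers two “synergies” from six toy joint signals that are driven by two shared underlying patterns.

```python
import numpy as np

def synergy_count(joint_signals, var_threshold=0.9):
    """Number of principal components needed to explain `var_threshold`
    of the variance in joint signals, a common proxy for counting synergies."""
    X = joint_signals - joint_signals.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)
    var_ratio = s ** 2 / np.sum(s ** 2)
    return int(np.searchsorted(np.cumsum(var_ratio), var_threshold) + 1)

# Toy data: six "joints" driven by two shared patterns plus a little noise
# (patterns and mixing weights are made up for the example).
t = np.linspace(0, 2 * np.pi, 200)
patterns = np.stack([np.sin(t), np.cos(2 * t)], axis=1)        # (200, 2)
mixing = np.array([[1.0, 0.5, -0.8, 0.3, 0.9, -0.4],
                   [0.2, 1.0, 0.6, -0.7, 0.1, 0.8]])           # (2, 6)
rng = np.random.default_rng(0)
joints = patterns @ mixing + 0.01 * rng.normal(size=(200, 6))  # (200, 6)
print(synergy_count(joints))  # 2
```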

Mitsuhiro Hayashibe, Tohoku University professor and co-author of the study, notes:

“We first confirmed in a quantitative way that motor synergy can emerge even in deep learning as humans do. After employing deep learning, the robotic agents improved their motor performances while limiting energy consumption by employing motor synergy.”

by Davide Falanga, Kevin Kleber and Davide Scaramuzza in Science Robotics

Using a novel type of camera, researchers from the University of Zurich have demonstrated a flying robot that can detect and avoid fast-moving objects. It is a step towards drones that can fly faster in harsh environments, accomplishing more in less time.

Drones can do many things, but avoiding obstacles is not their strongest suit yet — especially when they move quickly. Although many flying robots are equipped with cameras that can detect obstacles, it typically takes from 20 to 40 milliseconds for the drone to process the image and react. It may seem quick, but it is not enough to avoid a bird or another drone, or even a static obstacle when the drone itself is flying at high speed. This can be a problem when drones are used in unpredictable environments, or when there are many of them flying in the same area.

In order to solve this problem, researchers at the University of Zurich have equipped a quadcopter (a drone with four propellers) with special cameras and algorithms that reduced its reaction time down to a few milliseconds — enough to avoid a ball thrown at it from a short distance. The results, published in the journal Science Robotics, can make drones more effective in situations such as the aftermath of a natural disaster.

“For search and rescue applications, such as after an earthquake, time is very critical, so we need drones that can navigate as fast as possible in order to accomplish more within their limited battery life,” explains Davide Scaramuzza, who leads the Robotics and Perception Group at the University of Zurich as well as the NCCR Robotics Search and Rescue Grand Challenge. “However, by navigating fast, drones are also more exposed to the risk of colliding with obstacles, even more so if these are moving. We realized that a novel type of camera, called an event camera, is a perfect fit for this purpose.”

Scaramuzza and his team first tested the cameras and algorithms alone. They threw objects of various shapes and sizes towards the camera, and measured how efficient the algorithm was in detecting them. The success rate varied between 81 and 97 percent, depending on the size of the object and the distance of the throw, and the system only took 3.5 milliseconds to detect incoming objects.
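The practical difference between the two latencies is easy to quantify: it is the gap between reacting after an obstacle has closed tens of centimeters and after it has closed a few. A back-of-the-envelope sketch:

```python
def closing_distance(speed_mps, latency_ms):
    """Distance (m) an obstacle covers while the perception pipeline reacts."""
    return speed_mps * latency_ms / 1000.0

# A ball at 10 m/s, as in the outdoor tests:
print(closing_distance(10, 40))   # 0.4 m closed during a 40 ms frame-camera pipeline
print(closing_distance(10, 3.5))  # 0.035 m closed during the 3.5 ms event-camera pipeline
```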

Then the most serious test began: putting the cameras on an actual drone, flying it both indoors and outdoors, and throwing objects directly at it. The drone was able to avoid the objects — including a ball thrown from a three-meter distance and traveling at 10 meters per second — more than 90 percent of the time. When the drone “knew” the size of the object in advance, one camera was enough. When, instead, it had to face objects of varying size, two cameras were used to give it stereoscopic vision.

According to Scaramuzza, these results show that event cameras can increase the speed at which drones navigate by up to ten times, thus expanding their possible applications. “One day drones will be used for a large variety of applications, such as delivery of goods, transportation of people, aerial filmography and, of course, search and rescue,” he says. “But enabling robots to perceive and make decisions faster can be a game changer also for other domains where reliably detecting incoming obstacles plays a crucial role, such as automotive, goods delivery, transportation, mining, and remote inspection with robots.”

A sequence from our outdoor experiments. (A) t = 0 s. (B) t = 0.15 s. (C) t = 0.30 s. (D) t = 0.45 s. The quadrotor is flying toward a reference goal position when an obstacle is thrown toward it. The obstacle is successfully detected using a stereo pair of event cameras and is avoided by moving upward.

by Akhil Padmanabha, Frederik Ebert, Stephen Tian, Roberto Calandra, Chelsea Finn, Sergey Levine

A team of researchers at UC Berkeley recently developed a new multi-directional tactile sensor, called OmniTact, that overcomes some of the limitations of previously developed sensors.

In recent years, researchers worldwide have been trying to develop sensors that could replicate humans’ sense of touch in robots and enhance their manipulation skills. While some of these sensors have achieved remarkable results, most existing solutions have small sensitive fields or can only gather low-resolution images.

OmniTact, set to be presented at ICRA 2020, acts as an artificial fingertip that allows robots to sense the properties of objects they hold or manipulate.

OmniTact, the sensor developed by Ebert and his colleagues, is an adaptation of GelSight, a tactile sensor created by researchers at MIT and UC Berkeley. GelSight can generate detailed 3-D maps of an object’s surface and detect some of its characteristics.

An illustration and image explaining the basic differences between the GelSight Sensor and OmniTact.

In contrast with GelSight, OmniTact is multi-directional, which means that all of its sides have sensing capabilities. In addition, it can provide high-resolution readings, is highly compact and has a curved shape. When integrated into a gripper or robotic hand, the sensor acts as a sensitive artificial ‘finger,’ allowing the robot to manipulate and sense a wide range of objects varying in shape and size.

by Hailong Wang, Erik T. Nilsen and Moneesh Upmanyu in Journal of The Royal Society Interface

What can a rhododendron teach us about robotics?

On a chilly winter day, Moneesh Upmanyu, a professor of mechanical and industrial engineering at Northeastern, took a walk with his son near their home outside of Boston. They passed a rhododendron bush, its thick green leaves curled up into thin tubes dangling limply from their stems. It looked dead, or dying.

But when Upmanyu walked past the spot a few days later, on a warmer day, the plant seemed to have revived. The leaves were spread out flat and lifted upwards towards the sun. His son had one question: Why?

Upmanyu studies the structural properties of different materials and how they respond to stimuli, for use in things like microelectronics or robotics systems. In a recent paper, Upmanyu and his colleagues examined the mechanical aspects of how rhododendron leaves curl and droop.

“In robotics, microelectronic devices, you want to design switches which can make contact and disconnect just based on some stimulus, like temperature, light, or even touch,” Upmanyu says. “This sort of understanding is quite important for designing smart, active structures.”

In this particular instance, it comes down to the movement of water, Upmanyu says. When temperatures drop, water moves from the stem into the leaf, causing the stem to droop. The water is distributed through the leaf unevenly, and as it freezes, it causes the top of the leaf to expand and the underside to contract. This makes the leaf start to curl.

If that were the end of it, though, the leaf would curl uniformly downwards, resulting in an upside-down cup shape. What causes the leaves to roll into a tight cigar are their stiff spines, or midribs, that run down the center of the leaf, says Hailong Wang, lead author on the study and a professor at the University of Science and Technology of China.

“The leaf cannot bend into a dome-shaped hemispherical structure — it has to bend only along one direction, which the stiff midrib picks,” says Wang. “The curvature develops only along one direction, but it’s amplified.”

Understanding why these leaves curl could help researchers design smart, folding structures and electronics that respond to temperature changes or other stimulation.

When the researchers cut strips from rhododendron leaves, separating them from the midrib, the strips curled and twisted loosely in all directions. But with curling restricted by the midrib, those forces are redirected into just one direction, causing a much tighter curl. The biological reason for this curling is to help these plants survive the winter.

Rhododendrons retain their green leaves throughout winter, despite growing in tough, alpine conditions. As the deciduous trees around them lose their leaves, extra sunlight reaches the rhododendrons. But in the coldest weather they can’t use it — their metabolism shuts down. That radiation can damage the leaves. By curling and drooping, rhododendron leaves drastically reduce the amount of sunlight hitting them when they can’t use it.

This may also help them thaw slower after a frost, Upmanyu says. If the leaves thaw and uncurl too quickly, frost needles could puncture and damage the surface of the leaves.

Understanding how these mechanisms work in rhododendrons could potentially help scientists engineer crops that are more resistant to cold weather. But Upmanyu is also interested in how these same principles can be applied to engineering.

Videos

DARPA SubT Urban Circuit: Collision-tolerant Exploration of Staircases using Aerial Robots

In this video the team presents the autonomous exploration of a staircase with four sub-levels and the transition between two floors of the Satsop Nuclear Power Plant during the DARPA Subterranean Challenge Urban Circuit. The system used is a collision-tolerant flying robot capable of multi-modal localization and mapping, fusing LiDAR, vision and inertial sensing. Autonomous exploration and navigation through the staircase is enabled by a graph-based exploration planner implementing a specific mode for vertical exploration. The collision tolerance of the platform was of paramount importance, especially given the thin features of the involved geometry, such as handrails. The whole mission was conducted fully autonomously.

Here’s a look at one of the preliminary simulated cave environments for the DARPA SubT Challenge.

Robot system SherpaUW: Tests with hybrid underwater rover in the maritime exploration hall

SherpaUW is a hybrid walking and driving exploration rover for subsea applications. The locomotion system consists of four legs with 5 active DoF each; a 6-DoF manipulation arm is also available. All joints of the legs and the manipulation arm are sealed against water, and the arm is pressure-compensated, allowing deployment in deep-sea applications. SherpaUW’s hybrid crawler design is intended to allow for extended long-term missions on the sea floor. Since it requires no extra energy to maintain its posture and position, in contrast to traditional underwater ROVs (remotely operated vehicles), SherpaUW is well suited for repeated and precise sampling operations, for example monitoring black smokers over a longer period of time.

Can Machines Perceive Emotion?

Many tech companies are trying to build machines that detect people’s emotions, using techniques from artificial intelligence. Some companies claim to have succeeded already. Dr. Lisa Feldman Barrett evaluates these claims against the latest scientific evidence on emotion. What does it mean to “detect” emotion in a human face? How often do smiles express happiness and scowls express anger? And what are emotions, scientifically speaking?

If social distancing already feels like too much work, Misty is like that one-in-a-thousand child that enjoys cleaning. See her in action here as a robot disinfector and sanitizer for common and high-touch surfaces. Alcohol reservoir, servo actuator included.

Underground Bunker Exploration with a Team of Robots:

From the Oxford Dynamic Robot Systems Group YouTube channel. DRS has been developing navigation and mapping for autonomous exploration underground. This video demonstrates their work exploring an underground bunker near Oxford.

HiBot, which just received an influx of funding, is adding new RaaS (robotics as a service) offerings to its collection of robot arms and snakebots.

Robotics Trends & the Impact on Employment

Alex Shikany, VP of Membership and Business Intelligence for A3 shares insights from his organization on the relationship between robotics growth and employment.

Other

Robopets: Using technology to monitor older adults raises privacy concerns: Social isolation and loneliness are concerns for many older adults, and can be triggered by the need to transition to a condo, rental accommodation, long-term care facility or retirement home.

Sometimes, the only thing standing between an older adult and loneliness may be a beloved pet. This reciprocal relationship of affection and attention between human and non-human animal translates into physical and mental health benefits. However, in many cases, pets can’t move with their older adults since very few jurisdictions guarantee the right to bring an animal into a rental unit or condominium.

The authors’ research asks: What are the factors that impact well-being in older age? They explore the impacts of technology on privacy, autonomy and well-being, as well as the effects of the human-animal bond on health and well-being. The authors are also interested in whether social robots, including robopets, can produce the same effects.

Subscribe to Paradigm’s detailed company updates!

Medium. Twitter. Telegram. Reddit.