TL;DR

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent, reaching just under 210 billion U.S. dollars by 2025 after passing the 100 billion U.S. dollar mark in 2020.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Research articles

CSAIL’s Conduct-A-Bot system uses muscle signals to cue a drone’s movement, enabling more natural human-robot communication

Albert Einstein famously postulated that “the only real valuable thing is intuition,” arguably one of the most important keys to understanding intention and communication. But intuitiveness is hard to teach — especially to a machine. Looking to improve this, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with a method that dials us closer to more seamless human-robot collaboration. The system, called “Conduct-A-Bot,” uses human muscle signals from wearable sensors to pilot a robot’s movement.

“We envision a world in which machines help people with cognitive and physical work, and to do so, they adapt to people rather than the other way around,” says Professor Daniela Rus, director of CSAIL, deputy dean of research for the MIT Stephen A. Schwarzman College of Computing, and co-author on a paper about the system.

To enable seamless teamwork between people and machines, electromyography and motion sensors are worn on the biceps, triceps, and forearms to measure muscle signals and movement. Algorithms then process the signals to detect gestures in real time, without any offline calibration or per-user training data. The system uses just two or three wearable sensors, and nothing in the environment, greatly reducing the barrier for casual users interacting with robots.
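
The detection pipeline itself is not reproduced in the article, but the general recipe behind such real-time EMG gesture detectors is easy to sketch. The following Python example is illustrative only, not CSAIL's code; the sample rate, window length, and threshold rule are all assumptions. The idea: rectify the raw muscle signal, smooth it into an activation envelope, and flag activity when the envelope rises well above a resting baseline.

```python
import numpy as np

FS = 1000     # assumed EMG sample rate (Hz)
WINDOW = 150  # smoothing window length in samples (assumption)

def emg_envelope(raw: np.ndarray) -> np.ndarray:
    """Rectify a raw EMG trace and smooth it into an activation envelope."""
    rectified = np.abs(raw - raw.mean())          # remove DC offset, rectify
    kernel = np.ones(WINDOW) / WINDOW
    return np.convolve(rectified, kernel, mode="same")

def detect_activation(envelope: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Flag samples whose activation exceeds k standard deviations above
    a resting baseline estimated from the first second of data."""
    baseline = envelope[:FS]
    threshold = baseline.mean() + k * baseline.std()
    return envelope > threshold

# Demo on synthetic data: one second of rest, a one-second "tensed arm"
# burst, then rest again.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.05, 3 * FS)
signal[FS:2 * FS] += rng.normal(0.0, 0.5, FS)     # simulated muscle burst
active = detect_activation(emg_envelope(signal))
print(f"fraction of samples flagged active: {active.mean():.2f}")
```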

While Conduct-A-Bot could potentially be used in various scenarios, including navigating menus on electronic devices or supervising autonomous robots, for this research the team used a Parrot Bebop 2 drone, although any commercial drone could be used. By detecting actions like rotational gestures, clenched fists, tensed arms, and activated forearms, Conduct-A-Bot can move the drone left, right, up, down, and forward, as well as rotate it and stop it. Just as you might gesture to the right and expect a friend to move in that direction, waving your hand to the left makes the drone turn left.

In tests, the drone correctly responded to 82 percent of over 1,500 human gestures when it was remotely controlled to fly through hoops. The system also correctly identified approximately 94 percent of cued gestures when the drone was not being controlled.

“Understanding our gestures could help robots interpret more of the nonverbal cues that we naturally use in everyday life,” says Joseph DelPreto, lead author on the new paper. “This type of system could help make interacting with a robot more similar to interacting with another person, and make it easier for someone to start using robots without prior experience or external sensors.”

This type of system could eventually target a range of applications for human-robot collaboration, including remote exploration, assistive personal robots, or manufacturing tasks like delivering objects or lifting materials. These intelligent tools are also consistent with social distancing, and could potentially open up a realm of future contactless work. For example, you can imagine machines being controlled by humans to safely clean a hospital room or drop off medications while letting us humans keep a safe distance.

Muscle signals can often provide information about states that are hard to observe from vision, such as joint stiffness or fatigue. For example, if you watch a video of someone holding a large box, you might have difficulty guessing how much effort or force was needed — and a machine would also have difficulty gauging that from vision alone. Using muscle sensors opens up possibilities to estimate not only motion, but also the force and torque required to execute that physical trajectory.

For the gesture vocabulary currently used to control the robot, the movements were detected as follows (a minimal code sketch follows the list):

stiffening the upper arm to stop the robot (similar to briefly cringing when seeing something going wrong): biceps and triceps muscle signals;

waving the hand left/right and up/down to move the robot sideways or vertically: forearm muscle signals (with the forearm accelerometer indicating hand orientation);

fist clenching to move the robot forward: forearm muscle signals; and

rotating clockwise/counterclockwise to turn the robot: forearm gyroscope.
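
To make the vocabulary concrete, here is a hedged sketch of how detected gestures might be mapped onto drone commands. The gesture labels and command strings are hypothetical placeholders introduced for illustration; the real system drives the Parrot Bebop 2 through its own SDK.

```python
from enum import Enum, auto

class Gesture(Enum):
    STIFFEN_ARM = auto()   # biceps/triceps co-contraction
    WAVE_LEFT = auto()     # forearm EMG + accelerometer orientation
    WAVE_RIGHT = auto()
    WAVE_UP = auto()
    WAVE_DOWN = auto()
    FIST_CLENCH = auto()   # forearm EMG
    ROTATE_CW = auto()     # forearm gyroscope
    ROTATE_CCW = auto()

# Hypothetical command names, not the vendor SDK's API.
COMMANDS = {
    Gesture.STIFFEN_ARM: "stop",
    Gesture.WAVE_LEFT: "move_left",
    Gesture.WAVE_RIGHT: "move_right",
    Gesture.WAVE_UP: "move_up",
    Gesture.WAVE_DOWN: "move_down",
    Gesture.FIST_CLENCH: "move_forward",
    Gesture.ROTATE_CW: "turn_clockwise",
    Gesture.ROTATE_CCW: "turn_counterclockwise",
}

def command_for(gesture: Gesture) -> str:
    """Map a detected gesture to a drone command, hovering by default."""
    return COMMANDS.get(gesture, "hover")

print(command_for(Gesture.FIST_CLENCH))  # -> move_forward
```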

Machine learning classifiers detected the gestures using the wearable sensors. Unsupervised classifiers processed the muscle and motion data and clustered it in real time to learn how to separate gestures from other motions. A neural network also predicted wrist flexion or extension from forearm muscle signals.

The system essentially calibrates itself to each person’s signals while they’re making gestures that control the robot, making it faster and easier for casual users to start interacting with robots.
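
The article describes the classifiers only at a high level, but the self-calibration idea can be illustrated with a toy unsupervised scheme: an online two-cluster classifier that separates high-activation "gesture" windows from "rest" as data streams in, with no offline training. This is a stand-in sketch, not the paper's algorithm; the one-dimensional feature and the update rule are assumptions.

```python
import numpy as np

class OnlineTwoMeans:
    """Two-cluster online k-means over a 1-D activation feature."""

    def __init__(self, lr: float = 0.05):
        self.centers = None   # cluster centers, learned on the fly
        self.lr = lr

    def update(self, x: float) -> int:
        """Absorb one sample and return 1 for 'gesture', 0 for 'rest'."""
        if self.centers is None:
            self.centers = np.array([x, x + 1e-3])
        i = int(np.argmin(np.abs(self.centers - x)))
        self.centers[i] += self.lr * (x - self.centers[i])  # pull winner toward sample
        return int(self.centers[i] == self.centers.max())   # higher center = gesture

clf = OnlineTwoMeans()
rng = np.random.default_rng(1)
stream = np.concatenate([rng.normal(0.1, 0.02, 200),   # resting activation
                         rng.normal(0.8, 0.10, 50)])   # gesture burst
labels = [clf.update(v) for v in stream]
print("last 10 labels:", labels[-10:])   # the burst is labeled 1
```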

by You Yu, Joanna Nassar, Changhao Xu, Jihong Min, Yiran Yang, Adam Dai, Rohan Doshi, Adrian Huang, Yu Song, Rachel Gehlhar, Aaron D. Ames and Wei Gao in Science Robotics

A flexible and fully biofuel-powered electronic skin enables continuous, multiplexed, and multimodal wireless sensing

Perspiration-powered soft electronic skin (e-skin) for multiplexed wireless sensing. (A) Schematic of a battery-free, biofuel-powered e-skin that efficiently harvests energy from the human body, performs multiplexed biosensing, and wirelessly transmits data to a mobile user interface through Bluetooth. (B and C) Photographs of a PPES on a healthy individual’s arm. Scale bars, 1 cm. (D and E) Schematic illustrations of the flexible BFC-biosensor patch (D) and the soft electronic-skin interface (E). (F) System-level packaging and encapsulation of the PPES for efficient on-body biofluid sampling. M-tape, medical tape.

Existing electronic skin (e-skin) sensing platforms are equipped to monitor physical parameters using power from batteries or near-field communication. For e-skins to be applied in the next generation of robotics and medical devices, they must operate wirelessly and be self-powered. However, despite recent efforts to harvest energy from the human body, self-powered e-skins with the ability to perform biosensing with Bluetooth communication remain limited because of the lack of a continuous energy source and limited power efficiency. Here, we report a flexible and fully perspiration-powered integrated electronic skin (PPES) for multiplexed metabolic sensing in situ. The battery-free e-skin contains multimodal sensors and highly efficient lactate biofuel cells that use a unique integration of zero- to three-dimensional nanomaterials to achieve high power intensity and long-term stability. The PPES delivered a record-breaking power density of 3.5 mW cm−2 for biofuel cells in untreated human body fluids (human sweat) and displayed very stable performance during 60 hours of continuous operation. It selectively monitored key metabolic analytes (e.g., urea, NH4+, glucose, and pH) and the skin temperature during prolonged physical activities and wirelessly transmitted the data to the user interface using Bluetooth. The PPES was also able to monitor muscle contraction and work as a human-machine interface for human-prosthesis walking.

On-body evaluation of the PPES as a wireless human-machine interface for robotic assistance. (A) Schematic illustration of the PPES for remote human-machine interaction. (B) Schematic of the CNTs-PDMS elastomer-based strain sensors. (C) Resistance response of a CNTs-PDMS strain sensor under different strains. (D) Photograph and schematic (inset) of the PPES integrated with strain sensors. (E) Real-time multidegree motion tracking using a PPES with the strain sensors on an individual’s finger and elbow. (F) Time-lapse images of the wireless robotic arm control using a PPES. (G and H) Time-lapse images of front view (G) and side view (H) of the use of the PPES for robotic prosthesis control.

by Benjamin Shih, Dylan Shah, Jinxing Li, Thomas G. Thuruthel, Yong-Lae Park, Fumiya Iida, Zhenan Bao, Rebecca Kramer-Bottiglio and Michael T. Tolley in Science Robotics

Developments in e-skins and machine learning may achieve tactile sensing and proprioception for autonomous, deployable soft robots

Soft robots have garnered interest for real-world applications because of their intrinsic safety embedded at the material level. These robots use deformable materials capable of shape and behavioral changes and allow conformable physical contact for manipulation. Yet, with the introduction of soft and stretchable materials to robotic systems comes a myriad of challenges for sensor integration, including multimodal sensing capable of stretching, embedment of high-resolution but large-area sensor arrays, and sensor fusion with an increasing volume of data. This Review explores the emerging confluence of e-skins and machine learning, with a focus on how roboticists can combine recent developments from the two fields to build autonomous, deployable soft robots, integrated with capabilities for informative touch and proprioception to stand up to the challenges of real-world environments.

by Florian Bergner, Emmanuel Dean-Leon and Gordon Cheng, Institute for Cognitive Systems (ICS), Technische Universität München

Researchers at Technische Universität München in Germany have recently developed an electronic skin that could help to reproduce the human sense of touch in robots. This e-skin requires far less computational power than other existing e-skins and can thus be applied to larger portions of a robot’s body.

The sense of touch enables us to safely interact with and control our contacts with our surroundings. Many technical systems and applications could profit from a similar type of sense. Yet, despite the emergence of e-skin systems covering ever more extensive areas, large-area realizations of e-skin that effectively boost applications are still rare. Recent advancements have improved the deployability and robustness of e-skin systems, laying the basis for their scalability. However, upscaling e-skin systems introduces yet another challenge: handling a large amount of heterogeneous tactile information with complex spatial relations between sensing points. The researchers targeted this challenge and proposed an event-driven approach for large-area skin systems. While their previous works focused on the implementation and the experimental validation of the approach, this work provides the consolidated foundations for realizing, designing, and understanding large-area event-driven e-skin systems for effective applications. It homogenizes the different perspectives on event-driven systems and assesses the applicability of existing event-driven implementations in large-area skin systems. Additionally, the authors provide novel guidelines for tuning the novelty threshold of event generators. Overall, the work develops a systematic approach towards realizing a flexible event-driven information-handling system on standard computer systems for large-scale e-skin, with detailed descriptions of the effective design of event generators and decoders. All designs and guidelines are validated by outlining their impact on the authors' implementations and by consolidating various experimental results. The resulting system design for e-skin systems is scalable, efficient, flexible, and capable of handling large amounts of information without customized hardware, making complex large-area tactile applications feasible, for instance in robotics.
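
The core event-driven idea can be illustrated with a minimal send-on-delta sketch: each sensing point transmits only when its reading departs from the last transmitted value by more than a novelty threshold, so a large e-skin produces traffic proportional to activity rather than to sensor count. The threshold value and sensor model below are assumptions for illustration, not the authors' parameters.

```python
from dataclasses import dataclass

@dataclass
class EventGenerator:
    """Send-on-delta event generator for one tactile sensing point."""
    threshold: float      # novelty threshold, in sensor units (assumption)
    last_sent: float = 0.0

    def step(self, reading: float):
        """Return the change as an event if it is novel enough, else None."""
        delta = reading - self.last_sent
        if abs(delta) >= self.threshold:
            self.last_sent = reading
            return delta
        return None

gen = EventGenerator(threshold=0.1)
stream = [0.0, 0.02, 0.05, 0.30, 0.31, 0.05]   # simulated pressure readings
events = [(t, e) for t, r in enumerate(stream) if (e := gen.step(r)) is not None]
print(events)   # only the two large jumps produce events
```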

by Patrick D. Ganzer, Samuel C. Colachis 4th, Michael A. Schwemmer, David A. Friedenberg, Collin F. Dunlap, Carly E. Swiftney, Adam F. Jacobowitz, Doug J. Weber, Marcia A. Bockbrader, and Gaurav Sharma in Cell

Researchers have been able to restore sensation to the hand of a research participant with a severe spinal cord injury using a brain-computer interface (BCI) system. The technology harnesses neural signals that are so minuscule they can’t be perceived and enhances them via artificial sensory feedback sent back to the participant, resulting in greatly enriched motor function.

• Following spinal cord injury, subperceptual touch signals affect the human motor cortex

• A brain-computer interface uses subperceptual signals to restore the sense of touch

• Sensorimotor function is further enhanced using demultiplexed sensorimotor signals

• Touch-regulated grip force can automate movement cascades and grip reanimation

Paralyzed muscles can be reanimated following spinal cord injury (SCI) using a brain-computer interface (BCI) to enhance motor function alone. Importantly, the sense of touch is a key component of motor function. Researchers demonstrate that a human participant with a clinically complete SCI can use a BCI to simultaneously reanimate both motor function and the sense of touch, leveraging residual touch signaling from his own hand. In the primary motor cortex (M1), residual subperceptual hand touch signals are simultaneously demultiplexed from ongoing efferent motor intention, enabling intracortically controlled closed-loop sensory feedback. Using the closed-loop demultiplexing BCI, the participant almost fully regained the ability to detect object touch and significantly improved several sensorimotor functions. Afferent grip-intensity levels are also decoded from M1, enabling grip reanimation regulated by touch signaling. These results demonstrate that subperceptual neural signals can be decoded from the cortex and transformed into conscious perception, significantly augmenting function.

“It has been amazing to see the possibilities of sensory information coming from a device that was originally created to only allow me to control my hand in a one-way direction,” says first author Patrick Ganzer, a principal research scientist at Battelle.

by Shira Sardi, Roni Vardi, Yuval Meir, Yael Tugendhaft, Shiri Hodassman, Amir Goldental & Ido Kanter in Scientific Reports

Researchers have successfully rebuilt the bridge between experimental neuroscience and advanced artificial intelligence learning algorithms. Conducting new types of experiments on neuronal cultures, the researchers were able to demonstrate a new accelerated brain-inspired learning mechanism. When the mechanism was applied to the artificial task of handwritten digit recognition, for instance, its success rates substantially outperformed commonly used machine learning algorithms.

Attempting to imitate the brain’s functionalities, researchers have bridged neuroscience and artificial intelligence for decades; however, experimental neuroscience has not directly advanced the field of machine learning (ML). Here, using neuronal cultures, scientists demonstrate that increased training frequency accelerates neuronal adaptation processes. They implemented this mechanism on artificial neural networks, where a local learning step size increases for coherent consecutive learning steps, and tested it on a simple dataset of handwritten digits, MNIST. In their online learning experiments with only a few handwriting examples, the success rates of the brain-inspired algorithm substantially outperformed those of commonly used ML algorithms. They speculate that this emerging bridge from slow brain function to ML will promote ultrafast decision making under limited examples, which is the reality in many aspects of human activity, robotic control, and network optimization.
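
The mechanism as described, a local step size that grows while consecutive learning steps stay coherent, can be sketched with a simple sign-agreement rule, in the spirit of delta-bar-delta style adaptive learning rates. This is a hedged illustration of the idea, not the authors' algorithm; the growth, shrink, and clipping constants are assumptions.

```python
import numpy as np

def adaptive_step(w, grad, step, prev_grad,
                  up=1.2, down=0.5, lo=1e-4, hi=1.0):
    """One per-weight update whose local step size grows while consecutive
    gradients agree in sign and shrinks when they disagree."""
    coherent = np.sign(grad) == np.sign(prev_grad)
    step = np.clip(np.where(coherent, step * up, step * down), lo, hi)
    return w - step * grad, step

# Toy problem: minimize 0.5 * ||w||^2, whose gradient is simply w.
w = np.array([2.0, -3.0])
step = np.full_like(w, 0.01)
prev_grad = np.zeros_like(w)
for _ in range(100):
    grad = w
    w, step = adaptive_step(w, grad, step, prev_grad)
    prev_grad = grad
print(w)   # approaches [0, 0] faster than fixed-step gradient descent
```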

Experimental results indicate that adaptation rates increase with training frequency. (a) The experimental scheme, where a patched neuron is stimulated intracellularly via its dendrites (Materials and Methods) and a different spike waveform is generated for each stimulated route. (b) The scheduling for coherent training consists of repeated pairs of an intracellular stimulation (orange) generating a spike followed by an extracellular stimulation (blue) lacking a spike. (c) An example of the first type of experiment, where a decreasing extracellular stimulation amplitude is used to estimate the threshold using intracellular recording (left), and enhanced responses are measured a minute after the termination of the training (right). (d) An example of the second type of experiment, similar to (c), where enhanced responses are observed 10 seconds after the termination of the training (Materials and Methods).

by S. Hauser, M. Mutlu, P.-A. Léziart, H. Khodr, A. Bernardino and A. J. Ijspeert in Robotics and Autonomous Systems

Roombot Swarm creates on-demand mobile furniture

• Depiction of how modular robots could be integrated into furniture in our living space.

• Definition of five key tasks of adaptive and assistive furniture.

• Development of a dedicated GUI for easy control of multiple Roombots (RB) modules.

• Hardware demonstrations of various sub-tasks involving up to 12 modules.

• Demonstrations of mobility, manipulation and human-module interaction.

This work presents a series of demonstrations of the self-reconfigurable modular robot (SRMR) “Roombots” in the context of adaptive and assistive furniture. In the literature, simulations are often ahead of what can currently be demonstrated in hardware with such systems, owing to the significant challenges of transferring them to the real world. The researchers describe how Roombots tackled these difficulties in real hardware and focus qualitatively on selected hardware experiments rather than on quantitative measurements (in hardware and simulation) to showcase the many possibilities of an SRMR. They envision Roombots being used in our living space and define five key tasks that such a system must master. They then demonstrate these tasks, including self-reconfiguration with 12 modules (36 degrees of freedom), autonomously moving furniture, object manipulation and gripping capabilities, human-module interaction, and the development of an easy-to-use user interface. They conclude with the remaining challenges and point out possible directions of research for the future of adaptive and assistive furniture with Roombots.

by Andreas M. Fischer, Akos Varga-Szemes, Marly van Assen, L. Parkwood Griffith et al. in American Journal of Roentgenology

Deploying artificial intelligence could help radiologists to more accurately classify lung diseases

“Everybody has a different trigger threshold for what they would call normal and what they would call disease,” said U. Joseph Schoepf, M.D., director of cardiovascular imaging for MUSC Health and assistant dean for clinical research in the Medical University of South Carolina College of Medicine. And until recently, scans of damaged lungs have been a moot point, he said.

“In the past, if you lost lung tissue, that was it. The lung tissue was gone, and there was very little you could do in terms of therapy to help patients,” he said.

But with advancements in treatment in recent years has come an increased interest in objectively classifying the disease, Schoepf said. That’s where artificial intelligence and imaging could come into play.

Schoepf was principal investigator in a study looking at the results of Siemens Healthineers’ AI-Rad Companion as compared with traditional lung function tests. The study showed that the algorithm within AI-Rad Companion, which examines chest scans, provides results comparable with lung function tests, which measure how forcefully a person can exhale. Showing that the artificial intelligence software works is the first step toward possibly using chest scans to quantify the severity of the lung disease and track the progress of treatment.

In the study, researchers went back and looked at the chest scans and lung function tests of 141 people. Chest scans aren’t currently part of the guidelines for diagnosing chronic obstructive pulmonary disease, an umbrella term that includes emphysema, chronic bronchitis and other lung diseases, Schoepf said, because there hasn’t been an objective means to evaluate scans.
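
Agreement between a CT-derived measure and spirometry is typically scored with a rank correlation, as in the study's scatterplot further below (ρ ≈ −0.86). Here is a short, illustrative sketch on synthetic stand-in data; the generated numbers are not the study's measurements.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
# Synthetic stand-ins for 141 patients' paired measurements:
tiffeneau = rng.uniform(0.3, 0.9, 141)                        # FEV1/FVC ratio
lav_percent = 60 * (0.9 - tiffeneau) + rng.normal(0, 3, 141)  # toy inverse relation

rho, p_value = spearmanr(tiffeneau, lav_percent)
print(f"rho = {rho:.2f}, p = {p_value:.1e}")   # strongly negative, as in the study
```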

However, he anticipates a role for imaging scans if it can be shown that they offer a benefit in terms of objectivity and quantification.

Philipp Hoelzer, customer engagement manager with Siemens Healthineers, said having an objective measurement could help in assessing the value of new treatments or drugs. The Siemens Healthineers team sees the program as a way for artificial intelligence to work in tandem with the clinical expertise of radiologists, he said.

“Taking away manual, repetitive tasks, like those that require a lot of measurement, is of great benefit to a radiologist, especially when reading cases that may have 20 or more nodules,” he said. “Interpreting the images, and the abstract thinking that goes along with it, will remain with the radiologist.”

The program can also offer a concrete aid to doctors trying to impress upon patients the necessity of making changes. It can create a 3D model of the patient’s lungs, showing the existing damage.

“If you could visualize it and provide the information in image terms, you could better communicate with the patient and hopefully nudge the patient into smoking cessation or lifestyle changes,” Hoelzer said.

A potential additional benefit is that AI-Rad Companion automatically looks for problems across multiple organ systems, including measuring the aorta and bone density. As Schoepf moves into a prospective study phase, he’ll be examining whether the artificial intelligence finds things that humans miss. And it can be easy for humans to miss problems that they aren’t specifically looking for, he said.

“We’re told the patient has these types of symptoms, and then we basically go look for stuff that could explain those symptoms. So, we’re often blind to things that do not necessarily relate to the organ system we’re interested in,” he said.

It can also be difficult for humans to create an accurate measurement of a three-dimensional structure within the body from a two-dimensional scan — something that isn’t a problem for the artificial intelligence program. It can automatically combine multiple 2D images to produce 3D measurements.
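
The underlying arithmetic is straightforward to illustrate: stack per-slice segmentation masks into a volume and multiply the voxel count by the voxel size. The sketch below shows the idea on a toy structure; the pixel spacing is an assumption, and none of this reflects AI-Rad Companion's internals.

```python
import numpy as np

pixel_spacing_mm = (0.7, 0.7)   # in-plane CT resolution (assumed)
slice_thickness_mm = 1.5        # matches the reconstructions described below

# masks: one boolean segmentation mask per slice, stacked into a volume
masks = np.zeros((100, 64, 64), dtype=bool)
masks[40:60, 20:40, 20:40] = True   # toy 20 x 20 x 20-voxel structure

voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
volume_ml = masks.sum() * voxel_mm3 / 1000.0
print(f"estimated volume: {volume_ml:.1f} mL")   # -> 5.9 mL for the toy structure
```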

Schoepf wants to see whether the program improves patient management by prompting early treatment of problems, like a widened aorta or decreased bone density, before the problems become painfully obvious to both doctor and patient.

Scatterplot shows correlation between Tiffeneau index and percentage of low-attenuation volume (LAV%) for the two reconstruction methods used in this study. Reconstruction method 1 (blue; ρ = −0.86) used a section thickness of 1.5 mm with a lung kernel (filtered back-projection body kernel B60s, sharp, standard mode). Reconstruction method 2 (red; ρ = −0.85) used a section thickness of 1.5 mm with a soft-tissue kernel (filtered back-projection body kernel B31s, medium-smooth, standard mode).

Further, addressing the dynamically changing health care environment, significant efforts are currently in the final stages to train the artificial intelligence software in the detection and characterization of COVID-19-related lung changes. Hopefully, this would provide physicians with a tool to better differentiate the rather non-specific lung findings of COVID-19 pneumonia from other infectious or inflammatory lung disorders and more objectively quantify the extent of disease.

In terms of the measures for which it was originally developed, Schoepf said MUSC Health will test the system for three months before determining whether to deploy it more extensively. With a regional network that now includes hospitals across the state, it could be a useful tool in standardizing care.

“It’s a great chance for patients to get better care. We have world-class radiologists here, but these systems add a little extra,” he said.

Researchers at the University of Surrey have recently developed self-organizing algorithms inspired by biological morphogenesis that can generate formations for multi-robot teams, adapting to the environment they are moving in. Their recent study was partly funded by the European Commission’s FP7 program.

“This research can be traced back to my previous work on morphogenetic robotics that applies genetic and cellular principles underlying biological morphogenesis to the self-organization of collective systems, such as robot swarms,” Professor Yaochu Jin, a Surrey University Distinguished Chair and principal investigator on the study, told TechXplore. “Our main idea was to build a metaphor between cells in multi-cellular organisms and robots, including modules for reconfigurable modular robots.”

The main advantage of using morphological principles observed in nature to generate collective robot behavior is that these principles allow robots to self-organize themselves in a way that is ‘guided’, ‘predictable’ or ‘controllable’. Nonetheless, self-organizing systems (i.e., systems without centralized control) also typically have a number of limitations.

For instance, defining local interaction rules that generate a desired group behavior in these systems can be highly challenging. In other words, predicting and controlling a system’s global behavior given a set of defined local rules is difficult.

In their work, Jin and his colleagues tried to overcome this limitation by using simplistic robots that are not capable of self-localization. Applying morphological principles to these ‘minimalistic’ robots could enable more effective group behaviors, such as target surrounding or team formations.

“The main difference between our recent work and previous studies is that we use very simplistic robots (e.g., the kilobots we used in our experiments) that do not have self-localization and orientation capabilities,” Jin said.

In biological development, cells are guided to a desirable position by chemicals called morphogens, or more specifically by morphogen gradients (i.e., the change in the concentration of morphogens across an animal’s body). Morphogen gradients can either be predefined, as they are, for instance, in the uterus (i.e., maternal morphogens), or established through what is known as ‘morphological development’.

In their study, Jin and his colleagues drew inspiration from a process called biological morphogenesis, through which cells generate morphogens themselves as an organism develops. While in nature these morphogens are then used to guide cells to specific locations, the researchers tried to replicate this principle to guide robots and shape their group behaviors.

“During the self-organization, targets and the robots can generate edge morphogens that can be sensed by the neighboring robots (within the sensing range of the robots),” Jin said. “Each robot receives information simulating morphogens from its neighboring robots and also passes such information on to neighboring robots, simulating the reaction and diffusion process of biological morphogens.”
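
The metaphor can be sketched in a few lines, though this is an illustrative toy, not the authors' H-GRN: a target injects a virtual morphogen, every robot relaxes its local concentration toward its neighbors' (a discrete diffusion step), and a robot can then climb the resulting gradient using only neighbor readings, never its own coordinates. The ring topology and constants are assumptions.

```python
import numpy as np

N, D, STEPS = 20, 0.2, 200
conc = np.zeros(N)   # virtual morphogen concentration held by each robot
target = 12          # index of the robot adjacent to the target

for _ in range(STEPS):
    conc[target] = 1.0                       # the target keeps emitting morphogen
    left, right = np.roll(conc, 1), np.roll(conc, -1)
    conc += D * (left + right - 2 * conc)    # diffusion between ring neighbors

# Gradient climbing with purely local information:
robot = 3
for _ in range(N):
    if robot == target:
        break
    if conc[(robot + 1) % N] > conc[(robot - 1) % N]:
        robot = (robot + 1) % N
    else:
        robot = (robot - 1) % N
print("reached target:", robot == target)
```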

In their experiments, the researchers assumed that robots can only sense objects (e.g., targets or other robots) within their sensing range. The robots they used, called kilobots, did not possess any self-localization or orientation capabilities.

The researchers reproduced morphogenetic principles observed in nature by using the gradient (i.e., the difference in concentration) of artificial ‘morphogens’ to guide the movements of multiple robots, so that they effectively reached a desired destination and produced specific group behaviors. In a series of preliminary tests, their hierarchical gene regulatory network (H-GRN) allowed robots to autonomously move toward a destination that was not predefined, surround targets, or form a specific shape.

“We found that by learning from biological morphogenesis, simplistic robots without self-localization capability (i.e., the capability of determining their own coordinates in a given environment), such as kilobots, can evenly surround moving or stationary targets in a self-organized way,” Jin said. “Most previously developed approaches to produce collaborative robot behaviors, on the other hand, are designed for robots that know their own position.”

The new hierarchical gene regulatory network (H-GRN) developed by this team of researchers has several advantages over other existing methods for producing collaborative behaviors in robots. Its primary advantage is that it can be used to shape the behavior of hundreds or even thousands of robots, allowing them to complete target surrounding or tracking tasks without centralized control and without any prior information about the targets.

Other

Boston Dynamics’ Spot robot is helping hospitals fight COVID-19 by reducing the amount of direct contact healthcare workers must have with patients. Boston Dynamics is also devising ways to use Spot to disinfect hospitals. In some places, robots are already using UV-C lights to do so.

Some Boston-area hospitals have turned to an unlikely assistant: a robotic dog named Spot.

“Starting in early March, [we] started receiving inquiries from hospitals asking if our robots could help minimize their staff’s exposure to COVID-19,” Boston Dynamics, the maker of the robot, said in a blog post Thursday. “One of the hospitals that we spoke to shared that, within a week, a sixth of their staff had contracted COVID-19 and that they were looking into using robots to take more of their staff out of range of the novel virus.”

Spot has seen internet stardom on YouTube for hijinks like loading a dishwasher, dancing, and hanging out with Adam Savage, but Boston Dynamics recently made the robot available for commercial lease. Since then, the robo-dog has helped bomb squads and worked on an oil rig, but this is its most important gig yet.

Spot has already been deployed at Brigham and Women’s Hospital of Harvard University for two full weeks. Right now, the bot works as telemedicine support, helping frontline staff in ad-hoc environments like triage tents and parking lots.

Generally, protocols require that patients line up in tents outside for initial temperature readings. This can take up to five medical staff members, who are at high risk of contracting the virus. By using a robot, the hospital can reduce the number of workers in these environments and conserve limited personal protective equipment, like face shields and N-95 masks.

Spot is equipped with an iPad and a two-way radio on its back so healthcare providers can video conference with patients while remotely directing the robot through the lines of patients in the tents.

So far, feedback from the hospital indicates Spot has helped reduce situations where nursing staff could be exposed to contagious patients. “For every intake shift completed by a tele-operated robot, at least one healthcare provider is able to reduce their interaction with the disease,” Boston Dynamics notes.

Still, there are only so many Spot units to go around, so Boston Dynamics is open-sourcing its hardware and software stack so that other robots can be rejiggered for healthcare work. It’s all available on the company’s GitHub page, down to computer-aided design (CAD) files for mounts. The company says no proprietary Boston Dynamics hardware or software is required to transform other robots into triage workers; you just need the open-sourced software.

In fact, Boston Dynamics imagines wheeled or tracked robots would work even better than Spot. So the company has been working closely with Canada’s Clearpath Robotics to build out more robotic triage workers.

Boston Dynamics wants to make the robot even more useful in hospital settings by finding a way to remotely measure vital signs like body temperature, respiratory rate, pulse rate, and oxygen saturation levels. After that, the company plans to use UV-C light (or similar technology) to kill virus particles and sanitize surfaces inside hospitals.

In health technology, wearable robots are programmable devices designed to mechanically interact with the body of the wearer. Sometimes referred to as exoskeletons, their purpose is to support motor function for people with severe mobility impairments. But market adoption of exoskeletons has been limited by factors such as the weight of the equipment and sometimes inaccurate predictions of the wearer’s movements when walking on uneven ground or approaching an obstacle. However, recent advances in robotics, materials science and artificial intelligence could make these mobility assistance and rehabilitation tools more compact, lightweight and effective for the wearer.

The BioMot project, based at the Human Locomotion Laboratory in Madrid, advanced this emerging field by demonstrating that personalized computational models of the human body can be used to control wearable exoskeletons. Funded by the EU Future and Emerging Technologies (FET) program and completed in September 2016, the project developed robots with real-time adaptability and flexibility, increasing the coupling between the robot and the user through dynamic sensorimotor interactions. Inspired by biology, the BioMot architecture design emulates multiple levels of sensory information processing, so researchers and developers could relate specific movements to customized motor-control recovery. In short, this means that an exoskeleton can be personalized to an individual user.

To enable this new kind of human-robot interaction, a new type of stiffness actuator that uses springs to emulate the biomechanical properties of human muscles was developed. Building upon the knowledge gathered on actuators during the BioMot project, the Robotics and MultiBody Mechanics Research Group at the Free University of Brussels (VUB) has sought to bring this technology closer to market through various spin-out initiatives.
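
A spring placed between motor and joint is the classic series-elastic arrangement, and one plausible reading of such an actuator is sketched below: joint torque can be read from spring deflection (tau = k * deflection) and controlled by servoing the motor position. This is a generic illustration under that assumption, not BioMot's actual design; the stiffness value is invented for the example.

```python
K_SPRING = 120.0   # spring stiffness in N*m/rad (illustrative value)

def measured_torque(theta_motor: float, theta_joint: float) -> float:
    """Joint torque inferred from the spring's deflection."""
    return K_SPRING * (theta_motor - theta_joint)

def motor_setpoint(tau_desired: float, theta_joint: float) -> float:
    """Motor angle that yields the desired torque at the current joint angle."""
    return theta_joint + tau_desired / K_SPRING

theta_joint = 0.3                              # rad, read from the joint encoder
setpoint = motor_setpoint(5.0, theta_joint)    # command 5 N*m of assistance
print(measured_torque(setpoint, theta_joint))  # -> 5.0
```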

An example of such a spin-out initiative is the Smart Wearable project, which is further developing a smart robot system for lower limb rehabilitation. It uses the previous actuator technology with a control and monitoring system linked to a user interface in the form of a video game. The integrated system aims to better personalize the equipment to the movements and needs of the user.

Built in about 24 hours, this robot is undergoing in-hospital testing for coronavirus disinfection

UV disinfection is one of the few areas where autonomous robots can be immediately and uniquely helpful during the COVID pandemic. Unfortunately, there aren’t enough of these robots to fulfill demand right now, and although companies are working hard to build them, it takes a substantial amount of time to develop the hardware, software, operational knowledge, and integration experience required to make a robotic disinfection system work in a hospital.

Conor McGinn, an assistant professor of mechanical engineering at Trinity College Dublin and co-leader of the Robotics and Innovation Lab (RAIL), has pulled together a small team of hardware and software engineers who managed to get a UV disinfection robot into hospital testing within just a few weeks. They made it happen in such a short amount of time by building on previous research, collaborating with hospitals directly, and leveraging a development platform: the TurtleBot 2.

Over the last few years, RAIL has been researching mobile social robots for elder-care applications, and during pilot testing the team came to understand how big a problem infection can be in environments like nursing homes. This was well before COVID-19, but infection was (and still is) one of the leading causes of hospitalization for nursing home residents. Most facilities just wipe down surfaces with disinfectant from time to time, but they contain many surfaces (like fabrics) that aren’t as easy to clean, and with people coming in and out all the time, anyone with a compromised immune system is constantly at risk.

“UV seemed to offer a lot of potential for addressing this problem,” McGinn told us last week. “It’s something that covers a lot of space, and it’s something that can very easily be put on a robot.”

The researchers thought about developing this concept further, but after a bit of poking around, it turned out that while there was plenty of data showing that UV light kills viruses and bacteria, there wasn’t a lot of useful design information, like what kind of light you need, how powerful it has to be, and how long you have to illuminate different surfaces from specific distances. McGinn says that as a sort of side project, his group started working with microbiologists at Trinity to figure out all of these parameters. “The idea was that if we came up with an effective way to make it work, we could always add it to our robot.”
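
The kind of parameter the team had to pin down can be estimated on the back of an envelope: treating the lamp as a point source, irradiance falls off as 1/r², and the exposure time for a target germicidal dose follows directly. The dose and lamp values below are illustrative assumptions, not measurements from the Trinity study.

```python
import math

def exposure_time_s(target_dose_mj_cm2: float, lamp_uvc_watts: float,
                    distance_m: float) -> float:
    """Seconds of illumination needed to deliver the target UV dose."""
    area_cm2 = 4 * math.pi * (distance_m * 100) ** 2   # sphere area in cm^2
    irradiance_mw_cm2 = lamp_uvc_watts * 1000 / area_cm2
    return target_dose_mj_cm2 / irradiance_mw_cm2

# e.g. an assumed 10 mJ/cm^2 dose from 30 W of UV-C output at 2 m:
print(f"{exposure_time_s(10.0, 30.0, 2.0):.0f} s")   # -> about 170 s
```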

DJI, the global leader in civilian drones and aerial imaging technology, ushers in a new era of aerial creativity with the Mavic Air 2 drone, combining high-grade imaging, intuitive yet advanced flight performance and revolutionary smart and safe technology.

Created to make capturing unique, high-quality content from the air simple, fun, and safe, Mavic Air 2 offers flagship capabilities in a compact and easy-to-use folding drone that features 8K functionality. A larger 1/2” camera sensor offers high-resolution photos and videos to make content stand out, while advanced programmed flight modes, intelligent features and imaging technology make capturing professional-looking content effortless. Pilots can now stay in the sky longer with an enhanced maximum flight time, capture vivid imagery with completely revamped autonomous capabilities, and wholly transform their content with in-app editing features.

“Mavic Air 2 is another milestone for DJI, demonstrating that our smartest consumer drone does not have to be the largest,” said DJI President Roger Luo. “While the Mavic Air 2 bears all the hallmarks of the Mavic drone family, we had to completely rethink its design and development process. Our goal was to create a drone that offered the best overall experience possible to even the most novice pilot. We hope our drones can help boost creativity and become a fun yet educational experience that can be enjoyed, even at this unprecedented moment in history.”

Brain Corp.’s autonomous driving software doesn’t power sexy machines. Its mobile operating system instead controls squat, floor scrubbing robots used in supermarkets, malls and airports.

Though it lacks sizzle, the BrainOS self-driving platform has turned out to be a good business for the San Diego company — with more than 10,000 mobile autonomous robots worldwide running on its software.

And that helped the 340-employee company land another $36 million in venture capital funding to meet growing demand for its autonomous robots, in part due to ramped-up sanitation efforts during the coronavirus pandemic.

The new funding announced Monday brings the total amount raised by Brain Corp. to more than $160 million. SoftBank Vision Fund 1 led the round, with new investors Satwik Ventures and ClearBridge Investments joining in. Qualcomm Ventures, an existing investor, also participated.

“Every company that’s tried to build a sexy robot failed,” said Eugene Izhikevich, a computational neuroscientist and chief executive of Brain Corp. “We have these boring robots that do a routine job, but it is something that is helping essential workers to keep the stores clean.”

Self-driving robots powered by Brain Corp.’s software are used by Walmart and Kroger, as well as Giant Eagle, C&W Services, Simon Property Group, several other big box stores, and educational institutions. The additional funding will be used to support expansion, including possibly overseas.

Industrial equipment makers including Tennant Co., Minuteman International, Dane Technologies and UniCarriers Americas license BrainOS software to transform their manual floor cleaners into autonomous robots.

Last week, the company announced it is donating about two dozen floor-scrubbing robots for use in essential businesses at no charge for the next 90 days through its Robot Relief program.