Advancements in robotic technology are making it increasingly possible to integrate robots into the human workspace in order to improve productivity and decrease worker strain resulting from the performance of repetitive, arduous physical tasks. While new computational methods have significantly enhanced the ability of people and robots to work flexibly together, there has been little study of the ways in which human factors influence the design of these computational techniques. In particular, collaboration with robots presents unique challenges related to the preservation of human situational awareness and the optimization of workload allocation for human teammates while respecting their workflow preferences. We conducted a series of human subject experiments to investigate these human factors, and provide design guidelines for the development of intelligent collaborative robots based on our results.

4. Formal problem definition

The problem of scheduling a team of heterogeneous agents to complete a set of tasks with upper- and lower-bound temporal constraints and shared resources (e.g. spatial locations) falls within the XD [ST-SR-TA] class of scheduling problems, according to the comprehensive taxonomy defined by Korsah et al. (2013). This class is one of the most computationally challenging in the field of scheduling. The [ST-SR-TA] class of problems is composed of tasks requiring one robot or agent at a time (single-robot tasks [ST]), robots or agents that perform one task at a time (single-task robots [SR]), and a time-extended schedule of tasks that must be built for each robot or agent (time-extended allocation [TA]). This time-extended schedule includes cross-schedule dependencies (XD) among the individual schedules of the agents; such dependencies arise, for example, when agents must share limited-access resources (e.g. physical locations). To develop an experiment task, we formulated an instance of this problem as a mixed-integer linear program, as depicted in equations (3) to (13). This formulation serves as a common basis to model each of the three experiments.
We subsequently discuss experiment-specific extensions.

min z,  z = g({A^a_τij | τij ∈ τ, a ∈ A}, {J_⟨τij,τxy⟩ | τij, τxy ∈ τ}, {s_τij, f_τij | τij ∈ τ})   (3)

subject to

Σ_{a∈A} A^a_τij = 1,  ∀ τij ∈ τ   (4)

ub_τij ≥ f_τij − s_τij ≥ lb_τij,  ∀ τij ∈ τ   (5)

f_τij − s_τij ≥ lb^a_τij − M(1 − A^a_τij),  ∀ τij ∈ τ, a ∈ A   (6)

s_τxy − f_τij ≥ W_⟨τij,τxy⟩,  ∀ τij, τxy ∈ τ | ∃ W_⟨τij,τxy⟩ ∈ TC   (7)

f_τxy − s_τij ≤ D^rel_⟨τij,τxy⟩,  ∀ τij, τxy ∈ τ | ∃ D^rel_⟨τij,τxy⟩ ∈ TC   (8)

f_τij ≤ D^abs_τij,  ∀ τij ∈ τ | ∃ D^abs_τij ∈ TC   (9)

s_τxy − f_τij ≥ M(A^a_τij + A^a_τxy − 2) + M(J_⟨τij,τxy⟩ − 1),  ∀ τij, τxy ∈ τ, ∀ a ∈ A   (10)

s_τij − f_τxy ≥ M(A^a_τij + A^a_τxy − 2) − M·J_⟨τij,τxy⟩,  ∀ τij, τxy ∈ τ, ∀ a ∈ A   (11)

s_τxy − f_τij ≥ M(J_⟨τij,τxy⟩ − 1),  ∀ τij, τxy ∈ τ | R_τij = R_τxy   (12)

s_τij − f_τxy ≥ −M·J_⟨τij,τxy⟩,  ∀ τij, τxy ∈ τ | R_τij = R_τxy   (13)

In this formulation, A^a_τij ∈ {0, 1} is a binary decision variable for the assignment of agent a to subtask τij (i.e. the jth subtask of the ith task); A^a_τij equals 1 when agent a is assigned to subtask τij and 0 otherwise. J_⟨τij,τxy⟩ ∈ {0, 1} is a binary decision variable specifying whether τij comes before or after τxy, and s_τij, f_τij ∈ [0, ∞) are the start and finish times of τij, respectively. TC is the set of simple temporal constraints relating task events. M is a large, positive constant used to encode conditional statements as linear constraints. Equation (3) is a general objective that is a function of the decision variables {A^a_τij | τij ∈ τ, a ∈ A}, {J_⟨τij,τxy⟩ | τij, τxy ∈ τ}, and {s_τij, f_τij | τij ∈ τ}. Equation (4) ensures that each τij is assigned to exactly one agent.
Equation (5) ensures that the duration of each τij ∈ τ respects its upper- and lower-bound durations. Equation (6) requires that the duration of τij, f_τij − s_τij, be no less than the time required for agent a to complete τij. Equation (7) requires that τxy start at least W_⟨τij,τxy⟩ units of time after τij finishes (i.e. W_⟨τij,τxy⟩ is a lower bound on the amount of time between the finish of τij and the start of τxy). Equation (8) requires that the duration between the start of τij and the finish of τxy be less than D^rel_⟨τij,τxy⟩ (i.e. D^rel_⟨τij,τxy⟩ is an upper bound on the finish time of τxy relative to the start of τij). Equation (9) requires that τij finish before D^abs_τij units of time have elapsed since the start of the schedule (i.e. D^abs_τij is an upper bound on the latest absolute time at which τij can finish). Equations (10) and (11) enforce that agents can only execute one subtask at a time. Equations (12) and (13) enforce that each resource R_i can only be accessed by one agent at a time. The worst-case time complexity of a complete solution technique for this problem is dominated by the binary decision variables for allocating tasks to agents (A^a_τij) and sequencing them (J_⟨τij,τxy⟩): agent allocation contributes O(2^{|A||τ|}) and sequencing contributes O(2^{|τ|²}), yielding an overall complexity of O(2^{|A||τ| + |τ|²}), where |A| is the number of agents and |τ| is the number of tasks.
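As a concrete illustration of the big-M construction, the resource constraints (12) and (13) can be checked directly: the binary J_⟨τij,τxy⟩ activates exactly one of the pair, forcing the two subtasks to use the shared resource in sequence. The following is a minimal sketch, with illustrative start and finish times (M need only exceed the schedule horizon):

```python
M = 1_000  # big-M constant; any value larger than the schedule horizon works

def resource_constraints_hold(s_ij, f_ij, s_xy, f_xy, J):
    """Check constraints (12) and (13) for two subtasks sharing a resource."""
    c12 = s_xy - f_ij >= M * (J - 1)   # binding when J = 1: tau_xy after tau_ij
    c13 = s_ij - f_xy >= -M * J        # binding when J = 0: tau_ij after tau_xy
    return c12 and c13

print(resource_constraints_hold(0, 5, 5, 9, J=1))   # True: tau_xy follows tau_ij
print(resource_constraints_hold(6, 9, 0, 6, J=0))   # True: tau_ij follows tau_xy
print(resource_constraints_hold(0, 5, 3, 8, J=1))   # False: the tasks overlap
```

With J left free to the solver, the pair behaves as a disjunction: the solver picks an ordering, and whichever constraint is non-binding is relaxed by the large constant M.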

5. Scheduling mechanism

For all three experiments, we adapted a dynamic scheduling algorithm, called Tercio, to schedule the human–robot teams (Gombolay et al., 2013). Tercio is an empirically fast, high-performance dynamic scheduling algorithm designed for coordinating human–robot teams under upper- and lower-bound temporospatial constraints. The algorithm is designed to operate on a simple temporal network (Muscettola et al., 1998) with set-bounded uncertainty. If the schedule’s execution exceeds its set bounds, Tercio reschedules the team (Gombolay et al., 2013). As shown in Figure 1, the algorithm takes as input a temporal constraint problem, a list of agent capabilities (i.e. the lower-bound, upper-bound, and expected duration for each agent performing each task), and the physical location of each task. Tercio first solves for an optimal task allocation by ensuring that the maximum amount of work assigned to any agent is as small as possible, as depicted in equation (14). In this equation, Agents is the set of agents, A^a_τij is a task allocation variable that equals 1 when agent a is assigned to subtask τij and 0 otherwise, A is the set of task allocation variables, A* is the optimal task allocation, and C^a_τij is the expected time it will take agent a to complete subtask τij:

A* = argmin_{A} max_{a ∈ Agents} Σ_{τij ∈ τ} A^a_τij × C^a_τij   (14)

After determining the optimal task allocation, A*, Tercio uses a fast sequencing subroutine to complete the schedule. The sequencer orders the tasks through simulation over time. Before each commitment is made, the sequencer conducts an analytical schedulability test to determine whether task τ_i can be scheduled at time t, given prior scheduling commitments. If the test returns that this commitment can be made, the sequencer then orders τ_i and continues.
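Stepping back to the allocation step: for small task sets, the objective in equation (14) can be reproduced by exhaustive search. The sketch below, with hypothetical completion times C, selects the assignment that minimizes the maximum expected workload of any agent (Tercio replaces this exponential enumeration with its own allocation subroutine; the sketch only illustrates the objective):

```python
from itertools import product

# Brute-force analogue of equation (14): pick the allocation A* minimizing
# the maximum expected workload over agents. Completion times are illustrative.
C = {"human": {"t1": 3, "t2": 2, "t3": 4},
     "robot": {"t1": 5, "t2": 2, "t3": 6}}
agents, tasks = list(C), list(C["human"])

def max_load(alloc):
    # alloc assigns tasks[k] to agent alloc[k]; return the busiest agent's load
    return max(sum(C[a][t] for t, b in zip(tasks, alloc) if b == a)
               for a in agents)

A_star = min(product(agents, repeat=len(tasks)), key=max_load)
print(A_star, max_load(A_star))   # ('human', 'human', 'robot') 6
```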
If the schedulability test cannot guarantee commitment, the sequencer evaluates the next available task. If the schedule, consisting of a task allocation and a sequence of tasks, does not satisfy a specified makespan, a second iteration is performed by finding the second-most optimal task allocation and the corresponding sequence. The process terminates when the user is satisfied with the schedule quality or when no better schedule can be found. In this experiment, we specified that Tercio run for 25 iterations and return the best schedule. We employed Tercio because it allows for easy alteration of task allocation within its task allocation subroutine. Here, we describe the specific Tercio alterations incorporated in each experiment. Note that only the task allocation subroutine within Tercio was modified for our three experiments; the sequencing subroutine remained unaltered.

5.1. Algorithm modifications for mixed-initiative scheduling

In the situational awareness experiment, we sought to determine whether situational awareness degrades as a robotic agent is allowed greater autonomy over scheduling decisions. We considered three conditions: autonomous, semi-autonomous, and manual control. Under the autonomous condition, the robotic teammate performed scheduling for the entire team; as such, the robot could use Tercio without modifications. Under the semi-autonomous condition, in which the human participant decides which tasks to perform and the robotic agent decides how to allocate the remaining tasks between itself and a human assistant, Tercio was required to consider the tasks allocated by the participant. After the participant specified which tasks he or she would perform, the experimenter provided these assignments to the robot, which encoded the allocation as an assignment to the decision variables. Specifically, Tercio set A^participant_τij = 1, A^asst_τij = 0, and A^robot_τij = 0 for subtasks τij assigned to the participant, and A^participant_τxy = 0 for subtasks τxy that the participant left unassigned. Thus, the robot (via Tercio) only needed to solve for the allocation variables not already fixed by the participant’s choices. Under the manual condition, the participant specified all task allocation assignments. As such, the robotic agent set A^a_τij = 1 for all subtasks τij assigned to agent a, and A^a_τxy = 0 for all subtasks τxy not assigned to agent a, for all agents a.

5.2. Algorithm modifications for scheduling with preferences

We focused on the effect of incorporating the preferences of human team members when generating a team’s schedule. Preferences can exist in a variety of forms. For example, human team members may have preferences about the duration of events (how long it takes to complete a given task) or the duration between events (the lower-bound or upper-bound on the time between two tasks) (Wilcox et al., 2012). In our investigation, we considered preferences related to task types; for example, a worker may prefer to complete a drilling task rather than a painting task. Such preferences can be included in the mathematical formulation in equations (3) to (13) as an objective function term in which one seeks to maximize the number of preferred tasks assigned to the participant, as shown in equation (15). In this equation, the objective function term for maximizing preferences is balanced with the established criteria (i.e.
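The variable fixing for the semi-autonomous condition amounts to pinning a subset of the A variables before the solve; the solver then searches only over the remaining (task, agent) pairs. A small sketch, with hypothetical task and agent names:

```python
# Pin allocation variables for tasks the participant claimed; unclaimed tasks
# exclude the participant, as described above. None marks a free variable
# that the solver must still decide.
agents = ["participant", "assistant", "robot"]
tasks = ["t1", "t2", "t3", "t4"]
claimed = {"t2", "t4"}            # tasks the participant chose to perform

A = {t: {a: None for a in agents} for t in tasks}
for t in tasks:
    if t in claimed:
        for a in agents:
            A[t][a] = 1 if a == "participant" else 0
    else:
        A[t]["participant"] = 0   # unclaimed tasks go to the other agents

free = [(t, a) for t in tasks for a in agents if A[t][a] is None]
print(free)   # only assistant/robot variables for t1 and t3 remain free
```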
function g({A^a_τij | τij ∈ τ, a ∈ A}, {J_⟨τij,τxy⟩ | τij, τxy ∈ τ}, {s_τij, f_τij | τij ∈ τ}) from equation (3)) via a weighting parameter α:

min z,  z = α × g({A^a_τij | τij ∈ τ, a ∈ A}, {J_⟨τij,τxy⟩ | τij, τxy ∈ τ}, {s_τij, f_τij | τij ∈ τ}) − (1 − α) × Σ_{τij ∈ τ_preferred} A^participant_τij   (15)

Alternatively, one could incorporate preferences as a set of constraints enforcing a minimum or maximum level of preferred work assigned to the participant, as shown in equations (16) and (17). In these equations, k^pref_ub and k^pref_lb are upper and lower bounds on the number of preferred tasks allocated to the participant, and k^prefᶜ_ub and k^prefᶜ_lb are upper and lower bounds on the number of non-preferred tasks allocated to the participant.

k^pref_lb ≤ Σ_{τij ∈ τ_pref} A^participant_τij ≤ k^pref_ub   (16)

k^prefᶜ_lb ≤ Σ_{τij ∈ τ_prefᶜ} A^participant_τij ≤ k^prefᶜ_ub   (17)

We chose to model the inclusion of preferences as a set of constraints, which we added to Tercio’s task allocation subroutine. For the purpose of human participant experimentation, where one must control for confounders, this approach offers greater control over schedule content than including a preference term within the objective function. The challenge of using an objective function model lies in the need to tune one or more coefficients (e.g. α in equation (15)) to balance the contribution of schedule efficiency (i.e. makespan) against the importance of adhering to preferences. We found this tuning to be difficult across a variety of participants. For all three conditions, we set k^pref_lb = k^prefᶜ_lb = 0. Under the positive condition, participants could be assigned at most one task that did not align with their preferences (i.e. k^pref_ub = ∞ and k^prefᶜ_ub = 1). Participants preferring to build could be assigned one fetching task at most, and vice versa.
Under the negative condition, participants could be assigned at most one task that aligned with their preferences (i.e. k^pref_ub = 1 and k^prefᶜ_ub = ∞); for example, participants preferring to build could be assigned one build task at most. Under the neutral condition, Tercio’s task allocation subroutine ran without alteration (i.e. k^pref_ub = k^prefᶜ_ub = ∞, τ_preferred = ∅). Based on results from previous studies indicating the importance of team efficiency (Gombolay et al., 2015, 2014), we sought to control for the influence of schedule duration on team dynamics. For the experiment studying scheduling preferences, we ran 50 iterations of Tercio for each participant under the positive, neutral, and negative parameter settings, generating a total of 150 schedules. We then identified a set of three schedules, one from each condition, for which the makespans were approximately equal. (We did not control for the workload of the individual agents.) The robot then used these schedules to schedule the team under the respective conditions.

5.3. Algorithm modifications for constraints based on workload and scheduling preference

In this experiment, we needed to control for makespan across all four conditions while varying the participants’ workloads and the types of tasks they were assigned. To control for the degree to which preferences were included in the schedule, we again added equations (16) and (17) to Tercio’s task allocation subroutine. Under conditions with high preference, all tasks assigned to the participant were preferred tasks (i.e. k^pref_ub = ∞ and k^prefᶜ_ub = 0); under conditions with low preference, all tasks assigned to the participant were non-preferred tasks (i.e. k^pref_ub = 0 and k^prefᶜ_ub = ∞). Under all conditions, we set k^pref_lb = k^prefᶜ_lb = 0.
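The positive, negative, and neutral conditions differ only in the bounds fed to equations (16) and (17). A feasibility check over a candidate assignment, with illustrative task names, makes the effect of the bounds concrete:

```python
import math

# Check the preference-count constraints (16)-(17) for the tasks assigned to
# the participant; the defaults correspond to an unconstrained setting.
def satisfies(assigned, preferred, k_lb_p=0, k_ub_p=math.inf,
              k_lb_c=0, k_ub_c=math.inf):
    n_pref = sum(t in preferred for t in assigned)
    n_comp = len(assigned) - n_pref
    return k_lb_p <= n_pref <= k_ub_p and k_lb_c <= n_comp <= k_ub_c

preferred = {"build1", "build2", "build3"}
# Positive condition: at most one non-preferred task (k^prefc_ub = 1).
print(satisfies(["build1", "build2", "fetch1"], preferred, k_ub_c=1))  # True
print(satisfies(["build1", "fetch1", "fetch2"], preferred, k_ub_c=1))  # False
# Negative condition: at most one preferred task (k^pref_ub = 1).
print(satisfies(["fetch1", "fetch2", "build1"], preferred, k_ub_p=1))  # True
```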
To control for the utilization of the participant, we added an objective function term to Tercio’s task allocation subroutine that minimized the absolute value of the difference between the desired utilization of the participant, U_target, and the actual utilization of the participant, Σ_{τij ∈ τ} A^participant_τij × lb_τij. Since the absolute value function is non-linear and cannot be handled directly by a linear program solver, we linearized the term as

z_utility ≥ U_target − Σ_{τij ∈ τ} A^participant_τij × lb_τij   (18)

z_utility ≥ −U_target + Σ_{τij ∈ τ} A^participant_τij × lb_τij   (19)

We generated schedules for each condition in three steps. First, we ran Tercio without any alterations to the task allocation subroutine for 100 iterations. Tercio works by iteratively generating task allocations and then sequencing the task set given the corresponding task allocation. Each iteration takes approximately one-third of a second. By running Tercio for several iterations, we allowed it to explore the search space so that it could then identify a candidate schedule with given characteristics (e.g. a specific degree of utilization of a particular agent). From these iterations, we recorded the median utilization, U_median, of the participant. Next, we ran four additional sets of 100 iterations of Tercio, one set for each of the four conditions listed above. As before, we used equations (16) and (17) to control for the degree to which the robot included the participant’s preferences while scheduling. When the preference variable was set to high, we set k^pref_ub = ∞ and k^prefᶜ_ub = 0; for the low preference condition, we set k^pref_ub = 0 and k^prefᶜ_ub = ∞. In both conditions, k^pref_lb = k^prefᶜ_lb = 0. In the experiment studying workload, we controlled for the participant’s utilization via equations (18) and (19). When the utilization variable was set to high, we set U_target = U_median.
When the utilization variable was set to low, we set U_target = U_median / 2. We then identified one schedule from each of the four sets of 100 Tercio iterations to generate a set of schedules with short, approximately equal makespans and utilizations close to their respective targets. To generate this set, we employed equation (20), which minimizes the difference between the longest and shortest makespans across the four conditions (i.e. max_{i,j}(m_i − m_j)), the longest makespan (i.e. max_i m_i), and the maximum difference between each schedule’s target utilization U^target_i and its actual utilization U_i. In our experimental procedure, we set α₁ = α₂ = 1 and α₃ = 2:

z_tuning = α₁ max_{i,j ∈ schedules}(m_i − m_j) + α₂ max_{i ∈ schedules} m_i + α₃ max_{i ∈ schedules}(U^target_i − U_i)   (20)
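The selection step of equation (20) can be sketched as a search over one candidate schedule per condition. The pools below hold toy (makespan, target utilization, actual utilization) triples for two conditions, with α₁ = α₂ = 1 and α₃ = 2 as in our procedure; the candidate values themselves are hypothetical:

```python
from itertools import product

# z_tuning from equation (20): makespan spread + longest makespan
# + weighted worst utilization shortfall across the chosen schedules.
def z_tuning(chosen, a1=1, a2=1, a3=2):
    spans = [m for m, _, _ in chosen]
    return (a1 * (max(spans) - min(spans)) + a2 * max(spans)
            + a3 * max(u_tgt - u for _, u_tgt, u in chosen))

# One pool of candidate schedules per condition (toy numbers).
pools = [[(60, 30, 28), (55, 30, 25)],
         [(58, 15, 14), (70, 15, 15)]]
best = min(product(*pools), key=z_tuning)
print(best, z_tuning(best))   # ((60, 30, 28), (58, 15, 14)) 66
```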

6. Experimental design

We conducted a series of three human participant experiments (n₁ = 17, n₂ = 18, n₃ = 20) that required the fetching and assembly of Lego part kits. The goal of these experiments was to assess: (1) how a robotic teammate’s inclusion of the preferences of its human teammates while scheduling affects team dynamics, (2) how the benefits of including these scheduling preferences vary as a function of the degree to which the robot utilizes the human participant, and (3) how situational awareness degrades as a function of the level of autonomy afforded to the robot over scheduling decisions. We used the same basic experimental set-up for all three experiments, which we describe next.

6.1. Materials and set-up

Our human–robot manufacturing team consisted of a human participant, a robotic assistant, and a human assistant. The human participant was capable of both fetching and building, while the robot assistant was only capable of fetching. One of the experimenters played the role of a third teammate (the human assistant) for all participants and was capable of both fetching and building. This human assistant was included to represent the composition of a human–robot team more realistically within a manufacturing setting. We used a Willow Garage PR2 platform, depicted in Figure 2, as the robotic assistant for our human–robot team. The robot used adaptive Monte Carlo localization (Fox, 2003) and the standard Gmapping package in the Robot Operating System for navigation.

6.2. Procedure

The scenario included two types of task: fetching and assembling part kits. As shown in Figure 2, the experiment environment included two fetching stations and two build stations, with four part kits located at each fetching station. Fetching a part kit involved moving to one of the two fetching stations where the kits were located, inspecting the part kit, and carrying it to the build area.
The structure of our fetching task is analogous to actions required in many manufacturing domains. To adhere to strict quality assurance standards, fetching a part kit required verification from one or two people that all of the correct parts were present in the kit, as well as certification from another person that the kit had been verified. We also imposed additional constraints to better mimic an assembly manufacturing environment. A part kit had to be fetched before it could be built, and no two agents were able to occupy the same fetching or build station at the same time. Agents were required to take turns using the fetching stations, as allowing workers to sort through parts from multiple kits at the same location risked the participants mixing the wrong part with the wrong kit. Furthermore, in manufacturing, if a part or part kit is missing from an expected location for too long, work in that area of the factory will temporarily cease until the missing item has been found. As such, we imposed a 10 min deadline from the time that the fetching of a part kit began until that kit had been built. Assembly of the Lego model involved eight tasks, τ = {τ₁, τ₂, …, τ₈}, each of which consisted of a fetch and a build subtask, τ_i = {τ_i^fetch, τ_i^build}. The amount of time each participant took to complete each subtask, C_i^participant-fetch and C_i^participant-build, was measured during a training round. The timings for the robot, C_i^robot-fetch, and for the human assistant, C_i^assist-fetch and C_i^assist-build (as measured by an experimenter), were collected prior to the experiments. In all three experiments, the robotic agent employed Tercio as a dispatcher, communicating to the participant and human assistant when to initiate their next subtasks.
Tercio would tell agents when they were each able to initiate or complete each subtask, and each agent would send a message acknowledging initiation or completion via simple, text-based messages over a TCP/IP GUI (SocketTest v3.0.0 ©2003–2008 Akshathnkumar Shetty, http://sockettest.sourceforge.net/).

6.2.1. Modifications for the experiment studying situational awareness

For the study evaluating the effects of mixed-initiative scheduling on the situational awareness of the human team members, we performed a between-participants experiment in which each participant experienced only one of three conditions: autonomous, semi-autonomous, or manual. As stated previously, under the autonomous condition, the robot scheduled the three members of the team using Tercio with the default task allocation subroutine. Under the semi-autonomous condition, participants each selected which tasks they would perform, and the robot allocated the remaining tasks to itself and the human assistant. Under the manual condition, the participant allocated tasks to each of the team members. The robot sequenced the tasks under all conditions. After the human participant or the robot completed the task allocation and sequencing process, the participants were allowed 3 min to review the schedule. We found in prior work that participants required approximately 3 min to perform task allocation (Gombolay et al., 2015); therefore, we wanted to allow participants at least this much time to review a robot-generated schedule under the autonomous condition. Participants were not told that they would later be questioned about their experiences, because we did not want to unduly bias them toward preparing for such a test. Instead, we wanted participants to attend fully to the task at hand. After the participants reviewed the schedule, the team executed their tasks according to that schedule.
At approximately 200 s into task execution, the experimenter halted the process and administered the post-trial questionnaire (Table 1) according to the SAGAT. The timing of the intervention was tuned so that, on average, each team member had been assigned at least one task. The team did not complete the schedule after the SAGAT test; the experiment concluded following administration of the questionnaire.

6.2.2. Extensions for the experiment to study scheduling preferences

For the experiment to study scheduling preferences, we employed a within-participants design. Participants experienced all three experimental conditions: positive, neutral, and negative. The order in which participants experienced these conditions was randomized. At the beginning of each condition, participants were told that their robot teammate wanted to know whether they preferred to complete fetch tasks or build tasks, and the participants responded accordingly. Deference to the participants with regard to their preferred tasks is in keeping with a quasi-experimental design. We did not attempt to balance participants according to the number in our sample who preferred fetching vs. building, as 14 of 18 participants (78%) preferred building tasks. Participants were not informed a priori of the different conditions; as such, subjective evaluations of team dynamics under each condition would not be influenced by the expectation that the robot would or would not cater to the participants’ preferences. The preferences, along with task completion times for each of the three team members, were provided to the robot, which scheduled the team. The team then performed the tasks to completion. After the schedule was completed, participants completed the post-trial questionnaire given in Table 3. This process was repeated once for each condition, as indicated previously.
After completing the tasks under all three conditions, participants completed the post-test questionnaire shown in Table 4. The experiment concluded after completion of this questionnaire.

6.2.3. Extensions for the experiment to study workload

For the experiment to study the influence of workload, we employed an experimental design that mirrored the procedure for the experiment studying workflow preferences, with one primary difference: we varied both the workload and the degree to which human preferences were considered during scheduling, rather than preferences alone. Participants were not informed about whether the robot was varying their utilization, and the schedule itself was not reported to the participant; participants had to infer changes to their degree of utilization based only on their subjective experience.

8. Discussion

8.1. Design guidance for roboticists

We investigated key gaps in prior literature by assessing how situational awareness is affected by the level of autonomy in mixed-initiative scheduling for human–robot teams, how increased or decreased workload affects human–robot team fluency, and the role of workflow preferences in robotic scheduling. Based on our findings, we can provide design guidance for roboticists developing intelligent collaborative robots that engage in mixed-initiative decision-making with human participants. Human situational awareness is poorer when the robotic agent has full autonomy over scheduling decisions, as assessed by both objective and subjective measures. However, prior research has indicated that decreasing robotic autonomy over scheduling decisions reduces efficiency and decreases the desire of the human participant to work with a robotic agent. Therefore, the positive and negative effects of increasing the robot’s role in decision-making must be carefully weighed. If there is a high probability that the human agent will have to intervene to adjust work allocations, or if the potential cost of poorer human performance due to reduced situational awareness is high, then we recommend that the human participant retain primary decision-making authority. If human intervention is unlikely, or the cost of poorer human performance is low, then the benefits of improved team efficiency can be safely achieved by allowing the robot to retain primary decision-making authority. In many applications, a mixed-initiative approach in which the participant and robot collaborate to make decisions offers a suitable middle ground between the two ends of this spectrum. In addition, a human participant’s perception of a robotic teammate scheduling a team’s activities may improve when the human participant is scheduled to complete tasks that he or she prefers.
However, human team members’ perception of the robot may be negatively impacted when they are scheduled to be idle for much of the time. Providing human team members with more highly preferred tasks at the cost of decreasing the total amount of work assigned to them may, in fact, have more of a negative impact than assigning them less-preferred tasks. Although the degree to which these variables interact is likely to be application-specific, it cannot be assumed that increasing one criterion at the cost of the other will improve team fluency. Collaborations with robots that participate in decision-making related to the planning and scheduling of work present unique challenges with regard to preserving human situational awareness and optimizing workload allocation to human teammates while also respecting their workflow preferences. Careful consideration is necessary in order to design intelligent collaborative robots that effectively balance the benefits and detriments of an increased role in the decision-making process.

8.2. Limitations and future work

There are limitations to our findings. Our sample population consisted of young adults recruited from a local university campus, whereas the target population consists of older, working adults in fields such as manufacturing and search-and-rescue, among other domains. Impressions of robotic teammates, in general, may differ significantly between these populations. Workers may also use different criteria to evaluate a human–robot team. For example, if chronic fatigue is an issue in a given setting, workers may prefer a greater amount of idle time. Also, we limited the expression of preferences to a binary choice between two types of task; however, the preferences of real workers may be more nuanced and difficult to encode computationally.
For these reasons, we recommend a follow-on study, conducted in multiple factories across a variety of industries and work environments, to confirm the results of our experiments. We studied one robot form factor (i.e. a PR2) in our investigation. It is possible that other form factors could elicit a different response from participants. Further, we used a specific scheduling technique, Tercio, that is well suited to human–robot teaming; it is possible that alternate scheduling algorithms could alter participants’ experience. When manipulating the degree to which participants were utilized and the amount of preferred work assigned to them, we used “high” and “low” settings. We found that increasing these independent variables from low to high positively affected participants’ experience of working with the robot. It is possible, however, that the relationship between utilization and participants’ subjective experience is not linear. For example, an “extremely high” utilization could be even less desirable than low utilization. Future work should investigate utilization and workflow preferences across the entire spectrum.

9. Conclusions

While new computational methods have significantly enhanced the ability of people and robots to work flexibly together, there has been little study of the ways in which human factors must influence the design of these computational techniques. In this work, we investigated how situational awareness varies as a function of the degree of autonomy a robotic agent has during scheduling, and found that human participants’ awareness of their team’s actions decreased as the degree of robot autonomy increased. This indicates that the desire for increased autonomy and the accompanying performance improvements must be balanced against the risk of, and cost resulting from, reduced situational awareness. We also studied how team fluency varies as a function of the workload given to a human team member by a robotic agent, and the manner in which a robot should include the workflow preferences of its human teammates in the decision-making process. Results indicate a complex relationship between preferences, utilization, and the participants’ perception of team efficiency. The three study results provide guidelines for the development of intelligent collaborative robots, and a framework for weighing the positive and negative effects of increasing the robot’s role in decision-making.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Science Foundation Graduate Research Fellowship Program (grant number 2388357).