1979: A 25-year-old Ford Motor assembly line worker is killed on the job in a Flat Rock, Michigan, casting plant.1 It's the first recorded human death by robot.

Robert Williams' death came on the 58th anniversary of the premiere of Karel Capek's play R.U.R. (Rossum's Universal Robots), which gave the world the first use of the word robot to describe an artificial person. Capek coined the term from robota, the Czech word for "forced labor." (Robot entered the English language in 1923.)

Williams died instantly in 1979 when the robot's arm struck him as he was gathering parts in a storage facility, where the robot also retrieved parts. Williams' family was later awarded $10 million in damages. The jury agreed the robot struck him in the head because of a lack of safety measures, including an alarm that would have sounded when the robot was near.

Thanks in large part to the industrial assembly line, the robot has become commonplace in today's world. But unlike the one that killed Williams, today's robots vacuum floors, blow up land mines, rove on Mars and harvest fruit; they already do much of the work of producing printed circuit boards, and they may soon care for the elderly.

Those and other advancements are fueling a wide-ranging ethical discussion over robots, machines that Microsoft co-founder Bill Gates suggests will become the next technological frontier.

One age-old concern is the Luddite argument, a fear that machinery will eventually replace the worker. Another, more modern concern surrounds the common science-fiction theme of robot intelligence exceeding human intelligence.

Under that theory, the machines could rise up and eliminate their masters, a concept forbidden under Isaac Asimov's "Three Laws of Robotics." The first law, spelled out in Asimov's 1950 story collection I, Robot, says: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

When it comes to robots, scientists don't want to wake up one day and ask, "Oh my God, what happened?" as some did following the development of nuclear weapons, said Ronald Arkin, the director of the Mobile Robot Laboratory at the Georgia Institute of Technology.

He described Williams' death as an "industrial accident," one in which a lack of physical safeguards was at fault. The death was not caused by the robot's will, he cautioned.

"It was not an ethical lapse, unless you're a Luddite against the Industrial Revolution," Arkin said in a recent telephone interview.

Three decades after Williams' death, governments are beginning to regulate robots. Scholars are exploring the legal implications of a robot's actions and whether they'll soon need their own lawyers.

Arkin is more concerned about the human spirit's reaction to interacting with robots, especially as one goal of robotics is to create a personal companion that fulfills our daily needs, like Rosie the robot maid from The Jetsons.

"What are the consequences of that if we succeed?" Arkin asked. "Artificial things may be more desirable and attractive than their faulty human counterparts."

1. The original version of this post mistakenly placed the death in a different Michigan City. This Day in Tech regrets the error and thanks the readers who pointed it out.

Source: Various

Photo: Robot arms assemble truck bodies in the fully automated Ford Motor Company truck plant in Dearborn, Michigan, in 2009.

Car Culture/Corbis