The article “AP Explains: Should you be worried about the rise of AI?” (July 26) points to a debate unimaginatively restricted to such artificial intelligence (AI) inventions as self-driving cars. Longer term, AI will have far more momentous outcomes. These are many, but one important to the brouhaha recounted in the article is ultra-intelligence. Advanced AI systems might, arguably, figure out heuristically how to attain consciousness and think for themselves at a level where ham-fisted human intervention in their cognition would prove a handicap.

Such an eventual development is not just realistic, but may well cascade toward a transformative shift in the history of humankind — all the more so, given the potential for AI-brain interfaces as just one line of species evolution.

The much-publicized cautionary notes by a few evangelists of science and technology, such as Stephen Hawking, Elon Musk, Bill Gates, and Max Tegmark, regarding existential threats may inject discretion into the AI process; however, they are unlikely to markedly deflect the longer-term trajectory.

Curiosity and the technology’s promise will pique the imagination. Governments, the scientific and technology communities, legal scholars, special-interest groups, and philosophers including ethicists will strive to mitigate risk by crafting (malleable) safeguards.

However, each controlling cycle will prove transitory, giving ground to more permissiveness in a march forward as humankind finds the allure of incubating AI technology irresistible.

Keith Tidman

Bethesda
