As A.I. becomes more capable, we will start to see signs of real collaboration, which requires a level of trust akin to that of human partners in a complex environment. Our lives are full of cultural work that requires this kind of give and take, which depends in turn on accessibility, literacy and accountability. Most of the truly beautiful things humans create emerge from that kind of multi-agent process. As we start to work alongside A.I., we will need to find ways to extend that same collaborative spirit to it.

So how do we go about expanding our thinking?

One way is to recognize the diversity of work already going on that gets drowned out by the loudest and most powerful voices talking about A.I. It is also important to consider who is in the room when A.I. systems are designed, and how easily those human architects can embed their biases in the systems they build. Additional voices need to be at the table: people from groups underrepresented in Silicon Valley-style innovation, especially women, people of color and the economically disadvantaged. Bringing more diversity into the design process will improve outcomes for technologists who seek to design products and solve problems for “everyone.”

One great example of bringing diversity into A.I. is the Global A.I. Narratives project from Cambridge University’s Leverhulme Center for the Future of Intelligence, which seeks to understand what stories we are telling about the future of A.I. outside the Anglophone West.

Another kind of diversity lies in our rich cultural archives. People have been dreaming and writing about A.I. for centuries. Inspired by the work of Mr. Noessel, the Center for the Future of Intelligence and others, I am helping to lead A.I. Policy Futures, a project at Arizona State University and the New America Foundation. Our first goal is to create a taxonomy of A.I. in science fiction literature and film. We hope this will give us a broader view of the possibilities of A.I. by resurrecting good ideas that we have collectively forgotten, while also highlighting the gaps in our collective thinking.

The best way forward is to commit to the goal of thinking more holistically about A.I. and then orchestrate activities and conversations to advance it. When you bring together science fiction writers, technologists and policymakers, you create a feedback loop in which interesting things tend to happen. People ask one another simple yet surprisingly profound questions. Leaders in these different fields occasionally realize they have been working with impoverished conceptions of what particular words or ideas really mean. New plans and stories get hatched. From the Science Fiction Advisory Council at the X-Prize Foundation to the value that entities like NASA have attached to science fiction, there is growing recognition of the power and potential of this feedback loop.

A.I. is too interesting, too ubiquitous, and too poorly defined to be left to Hollywood mega-franchises or the same old cultural shorthand we’ve been using for 60 years. The thinking machines we should be talking about are not super-intelligences or replicants but rather the just-smart-enough technologies already changing our lives. How will we deal with A.I. that is not alien or omniscient but just a few steps ahead of us, like a good tennis partner?

We have to tell new stories about “little” A.I.: machines that aren’t trying to take over the world but just get the job done. Most important, we need more stories about real people in these futures, and how we will adapt to a reality where we share intelligence, work and creativity with our machines.

Ed Finn, the author of “What Algorithms Want: Imagination in the Age of Computing” and co-editor of “Frankenstein: Annotated for Scientists, Engineers, and Creators of All Kinds,” is the founding director of the Center for Science and the Imagination at Arizona State University, where he is an associate professor.
