Perhaps you’ve stumped Siri before, asking Apple’s automated assistant things like, “What is the meaning of life?” or “How can I be healthier and happier?”

If so, you’re not alone in turning to your phone for existential guidance and serious, practical life advice. According to an Apple job posting, lots of people do it. That’s why the company is seeking software engineers with feeling—and a background in psychology and peer counseling—to help improve Siri’s responses to the toughest questions.

“People talk to Siri about all kinds of things, including when they’re having a stressful day or have something serious on their mind. They turn to Siri in emergencies or when they want guidance on living a healthier life,” states the April ad for a “Siri Software Engineer, Health and Wellness” in Santa Clara, California. The job posting was unearthed by CNBC reporter Christina Farr, who shared it on Twitter on Sept. 14.

The position requires a unique skill set. Basically, the company is looking for a computer scientist who knows algorithms and can write complex code, but also understands human interaction, has compassion, and communicates ably, preferably in more than one language. The role also promises a singular thrill: to “play a part in the next revolution in human-computer interaction.”

The post doesn’t note this, but it’s obvious from the job description that the role also offers plenty of fodder for a science fiction writer with a Ballardian bent. If this talented, compassionate, emotionally intelligent computer scientist also happens to be literary, we may someday get a great but creepy tale in addition to a more intelligent assistant.

Computer scientists developing artificial intelligence have long debated what it means to be human and how to make machines more compassionate. Apart from the technical difficulties, the endeavor raises ethical dilemmas, as noted in the 2012 MIT Press book Robot Ethics: The Ethical and Social Implications of Robotics.

Even if machines could be made to feel for people, it’s not clear which feelings, in what combination, would make for a great and kind advisor. A sad machine is no good, perhaps, but a truly happy machine is problematic, too. In a chapter on creating compassionate artificial intelligence (pdf), sociologist, bioethicist, and Buddhist monk James Hughes writes:

Programming too high a level of positive emotion in an artificial mind, locking it into a heavenly state of self-gratification, would also deny it the capacity for empathy with other beings’ suffering, and the nagging awareness that there is a better state of mind.

The job at Apple has been posted since April, so perhaps it has proven a tall order to fill. Still, it shouldn’t be impossible to find people interested in making machines more understanding. If it is, we should probably stop asking Siri such serious questions.