From personalised searches of Google to the seductive experience of driverless cars, from educational robots that hone your French to prosthetics that are stronger and faster than our own limbs: artificial intelligence is poised to revolutionise our lives.

Now scientists, legal experts and philosophers are joining forces to scrutinise the promise of intelligent systems and wrangle over their implications. This week in Brighton, the fourth EuCogIII members' conference is set to tackle these issues head on. "Fundamentally we're interested in considering the ethical and societal impact of such systems," says Alan Winfield, professor of electronic engineering at UWE Bristol. It is time, he says, to make some crucial decisions. "If we get it wrong, there are consequences right now."

It's a point well illustrated by IBM's intelligent system, Watson. Two years after thrashing human contestants at the quickfire quiz Jeopardy!, Watson has graduated from gameshows to medical school and could soon be diagnosing diseases. This year commercial products based on Watson were unveiled for clinical use, harnessing the system's ability to crunch through swaths of medical information and make decisions.

"There is a huge amount of knowledge now that doctors can potentially have. Obviously they can't absorb all of it and they can't necessarily remember all of it," says Tony Prescott, professor of cognitive neuroscience at the University of Sheffield. With access to the latest developments as well as the medical records of patients, systems such as Watson could suggest an accurate diagnosis faster and more often, as well as predicting an individual's health risks.

But there is a hitch. With intelligent systems accessing medical records comes the fear of compromised privacy and security, as many will be connected via the internet. Could we, or even should we, be allowed to opt out of such an intelligent system? "It is a decision we have to make as a society," says Prescott. "Whether we want to give up some of our privacy in order to get improved services like better healthcare."

But how far can we trust such systems? Putting your faith in a "black box" may seem at best naive, at worst reckless. The issue boils down to trust, making it essential that doctors are closely involved in training such systems, understanding how they work and confirming that their diagnoses are spot on.

At the heart of the revolution is you, the consumer. With computers getting smaller, more powerful and more energy-efficient, few areas of our lives will remain untouched by intelligent machines. Driverless cars are expected to cause a storm. "The technology is ready," says Winfield. "The problem is insurance and legislation." While driverless cars could offer many benefits, from bringing independence to the elderly to reducing the number of road accidents, disasters could still happen. Who then pays the damages – the owner, or the car producer?

Last year a European research project, RoboLaw, was created to tackle such legal conundrums and will deliver its guidelines on regulations to the European commission in the spring. One question is whether it's time to rethink liability to ensure safety and justice without compromising the incentive for companies to develop the technology – "for instance, through the usage of compulsory insurance schemes or by assessing so-called 'safe harbours' to shield, in some cases under certain conditions, the liability of the producer of the car," explains Andrea Bertolini, a post-doctoral fellow in private law at the Scuola Superiore Sant'Anna in northern Italy and a member of the RoboLaw team.

And it is not just issues of liability that could be reformed. Fallible humans are constrained by speed limits to reduce the number of crashes, but with an all-encompassing knowledge of road layout and road users, intelligent cars could themselves decide how fast they travel, banishing the need for fixed limits.

One of the greatest issues, says Bertolini, is that there are many types of robots, each posing different legal problems. State-of-the-art prosthetic devices – essentially wearable intelligent robots – could soon outperform our natural limbs, raising new concerns that the technology could become available to individuals who may wish to trade in their healthy body parts for a prosthesis. "Should this be regulated, and eventually if it should be regulated, how should it be regulated?" asks Bertolini.

The questions become even more pressing when the possibility of implants is considered – imagine a brain chip that could let you check your email, search the internet or tap into GPS. It's the ultimate "hands-free" device.

This possibility of becoming "bio-hybrid" may sound futuristic, unlikely even. But when technology develops, it develops quickly. "It is moving way faster than legislation can keep up and yes, it's a problem," says Tony Belpaeme, professor of cognitive systems and robotics at the University of Plymouth.

And the issues are international. "The trouble is that your data is now globally spread and legislation isn't the same across various regions across the planet," says Belpaeme. With recent revelations over the access and use of data by various government agencies still reverberating, issues of data storage, privacy and security need to be aired openly. "How much worse would it be if there were such external and covert constraints on cognitive technology?" asks Dr Ron Chrisley, reader in philosophy at the University of Sussex and one of the conference organisers. With the possibility of technology becoming intertwined with our very bodies, the threat of unauthorised access looms large.

Running scared is not an option. Intelligent systems offer us the chance to hone many fundamental areas of our lives, including education. It's an effect Belpaeme has seen firsthand through his research into the use of social robots in education. While current computer systems can support learning, robots, particularly those sporting a face and personalised conversation, evoke a stronger response. "What we found is that if you have a robot there taking you through exactly the same exercises, the children learn faster and better," says Belpaeme.

"I can just see a future where you have one or two robots sitting in the corner of a classroom," he says. "If you just need a little push or you want to be challenged, you get 20 minutes or half an hour with a robot." According to Belpaeme, hospitals could benefit from such technology, with robots teaching children how to manage their medical conditions, while in care homes such robots could help the elderly with their daily exercises.

With such technological leaps set to transform our lives we, the public, need to be involved in the discussion, shaping policy and priorities from the outset. "I think that the greatest risk with these kinds of technologies is that they come along and they are a big surprise to people," says Prescott. Which is why, before the conference, you are invited to post questions for experts to discuss. You can also follow the event on Twitter through the hashtag #robotsandyou.

Intelligent machines could turn education, healthcare and daily life into optimised, tailored experiences. Getting society on side is big, and it's clever.