Smart machines are here and they’re changing how the world works. As artificial intelligence (AI) becomes ever more embedded in business and society, the EU has declared the need for a ‘human-centric’ approach to AI.

In other words, we need to figure out how humans can best interact and collaborate with machines, both at a practical level and in terms of the broader socio-economic, legal and ethical implications. That means developing practical, ethical and legal frameworks for AI development, oversight and leadership.

To that end, TU Dublin is leading CISC (Collaborative Intelligence for Safety Critical systems), a Marie Skłodowska-Curie Actions Innovative Training Network funded to the tune of €3.6m by the EU’s Horizon 2020 research and innovation programme. A four-year project, it involves 13 other partners from higher education and industry in Ireland, Austria, Denmark, Italy and Serbia, including multinationals such as Iveco and Pilz.

“The idea of the researcher as the lone wolf is disappearing,” says Dr Maria Chiara Leva, who is leading the project from TU Dublin, “because the complexity of what we face is such that, on your own, you can’t tackle it all. So research is becoming more and more collaborative in nature, like this project, where we are bringing together expertise from different areas.”

How CISC aims to improve human-AI interaction

The network aims to hire, train and mentor early-stage researchers (ESRs), or PhD students, as collaborative intelligence (CI) scientists. CISC aims to equip these 14 researchers with world-class expertise and skills in AI, human factors, neuro-ergonomics and system safety engineering, enabling the development of collaborative intelligence systems. It also intends to set up a blueprint for postgraduate training in this area.

Safety-critical systems used to bring to mind oil and gas, but applications of AI in these systems now arise in manufacturing, the Internet of Things, healthcare and transport, among other sectors.

These systems are designed to minimise disruption, risk, losses and accidents, and are what are known as ‘human-in-the-loop’ scenarios, as humans and AI work together to achieve the desired results. As technology has improved and machine malfunctions have become rarer, human error now accounts for as much as 80pc of accidents.

“Systems are becoming more autonomous,” says Dr Leva, “but the role of the human is not disappearing because of that. Most of the time there is this very unrealistic expectation that the human will come in and save the day when things go wrong with the automation, but that is very difficult.”

Ensuring humans know how to work with AI

Ideally, AI enhances humans’ cognitive skills and creativity. It can enable robotics and the analysis of huge data sets, free workers from low-level tasks and extend their physical capabilities. Achieving the best results can involve technology such as wearable sensors and optimised interfaces that give the person the most useful and relevant information from the machine.

“The paradox is that automation is very difficult to achieve,” adds Dr Leva, “unless the design has been such as to allow the human being to be involved and also to cater for the capability and knowledge that can get lost in translation. The less you tell the person to do, the less they know how to do when there’s an issue.”

Vital real-world results from live labs

As part of the project, the chosen researchers will work on live labs projects around real-world challenges such as alarm management in oil and gas, where humans must not be overwhelmed by too much information in an emergency, so that they can intervene early and react promptly and appropriately.

Others may investigate human-robot collaboration in spaces that are not typically visible to people, such as deep underwater or at the nanoparticle level.

“The students are learning how to adapt their approach and their methods, and how to engage with the company to actually develop a solution that fits both sides, that fits the academic side and fits the company’s needs,” says Professor John D Kelleher, Dr Leva’s colleague at TU Dublin. “It’s sometimes frustrating for both sides, but that’s the real world.”

Along with the live labs work, the CISC project also aims to develop a CI framework – a world-leading ‘human-centric’ approach to AI – and an ethical and legal framework for CI in safety-critical scenarios, translating and testing the EU’s guidelines on ethics in AI in risk-sensitive applications such as large-scale manufacturing, the process industry and critical infrastructure.

Project leadership proves worthwhile

“A project lead like (Maria) Chiara must give a huge amount of their time,” says Professor Kelleher. “They have to believe authentically in the challenge we’re looking at because it’s going to take over their lives. I am in awe of the amount of work and coordination she puts into it.”

He adds that the work and effort are worthwhile, however, and not just because of the amount of funding awarded. “You put in this proposal, you wait a long time, and then you’re doing all the paperwork. And then there’s a ramp-up phase, where you’re trying to get people in place.

“But the emotional payoff came when we first got all the early-stage researchers into the same room and realised, actually this programme can change people’s lives. Also, we are getting to mentor these young researchers and that is very rewarding.”

If you would like advice on accessing Horizon Europe support, or further details, please contact horizonsupport@enterprise-ireland.com or visit www.horizoneurope.ie.