"Never complain, never explain": why robots may not have to be explicable after all


Explicability in AI and robotics is at the centre of interdisciplinary discussions involving machine learning, computer science, ethics, and more. Transparency and explicability are generally seen as requirements for systems that incorporate AI. We offer a different point of view, arguing for rethinking the necessity of explicability at all costs, drawing on examples from user-centred design in video games and existing applications in social robotics.

ICRA2023 Workshop on Explainable Robotics
Yegang Du