Explicability in AI and robotics is at the centre of interdisciplinary discussions involving machine learning, computer science, ethics, and other fields. Transparency and explicability are generally regarded as requirements for systems that incorporate AI. We offer a different point of view, arguing for a rethinking of the necessity of explicability at all costs, drawing on examples from user-centred design in videogames and on existing applications in social robotics.