Getting modern artificial intelligence (AI) and machine learning (ML) services into the hands of front-line users requires addressing what's being called the "last mile AI" challenge. This means translating the benefits of AI from the lab to the field, moving from often idealised ML model training to messy systems settings in which humans and AI software need to work effectively together. Rapid trust calibration is a key part of this challenge — how to enable front-line users to understand the capabilities and limitations of complex AI services so they can exploit their benefits while mitigating their weaknesses. In this talk I'll look at explainable AI from the "last mile" perspective, considering in particular settings at or near the network edge, including mobile and field deployments. I'll consider multiple perspectives, including the explanation and traceability requirements of different kinds of system stakeholders, human/machine teaming, and the interconnected nature of explainability, trust, assurance and scrutiny.