Explainability is at the core of several phenomena related to the integration of knowledge held by different actors (including the currently popular case involving machine intelligence and humans). We propose that there are inescapable limits to explainability arising from the joint effects of limits on knowledge (non-omniscience), the absorptive capacity required for an explanation to be understood, and finite time. We formalize the process of explanation as a traversal across overlapping knowledge graphs, in which the explainer faces an optimal stopping problem. This conceptualization helps to clarify why honest, optimizing actors who are motivated to cooperate and can communicate perfectly may nonetheless fail to convince one another of something that is true, i.e., fail to explain what they know. This limit to explainability, we suggest, is a fundamental cause of irreducible differences in perspectives across actors, which in turn illuminates the roles of trust and authority even in the absence of any conflict of interest.
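
To make the graph-traversal framing concrete, the toy sketch below treats each actor's knowledge as an adjacency map, requires every newly explained node to border something the listener already knows (absorptive capacity), charges one unit of a finite time budget per absorbed node, and lets the explainer give up once the target is provably out of reach in the time remaining, a crude stand-in for the optimal stopping problem. The representation, the `explain` function, and the stopping rule are illustrative assumptions, not the formal model developed in the paper.

```python
# A minimal sketch, assuming a toy representation (adjacency sets, unit cost
# per explained node, and a "give up if unreachable in time" rule standing in
# for the optimal stopping problem); illustrative only.
from collections import deque


def explain(explainer_graph: dict[str, set[str]],
            listener_known: set[str],
            target: str,
            budget: int) -> bool:
    """Can the listener be brought to understand `target` within `budget` steps?

    Absorptive capacity: a node can only be explained once it is adjacent to
    something the listener already knows.  Finite time: each newly absorbed
    node costs one unit of the budget.  Stopping: the explainer quits as soon
    as the target is provably out of reach within the remaining budget.
    """
    shared = set(listener_known) & set(explainer_graph)   # overlapping knowledge
    if target in shared:
        return True

    # Breadth-first traversal of the explainer's graph, seeded with the shared
    # nodes; depth[v] counts how many new nodes must be absorbed to reach v.
    depth = {node: 0 for node in shared}
    queue = deque(shared)
    while queue:
        node = queue.popleft()
        for neighbour in explainer_graph.get(node, ()):
            if neighbour not in depth:
                depth[neighbour] = depth[node] + 1
                queue.append(neighbour)

    cost = depth.get(target)      # steps needed, or None if no absorptive path
    if cost is None:
        return False              # the graphs never connect to the target
    return cost <= budget         # the explainer stops early if cost > budget


if __name__ == "__main__":
    # Toy example: the explainer knows the chain a-b-c-d; the listener shares
    # only "a", so explaining "d" takes three steps of absorption (b, c, d).
    g = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
    print(explain(g, {"a"}, "d", budget=2))   # False: cooperative but out of time
    print(explain(g, {"a"}, "d", budget=3))   # True
```

The point of the toy example is that both calls involve honest, cooperative, perfectly communicating parties; only the time budget differs, and that alone decides whether the explanation succeeds.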