Temporal Ontologies and the Ethics of Explainable AI
When AI learns to compute time itself, prediction becomes more than forecasting.
Rather than serving merely as a parameter within probabilistic models, time becomes an active medium through which systems organise, interpret, and act upon reality. A key shift is underway: predictive analytics is no longer primarily concerned with forecasting probable outcomes from static datasets, but with structuring time itself as an object of computation. This marks a deeper reorientation at the intersection of AI, data science, and epistemology. What was once a discipline focused on extrapolating future states from historical data is now evolving toward something more foundational, the treatment of time as an operational domain. In this emerging paradigm, time is no longer a neutral backdrop against which events unfold, but a modellable and ultimately programmable dimension of uncertainty quantification.
This shift reflects a transformation in computational logic. Systems are no longer designed solely to estimate what is likely to happen but are increasingly tasked with organising sequences of action across temporal horizons. Predictive analytics thus becomes inseparable from temporal architecture. The central question moves from “what will occur?” to “how should sequences of events be structured, anticipated, and acted upon?” In this sense, time ceases to be a simple input variable and becomes the very operational medium for intelligence operations.
A temporal ontology can be understood as a formal attempt to define the structure of time in a way that is both computationally tractable and philosophically coherent. It specifies what kinds of temporal entities exist and how these entities relate to one another through ordering, overlap, causation, and duration. From a technical perspective, temporal ontologies move beyond simple timestamping and encode relational structures such as precedence, simultaneity or inclusion, enabling systems to reason about sequences, dependencies, and transitions rather than isolated data points often obscured by noise or sparsity. This, in turn, allows for richer forms of inference, particularly in environments where timing, sequencing and the interaction of temporal segments with varying attributes and active states are critical to decision-making.
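To make this concrete, the sketch below (Python, illustrative only) encodes a few of the relational primitives mentioned above, precedence, overlap and inclusion, in the spirit of Allen’s interval algebra. The interval class, the function names and the example events are assumptions introduced here for illustration, not part of any existing ontology.

    # Illustrative fragment: qualitative relations between time intervals,
    # in the spirit of Allen's interval algebra. Bounds are plain floats here,
    # standing in for comparable timestamps.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Interval:
        start: float
        end: float

    def precedes(a: Interval, b: Interval) -> bool:
        # a ends strictly before b begins (precedence)
        return a.end < b.start

    def overlaps(a: Interval, b: Interval) -> bool:
        # a starts first and ends inside b (overlap)
        return a.start < b.start < a.end < b.end

    def during(a: Interval, b: Interval) -> bool:
        # a is strictly contained in b (inclusion)
        return b.start < a.start and a.end < b.end

    briefing = Interval(9.0, 10.0)   # hypothetical events
    crisis = Interval(9.5, 14.0)
    for name, test in (("precedes", precedes), ("overlaps", overlaps), ("during", during)):
        print(name, test(briefing, crisis))   # only "overlaps" is True

Even this toy fragment shows the shift the paragraph describes: the system reasons over relations between events rather than over isolated timestamps.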
Yet beneath these formal structures lie deep philosophical commitments. Any temporal ontology implicitly answers questions that have occupied thinkers for centuries: does only the present exist, or do past and future possess ontological status? Is time composed of discrete instants or continuous flows? Are events fundamental, or are they derivative of underlying processes? These questions are not merely abstract. When encoded into computational systems, they shape how machines interpret reality, assign causality, and generate predictions.
The emergence of temporal ontologies is marked by a convergence of disciplines that were once relatively distinct. Advances in AI, formal logic, and the scientific study of time are now coalescing into unified frameworks capable of operating within dynamic, real-world environments. This convergence reflects not only a technical integration but a deeper epistemic shift toward treating time as an active dimension of computation. One of the most significant developments within this trajectory is the transition from representing time to acting within it. Earlier systems were largely descriptive, capturing temporal relations retrospectively. Emerging systems, by contrast, are increasingly prescriptive, integrating temporal logic directly into decision-making processes and enabling continuous reasoning over streaming data as events unfold. At the same time, there is growing recognition that time in real-world contexts is rarely precise or uniform. Emerging approaches therefore incorporate uncertainty, ambiguity, and multi-scale variability, allowing systems to reason about vague temporal expressions and probabilistic sequences. This is particularly critical in domains such as intelligence analysis or climate modelling, where temporal boundaries are fluid, contested, and often strategically interpreted.
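One hedged illustration of the “vague temporal expressions” mentioned above: a phrase such as “around noon” can be given a graded rather than binary interpretation. The triangular membership function, the centre and the spread below are arbitrary assumptions made for this sketch.

    # One possible (assumed) reading of a vague temporal expression:
    # "around noon" as a triangular membership function over the hour of day.
    def around_noon(hour: float, centre: float = 12.0, spread: float = 2.0) -> float:
        """Membership in [0, 1]: 1.0 at the centre, falling to 0 at +/- spread hours."""
        return max(0.0, 1.0 - abs(hour - centre) / spread)

    # Degree to which a stream of timestamped observations matches the expression.
    observations = [10.5, 11.8, 12.2, 13.9]        # hypothetical event times (hours)
    print([round(around_noon(t), 2) for t in observations])   # [0.25, 0.9, 0.9, 0.05]

The point is not the particular function, but that ambiguity is represented explicitly instead of being forced into a single timestamp.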
There is an additional depth to the integration of temporal ontology with physical and complex systems. Models that account for memory, path dependence and non-linear dynamics challenge the assumption that the present alone determines the future. Instead, they foreground the persistence of temporal structures, suggesting that the past continues to exert influence in ways that are both measurable and computationally exploitable. This shift effectively extends temporal reasoning beyond abstract representation into the domain of complex system behaviour, where history is actively embedded in system evolution. In such frameworks, temporal structure becomes a generative constraint, shaping trajectories through feedback and irreversible processes.
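A minimal sketch of path dependence follows, assuming a Pólya-urn-style reinforcement process, chosen here purely as a familiar toy model rather than anything the text prescribes: each draw strengthens its own colour, so early history keeps shaping the long-run composition.

    # Toy path-dependent process: a Polya urn in which every draw reinforces itself.
    # Different early histories (seeds) settle into different long-run shares,
    # so the current share alone does not determine where the process is heading.
    import random

    def polya_share(steps: int, seed: int) -> float:
        rng = random.Random(seed)
        red, blue = 1, 1                      # the initial "history"
        for _ in range(steps):
            if rng.random() < red / (red + blue):
                red += 1                      # drawing red adds another red ball
            else:
                blue += 1
        return red / (red + blue)

    print([round(polya_share(10_000, s), 3) for s in (1, 2, 3)])

In such processes, memory is not an added feature but part of the dynamics, which is exactly the sense in which history is “actively embedded in system evolution”.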
The development of temporal ontologies and predictive analytics is supported by a diverse ecosystem of academic institutions, each contributing from different disciplinary angles. Some focus on the philosophical foundations of time, exploring its metaphysical and logical properties, while others focus on computational implementation, developing formal systems capable of reasoning over temporal data at scale.
Universities such as Stanford University [1] and the University of Oxford have been active in philosophical inquiries into time, hosting influential work on temporal logic and metaphysics. Meanwhile, institutions like the University of Cambridge [2] and others have advanced formal and computational approaches, especially in semantics and logical reasoning. Interdisciplinary centres such as the Santa Fe Institute or the Max Planck Institute extend this work into the study of complex systems and historical epistemology. Together, these institutions form a distributed and interconnected network of expertise, reflecting the inherently cross-disciplinary nature of temporal research.
The Actions of Great Men Set the Standard for Others
Temporal ontologies and their applications carry far-reaching strategic implications. As predictive systems become capable not only of processing information but of structuring and acting within time, the locus of competitive advantage shifts accordingly. It is no longer sufficient to process data quickly; what matters increasingly is the ability to maintain temporal coherence across complex, multi-stage sequences of events. In practical terms, predictive analytics is evolving into a form of real-time risk governance. Systems are designed not only to anticipate future states but to intervene within ongoing processes, shaping outcomes as they unfold. This development blurs the boundary between prediction and control, raising significant questions about accountability, legitimacy, and oversight in algorithmically mediated environments.
At a deeper level, power increasingly resides in the design of the ontologies themselves. Those who define the categories, relations, and constraints of temporal systems effectively determine how reality is represented, segmented, and operationalised within computational frameworks. Temporal ontology, therefore, becomes a strategic asset, comparable in significance to infrastructure or capital, since it structures the very conditions under which decisions are made. This shift introduces a range of risks that are both technical and epistemic. One central challenge lies in the tendency of current predictive systems to project historical patterns into the future, thereby obscuring discontinuities, emergent phenomena, or structurally novel developments. This limitation becomes especially critical in highly dynamic environments, where past regularities may no longer hold.
From a technical standpoint, temporal reasoning introduces substantial complexity. Maintaining coherent, adaptive, and up-to-date ontologies in real time requires significant computational resources, as well as robust mechanisms for uncertainty management and structural change. Scalability, consistency, and interpretability become non-trivial constraints.
From a philosophical perspective, there is also a risk of conflating predictive performance with ontological validity. A system may generate highly accurate forecasts while still relying on fragile, implicit, or biased assumptions about causality, temporality, or structural dependency. Without careful scrutiny, such systems can acquire an unwarranted epistemic authority, shaping decisions in ways that are difficult to interrogate or contest.
Despite the technical sophistication of temporal ontologies, their interpretive dimension remains unavoidable, particularly in the context of explainable AI (XAI) and human-in-the-loop ethical systems. Hermeneutics, understood as the theory of interpretation, becomes indispensable precisely because temporal systems are never purely objective. They are always embedded within frameworks of meaning that shape how events are understood. Time, in this sense, is not simply measured. It is interpreted, formalised, and validated through iterative processes of empirical observation in hybrid human–machine systems, where interpretation and computation increasingly co-produce temporal structure.
The designation of an “event,” the delimitation of a “crisis,” or the identification of a “trend” all involve acts of interpretation that reflect cultural, institutional, and political assumptions. These assumptions are not always interoperable, or even mutually acceptable, across systems or contexts, which can lead to structural misalignment between interpretive frameworks. In some cases, such misalignment contributes to systemic conflict over how reality is categorised and acted upon, as competing models of interpretation attempt to assert dominance over shared operational environments. Predictive systems inherit these assumptions, often without making them explicit. Every predictive model, therefore, constructs a narrative, whether intentionally or not. It selects certain pasts as relevant, projects certain futures as plausible, and encodes implicit judgments about causality and significance. These narratives, in turn, guide decision-making processes, sometimes with far-reaching consequences, including the institutionalisation of bias as if it were formalised “truth.”
Explainability Layer: Hermeneutics
The psychology of language, etymology, and surface-level translation are not sufficient on their own. Language alone, along with assumed meanings, does not provide full scientific explainability or a coherent, non-contradictory system of thought. Meaning exists beyond language and can be lost or altered through translation and across contexts. In this sense, meaning and, by extension, explainability in AI require at least two main layers of design: epistemology, understood as the cognitive and formal framework that defines the verifiable nature, origin, and scope of the research data used; and hermeneutics, understood as the set of explicit interpretative rules and minimum standards for meaning construction that are broadly acceptable and consistently applied.
The hermeneutics layer provides the capacity to reveal the interpretive assumptions embedded within data systems. It allows us to critically examine not only the outputs of predictive models, but also the origin, construction, and scope of the scientific datasets and frameworks that generate and reuse them. In this sense, hermeneutics functions as a safeguard against the uncritical acceptance of computational neutrality, highlighting how biases may persist or be reproduced through apparently objective systems. Within this perspective, the reference systems most immediately accessible to human cognition, namely spatial and material representations, must be understood as interacting with temporal structures that are not purely abstract or linear.
Rather than committing to strict metaphysical positions such as “presentism” or “eternalism” [3], time can be approached as a structured relational dimension whose properties emerge through interaction with physical and informational processes. In the field of trusted XAI, this invites further investigation into how temporal assumptions are embedded within models and datasets, and how these assumptions influence interpretability. Certain aspects of spacetime interaction may be explored as higher-order patterns of correlation across scales of observation, particularly where classical spatial representations intersect with dynamic temporal processes. While such ideas remain exploratory, they may still be valuable as hypotheses for further theoretical and empirical examination.
Imagine a clock with nine principal arms instead of two, along with many secondary arms, all representing Time. These arms move continuously at different speeds across fourteen interconnected “discs” representing material fields, including the space between them. Each disc is conceived as a 360-degree field of incidence reports, so the fourteen discs together form a time–space exploration state space of at least 5040 degrees. Within this framework, high-frequency analytical structure emerges through combinatorial complexity (combinations of 5040 positions taken nine at a time), producing a large primary dataset: a 28-digit number, roughly 5.74 octillion time–space positions. The granulation of measurable qualities is given by the secondary arms modulating the nine principal arms of the system, distributed across interactions among the fourteen discs. This model proposes that time can be treated as a structured, representable field, expressible mathematically and graphically, in which intersecting events and generative qualities are mapped within a densely interconnected temporal (9+) and spatial (5040) network accessible, in principle, to human cognition.
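The combinatorial figure quoted above can be checked directly. The short sketch below simply reproduces the arithmetic (14 discs of 360 degrees each, nine arms chosen from the 5040 positions); it makes no claim about the clock model itself.

    # Checking the combinatorial claim: 14 discs x 360 degrees = 5040 positions,
    # and choosing 9 of them yields a 28-digit number, roughly 5.74 x 10^27.
    import math

    positions = 14 * 360                      # 5040
    configurations = math.comb(positions, 9)
    print(f"{configurations:.3e}")            # ~5.741e+27
    print(len(str(configurations)))           # 28 digits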
The fact that such a system cannot be fully computed or visualised manually is not a limitation of structure, but of method. Human rationalism has historically prioritised formalised, material demonstration as the primary source of truth, leading to present-day constraints in handling extremely large combinatorial systems. In response, machines have been developed to perform such calculations, shifting uncertainty quantification and incomplete knowledge into computational frameworks. Methods such as Bayesian inference and Monte Carlo sampling are now widely accepted as valid approaches to machine reasoning under uncertainty. However, analogous forms of probabilistic reasoning, when performed by humans alone, are often regarded as less rigorous or less reliable. This introduces a potentially high-risk dependency: machines generate inferences under uncertainty that are frequently treated as more credible than comparable human judgments operating under the same probabilistic constraints. This creates a paradox in which human interpretative authority is progressively displaced by computational outputs. In addition, the ethical dimension risks being diminished, insufficiently defined, externalised or overlooked, as if machine-based uncertainty quantification were inherently objective or exact. As a result, decision-making systems may embed structural biases toward machine-mediated inference, potentially reducing human epistemic agency. Over time, this dynamic could constrain humanity’s capacity to develop confident, self-directed interpretations of its position within the time–space continuum.
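As a hedged illustration of the machine-side uncertainty quantification the paragraph refers to, the sketch below runs a small Monte Carlo estimate: the probability that a multi-stage process with uncertain stage durations finishes within a deadline. The distributions, parameters and deadline are all assumptions invented for the example.

    # Illustrative Monte Carlo uncertainty quantification (all parameters assumed):
    # probability that a three-stage process with uncertain stage durations
    # completes within a deadline of 10 time units.
    import random

    def total_duration(rng: random.Random) -> float:
        # three stages with assumed lognormal duration uncertainty
        return sum(rng.lognormvariate(mu, 0.4) for mu in (0.5, 0.8, 1.0))

    rng = random.Random(0)
    N = 100_000
    hits = sum(total_duration(rng) <= 10.0 for _ in range(N))
    print(f"P(total duration <= 10) ~ {hits / N:.3f}")

The estimate itself is trivial; the paragraph’s point is that the same kind of probabilistic judgement, when made by a human analyst, tends to be granted less authority.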
The human mind has limited computational capacity when compared to the scale of its ambitions. Yet it is continually animated by subtle cognitive and emotional-perceptual faculties, such as imagination, vision, and aspiration, which drive it to extend beyond its own limitations and generate novelty, both constructive and destructive. Across history, successive generations have produced both transformative innovations and destructive outcomes, often unfolding in cyclical patterns of rise and decline. History may therefore appear to “repeat itself,” not in terms of identical outcomes, but in the recurrence of structurally similar configurations of events. This suggests that what repeats is not, in fact, history itself but underlying time–space relational structures that manifest consistently across different temporal measurements.
Temporal ontologies may provide in-depth knowledge about time as a structure, along with an explainable formulation of temporal priors. Within this view, an event occurring today may be interpreted tomorrow, in ten years, or decades before, depending solely on when the human mind accesses, searches or investigates that particular prior. The relational structure between the temporal prior and the point of calculation remains constant, with uncertainty instead being redistributed into a posteriori modes of validation. Temporal priors exist independently, awaiting discovery, while the prioritisation of which patterns are selected for investigation becomes a separate question grounded in hermeneutics and closely tied to ethics and responsible epistemic practice.
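A minimal sketch of what an “explainable formulation of a temporal prior” could look like in the standard Bayesian idiom (the grid, the noise model and the numbers are assumptions for illustration): a discrete prior over when an event occurs is redistributed into a posterior once an observation is made, while the relational machinery connecting prior and observation stays fixed.

    # Minimal sketch: a discrete temporal prior over "which day the event occurs",
    # redistributed into a posterior by a noisy observation (all numbers assumed).
    import math

    days = list(range(1, 8))                          # a one-week horizon
    prior = [1 / len(days)] * len(days)               # uniform temporal prior

    def likelihood(observed_day: float, true_day: int, sigma: float = 1.0) -> float:
        # assumed Gaussian observation noise around the true day
        return math.exp(-((observed_day - true_day) ** 2) / (2 * sigma ** 2))

    observed = 4.6                                    # hypothetical noisy report
    unnorm = [p * likelihood(observed, d) for p, d in zip(prior, days)]
    posterior = [u / sum(unnorm) for u in unnorm]
    print([round(p, 3) for p in posterior])           # mass concentrates near day 5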
Where to Begin the Reconstruction of Temporal Artifacts?
With the assistance of machines, contemporary systems can partially reconstruct fragmented or lost historical knowledge concerning temporal ontologies. This form of inquiry, an “archaeology of time itself”, is still only weakly supported by the tools available today. Nevertheless, fragments of lost knowledge persist, from different human civilisations, in the form of axiomatic formulations related to time–space formal relations. Some of these are formalised mathematically and can therefore be re-examined in present-day analytical environments, enabling the study of incidence structures, tangent interactions and related configurations through modelling, simulation, validation and modern formalisation in applied contexts.
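As one concrete, hedged example of what re-examining a formalised incidence structure can mean in a present-day analytical environment, the sketch below verifies the defining regularities of the Fano plane, a classical incidence configuration chosen here only as a stand-in for the kind of axiomatic formulations mentioned above.

    # Illustrative check of a classical incidence structure (the Fano plane):
    # 7 points, 7 lines of 3 points each, every point on 3 lines,
    # and any two distinct points lying on exactly one common line.
    from itertools import combinations

    lines = [
        {1, 2, 3}, {1, 4, 5}, {1, 6, 7},
        {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6},
    ]
    points = set().union(*lines)

    assert all(len(l) == 3 for l in lines)
    assert all(sum(p in l for l in lines) == 3 for p in points)
    assert all(sum(p in l and q in l for l in lines) == 1
               for p, q in combinations(points, 2))
    print("Incidence axioms verified for", len(points), "points and", len(lines), "lines")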
While the human mind is not yet capable of fully comprehending the scale, plurality and entanglement of these time–space structures, partial understanding is increasingly achievable. The combination of current AI systems and emerging computational methods may support a gradual reconstruction, extending human interpretative capacity beyond its present limits. For temporal data and existing formulae based on combinatorics and large-scale arrangements, potentially yielding extremely large state spaces, the volume of required calculations and analytics often necessitates machine assistance for validation and exploration. This is not necessarily because the human mind fundamentally depends on machines, but because contemporary epistemic systems have increasingly externalised large-scale computation over recent centuries, partly because of methodological shifts in scientific practice. Within this context, there also exist reported cases of individuals who claim to intuit or anticipate future events without explicit computational reasoning. However, such cases typically remain outside mainstream scientific validation frameworks due to difficulties in reproducibility, formalisation, controlled testing conditions or other reasons. At the same time, access to large-scale computational models and temporal datasets raises important questions about governance, epistemic transparency and the distribution of technological capability. These questions become especially relevant in environments where advanced analytical tools are embedded within competitive economic or strategic systems. From this perspective, there is a broader open question regarding the extent to which humanity could collectively expand its capacity for structured temporal understanding, if such tools were developed and applied under more open, trusted, and ethically governed conditions, alongside a gradual and systematic reduction of outdated competition-based models.
The human mind, at present, does not yet appear fully capable of stepping outside the rationale and control systems developed in the current era. To formalise and legitimise results derived from applied temporal ontologies, it is necessary to operate within the formal logic of contemporary scientific frameworks. Alternative modes of validation, such as intuition or revelation, are generally considered non-formalisable and therefore lie outside the scope of scientific acceptance, particularly when they lack reproducibility or structured methodological grounding. At the same time, the notion of “inspiration” (a term etymologically linked to “breathing”, and one that is more difficult to quantify) has historically been acknowledged by many inventors and key figures associated with major technological and industrial transitions. Innovation is not driven exclusively by strictly formal reasoning, but also by less formally defined cognitive processes that later become structured and validated through systematic inquiry.
The good news is that even under such constraints, namely materially grounded and computationally intensive approaches, it remains possible to investigate time as a structured and measurable system governed by predictable and modellable relations. The primary distinction between methodologies lies not in the accessibility or availability of truth itself, but in the nature of the chosen effort, the formalisation cost, and the accepted validation pathways required to reach a given facet of scientific truth. In this sense, while tools and procedures may vary, the underlying orientation toward discovery remains consistent, with human intention acting as a shared guiding factor.
Why Do Temporal Priors Take Time?
One may imagine a scenario in which, within a decade or more, temporal ontologies produce sufficiently robust temporal priors to inform risk governance in sensitive domains such as health, education and security. In such a context, further development could involve the controlled refinement or engineering of priors through minimal and carefully targeted adjustments, aimed at influencing outcomes related to human health and happiness, access to superior knowledge and awareness, longevity, ecological restoration (including quicker recovery of forests and pure water streams), and disaster and risk mitigation. Such applications would, however, require carefully designed governance structures to ensure that these processes remain ethically constrained, transparent and socially accountable, without perturbing the underlying time–space matrix itself. The risks are not limited to the possibility of misuse by malicious actors against humanity, but may extend, in a broader conceptual sense, to the potential disruption or degradation of natural time–space structures, with possible implications beyond human systems, potentially affecting other forms of existence in the part of the universe accessible to us. Therefore, while competitive and incentive-driven socio-economic models remain dominant, the space for responsibly developing highly sensitive or far-reaching technologies is still constrained. This is largely due to governance limitations, uneven access to oversight mechanisms, and the difficulty of ensuring alignment between technological capability and ethical safeguards.
There are cases where some individuals appear to act with limited awareness of the consequences of their actions, and in certain instances may demonstrate indifference or even apparent gratification in causing harm or observing suffering. While such cases are not representative of humanity as a species, they present a serious concern when combined with access to powerful technologies or systems. From this perspective, further development of such systems may depend less on purely technical feasibility and more on the maturation of global coordination frameworks, transparency mechanisms, strengthened international cooperation and a shared understanding of the importance of high ethical standards aimed at reducing the risk of misuse and unintended consequences. In this area, scientists and engineers are not fully insulated from value judgements, since the design and deployment of such systems inevitably involves normative choices about acceptable risk and societal impact. This does not imply abandoning scientific neutrality in methodology, but rather recognising the ethical responsibility embedded in system design and governance. The objective, therefore, is not only to advance capability, but also to shape architectures that are robust against misuse to the greatest extent possible, including by reducing pathways for harmful applications. Given the potential scale of impact, these questions cannot be treated as purely technical, nor left solely to ad hoc decision-making processes.
Access to creation, inspiration and knowledge is emerging from temporal ontologies, structured by underlying ethical and epistemic frameworks. In this sense, hardware systems alone are not foundational to human development; rather, they are expressions of deeper informational and interpretative layers which may be described as higher-order soft structures, including temporal ontologies. Within this framing, temporal ontologies extend predictive analytics beyond pattern recognition toward the modelling of complex temporal processes. This shift has significant implications: systems that incorporate such structures do not merely observe the world, but participate in shaping how future states are represented, evaluated and selected. In doing so, they encode assumptions about causality, define the boundaries of plausible futures and influence the conditions under which decisions are made. From this perspective, hermeneutics functions as an essential counterbalance, showing that every formalisation of time is also an interpretative act. The future of trusted predictive systems will depend not only on computational advancement, but also on the capacity to understand, critique and responsibly design the temporal frameworks within which such computation operates.
Brussels, 26 April 2026
[1] Stanford Report: “Theoretical physicist Vedika Khemani was recognized for her work on ‘time crystals’ – a new non-equilibrium phase of matter”; “Time-crystals are a striking example of a new type of non-equilibrium quantum phase of matter,” said Vedika Khemani. https://news.stanford.edu/stories/2021/11/time-crystal-quantum-computer ; https://web.stanford.edu/group/times/about.html
[2] Exploration of composite event operators, Opera Group Seminars, University of Cambridge: https://www.cst.cam.ac.uk/seminars/list/15001
[3] “Eternalism seems logically to imply quadridimentionalism, whereas presentism seems logically to imply tridimentionalism.” Bucchioni, G. (2019). Ontological pluralism and the ontology of time. Collège de France Symposium. https://www.college-de-france.fr/en/agenda/symposium/the-metaphysics-of-time-contemporary-perspectives/ontological-pluralism-and-the-ontology-of-time



