
Seeing A Truly Different Future Requires Different Tools
3 February 2026
Peter Coffee

In one of his many memorable covers for Byte magazine, Robert Tinney literally painted a picture of what I feel is the most important use of a computer. His January 1977 image, entitled “Crystal Ball” (print still available for purchase), depicts, it seems to me, our current half-century-later third step on the journey of “what computers are able to do, and who gets to do those things.”

I’ll label our first step, during the 1940s and 1950s, as “computerization” – which merely sped up a process whose inputs we could easily define, and whose outputs we expected and could readily confirm. Think COBOL and FORTRAN, initially on massive mainframes but soon after on “minicomputers,” as the means of expression and action – but only for the few with corporate or military-grade resources at their disposal.
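
To make that concrete, here is a toy instance of this first step, in Python rather than period COBOL (the names, hours, and rates are purely illustrative): the program is nothing more than an accelerated clerk, applying a rule fixed in advance to inputs we define exactly, producing outputs we could confirm by hand.

```python
# "Computerization" in miniature: exactly defined inputs, a rule fixed
# in advance, and outputs we expect and can readily confirm by hand.
# (Names, hours, and rates are illustrative, not real data.)
timecards = [("A. Lovelace", 40, 3.50), ("G. Hopper", 44, 4.00)]

for name, hours, rate in timecards:
    # Overtime at time-and-a-half past 40 hours: the clerk's rule, sped up.
    pay = min(hours, 40) * rate + max(hours - 40, 0) * rate * 1.5
    print(f"{name}: ${pay:.2f}")
```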

I’ll call the second step “simulation” (1960s through early 2000s) – which also worked with well-structured input, processed through clear and well-specified actions and interactions. By the middle of the 1980s, this could be done at useful scale and speed on a desktop machine – using anything from FORTRAN to Smalltalk to “no code required” spreadsheets. What mattered, though, regardless of the tool chosen, was that a simulation could produce results that might accelerate our understanding of complexity. This step of the journey was still limited, however, to exploring worlds for which we specifically provided selective data and purpose-built logic as their descriptions.
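
Here is what changed at this second step, as a minimal sketch in modern Python standing in for the FORTRAN or Smalltalk of the era (the logistic-growth rule and every number in it are invented for the example): the modeler supplies both the data and the dynamics, and the simulation can only surprise us within the world those choices describe.

```python
# "Simulation" in miniature: well-structured input, processed through
# clear and well-specified actions. (Logistic growth is an illustrative
# choice of dynamics; all parameters are invented for the example.)
def simulate_logistic_growth(population, capacity, rate, years):
    """Project a population using only the rules the modeler wrote."""
    history = [population]
    for _ in range(years):
        population += rate * population * (1 - population / capacity)
        history.append(population)
    return history

# Selective data we chose to provide, purpose-built logic we chose to trust.
trajectory = simulate_logistic_growth(population=1_000, capacity=50_000,
                                      rate=0.08, years=50)
print(f"Year-50 projection: {trajectory[-1]:,.0f}")
```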

Tinney’s painting looked beyond these limits. It showed a view through a window of a smoke-obscured skyline, foregrounded by ugly and unhealthy-looking industrial activity; on the desk in front of that window, though, there’s an image of a blue-sky, pastoral setting surrounding a futuristic city, with a computer that was presumably modeling a pathway to that future. I remember seeing it on first publication, and wanting…that. I was sure that someday, we’d have it.

Brief digression on the nature of “futurism”: the other details of Tinney’s painting reflect what the late Sir Arthur C. Clarke described as the dilemma of forecasting the future – that an accurate description of the future will be dismissed as crazy, but a believable description will fall hugely short of what’s really going to happen. Tinney went for recognizable believability, and showed that world model running on an 8-bit Altair 8800 (standard memory, 256 bytes) with floppy disks and punched paper tape as its only means of persistent data storage. Real-world modeling needed global connectivity, massive capacity, and an abundance of computational power that few would have dared to predict we would one day have in hand-held devices – let alone in the vast resources of an “AI factory” data center. End of digression.

That said, let’s talk about what Tinney’s painting envisions correctly: a capability of connecting to the real world, measuring present state, inferring ongoing process, and telling us things that we did not tell it to figure out. This is what’s happening now, for example with the “Destination Earth” initiative of the European Commission, which seeks to build a “digital twin” of our planet. The effort combines a “data lake” compiled from space-based and other sources, a platform comprising calculation and decision-making tools, and a “digital twin engine” that assembles a variety of process models into something that can genuinely surprise us.
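
Destination Earth’s actual architecture is far richer than any sketch, but the core idea of a “digital twin engine” – independent process models advancing a shared state, with behavior emerging from their coupling – can be suggested in a few lines. Everything below (the model names, the dynamics, the numbers) is a hypothetical illustration, not the initiative’s actual API or physics:

```python
# Hypothetical sketch of a "digital twin engine": separate process models
# each advance a shared world state, and the interesting behavior emerges
# from their interaction rather than from any single model's logic.
from typing import Callable

State = dict[str, float]  # a toy stand-in for a planetary "data lake"

def ice_model(state: State) -> None:
    # Warmer seas shrink ice cover; this model knows only its own physics.
    state["ice_km2"] -= 120.0 * max(state["sea_temp_c"], 0.0)

def albedo_model(state: State) -> None:
    # Less ice reflects less sunlight, feeding back into sea temperature.
    state["sea_temp_c"] += 0.002 * (1.5e7 - state["ice_km2"]) / 1.5e7

def run_twin(models: list[Callable[[State], None]],
             state: State, steps: int) -> State:
    """Assemble a variety of process models and let their coupling play out."""
    for _ in range(steps):
        for model in models:
            model(state)
    return state

print(run_twin([ice_model, albedo_model],
               {"ice_km2": 1.4e7, "sea_temp_c": 0.5}, steps=120))
```

The surprise, such as it is, lives in the feedback loop: neither toy model alone predicts the accelerating change that their combination produces.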

We find a similar spirit of discovery, rather than mere description, in work being done at MIT to improve weather forecasting in the complex and rapidly changing environment of the Arctic regions. Researchers there have added machine learning to their decades of examining the Arctic’s patterns and interactions: their newly advanced models warned, this winter, of a potential mid-December cold surge for the U.S. East Coast weeks earlier than previous techniques enabled.
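
The general pattern here, reduced to a toy: train a statistical model on predictors that domain scientists already track, then score its skill on held-out cases. Everything below is synthetic – the predictors, the target, and the data generator are invented for illustration, showing the shape of the approach rather than the MIT group’s actual models or data:

```python
# Toy illustration of adding machine learning to domain expertise:
# learn a mapping from present large-scale conditions to a temperature
# anomaly weeks ahead. All data here is synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors a forecaster might track at forecast time,
# e.g., stratospheric winds, sea-ice extent, a blocking index.
X = rng.normal(size=(n, 3))
# Synthetic target: an east-coast cold anomaly three weeks later, driven
# by a nonlinear interaction that a simple linear scheme would miss.
y = -2.0 * X[:, 0] * (X[:, 1] > 0) + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:400], y[:400])           # train on earlier "seasons"
print("Held-out R^2:", round(model.score(X[400:], y[400:]), 2))
```

The gain, when it comes, comes from relationships the data contains but that nobody wrote into the physics.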

I didn’t quickly come up with a good word for the third step that follows computerization and simulation. The best word that I can propose emerged, to give credit where it’s due, from an exchange with ChatGPT (condensed here for brevity) that suggested “foresighting.”

I like it. Let’s call our three-step progression “Computerization; Simulation; Foresighting.” (I’m tempted by the parallelism, but I’m not going to push for “foresightification.”)

Let’s define success as making better foresight more possible, for more people; let’s make it readily applicable to domains like climate change and the future of work in an epoch of ubiquitous AI, where our “unknown unknowns” require nothing less.