Institute Output

What If We Had Bigger Brains? Imagining Minds beyond Ours
Stephen Wolfram
We humans have perhaps 100 billion neurons in our brains. But what if we had many more? Or what if the AIs we built effectively had many more? What kinds of things might then become possible? At 100 billion neurons, we know, for example, that compositional language of the kind we humans use is possible. At the 100 million or so neurons of a cat, it doesn’t seem to be. But what would become possible with 100 trillion neurons? And is it even something we could imagine understanding?

What Can We Learn about Engineering and Innovation from Half a Century of the Game of Life Cellular Automaton?
Stephen Wolfram
Things are invented. Things are discovered. And somehow there’s an arc of progress that’s formed. But are there what amount to “laws of innovation” that govern that arc of progress?
There are some exponential and other laws that purport to at least measure overall quantitative aspects of progress (number of transistors on a chip; number of papers published in a year; etc.). But what about all the disparate innovations that make up the arc of progress? Do we have a systematic way to study those?
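
For concreteness, here is a minimal sketch (in Python; mine, not the article’s) of the system being mined for lessons: the standard Game of Life update rule, applied to the glider, one of the earliest engineered patterns discovered in it.

```python
from collections import Counter

# A minimal sketch (not code from the article) of the Game of Life update
# rule: a live cell survives with 2 or 3 live neighbors, and an empty cell
# with exactly 3 live neighbors is born.
def life_step(live):
    """One Life step on a set of live (row, col) cells."""
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# One of Life's earliest "inventions": the glider, which reappears
# shifted one cell diagonally every 4 steps.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
assert state == {(r + 1, c + 1) for (r, c) in glider}
```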

Nature’s Compass: A Visual Exploration of Hierarchy in Biology and Beyond
Willem Nielsen
A discussion of the computational essence of hierarchy in biology and its potential implications for everyday life.

Towards a Computational Formalization for Foundations of Medicine
Stephen Wolfram
As it’s practiced today, medicine is almost always about particulars: “this has gone wrong; this is how to fix it”. But might it also be possible to talk about medicine in a more general, more abstract way—and perhaps to create a framework in which one can study its essential features without engaging with all of its details?

Who Can Understand the Proof? A Window on Formalized Mathematics
Stephen Wolfram
For more than a century people had wondered how simple the axioms of logic (Boolean algebra) could be. On January 29, 2000, I found the answer—and made the surprising discovery that they could be about half the size of the simplest ones previously known. (I also showed that what I found was the simplest possible.)
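
For reference, the axiom found was the single NAND identity ((p∘q)∘r)∘(p∘((p∘r)∘p)) = r. The brute-force check below (a sketch of mine, not part of the article) confirms that the identity holds for the two-valued NAND operation, though the article’s subject is the far harder formal proof that this one axiom generates all of Boolean algebra.

```python
from itertools import product

# Check the Wolfram axiom, with "∘" read as NAND:
#   ((p∘q)∘r) ∘ (p∘((p∘r)∘p)) == r
# This verifies only that the identity holds for two-valued NAND; proving
# that the axiom alone implies all of Boolean algebra is the much harder
# result the article's proof establishes.
def nand(a, b):
    return not (a and b)

assert all(
    nand(nand(nand(p, q), r), nand(p, nand(nand(p, r), p))) == r
    for p, q, r in product([False, True], repeat=3)
)
print("Axiom holds for all 8 truth assignments")
```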

On the Nature of Time
Stephen Wolfram
Time is a central feature of human experience. But what actually is it? In traditional scientific accounts it’s often represented as some kind of coordinate much like space (though a coordinate that for some reason is always systematically increasing for us). But while this may be a useful mathematical description, it’s not telling us anything about what time in a sense “intrinsically is”.

Foundations of Biological Evolution: More Results & More Surprises
Stephen Wolfram
A few months ago I introduced an extremely simple “adaptive cellular automaton” model that seems to do remarkably well at capturing the essence of what’s happening in biological evolution. But over the past few months I’ve come to realize that the model is actually even richer and deeper than I’d imagined. And here I’m going to describe some of what I’ve now figured out about the model—and about the often-surprising things it implies for the foundations of biological evolution.
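
To make the setup concrete, here is a heavily simplified sketch (in Python; the article’s actual model differs in its details) of adaptive evolution over cellular automaton rules: single point mutations to the rule table are kept whenever they do not shorten the lifetime of the pattern grown from a single cell.

```python
import random

# A simplified adaptive-evolution loop over elementary (2-color,
# nearest-neighbor) cellular automaton rules. Fitness is the number of
# steps before the pattern grown from a single black cell dies out;
# neutral or beneficial mutations are accepted, and rules whose patterns
# outlive the cap are rejected. All parameters here are illustrative.
WIDTH, MAX_STEPS = 61, 100

def step(cells, rule):
    """One CA step with fixed white boundaries."""
    padded = [0] + cells + [0]
    return [rule[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

def lifetime(rule):
    """Steps until the pattern dies out, or None if it survives the cap."""
    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1
    for t in range(1, MAX_STEPS + 1):
        cells = step(cells, rule)
        if not any(cells):
            return t
    return None

keys = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
rule = {k: 0 for k in keys}              # the null rule: dies in one step
fitness = lifetime(rule)

for _ in range(2000):
    mutant = dict(rule)
    mutant[random.choice(keys)] ^= 1     # flip one case of the rule table
    f = lifetime(mutant)
    if f is not None and f >= fitness:   # accept neutral or better
        rule, fitness = mutant, f

print("achieved lifetime:", fitness)
```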

Nestedly Recursive Functions
Stephen Wolfram
Integers. Addition. Subtraction. Maybe multiplication. Surely that’s not enough to be able to generate any serious complexity. In the early 1980s I had made the surprising discovery that very simple programs based on cellular automata could generate great complexity. But how widespread was this phenomenon?
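
A classic function of exactly this nestedly recursive kind, built from nothing but integers, addition, and subtraction, with recursive calls nested inside the arguments of other recursive calls, is Hofstadter’s Q sequence:

```python
from functools import lru_cache

# Hofstadter's Q sequence: the arguments of the recursive calls are
# themselves computed by recursive calls, yet only integer addition
# and subtraction are involved.
@lru_cache(maxsize=None)
def Q(n):
    if n <= 2:
        return 1
    return Q(n - Q(n - 1)) + Q(n - Q(n - 2))

print([Q(n) for n in range(1, 21)])
# [1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 6, 8, 8, 8, 10, 9, 10, 11, 11, 12]
```

Despite the triviality of the ingredients, the sequence fluctuates erratically, and even whether Q(n) is defined for all n remains unproven.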

What’s Really Going On in Machine Learning? Some Minimal Models
Stephen Wolfram
It’s surprising how little is known about the foundations of machine learning. Yes, from an engineering point of view, an immense amount has been figured out about how to build neural nets that do all kinds of impressive and sometimes almost magical things. But at a fundamental level we still don’t really know why neural nets “work”—and we don’t have any kind of “scientific big picture” of what’s going on inside them.

Can AI Solve Science?
Stephen Wolfram
Particularly given its recent surprise successes, there’s a somewhat widespread belief that eventually AI will be able to “do everything”, or at least everything we currently do. So what about science? Over the centuries we humans have made incremental progress, gradually building up what’s now essentially the single largest intellectual edifice of our civilization. But despite all our efforts, all sorts of scientific questions still remain. So can AI now come in and just solve all of them?

Observer Theory
Stephen Wolfram
We call it perception. We call it measurement. We call it analysis. But in the end it’s about how we take the world as it is, and derive from it the impression of it that we have in our minds.

Aggregation and Tiling as Multicomputational Processes
Stephen Wolfram
Multiway systems have a central role in our Physics Project, particularly in connection with quantum mechanics. But what’s now emerging is that multiway systems in fact serve as a quite general foundation for a whole new “multicomputational” paradigm for modeling.

Expression Evaluation and Fundamental Physics
Stephen Wolfram
It is shown that the way the Wolfram Language rewrites and evaluates expressions mirrors the universe’s own evolution: both proceed through discrete events linked by causal relationships, form “spacetime-like” structures, and branch into multiway histories analogous to quantum superpositions.
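
As a toy illustration of this picture (a sketch of mine, not the paper’s formalism), one can log each individual subexpression rewrite as a discrete “event” while evaluating an arithmetic expression:

```python
# Evaluation as a sequence of discrete rewriting events: repeatedly locate
# one innermost reducible subexpression, rewrite it, and record that event.
# Expressions are nested tuples such as ('+', ('*', 2, 3), ('*', 4, 5)).
def reduce_once(e):
    """Rewrite one innermost (op, int, int) node; return (new_expr, event)."""
    if isinstance(e, int):
        return e, None
    op, a, b = e
    if isinstance(a, int) and isinstance(b, int):
        value = a + b if op == '+' else a * b
        return value, (e, value)          # one discrete evaluation event
    a2, event = reduce_once(a)
    if event is not None:
        return (op, a2, b), event
    b2, event = reduce_once(b)
    return (op, a, b2), event

expr = ('+', ('*', 2, 3), ('*', 4, 5))
while not isinstance(expr, int):
    expr, event = reduce_once(expr)
    print("event:", event[0], "->", event[1])
print("result:", expr)
```

Choosing a different reducible subexpression at each step gives a different sequence of events; the collection of all such choices is what forms the multiway structure described here.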

Generative AI Space and the Mental Imagery of Alien Minds
Stephen Wolfram
How do alien minds perceive the world? It’s an old and oft-debated question in philosophy. And it now turns out to also be a question that rises to prominence in connection with the concept of the ruliad that’s emerged from our Wolfram Physics Project.

What Is ChatGPT Doing … and Why Does It Work?
Stephen Wolfram
That ChatGPT can automatically generate something that reads even superficially like human-written text is remarkable, and unexpected. But how does it do it? And why does it work? My purpose here is to give a rough outline of what’s going on inside ChatGPT—and then to explore why it is that it can do so well in producing what we might consider to be meaningful text.
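
The loop at the heart of this can be caricatured in a few lines (a deliberately crude stand-in: bigram counts over words in place of ChatGPT’s giant neural net): repeatedly estimate a probability distribution for the next token given the text so far, then sample from it, with a “temperature” controlling how adventurous the sampling is.

```python
import random
from collections import Counter, defaultdict

# A toy "language model": bigram counts over words. The generation loop,
# though, has the same shape as ChatGPT's: look at the text so far, get a
# distribution over possible next tokens, sample one, repeat.
corpus = ("the cat sat on the mat and the dog sat on the log "
          "and the cat saw the dog").split()
model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

def generate(start, n=10, temperature=1.0):
    out = [start]
    for _ in range(n):
        counts = model[out[-1]]
        if not counts:
            break
        # Higher temperature flattens the distribution ("more creative");
        # lower temperature sharpens it toward the most likely word.
        weights = [c ** (1.0 / temperature) for c in counts.values()]
        out.append(random.choices(list(counts), weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```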

Alien Intelligence and the Concept of Technology
Stephen Wolfram
“We’re going to launch lots of tiny spacecraft into interstellar space, have them discover alien intelligence, then bring back its technology to advance human technology by a million years”.
But as I thought about it, I realized that beyond the “absurdly extreme moonshot” character of this pitch, there’s some science that I’ve done that makes it clear that it’s also fundamentally philosophically confused. The nature of the confusion is interesting, however, and untangling it will give us an opportunity to illuminate some deep features of both intelligence and technology—and in the end suggest a way to think about the long-term trajectory of the very concept of technology and its relation to our universe.

Games and Puzzles as Multicomputational Systems
Stephen Wolfram
Multicomputation is one of the core ideas of the Wolfram Physics Project—and in particular is at the heart of our emerging understanding of quantum mechanics. But how can one get an intuition for what is initially the rather abstract idea of multicomputation? A good approach, I believe, is to see it in action in familiar systems and situations. And I explore here what seems like a particularly good example: games and puzzles.
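
As a minimal concrete example (a sketch of mine, not from the article), one can generate the complete multiway graph of a toy puzzle by following every legal move from every reachable state, rather than a single line of play:

```python
from collections import deque

# Treat a puzzle as a multiway system: states are nodes, legal moves are
# edges, and following all moves at once yields the full multiway graph.
# The toy "puzzle" here: rearrange the string ABC by adjacent swaps.
def moves(state):
    for i in range(len(state) - 1):
        yield state[:i] + state[i + 1] + state[i] + state[i + 2:]

start = "ABC"
edges, seen, queue = set(), {start}, deque([start])
while queue:
    s = queue.popleft()
    for t in moves(s):
        edges.add((s, t))
        if t not in seen:
            seen.add(t)
            queue.append(t)

print(len(seen), "states,", len(edges), "move-edges")   # 6 states, 12 edges
for s, t in sorted(edges):
    print(s, "->", t)
```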

On the Concept of Motion
Stephen Wolfram
It seems like the kind of question that might have been hotly debated by ancient philosophers, but would have been settled long ago: how is it that things can move? And indeed with the view of physical space that’s been almost universally adopted for the past two thousand years it’s basically a non-question. As crystallized by the likes of Euclid it’s been assumed that space is ultimately just a kind of “geometrical background” into which any physical thing can be put—and then moved around.

The Physicalization of Metamathematics and Its Implications for the Foundations of Mathematics
Stephen Wolfram
One of the many surprising (and to me, unexpected) implications of our Physics Project is its suggestion of a very deep correspondence between the foundations of physics and mathematics. We might have imagined that physics would have certain laws, and mathematics would have certain theories, and that while they might be historically related, there wouldn’t be any fundamental formal correspondence between them.
But what our Physics Project suggests is that underneath everything we physically experience there is a single very general abstract structure—that we call the ruliad—and that our physical laws arise in an inexorable way from the particular samples we take of this structure.

The Concept of the Ruliad
Stephen Wolfram
I call it the ruliad. Think of it as the entangled limit of everything that is computationally possible: the result of following all possible computational rules in all possible ways. It’s yet another surprising construct that’s arisen from our Physics Project. And it’s one that I think has extremely deep implications—both in science and beyond.