Be part of the Next Transformation of Artificial Intelligence

“A good hockey player plays where the puck is. A great hockey player plays where the puck is going to be.” (Wayne Gretzky)

Introduction

Artificial intelligence (AI) traces its origins back to the 1940s, when researchers first attempted to model the human nervous system mathematically.

Since then, progress has been continuous but not linear.
AI has evolved step by step, sometimes accelerating, sometimes pausing, like climbing a staircase.

Over the past 20 years, this pace has dramatically increased.

Today, in 2026, it is clear:

  • AI is transforming our lives and the global economy
  • But its progress is beginning to slow

The key challenges are now evident:

  • Extreme energy consumption
  • Increasing difficulty of training models

The End of the Exponential Phase

Just a few years ago, the dominant narrative, especially in Silicon Valley, was that AI had entered an unstoppable exponential phase and was rapidly approaching Artificial General Intelligence (AGI).

More cautious researchers, however, described this growth as a sigmoid curve.

By 2026, this plateau is becoming visible.

AI progress has not stopped, but it is slowing and searching for its next leap.

According to this view:

  • We were in the exponential phase
  • But a plateau was inevitable

For large language models, limitations include:

  • Finite high-quality training data
  • Architectural constraints
  • Increasing computational cost

Create the Condition for Magic

Some expect quantum computing to unlock the next wave.

However, practical quantum neural networks are likely decades away.

At the same time, one fact remains undeniable:

The human brain is still vastly more efficient than any existing AI system.

  • Operates on ~20 watts
  • Learns from minimal data
  • Handles uncertainty naturally

Our Hypothesis: The Next Leap Is Biological

At ReBuildAI, we believe:

The next major breakthrough in AI will come from biology.

AI itself was originally inspired by the brain but over time, it diverged.

We believe it is time to return.

By combining:

  • Nearly 100 years of neuroscience
  • New mathematical frameworks
  • Biophysical modeling

We aim to build systems that are:

  • Orders of magnitude more efficient
  • Fundamentally more scalable
  • Closed-loop, biology-inspired systems that learn (not merely predict)

How We Work

We believe the next breakthrough cannot come from:

  • Science alone
  • Or business alone

That’s why we support a multidisciplinary research team spanning:

  • Medicine
  • Biology
  • Biophysics
  • Mathematics
  • Theoretical physics
  • Business

Our approach:

  • Rapid experimentation
  • Real-system validation
  • Open-ended exploration

Core Principle

The human brain is approximately:

  • 100,000× more efficient than current Transformer-based models

Therefore, our guiding assumption is simple:

  • If we are on the right path, our systems must become visibly more efficient.

A Structural Inefficiency at the Core

The foundations of modern AI date back to the Perceptron model of the 1950s.

Later, in the 1980s, backpropagation made multi-layer neural networks trainable, triggering the deep learning revolution.

This work, recognized with the 2024 Nobel Prize awarded to Geoffrey Hinton and John Hopfield, was undeniably transformative.

However, it introduced a critical constraint:

Training requires massive matrix operations → enormous computational cost
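To make that cost concrete, here is a back-of-envelope sketch, using the common rule of thumb that a forward pass through a dense layer of shape n_in × n_out costs about 2·n_in·n_out floating-point operations per sample, and that the backward pass costs roughly twice the forward pass (gradients with respect to both inputs and weights). The layer sizes below are purely illustrative, not a model of any real system.

```python
# Rough FLOP estimate for one training step of a dense network
# trained with backpropagation. Assumptions (rules of thumb, not
# exact counts): forward ~ 2 * n_in * n_out FLOPs per sample per
# layer; backward ~ 2x forward.

def training_flops(layer_sizes, batch_size):
    """Approximate FLOPs for one forward + backward pass over a batch."""
    forward = sum(2 * a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
    backward = 2 * forward  # rule of thumb: backward costs ~2x forward
    return batch_size * (forward + backward)

# Illustrative network: 1024 -> 4096 -> 4096 -> 1024, batch of 32.
flops = training_flops([1024, 4096, 4096, 1024], batch_size=32)
print(f"~{flops:,} FLOPs per training step")
```

Even this toy network needs on the order of billions of operations per training step, and the count grows quadratically as layers widen, which is why training large models is dominated by matrix-multiply cost.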


We believe:

Biological intelligence does not rely on backpropagation.

The field’s reliance on backpropagation may have steered AI toward:

  • Energy inefficiency
  • Scaling limitations

Now, with modern scientific knowledge, we have an opportunity to:

  • Rethink the foundations of AI
  • Build more efficient mathematical models

AI has not reached its limits.

But its current trajectory has.

The next breakthrough will not come from scaling existing systems further, but from ReBuilding Artificial Intelligence itself.