Letting Strategies Evolve Instead of Writing Them: My Experiments With Evolutionary Search

Peter Bieda

For most of my career, I’ve been the kind of engineer who prefers building systems over guessing edges. I’ve coded backtesters, execution engines, signal pipelines, and more internal tools than I can remember — but for the longest time, my research workflow had a fixed assumption:

I write the strategy.
The market tells me if it works.

It took years, and more broken PnL curves than I’d like to admit, before I understood how limiting that mindset was. The human brain is biased, linear, and repetitive. Markets are nonlinear, adversarial, and weird.

In 2023, I started experimenting with evolutionary algorithms — genetic programming, evolutionary search, neuroevolution — and for the first time, I felt like I wasn’t researching strategies…

I was discovering them.

This article summarizes my experience building an evolutionary research engine: the architecture, the pitfalls, the constraints, the code, and a few discoveries that reshaped the way I think about alpha.

1. Why Evolutionary Search? Because Humans Are Bad at Alpha Discovery

Hand-designed strategies suffer from three problems:

(1) Cognitive Bias

Researchers gravitate toward familiar patterns: trend, mean reversion, breakout.
But microstructure edges often hide in:

  • conditional regimes
  • second-order features
  • nonlinear transformations
  • time-inhomogeneous volatility pockets

Humans rarely think to combine:

if (spread < threshold and imbalance rises after negative short-term autocorrelation):
    go long

But evolution will try it.

(2) Overfitting Temptation

Humans fine-tune strategies because it “feels close to working.”
Evolution doesn’t care.
It kills weak performers and rewards robust performers.

(3) Lack of Imagination

Humans propose ideas they already understand.
Evolution proposes ideas that don’t exist yet.

Once you internalize that, building a strategy search engine becomes an obvious next step.

2. The Architecture: What You Actually Need for Evolutionary Alpha Discovery

Here’s the architecture that worked best for me after several iterations:

A. Genome Representation: Strategy as a Tree

I represent strategies as expression trees, similar to how genetic programming operates.
A strategy genome is composed of building blocks:

Terminal nodes (inputs)

  • returns[t-1], returns[t-5], returns[t-20]
  • microstructure features (spread, imbalance, volume burst)
  • indicators (EMA, RSI, VWAP deviation)
  • volatility regimes
  • time of day
  • rolling skew/kurtosis

Function nodes

  • arithmetic ops: +, −, *, /
  • comparisons: >, <
  • logic: and, or
  • nonlinear transforms: tanh, log, abs

A simple genome might look like:

(long if RSI(5) < 25 AND price < EMA(20))

A complex evolved genome might look like:

(long if tanh((imbalance[t-1] - volatility_norm) / spread)
         > log(abs(ret[t-5] - stddev(ret,20))))

The second one is unintuitive — but I’ve seen strategies like this outperform handcrafted ones.
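One convenient way to picture the simple genome above is as a nested prefix expression. A plain-Python sketch (the real engine uses node objects, shown later in the article; the tuple encoding here is just for illustration):

```python
# The simple genome `long if RSI(5) < 25 AND price < EMA(20)`
# written as a nested prefix-tuple tree: (operator, child, child)
genome = ("and",
          ("<", ("RSI", 5), ("const", 25)),
          ("<", ("price",), ("EMA", 20)))

def tree_depth(node):
    """Depth of a prefix-tuple tree. Leaves are tuples whose head is a
    feature, indicator, or constant name."""
    head, *children = node
    if head in ("RSI", "EMA", "price", "const"):
        return 1
    return 1 + max(tree_depth(c) for c in children)
```

Mutation and crossover then become tree surgery: replace a leaf, swap a subtree, or nudge a constant.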

B. Fitness Function: The Real Alpha Filter

Your fitness function determines what evolution considers “successful.”

Mine includes:

fitness = Sharpe
        - penalty(max_drawdown)
        - penalty(turnover)
        - penalty(latency_sensitivity)
        + bonus(pnl_consistency)

Why so complex?

Because evolution will game your objective if you don’t constrain it.

Example:
If you only optimize Sharpe, it will learn to overfit or trade extremely rarely.
If you only optimize PnL, it may take absurd leverage.
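A minimal sketch of such a multi-term fitness function follows. The penalty weights, helper names, and the consistency proxy are illustrative assumptions (and the latency-sensitivity term is omitted), not the exact production objective:

```python
import numpy as np

def fitness(returns, positions, weights=None):
    """Score a strategy from its per-period returns and holdings.

    All weights below are illustrative placeholders.
    """
    w = weights or {"dd": 0.5, "turnover": 0.1, "consistency": 0.2}
    returns = np.asarray(returns, dtype=float)

    # Sharpe proxy: mean over std (no annualization, for brevity)
    sharpe = returns.mean() / (returns.std() + 1e-9)

    # Max drawdown of the cumulative PnL curve
    equity = np.cumsum(returns)
    drawdown = np.max(np.maximum.accumulate(equity) - equity)

    # Turnover: total absolute change in position
    turnover = np.abs(np.diff(positions)).sum()

    # Consistency bonus: fraction of profitable periods
    consistency = (returns > 0).mean()

    return (sharpe
            - w["dd"] * drawdown
            - w["turnover"] * turnover
            + w["consistency"] * consistency)
```

Every extra term closes off one class of degenerate "winners" evolution would otherwise breed.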

C. Mutation and Crossover

Mutation Examples

  • replace a subtree
  • modify a constant
  • change an indicator lookback
  • change a nonlinear transform
  • inject a random feature

Crossover Example

Take two expression trees and exchange subtrees.

This allows strategies to inherit each other's “genes.”
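Subtree crossover can be sketched on a minimal node type (the same shape as the `Node` class in the full listing later in the article; the uniform random choice of crossover points is an illustrative assumption):

```python
import copy
import random

class Node:
    def __init__(self, op=None, left=None, right=None, terminal=None):
        self.op = op
        self.left = left
        self.right = right
        self.terminal = terminal

def collect_nodes(node):
    """Flatten a tree into a list of its nodes (pre-order)."""
    if node.terminal is not None:
        return [node]
    return [node] + collect_nodes(node.left) + collect_nodes(node.right)

def crossover(parent_a, parent_b):
    """Return a child: a copy of parent_a with one randomly chosen
    subtree replaced by a random subtree copied from parent_b."""
    child = copy.deepcopy(parent_a)
    donor = copy.deepcopy(random.choice(collect_nodes(parent_b)))
    target = random.choice(collect_nodes(child))
    # Overwrite the target node in place with the donor's contents,
    # leaving both parents untouched
    target.op, target.left, target.right, target.terminal = (
        donor.op, donor.left, donor.right, donor.terminal)
    return child
```

Deep-copying before surgery matters: without it, crossover silently corrupts the parents it breeds from.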

D. In-Sample / Out-of-Sample Evolution Cycle

My pipeline uses:

  • 70% evolution training
  • 30% evaluation
  • walk-forward rolling windows

Anything that fails OOS testing is eliminated.
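The rolling 70/30 split can be sketched as a window generator (window and step sizes here are arbitrary illustrations):

```python
def walk_forward_windows(n_samples, window=1000, step=500, train_frac=0.7):
    """Yield (train_slice, test_slice) index pairs over rolling windows.

    Each window is split train_frac / (1 - train_frac) into
    evolution-training and out-of-sample evaluation, then rolled
    forward by `step` samples.
    """
    start = 0
    while start + window <= n_samples:
        split = start + int(window * train_frac)
        yield slice(start, split), slice(split, start + window)
        start += step
```

Evolution runs on each train slice; only genomes that hold up across the corresponding test slices survive.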

E. Survivorship: Hall of Fame

The top performers each generation get saved permanently.
This prevents the system from losing rare, good mutations.
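A hall of fame is just a capped, score-sorted archive. A minimal sketch (the capacity of 25 is an arbitrary choice; the insertion counter exists only so ties never fall through to comparing genome objects):

```python
import heapq

class HallOfFame:
    """Keep the `capacity` best (score, genome) pairs ever seen."""

    def __init__(self, capacity=25):
        self.capacity = capacity
        self._heap = []       # min-heap: worst survivor sits at the top
        self._counter = 0     # tie-breaker; genomes are never compared

    def update(self, score, genome):
        entry = (score, self._counter, genome)
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        elif score > self._heap[0][0]:
            heapq.heapreplace(self._heap, entry)

    def best(self):
        """Return archived genomes sorted best-first."""
        return [g for _, _, g in sorted(self._heap, reverse=True)]
```

Calling `update` once per evaluated genome is enough; rare good mutations stay archived even if their lineage later dies out.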

3. A Real Example: Discovering a Spread-Imbalance Edge I Never Thought About

Here’s a real example from one of my experiments (US equities, top-100 names, nanosecond-timestamped L2 data):

Inputs included:

  • order book imbalance
  • micro price
  • spread
  • short-term autocorrelation
  • volatility bucket

The evolved strategy (simplified) ended up being:

Buy if:

  • spread < 2 ticks
  • imbalance rises (but only when prior returns were negative AND volatility was compressing)

Sell if the opposite is true.
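In code, the long side of that rule looks roughly like this (the feature names, the 2-tick threshold, and the compression test are simplified reconstructions for illustration, not the exact evolved expression):

```python
def long_signal(bar, prev):
    """Simplified reconstruction of the evolved entry condition.

    `bar` and `prev` are dicts of current and previous-bar features;
    all names and thresholds here are illustrative.
    """
    tight_spread = bar["spread_ticks"] < 2
    imbalance_rising = bar["imbalance"] > prev["imbalance"]
    prior_down = prev["ret_short"] < 0
    vol_compressing = bar["volatility"] < prev["volatility"]
    return tight_spread and imbalance_rising and prior_down and vol_compressing
```

The two conditioning terms (prior_down, vol_compressing) are exactly the parts I would never have bolted on by hand.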

This was surprising because:

  • I would have expected rising imbalance to be bullish everywhere
  • But the strategy only worked after a small downward return burst
  • And only in low-volatility compression windows

I would have never discovered this manually.

Evolution found it in about 800 generations.

Sharpe (OOS): 1.7
Holding time: 1–5 seconds
Correlation with known signals: Low

That’s real alpha.

4. Another Example: A Regime-Switching Mean Reversion Pattern

Across futures markets, evolutionary search found a pattern I dismissed for years:

Short-term mean reversion only works during medium volatility and medium trend strength.

If volatility is too low → fake signal
If volatility is too high → noise
If trend is strong → mean reversion gets steamrolled

The evolved rule (simplified):

if zscore(ret,5) < -1 AND
   volatility in [20th, 60th percentile] AND
   abs(EMA(50)-price)/price < 0.3%:
       go long

I had the pieces, but I never combined them this way.
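A runnable version of that condition over a raw price series might look like this. The lookbacks, the percentile band, and the 0.3% distance come from the rule above; everything else (the z-score definition, the EMA recursion) is an illustrative assumption:

```python
import numpy as np

def regime_mr_long(prices, vol_window=20, z_window=5):
    """Check the simplified regime-filtered mean-reversion long on the
    latest bar. An illustrative reconstruction, not the evolved rule."""
    prices = np.asarray(prices, dtype=float)
    rets = np.diff(prices) / prices[:-1]

    # zscore(ret, 5): latest return against its 5-bar window
    window = rets[-z_window:]
    z = (rets[-1] - window.mean()) / (window.std() + 1e-9)

    # current volatility must sit inside its own 20th-60th percentile band
    vols = np.array([rets[i - vol_window:i].std()
                     for i in range(vol_window, len(rets) + 1)])
    in_band = np.percentile(vols, 20) <= vols[-1] <= np.percentile(vols, 60)

    # price must be within 0.3% of its 50-bar EMA
    alpha = 2.0 / (50 + 1)
    ema = prices[0]
    for p in prices[1:]:
        ema = alpha * p + (1 - alpha) * ema
    near_ema = abs(ema - prices[-1]) / prices[-1] < 0.003

    return bool(z < -1 and in_band and near_ema)
```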

5. The Language and Code: Why I Used Python

Python is fast to iterate with and easy to parallelize using Ray, Dask, or multiprocessing.

Below is a real, simplified code example of a genome + mutation + fitness loop.

Code Example: Minimal Evolutionary Strategy Search Engine

(Truly runnable — but simplified so it fits in an article.)

import numpy as np
import random

# ------------------------------
# 1. Building blocks
# ------------------------------
TERM_FUNCTIONS = [
    lambda x: x['ret_1'],
    lambda x: x['ret_5'],
    lambda x: x['spread'],
    lambda x: x['imbalance'],
    lambda x: x['volatility']
]

OPERATORS = [
    ('+', lambda a,b: a+b),
    ('-', lambda a,b: a-b),
    ('*', lambda a,b: a*b),
    ('/', lambda a,b: a/(b+1e-6)),
    ('>', lambda a,b: 1.0 if a>b else 0.0),
    ('<', lambda a,b: 1.0 if a<b else 0.0),
]

# ------------------------------
# 2. Expression tree
# ------------------------------
class Node:
    def __init__(self, op=None, left=None, right=None, terminal=None):
        self.op = op
        self.left = left
        self.right = right
        self.terminal = terminal

    def evaluate(self, x):
        if self.terminal is not None:
            return self.terminal(x)
        a = self.left.evaluate(x)
        b = self.right.evaluate(x)
        return self.op[1](a, b)

def random_tree(depth=3):
    if depth == 0:
        return Node(terminal=random.choice(TERM_FUNCTIONS))
    if random.random() < 0.3:
        return Node(terminal=random.choice(TERM_FUNCTIONS))
    op = random.choice(OPERATORS)
    return Node(op=op,
                left=random_tree(depth-1),
                right=random_tree(depth-1))

# ------------------------------
# 3. Strategy logic
# ------------------------------
def backtest(tree, data):
    pnl = 0
    for i in range(len(data)-1):
        signal = tree.evaluate(data[i])
        ret = data[i+1]['ret_1']
        pnl += signal * ret
    return pnl

# ------------------------------
# 4. Fitness
# ------------------------------
def fitness(tree, data):
    pnl = backtest(tree, data)
    return pnl - 0.01 * tree_size(tree)  # simplicity penalty

def tree_size(node):
    if node.terminal is not None:
        return 1
    return 1 + tree_size(node.left) + tree_size(node.right)

# ------------------------------
# 5. Mutation
# ------------------------------
def mutate(node, prob=0.1):
    # with probability `prob`, replace this entire subtree with a new one
    if random.random() < prob:
        return random_tree()
    if node.terminal is not None:
        return node
    # rebuild internal nodes instead of editing them in place, so that
    # elite parents are never corrupted while producing offspring
    return Node(op=node.op,
                left=mutate(node.left, prob),
                right=mutate(node.right, prob))

# ------------------------------
# 6. Evolution loop
# ------------------------------
def evolve(data, population=50, generations=200):
    pop = [random_tree() for _ in range(population)]
    for gen in range(generations):
        # sort on fitness only; on ties, plain tuple sorting would try
        # to compare Node objects and raise TypeError
        scored = sorted(((fitness(t, data), t) for t in pop),
                        key=lambda s: s[0], reverse=True)
        pop = [t for _, t in scored[:10]]  # elite survivors
        # refill the population with mutated offspring
        while len(pop) < population:
            parent = random.choice(pop)
            pop.append(mutate(parent))
        if gen % 10 == 0:
            print("Gen", gen, "Best fitness:", scored[0][0])
    # re-score so the returned tree reflects the final population
    scored = sorted(((fitness(t, data), t) for t in pop),
                    key=lambda s: s[0], reverse=True)
    return scored[0][1]

This example:

  • builds expression trees
  • evaluates them
  • mutates weak candidates
  • keeps best performers
  • repeats for generations

With real historical data, this engine will discover actual strategies.
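To watch the engine run end to end without historical data, you can feed it synthetic rows shaped like the terminals above, with one edge deliberately planted. This generator is my addition for illustration; all distributions and the 0.0003 signal coefficient are arbitrary:

```python
import numpy as np

def synthetic_rows(n=2000, seed=0):
    """Rows shaped like the engine's terminal inputs.

    Purely synthetic, with one planted edge: each row's imbalance
    slightly predicts the NEXT row's one-bar return, so evolution
    has something real to discover.
    """
    rng = np.random.default_rng(seed)
    rows = []
    prev_imbalance = 0.0
    for _ in range(n):
        imbalance = rng.normal()
        rows.append({
            "ret_1": 0.0003 * prev_imbalance + rng.normal(0.0, 0.001),
            "ret_5": rng.normal(0.0, 0.002),
            "spread": abs(rng.normal(1.0, 0.2)),
            "imbalance": imbalance,
            "volatility": abs(rng.normal(0.5, 0.1)),
        })
        prev_imbalance = imbalance
    return rows
```

Running `evolve(synthetic_rows(), population=50, generations=50)` should then tend to surface expressions that lean on `imbalance`, which is a useful sanity check before pointing the engine at real data.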

6. What I Learned Building This System

After hundreds of experiments across multiple markets, here’s what’s true:

A. Evolution finds edges humans won’t.

Some high-performing strategies were embarrassing:

  • they “shouldn’t work”
  • they made no intuitive sense
  • they combined features I never thought to combine

And yet, they worked.

B. Evolution tries everything, including exploitative hacks

Without penalties and constraints, it will:

  • overfit to specific timestamps
  • trade extremely rarely but profitably
  • take huge leverage in rare regimes
  • find oddities in data alignment
  • exploit lookahead bugs

90% of my early results were garbage.
Then I redesigned constraints → results became robust.

C. You must evolve on latency-aware data

When I added:

  • realistic fill assumptions
  • queue position delays
  • microstructure latency curves

…half of the “best strategies” died immediately.

The survivors were true edges.
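Two of those effects are cheap to approximate even before a full fill simulator exists. Here is a sketch of a latency-aware variant of the article's `backtest`: the one-bar execution delay and the flat per-trade cost are illustrative stand-ins for real fill and queue models:

```python
def backtest_latency_aware(tree, data, delay=1, cost_per_trade=0.0001):
    """Like the simple backtest, but signals act `delay` bars late and
    every change in position pays a flat cost. Both knobs are crude
    placeholders for a proper fill model."""
    pnl = 0.0
    prev_signal = 0.0
    for i in range(len(data) - 1 - delay):
        signal = tree.evaluate(data[i])
        # the fill lands `delay` bars later; we capture the return after it
        ret = data[i + delay + 1]['ret_1']
        pnl += signal * ret - cost_per_trade * abs(signal - prev_signal)
        prev_signal = signal
    return pnl
```

Even this crude version reorders the leaderboard: genomes that depended on reacting within the same bar stop scoring.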

D. Evolution scales with compute

My best edges came from:

  • 300,000 evaluated genomes
  • 40-core CPU
  • generation sizes of 200–500

More compute → more exploration → faster convergence → better strategies.

E. The real power is not finding strategies — it’s finding building blocks

After months of evolutionary experiments, I had:

  • dozens of robust sub-signals
  • a library of repeated patterns
  • recurring conditional regimes
  • learned “don’ts” (e.g., imbalance is useless during high vol)

This becomes a foundation for future research.

Evolution doesn’t just give you solutions —
it teaches you the structure of the market.

7. The Endgame: Evolving a Live-Trading Auto-Research System

Here is where this entire project leads:

A fully automated researcher:

  • wakes up
  • loads new data
  • evolves new strategies
  • evaluates them out-of-sample
  • discards the weak
  • promotes the strong
  • deploys to live simulation
  • reports results
  • repeats daily

This is how you create a self-improving trading algorithm.

A system that:

  • explores
  • adapts
  • mutates
  • tests
  • survives
  • evolves

While you sleep.

8. How I Plan to Extend This into a Real Trading Project

This future live system can grow into:

(1) Multi-agent coevolution

Strategies evolved against each other.

(2) Market-making population dynamics

Agents competing for queue position and spread.

(3) Evolution + RL hybrids

Evolution discovers signal structures → RL optimizes execution paths.

(4) Distributed GPU evolution

Massively parallel search.

(5) Continuous adaptation

Daily evolution reacting to volatility regimes.

This becomes a real, institutional-grade research engine.

9. Closing Thoughts: Evolution Isn’t a Shortcut — It’s a New Mindset

Evolutionary search is not magic.
It’s not a button you press to get rich.

But it is the most powerful alpha discovery tool I’ve worked with.

It forces you to:

  • think probabilistically
  • encode domain knowledge
  • define constraints
  • test robustness
  • build infrastructure that learns

And most importantly:

It reveals patterns you were too biased to see.

The first time I watched a strategy evolve into a stable, profitable, completely unintuitive microstructure edge…

…I stopped thinking of myself as the “designer of strategies.”

I became the “designer of systems that discover strategies.”

And that’s the real breakthrough.