02. Double Slope Trend Persistence and Recovery Study (S&P 500)

January 2026


This note documents an event-based quantitative study designed to analyze trend persistence and recovery behavior after a combined slope filter is triggered.

The objective is not signal generation, but understanding:

  • how long slope-defined trends tend to persist
  • how deep drawdowns run inside positive regimes
  • how often and how fast price recovers after adverse excursions

This information is directly used to calibrate screening filters and option management horizons.


Study architecture overview

The script uses adjusted close prices.

Key characteristics:

  • event-based segmentation
  • rolling regression slopes
  • forward drawdown and recovery tracking
  • conditional probability estimation

Parameter layer

SLOPE_FAST = 100
SLOPE_SLOW = 150

DD_LEVELS = (-0.10, -0.15, -0.20)
DTE_WINDOWS = (11, 18, 25)

Meaning

Two slope horizons are used:

  • fast slope: short-to-medium trend component
  • slow slope: structural trend component

A regime is defined only when both slopes are positive, enforcing a double confirmation.

Drawdown levels represent adverse excursions of interest:

  • −10% (moderate stress)
  • −15% (high stress)
  • −20% (severe stress)

DTE windows reflect typical option maturities used operationally.

This allows the study to map price behavior directly onto option time horizons.


Rolling slope computation

def rolling_slope(close, window, winsor_pct):

The slope is computed using linear regression on rolling windows:

slope, _ = np.polyfit(x, y, 1)

Before regression, prices are winsorized:

winsorize(arr, WINSOR_PCT)

Why winsorization is applied

Rolling regressions are sensitive to:

  • data spikes
  • isolated gaps
  • corporate action noise

Winsorization clips extreme values and stabilizes slope estimation without distorting overall trend direction.
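The function body is not reproduced above. Combining the signature, the winsorization step, and the `np.polyfit` call, a minimal sketch might look like the following (the quantile-based clipping rule inside `winsorize` is an assumption, not confirmed by the source):

```python
import numpy as np

def winsorize(arr, pct):
    # Clip values outside the [pct, 1 - pct] quantile band
    # (the exact clipping rule is an assumption).
    lo, hi = np.quantile(arr, [pct, 1.0 - pct])
    return np.clip(arr, lo, hi)

def rolling_slope(close, window, winsor_pct):
    # Regression slope of price vs. time over each trailing window;
    # NaN until a full window is available.
    close = np.asarray(close, dtype=float)
    slopes = np.full(len(close), np.nan)
    x = np.arange(window)
    for i in range(window - 1, len(close)):
        y = winsorize(close[i - window + 1: i + 1], winsor_pct)
        slope, _ = np.polyfit(x, y, 1)
        slopes[i] = slope
    return slopes
```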


Regime definition logic

The combined slope regime is defined inside analyze_post_trend().

Entry condition

cond_now = (sf[i] > 0) and (ss[i] > 0)
cond_prev = (sf[i - 1] > 0) and (ss[i - 1] > 0)

if not in_regime and cond_now and not cond_prev:
    entry_idx = i

A new regime begins only when:

  • both slopes turn positive
  • the previous bar was not already in regime

This enforces event-based segmentation, not continuous labeling.


Exit condition

elif in_regime and (not cond_now or i == len(prices) - 1):

The regime ends when:

  • either slope becomes non-positive
  • or the price series ends

This creates self-contained trend episodes.
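Putting the entry and exit conditions together, the segmentation loop can be sketched as follows (`sf` and `ss` are the fast and slow slope arrays from the snippets above; collecting episodes into a list of index pairs is an assumption about the surrounding code):

```python
import numpy as np

def find_regimes(prices, sf, ss):
    # Scan the series and return (entry_idx, exit_idx) pairs,
    # one per self-contained trend episode.
    episodes = []
    in_regime = False
    entry_idx = None
    for i in range(1, len(prices)):
        cond_now = (sf[i] > 0) and (ss[i] > 0)
        cond_prev = (sf[i - 1] > 0) and (ss[i - 1] > 0)
        if not in_regime and cond_now and not cond_prev:
            in_regime = True
            entry_idx = i
        elif in_regime and (not cond_now or i == len(prices) - 1):
            in_regime = False
            episodes.append((entry_idx, i))
    return episodes
```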


Price segment extraction (event window)

After a valid trend entry is detected (combined slope condition), the analysis no longer operates on the full price series.
Instead, it isolates a contiguous price segment corresponding to a single trend episode.

Code reference:

segment = prices[entry_idx: exit_idx + 1]

What is segment?

segment is a time-ordered slice of the price series representing the life cycle of one trend regime:

  • start: first day when both slope filters turn positive
  • end: first day when at least one slope turns non-positive (exit condition)

In practice:

  • each segment corresponds to one complete trend episode
  • all performance statistics are computed within this bounded window

This transforms the analysis from:

continuous time series analysis

into:

event-based regime analysis

which is statistically cleaner for conditional behavior studies.


Why segment-level analysis matters

If we computed returns and drawdowns on the full price history, the results would mix:

  • trend regimes
  • non-trend regimes
  • transitional noise

By isolating segment, the study explicitly conditions on:

price behavior after a confirmed combined slope signal

This aligns the statistics with the operational logic of the screening filter.


Structure of the extracted segment

Suppose the following entry and exit indices:

entry_idx = 100
exit_idx  = 130

Then:

segment = prices[100:131]

produces a price vector like:

[100.0, 102.3, 101.8, 104.5, ..., 108.2]

This vector represents:

  • day 0 → entry price
  • day N → last day of regime

All metrics are computed relative to this local window.


Entry normalization reference

Immediately after extraction, the first element of the segment is used as the reference anchor:

entry_price = segment[0]

All subsequent calculations use this value as baseline:

  • cumulative returns
  • drawdowns
  • recovery conditions

This guarantees that each episode is:

  • internally normalized
  • independent from absolute price level
  • comparable across tickers

Segment length constraint

Before processing the segment, a minimal length check is applied:

if len(segment) < 2:
    continue

This avoids:

  • single-bar pseudo-events
  • numerical artifacts
  • meaningless drawdown statistics

Only episodes with at least two observations are considered valid.


Conceptual summary

In this study:

  • the time series is the raw data
  • the segment is the analytical unit

Each segment represents:

one realized market regime instance after the combined slope filter triggers

All downstream statistics (duration, drawdown depth, recovery timing) are computed per segment, not per day.

This event-based framing is essential for extracting behavior that is relevant to systematic screening logic.


Episode-level metrics

Once a regime is detected, the script computes:

returns = segment / entry_price - 1.0
drawdown = segment / np.maximum.accumulate(segment) - 1.0

And stores:

"duration_days"
"max_return_pct"
"max_drawdown_pct"

These describe:

  • trend lifespan
  • upside potential
  • internal risk

Each regime is treated as an independent event.
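On a small made-up segment, the three stored metrics follow directly from the two formulas above (treating duration as the number of bars in the episode is one possible convention, not confirmed by the source):

```python
import numpy as np

segment = np.array([100.0, 104.0, 110.0, 99.0, 103.0])
entry_price = segment[0]

returns = segment / entry_price - 1.0
drawdown = segment / np.maximum.accumulate(segment) - 1.0

duration_days = len(segment)              # bars in the episode (one possible convention)
max_return_pct = returns.max() * 100      # +10.0: best excursion above entry
max_drawdown_pct = drawdown.min() * 100   # -10.0: worst peak-to-trough inside the episode
```

Note that the maximum drawdown is measured from the running peak, so it can exceed the loss measured from the entry price.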


Drawdown timing and recovery tracking

For each drawdown threshold:

for lvl in DD_LEVELS:

The script detects the first occurrence:

hit = np.where(drawdown <= lvl)[0]

And records:

  • days to drawdown
  • whether price recovers to break-even
  • days to recovery (from entry)

Recovery is defined strictly:

returns[first_hit:] >= 0

Meaning:

price must return to or exceed the original entry price after hitting the drawdown.

This avoids counting partial rebounds.
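A sketch of the per-threshold loop with the strict break-even rule spelled out (the output structure and key names are assumptions for illustration):

```python
import numpy as np

def drawdown_recovery_stats(segment, dd_levels=(-0.10, -0.15, -0.20)):
    # For each threshold: first hit day, whether the price later returns
    # to break-even, and the recovery day counted from entry.
    segment = np.asarray(segment, dtype=float)
    returns = segment / segment[0] - 1.0
    drawdown = segment / np.maximum.accumulate(segment) - 1.0
    out = {}
    for lvl in dd_levels:
        hit = np.where(drawdown <= lvl)[0]
        if hit.size == 0:
            out[lvl] = {"days_to_dd": None, "recovered": None, "days_to_recovery": None}
            continue
        first_hit = hit[0]
        rec = np.where(returns[first_hit:] >= 0)[0]  # break-even only after the hit
        out[lvl] = {
            "days_to_dd": int(first_hit),
            "recovered": rec.size > 0,
            "days_to_recovery": int(first_hit + rec[0]) if rec.size else None,
        }
    return out
```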


Event aggregation

Each episode generates a structured dictionary:

row = {
    "start_date": ...,
    "end_date": ...,
    "duration_days": ...,
    "max_return_pct": ...,
    "max_drawdown_pct": ...,
    ...
}

All episodes across all tickers are concatenated into a single DataFrame.

This produces a dataset of trend regime realizations, not raw daily observations.
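A minimal sketch of the aggregation step (the rows, dates, and column set are illustrative, not the script's full schema):

```python
import pandas as pd

# Illustrative episode rows; the real script stores more fields per episode.
rows = [
    {"ticker": "SPY", "start_date": "2023-01-10", "end_date": "2023-02-24",
     "duration_days": 31, "max_return_pct": 6.2, "max_drawdown_pct": -3.1},
    {"ticker": "SPY", "start_date": "2023-04-03", "end_date": "2023-04-19",
     "duration_days": 12, "max_return_pct": 1.8, "max_drawdown_pct": -4.5},
]
stats = pd.DataFrame(rows)  # one row = one trend regime realization
```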


Statistical summaries

The final block computes:

Distribution statistics

stats.describe()
stats.quantile()

Providing:

  • median trend duration
  • median max return
  • median max drawdown

These are robust central estimates useful for operational calibration.
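On a toy episode table (values illustrative), the two calls yield column-wise summaries and medians:

```python
import pandas as pd

# Toy episode table; values are illustrative.
stats = pd.DataFrame({
    "duration_days": [31, 12, 45, 22],
    "max_return_pct": [6.2, 1.8, 9.4, 3.0],
    "max_drawdown_pct": [-3.1, -4.5, -7.9, -2.2],
})
summary = stats.describe()     # count/mean/std/min/quartiles/max per column
medians = stats.quantile(0.5)  # robust central estimate per column
```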


Conditional recovery probabilities

prob = valid.mean()

This computes:

P(recovery | drawdown level reached)

For each stress level:

  • −10%
  • −15%
  • −20%

This answers:

how often trends survive adverse excursions and recover.
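The pattern behind `prob = valid.mean()` reduces to averaging a boolean vector with one entry per episode that reached the threshold (the outcomes below are made up):

```python
import numpy as np

# One boolean per episode that reached the -10% drawdown: did it recover?
valid = np.array([True, False, True, True, False])
prob = valid.mean()  # P(recovery | drawdown level reached) -> 3/5 = 0.6
```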


Recovery time vs option horizons

P(recovery within dte days | DD hit & recovery)

This directly maps price recovery behavior to:

  • short DTE (11 days)
  • medium DTE (18 days)
  • longer DTE (25 days)

This is operationally relevant for:

  • rolling decisions
  • repair strategies
  • strike management
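Under the same averaging pattern, the within-horizon probability is the mean of a comparison against each DTE window (the recovery times below are made up):

```python
import numpy as np

# Days from entry to break-even, for episodes that hit a DD level and recovered.
days_to_recovery = np.array([7, 15, 22, 30, 9])
for dte in (11, 18, 25):
    prob = (days_to_recovery <= dte).mean()
    print(f"P(recovery within {dte} days | DD hit & recovery) = {prob:.2f}")
```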

Drawdown timing probability

P(DD ≤ −X% within Y days)

This measures:

how quickly adverse moves typically occur inside positive slope regimes.

This is critical for risk control and hedging timing.


Methodological advantages

This approach differs from standard backtests:

  • no compounding assumptions
  • no portfolio aggregation
  • no return optimization

It focuses on structural behavior of trends:

  • persistence
  • stress behavior
  • recovery dynamics

Which is exactly what screening filters are meant to control.


Limitations

Important caveats:

  • episodes are not independent across time
  • slope windows impose smoothing delay
  • no volatility regime conditioning
  • no sector stratification
  • survivorship bias from static ticker universe

This is intentional: the goal is not academic perfection but operational signal calibration.


Closing note

This study is part of a broader effort to formalize screening logic using empirical behavior rather than heuristic assumptions.

Instead of asking:

“Does this filter produce alpha?”

The question here is:

“What kind of trend behavior am I selecting into when this filter triggers?”

Understanding persistence, drawdowns and recovery speed is fundamental when building systematic option-selling workflows.

AIQ Notes

Independent Trader · AI-assisted Coding & Systematic Analysis
G. D. P.