Apex Wallet - Bitcoin Halving Cycle & Profit Projection
Overview
The Apex Wallet Bitcoin Halving Cycle Profit is a strategic macro-analysis tool designed for Bitcoin investors and long-term holders. It provides a visual framework of Bitcoin's 4-year cycles by identifying past halving dates and projecting future ones automatically. The script highlights key accumulation and profit-taking windows based on historical cycle performance.
Dynamic Cycle Intelligence
Halving Milestones: Automatically detects and marks all major halving events (2012, 2016, 2020, 2024) with precise timestamps.
Predictive Projections: Using an estimated 1,460-day cycle, the script projects up to 30 future halving events to help plan long-term investment horizons (a minimal sketch of this projection follows this list).
Timeframe Optimization: Built specifically for Weekly (W) and Monthly (M) charts to provide a clean, high-level perspective of market structure.
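As a rough illustration of how such a projection can be drawn, the minimal Pine Script sketch below extends a fixed 1,460-day cycle forward from the April 2024 halving; the published indicator detects the halving anchors automatically rather than hard-coding a date, so treat this only as a sketch of the idea.
//@version=6
indicator("Halving projection sketch", overlay=true, max_lines_count=50)
// Assumption: the 2024 halving date below is hard-coded for illustration only.
lastHalving = timestamp("UTC", 2024, 4, 20, 0, 0)
cycleMs = 1460 * 24 * 60 * 60 * 1000            // estimated 1,460-day cycle in milliseconds
if barstate.islastconfirmedhistory
    for i = 1 to 30                             // project up to 30 future cycles
        t = lastHalving + i * cycleMs
        line.new(t, low, t, high, xloc = xloc.bar_time, extend = extend.both, color = color.orange, style = line.style_dashed)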
Key Strategy Visuals
Profit Windows: Visualizes "Start" and "End" profit zones with automated vertical lines and color-coded labels based on user-defined offsets from the halving.
DCA Chain Signals: Identifies strategic Dollar Cost Averaging (DCA) points throughout the cycle to assist in disciplined accumulation.
Heatmap Shading: Features dynamic background shading that intensifies as the cycle progresses toward historical peak performance periods.
How to Use:
Switch to a Weekly or Monthly Bitcoin chart.
Use the Green Labels (Profit START) to identify early cycle strength.
Monitor the Red Labels (Profit END) for historical cycle exhaustion zones.
Full Dashboard V16 - Final Fix M15 & PA Signals
Table (Multi timeframe)
- Show trend
- Show RSI
- Show Stochastic
- Show previous candle (default: hidden)
- Show current candle (default: hidden)
- Show the time when the candlestick will close
- Show/hide is configurable for every column
Graph
- Show RSI 89/21
Signal
- Show signal with TP/SL (default: hidden)
Bug fixes
10 Youtube Opening Range Strategies + Backtest
1. Quick Flip Scalper
A strategy centered on fading or following the initial move relative to the Opening Range (OR); a minimal Pine sketch of the reversal-mode rules follows the rule list below.
LONG Rules:
Reversal Mode: If the Opening Range is Bearish (Red), enter Long when price drops below the Opening Range Low (ORL).
Continuity Mode: If the Opening Range is Bullish (Green), enter Long when price drops below the Opening Range Low (ORL) (Buying the deep pullback/trap).
SHORT Rules:
Reversal Mode: If the Opening Range is Bullish (Green), enter Short when price breaks above the Opening Range High (ORH).
Continuity Mode: If the Opening Range is Bearish (Red), enter Short when price breaks above the Opening Range High (ORH) (Selling the deep pullback/trap).
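The sketch below is one hedged interpretation of the reversal-mode rules, assuming the Opening Range is simply the first bar of the regular session; the published backtest may define the range, session, and entry timing differently.
//@version=6
indicator("Quick Flip Scalper (sketch)", overlay=true)
// Capture the first bar of the regular session as the Opening Range (assumption).
var float orHigh = na
var float orLow = na
var float orOpen = na
var float orClose = na
if session.isfirstbar_regular
    orHigh := high
    orLow := low
    orOpen := open
    orClose := close
orBullish = orClose > orOpen
orBearish = orClose < orOpen
// Reversal mode: fade the OR color when price escapes the range.
longReversal = orBearish and close < orLow
shortReversal = orBullish and close > orHigh
plotshape(longReversal, style = shape.triangleup, location = location.belowbar, color = color.green)
plotshape(shortReversal, style = shape.triangledown, location = location.abovebar, color = color.red)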
2. First Candle Scalper
Identical to the Quick Flip Scalper but restricts entries to the very first retest only.
LONG Rules:
Same as Quick Flip Long, but only triggers once per session.
SHORT Rules:
Same as Quick Flip Short, but only triggers once per session.
3. Smart Money Trap (SMT)
Identifies a "fakeout" breakout followed immediately by a reversal candlestick pattern.
LONG Rules:
Condition: The previous candle low was below the ORL, but the candle closed back inside (above ORL).
Trigger: Must have a Bullish Engulfing or Bullish Rejection pattern closing above the ORL.
SHORT Rules:
Condition: The previous candle high was above the ORH, but the candle closed back inside (below ORH).
Trigger: Must have a Bearish Engulfing or Bearish Rejection pattern closing below the ORH.
4. Trident Pattern (TG Capital)
A London-session exclusive strategy requiring a Fair Value Gap (FVG) and a Doji confirmation.
LONG Rules:
Filter: Price is Above the 200 EMA (if enabled).
Setup: A Bullish FVG forms.
Confirmation: A Doji candle wicks down into the 50% level of the FVG.
Trigger: Enter on the next candle close.
SHORT Rules:
Filter: Price is Below the 200 EMA (if enabled).
Setup: A Bearish FVG forms.
Confirmation: A Doji candle wicks up into the 50% level of the FVG.
Trigger: Enter on the next candle close.
5. OTE Framework (MBB Trader)
Simulates an Optimal Trade Entry by combining a Liquidity Sweep with a Market Structure Shift (SMR).
LONG Rules:
Sweep: Price drops below the lowest low of the last 20 candles.
Structure: A Bullish SMR forms (Low → High → Lower Low → Higher High).
SHORT Rules:
Sweep: Price breaks above the highest high of the last 20 candles.
Structure: A Bearish SMR forms (High → Low → Higher High → Lower Low).
6. Liquidity Trap (Marco Trades)
A contrarian strategy that buys/sells purely on sweeps of major structural levels.
LONG Rules:
Trigger: Price sweeps (drops below) the lowest low of the last 50 candles.
SHORT Rules:
Trigger: Price sweeps (breaks above) the highest high of the last 50 candles.
7. Trojan Horse (Trader Mayne)
Uses Trend EMAs (50 & 200) to identify direction, then enters on a Lower Timeframe Breaker; a minimal long-side sketch follows the rules below.
LONG Rules:
Trend: 50 EMA > 200 EMA (Uptrend).
Trigger: Price sweeps a recent 10-candle low, then immediately breaks a recent 5-candle high.
SHORT Rules:
Trend: 50 EMA < 200 EMA (Downtrend).
Trigger: Price sweeps a recent 10-candle high, then immediately breaks a recent 5-candle low.
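A hedged sketch of the long side of this logic is shown below; "immediately" is interpreted loosely as the high break occurring within a few bars of the low sweep, which may be stricter or looser than the published version.
//@version=6
indicator("Trojan Horse (sketch)", overlay=true)
ema50 = ta.ema(close, 50)
ema200 = ta.ema(close, 200)
uptrend = ema50 > ema200
sweptLow = low < ta.lowest(low, 10)[1]            // sweep of the prior 10-bar low
brokeHigh = close > ta.highest(high, 5)[1]        // break of the prior 5-bar high
longSignal = uptrend and nz(ta.barssince(sweptLow), 9999) <= 3 and brokeHigh
plotshape(longSignal, style = shape.labelup, location = location.belowbar, color = color.green, text = "L")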
8. Simplified SMT (9:30 Range)
Focuses on the 9:30 AM range. Waits for a breakout and a confirmed failure to sustain it.
LONG Rules:
Context: Price previously broke above the ORH.
Trigger: Price returns to the ORH (Retest) with a Bullish Engulfing/Rejection pattern.
SHORT Rules:
Context: Price previously broke below the ORL.
Trigger: Price returns to the ORL (Retest) with a Bearish Engulfing/Rejection pattern.
9. 9:30 One-Candle (Scarface)
Uses the high/low of the single 9:30 candle as the range.
LONG Rules:
Setup: Price closes above the 9:30 High.
Trigger: Price pulls back and touches/dips into the 9:30 High (Retest).
SHORT Rules:
Setup: Price closes below the 9:30 Low.
Trigger: Price pulls back and touches/wicks into the 9:30 Low (Retest).
10. London Breakout (Joovier)
Based on the 3 AM - 9 AM EST box; a minimal sketch of the body-outside-box check follows the rules below.
LONG Rules:
Trigger: A candle's Body (Open and Close) forms completely above the Box High after the session opens.
SHORT Rules:
Trigger: A candle's Body (Open and Close) forms completely below the Box Low after the session opens.
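A minimal sketch of the body-outside-the-box check, assuming a 3 AM to 9 AM New York-time session; the published box timing and breakout handling may differ.
//@version=6
indicator("London Breakout (sketch)", overlay=true)
inBox = not na(time(timeframe.period, "0300-0900", "America/New_York"))   // assumed session window
var float boxHigh = na
var float boxLow = na
if inBox and not inBox[1]
    boxHigh := high
    boxLow := low
else if inBox
    boxHigh := math.max(boxHigh, high)
    boxLow := math.min(boxLow, low)
bodyLow = math.min(open, close)
bodyHigh = math.max(open, close)
longBreak = not inBox and bodyLow > boxHigh       // candle body entirely above the box
shortBreak = not inBox and bodyHigh < boxLow      // candle body entirely below the box
plotshape(longBreak, style = shape.triangleup, location = location.belowbar, color = color.green)
plotshape(shortBreak, style = shape.triangledown, location = location.abovebar, color = color.red)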
⚠️ DISCLAIMER & LIMITATION OF LIABILITY
1. NO AFFILIATION / INDEPENDENT PROJECT This script is an independent coding project created solely for testing, research, and entertainment purposes. The creator of this indicator is not associated, affiliated, endorsed by, or in any way connected to the strategy authors or influencers mentioned within the tool (including but not limited to TG Capital, MBB Trader, Marco Trades, Trader Mayne, Scarface, or Joovier).
The strategy names are used strictly for identification purposes to credit the original concept creators.
This code represents an independent interpretation of public trading concepts. It may not reflect the exact, proprietary, or private methods taught by these individuals.
This is not an official product from any of the aforementioned parties.
2. FOR EDUCATIONAL PURPOSES ONLY This indicator is strictly for educational and informational purposes. It is not a signal service and does not constitute investment, financial, or trading advice. The buy/sell labels generated by this script are merely visual representations of specific code logic and should not be interpreted as instructions to execute trades.
3. EXCLUSION OF LIABILITY By using this script, you explicitly agree that:
The creator assumes no responsibility or liability for any direct, indirect, consequential, or incidental losses or damages resulting from the use of this tool.
You engage in trading entirely at your own risk.
You release the creator from any legal responsibility regarding your trading activities or financial results.
4. HYPOTHETICAL PERFORMANCE The statistics displayed on the "Dashboard" (Win Rate, P&L, etc.) are hypothetical and based on historical backtesting data.
Past performance is not indicative of future results.
These results do not account for slippage, spreads, commission fees, or real-time liquidity issues.
Strategies that performed well in the past may fail in current or future market conditions.
5. HIGH-RISK WARNING Trading in financial markets (Stocks, Forex, Crypto, Futures) involves a high degree of risk and is not suitable for all investors. You could lose some or all of your initial investment. You should not trade with money that you cannot afford to lose.
IF YOU DO NOT AGREE WITH THESE TERMS, DO NOT USE THIS SCRIPT.
SHFE vs COMEX Silver USD Spread (FX Adjusted)
This indicator converts Shanghai Futures Exchange silver pricing (CNY per kilogram) into U.S. dollars per troy ounce using the live USD/CNY exchange rate. It compares the FX-adjusted Shanghai price with COMEX silver futures pricing and displays:
• Shanghai silver (converted to USD/oz)
• COMEX silver (USD/oz)
• The spread between the two markets (Shanghai − COMEX)
The tool helps visualize cross-market pricing differences and how currency movements influence silver valuation between Chinese and U.S. futures markets.
This is an analytical comparison tool and does not provide trading signals.
Notes:
• Requires access to SHFE and COMEX futures data on TradingView
• Uses USDCNY from the current chart (or selected FX symbol)
• Spread values are calculated mechanically from price and FX conversion
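The core conversion is straightforward; the hedged sketch below shows the idea, with the ticker symbols as placeholders that should be replaced by the SHFE silver, COMEX silver, and USD/CNY feeds actually available in your plan.
//@version=6
indicator("SHFE vs COMEX silver spread (sketch)")
// Placeholder symbols — substitute the contracts and FX feed you actually use.
shfeCnyPerKg = request.security("SHFE:AG1!", timeframe.period, close)     // CNY per kilogram
comexUsdPerOz = request.security("COMEX:SI1!", timeframe.period, close)   // USD per troy ounce
usdcny = request.security("FX_IDC:USDCNY", timeframe.period, close)
ozPerKg = 32.1507                                                         // troy ounces per kilogram
shfeUsdPerOz = shfeCnyPerKg / usdcny / ozPerKg
plot(shfeUsdPerOz, "Shanghai (USD/oz)", color.orange)
plot(comexUsdPerOz, "COMEX (USD/oz)", color.blue)
plot(shfeUsdPerOz - comexUsdPerOz, "Spread (Shanghai - COMEX)", color.purple)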
3-Daumen-Regel mit 4 Daumen, YTD-Linie, SMA200 und ATR
The script calculates the following values and displays them in a table:
- YTD line
- SMA
- ATR and ATR
- Difference to YTD
- Difference to SMA200
The table also includes a four-point rating for:
- the first 5 trading days of the year
- price relative to SMA
- price relative to YTD line
- the first month of the trading year
Price Above VWAP Filter
Plots 1 when price is above the VWAP and 0 when it is below.
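The entire filter reduces to a one-line comparison; a minimal sketch:
//@version=6
indicator("Price Above VWAP Filter (sketch)")
aboveVwap = close > ta.vwap(hlc3) ? 1 : 0         // 1 = above session VWAP, 0 = below
plot(aboveVwap, "Above VWAP", style = plot.style_stepline)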
Straddle Premium Tracker
Straddle Premium Tracker is used to combine the CALL and PUT premiums of the same strike price.
SGX/GIFT Nifty Non-Indian Hours Box
This script draws a box around the high and low of both the PRE-MARKET and POST-MARKET hours of SGX Nifty.
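One hedged way to build such a box in Pine Script is sketched below; the session string is only a placeholder for the non-Indian (pre/post NSE) hours and should be set to the exchange hours you actually track.
//@version=6
indicator("Non-Indian hours box (sketch)", overlay=true)
sess = input.session("1530-0915", "Non-Indian hours (exchange time)")     // placeholder window
inSess = not na(time(timeframe.period, sess))
var float hi = na
var float lo = na
var box bx = na
if inSess and not inSess[1]                       // session start: open a new box
    hi := high
    lo := low
    bx := box.new(bar_index, hi, bar_index, lo, border_color = color.orange, bgcolor = color.new(color.orange, 85))
else if inSess and not na(bx)                     // session continues: extend to new extremes
    hi := math.max(hi, high)
    lo := math.min(lo, low)
    box.set_top(bx, hi)
    box.set_rightbottom(bx, bar_index, lo)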
PineStats
█ OVERVIEW
PineStats is a comprehensive statistical analysis library for Pine Script v6, providing 104 functions across 6 modules. Built for quantitative traders, researchers, and indicator developers who need professional-grade statistics without reinventing the wheel.
Use it for building mean-reversion strategies, analyzing return distributions, measuring correlations, or testing for market regimes.
█ MODULES
CORE STATISTICS (20 functions)
• Central tendency: mean, median, WMA, EMA
• Dispersion: variance, stdev, MAD, range
• Standardization: z-score, robust z-score, normalize, percentile
• Distribution shape: skewness, kurtosis
PROBABILITY DISTRIBUTIONS (17 functions)
• Normal: PDF, CDF, inverse CDF (quantile function)
• Power-law: Hill estimator, MLE alpha, survival function
• Exponential: PDF, CDF, rate estimation
• Normality testing: Jarque-Bera test
ENTROPY (9 functions)
• Shannon entropy (information theory)
• Tsallis entropy (non-extensive, fat-tail sensitive)
• Permutation entropy (ordinal patterns)
• Approximate entropy (regularity measure)
• Entropy-based regime detection
PROBABILITY (21 functions)
• Win rates and expected value
• First passage time estimation
• TP/SL probability analysis
• Conditional probability and Bayes updates
• Streak and drawdown probabilities
REGRESSION (19 functions)
• Linear regression: slope, intercept, forecast
• Goodness of fit: R², adjusted R², standard error
• Statistical tests: t-statistic, p-value, significance
• Trend analysis: strength, angle, acceleration
• Quadratic regression
CORRELATION (18 functions)
• Pearson, Spearman, Kendall correlation
• Covariance, beta, alpha (Jensen's)
• Rolling correlation analysis
• Autocorrelation and cross-correlation
• Information ratio, tracking error
█ QUICK START
import HenriqueCentieiro/PineStats/1 as stats
// Z-score for mean reversion
z = stats.zscore(close, 20)
// Test if returns are normally distributed
returns = (close - close[1]) / close[1]
isGaussian = stats.is_normal(returns, 100, 0.05)
// Regression channel
[middle, upper, lower] = stats.linreg_channel(close, 50, 2.0)
// Correlation with benchmark
spyReturns = request.security("SPY", timeframe.period, close / close[1] - 1)
beta = stats.beta(returns, spyReturns, 60)
█ USE CASES
✓ Mean Reversion — z-scores, percentiles, Bollinger-style analysis
✓ Regime Detection — entropy measures, correlation regimes
✓ Risk Analysis — drawdown probability, VaR via quantiles
✓ Strategy Evaluation — expected value, win rates, R:R analysis
✓ Distribution Analysis — normality tests, fat-tail detection
✓ Multi-Asset — beta, alpha, correlation, relative strength
█ NOTES
• All functions return `na` on invalid inputs
• Designed for Pine Script v6
• Fully documented in the library header
• Part of the Pine ecosystem: PineStats, PineQuant, PineCriticality, PineWavelet
█ REFERENCES
• Abramowitz & Stegun — Normal CDF approximation
• Acklam's algorithm — Inverse normal CDF
• Hill estimator — Power-law tail estimation
• Tsallis statistics — Non-extensive entropy
Full documentation in the library header.
mean(src, length)
Calculates the arithmetic mean (simple moving average) over a lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Arithmetic mean of the last `length` values, or `na` if inputs invalid
wma_custom(src, length)
Calculates weighted moving average with linearly decreasing weights
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Weighted moving average, or `na` if inputs invalid
ema_custom(src, length)
Calculates exponential moving average
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Exponential moving average, or `na` if inputs invalid
median(src, length)
Calculates the median value over a lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Median value, or `na` if inputs invalid
variance(src, length)
Calculates population variance over a lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Population variance, or `na` if inputs invalid
stdev(src, length)
Calculates population standard deviation over a lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Population standard deviation, or `na` if inputs invalid
mad(src, length)
Calculates Median Absolute Deviation (MAD) - robust dispersion measure
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: MAD value, or `na` if inputs invalid
data_range(src, length)
Calculates the range (highest - lowest) over a lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Range value, or `na` if inputs invalid
zscore(src, length)
Calculates z-score (number of standard deviations from mean)
Parameters:
src (float) : Source series
length (simple int) : Lookback period for mean and stdev calculation (must be >= 2)
Returns: Z-score, or `na` if inputs invalid or stdev is zero
zscore_robust(src, length)
Calculates robust z-score using median and MAD (resistant to outliers)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 2)
Returns: Robust z-score, or `na` if inputs invalid or MAD is zero
normalize(src, length)
Normalizes value to the [0, 1] range using min-max scaling
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Normalized value in [0, 1], or `na` if inputs invalid or range is zero
percentile(src, length)
Calculates percentile rank of current value within lookback window
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Percentile rank (0 to 100), or `na` if inputs invalid
winsorize(src, length, lower_pct, upper_pct)
Winsorizes values by clamping to percentile bounds (reduces outlier impact)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
lower_pct (simple float) : Lower percentile bound (0-100, e.g., 5 for 5th percentile)
upper_pct (simple float) : Upper percentile bound (0-100, e.g., 95 for 95th percentile)
Returns: Winsorized value clamped to bounds
skewness(src, length)
Calculates sample skewness (measure of distribution asymmetry)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 3)
Returns: Skewness value (negative = left tail, positive = right tail), or `na` if invalid
kurtosis(src, length)
Calculates excess kurtosis (measure of distribution tail heaviness)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 4)
Returns: Excess kurtosis (>0 = heavy tails, <0 = light tails), or `na` if invalid
count_valid(src, length)
Counts non-na values in lookback window (useful for data quality checks)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Count of valid (non-na) values
sum(src, length)
Calculates sum over lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Sum of values, or `na` if inputs invalid
cumsum(src)
Calculates cumulative sum (running total from first bar)
Parameters:
src (float) : Source series
Returns: Cumulative sum
change(src, length)
Returns the change (difference) from n bars ago
Parameters:
src (float) : Source series
length (simple int) : Number of bars to look back (must be >= 1)
Returns: Current value minus value from `length` bars ago
roc(src, length)
Calculates Rate of Change (percentage change from n bars ago)
Parameters:
src (float) : Source series
length (simple int) : Number of bars to look back (must be >= 1)
Returns: Percentage change as decimal (0.05 = 5%), or `na` if invalid
normal_pdf_standard(x)
Calculates the standard normal probability density function (PDF)
Parameters:
x (float) : The value to evaluate
Returns: PDF value at x for standard normal N(0,1)
normal_pdf(x, mu, sigma)
Calculates the normal probability density function (PDF)
Parameters:
x (float) : The value to evaluate
mu (float) : Mean of the distribution (default: 0)
sigma (float) : Standard deviation (default: 1, must be > 0)
Returns: PDF value at x for normal N(mu, sigma²)
normal_cdf_standard(x)
Calculates the standard normal cumulative distribution function (CDF)
Parameters:
x (float) : The value to evaluate
Returns: Probability P(X <= x) for standard normal N(0,1)
@description Uses Abramowitz & Stegun approximation (formula 7.1.26), accurate to ~1.5e-7
normal_cdf(x, mu, sigma)
Calculates the normal cumulative distribution function (CDF)
Parameters:
x (float) : The value to evaluate
mu (float) : Mean of the distribution (default: 0)
sigma (float) : Standard deviation (default: 1, must be > 0)
Returns: Probability P(X <= x) for normal N(mu, sigma²)
normal_inv_standard(p)
Calculates the inverse standard normal CDF (quantile function)
Parameters:
p (float) : Probability value (must be in (0, 1))
Returns: x such that P(X <= x) = p for standard normal N(0,1)
@description Uses Acklam's algorithm, accurate to ~1.15e-9
normal_inv(p, mu, sigma)
Calculates the inverse normal CDF (quantile function)
Parameters:
p (float) : Probability value (must be in (0, 1))
mu (float) : Mean of the distribution
sigma (float) : Standard deviation (must be > 0)
Returns: x such that P(X <= x) = p for normal N(mu, sigma²)
power_law_alpha(src, length, tail_pct)
Estimates power-law exponent (alpha) using Hill estimator
Parameters:
src (float) : Source series (typically absolute returns or drawdowns)
length (simple int) : Lookback period (must be >= 10 for reliable estimates)
tail_pct (simple float) : Percentage of data to use for tail estimation (default: 0.1 = top 10%)
Returns: Estimated alpha (tail index), typically 2-4 for financial data
@description Alpha < 2 indicates infinite variance (very heavy tails)
@description Alpha < 3 indicates infinite kurtosis
@description Alpha > 4 suggests near-Gaussian behavior
power_law_alpha_mle(src, length, x_min)
Estimates power-law alpha using maximum likelihood (Clauset method)
Parameters:
src (float) : Source series (positive values expected)
length (simple int) : Lookback period (must be >= 20)
x_min (float) : Minimum threshold for power-law behavior
Returns: Estimated alpha using MLE
power_law_pdf(x, alpha, x_min)
Calculates power-law probability density (Pareto Type I)
Parameters:
x (float) : Value to evaluate (must be >= x_min)
alpha (float) : Power-law exponent (must be > 1)
x_min (float) : Minimum value / scale parameter (must be > 0)
Returns: PDF value
power_law_survival(x, alpha, x_min)
Calculates power-law survival function P(X > x)
Parameters:
x (float) : Value to evaluate (must be >= x_min)
alpha (float) : Power-law exponent (must be > 1)
x_min (float) : Minimum value / scale parameter (must be > 0)
Returns: Probability of exceeding x
power_law_ks(src, length, alpha, x_min)
Tests if data follows power-law using simplified Kolmogorov-Smirnov
Parameters:
src (float) : Source series
length (simple int) : Lookback period
alpha (float) : Estimated alpha from power_law_alpha()
x_min (float) : Threshold value
Returns: KS statistic (lower = better fit, typically < 0.1 for good fit)
is_power_law(src, length, tail_pct, ks_threshold)
Simple test if distribution appears to follow power-law
Parameters:
src (float) : Source series
length (simple int) : Lookback period
tail_pct (simple float) : Tail percentage for alpha estimation
ks_threshold (simple float) : Maximum KS statistic for acceptance (default: 0.1)
Returns: true if KS test suggests power-law fit
exp_pdf(x, lambda)
Calculates exponential probability density function
Parameters:
x (float) : Value to evaluate (must be >= 0)
lambda (float) : Rate parameter (must be > 0)
Returns: PDF value
exp_cdf(x, lambda)
Calculates exponential cumulative distribution function
Parameters:
x (float) : Value to evaluate (must be >= 0)
lambda (float) : Rate parameter (must be > 0)
Returns: Probability P(X <= x)
exp_lambda(src, length)
Estimates exponential rate parameter (lambda) using MLE
Parameters:
src (float) : Source series (positive values)
length (simple int) : Lookback period
Returns: Estimated lambda (1/mean)
jarque_bera(src, length)
Calculates Jarque-Bera test statistic for normality
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 10)
Returns: JB statistic (higher = more deviation from normality)
@description Under normality, JB ~ chi-squared(2). JB > 6 suggests non-normality at 5% level
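For orientation, the textbook Jarque-Bera form combines sample skewness S and excess kurtosis K as JB = n/6 * (S^2 + K^2/4); the short sketch below recomputes it from the library's own skewness and kurtosis functions for comparison, noting that the library's internal implementation may differ in details.
//@version=6
indicator("Jarque-Bera sketch")
import HenriqueCentieiro/PineStats/1 as stats
n = 100
r = close / close[1] - 1
s = stats.skewness(r, n)
k = stats.kurtosis(r, n)                          // excess kurtosis
jbManual = n / 6.0 * (math.pow(s, 2) + math.pow(k, 2) / 4)
plot(jbManual, "JB (manual)")
plot(stats.jarque_bera(r, n), "JB (library)")
hline(5.99, "Chi-squared(2) 95% critical value")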
is_normal(src, length, significance)
Tests if distribution is approximately normal
Parameters:
src (float) : Source series
length (simple int) : Lookback period
significance (simple float) : Significance level (default: 0.05)
Returns: true if Jarque-Bera test does not reject normality
shannon_entropy(src, length, n_bins)
Calculates Shannon entropy from a probability distribution
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 10)
n_bins (simple int) : Number of histogram bins for discretization (default: 10)
Returns: Shannon entropy in bits (log base 2)
@description Higher entropy = more randomness/uncertainty, lower = more predictability
shannon_entropy_norm(src, length, n_bins)
Calculates normalized Shannon entropy
Parameters:
src (float) : Source series
length (simple int) : Lookback period
n_bins (simple int) : Number of histogram bins
Returns: Normalized entropy where 0 = perfectly predictable, 1 = maximum randomness
tsallis_entropy(src, length, q, n_bins)
Calculates Tsallis entropy with q-parameter
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 10)
q (float) : Entropic index (q=1 recovers Shannon entropy)
n_bins (simple int) : Number of histogram bins
Returns: Tsallis entropy value
@description q < 1: emphasizes rare events (fat tails)
@description q = 1: equivalent to Shannon entropy
@description q > 1: emphasizes common events
optimal_q(src, length)
Estimates optimal q parameter from kurtosis
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Estimated q value that best captures the distribution's tail behavior
@description Uses relationship: q ≈ (5 + kurtosis) / (3 + kurtosis) for kurtosis > 0
tsallis_q_gaussian(x, q, beta)
Calculates Tsallis q-Gaussian probability density
Parameters:
x (float) : Value to evaluate
q (float) : Tsallis q parameter (must be < 3)
beta (float) : Width parameter (inverse temperature, must be > 0)
Returns: q-Gaussian PDF value
@description q=1 recovers standard Gaussian
permutation_entropy(src, length, order)
Calculates permutation entropy (ordinal pattern complexity)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 20)
order (simple int) : Embedding dimension / pattern length (2-5, default: 3)
Returns: Normalized permutation entropy
@description Measures complexity of temporal ordering patterns
@description 0 = perfectly predictable sequence, 1 = random
approx_entropy(src, length, m, r)
Calculates Approximate Entropy (ApEn) - regularity measure
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 50)
m (simple int) : Embedding dimension (default: 2)
r (simple float) : Tolerance as fraction of stdev (default: 0.2)
Returns: Approximate entropy value (higher = more irregular/complex)
@description Lower ApEn indicates more self-similarity and predictability
entropy_regime(src, length, q, n_bins)
Detects market regime based on entropy level
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback period
q (float) : Tsallis q parameter (use optimal_q() or default 1.5)
n_bins (simple int) : Number of histogram bins
Returns: Regime indicator: -1 = trending (low entropy), 0 = transition, 1 = ranging (high entropy)
entropy_risk(src, length)
Calculates entropy-based risk indicator
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback period
Returns: Risk score where 1 = maximum divergence from Gaussian
hit_rate(src, length)
Calculates hit rate (probability of positive outcome) over lookback
Parameters:
src (float) : Source series (positive values count as hits)
length (simple int) : Lookback period
Returns: Hit rate as decimal
hit_rate_cond(condition, length)
Calculates hit rate for custom condition over lookback
Parameters:
condition (bool) : Boolean series (true = hit)
length (simple int) : Lookback period
Returns: Hit rate as decimal
expected_value(src, length)
Calculates expected value of a series
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Expected value (mean)
expected_value_trade(win_prob, take_profit, stop_loss)
Calculates expected value for a trade with TP and SL levels
Parameters:
win_prob (float) : Probability of hitting TP (0-1)
take_profit (float) : Take profit in price units or %
stop_loss (float) : Stop loss in price units or % (positive value)
Returns: Expected value per trade
@description EV = (win_prob * TP) - ((1 - win_prob) * SL)
breakeven_winrate(take_profit, stop_loss)
Calculates breakeven win rate for given TP/SL ratio
Parameters:
take_profit (float) : Take profit distance
stop_loss (float) : Stop loss distance
Returns: Required win rate for breakeven (EV = 0)
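A small worked example, assuming a hypothetical trade with a 60% win probability, a 2% take profit and a 1% stop loss:
//@version=6
indicator("Trade EV sketch")
import HenriqueCentieiro/PineStats/1 as stats
ev = stats.expected_value_trade(0.60, 2.0, 1.0)   // (0.60 * 2.0) - (0.40 * 1.0) = 0.8, in the same units as TP/SL
be = stats.breakeven_winrate(2.0, 1.0)            // SL / (TP + SL) = 1/3, roughly 0.333
plot(ev, "Expected value")
plot(be, "Breakeven win rate")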
reward_risk_ratio(take_profit, stop_loss)
Calculates the reward-to-risk ratio
Parameters:
take_profit (float) : Take profit distance
stop_loss (float) : Stop loss distance
Returns: R:R ratio
fpt_probability(src, length, target, max_bars)
Estimates probability of price reaching target within N bars
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback for volatility estimation
target (float) : Target move (in same units as src, e.g., % return)
max_bars (simple int) : Maximum bars to consider
Returns: Probability of reaching target within max_bars
@description Based on random walk with drift approximation
fpt_mean(src, length, target)
Estimates mean first passage time to target level
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback for volatility estimation
target (float) : Target move
Returns: Expected number of bars to reach target (can be infinite)
fpt_historical(src, length, target)
Counts historical bars to reach target from each point
Parameters:
src (float) : Source series (typically price or returns)
length (simple int) : Lookback period
target (float) : Target move from each starting point
Returns: Array of first passage times (na if target not reached within lookback)
tp_probability(src, length, tp_distance, sl_distance)
Estimates probability of hitting TP before SL
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback for estimation
tp_distance (float) : Take profit distance (positive)
sl_distance (float) : Stop loss distance (positive)
Returns: Probability of TP being hit first
trade_probability(src, length, tp_pct, sl_pct)
Calculates complete trade probability and EV analysis
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback period
tp_pct (float) : Take profit percentage
sl_pct (float) : Stop loss percentage
Returns: Tuple:
cond_prob(condition_a, condition_b, length)
Calculates conditional probability P(B|A) from historical data
Parameters:
condition_a (bool) : Condition A (the given condition)
condition_b (bool) : Condition B (the outcome)
length (simple int) : Lookback period
Returns: P(B|A) = P(A and B) / P(A)
bayes_update(prior, likelihood, false_positive)
Updates probability using Bayes' theorem
Parameters:
prior (float) : Prior probability P(H)
likelihood (float) : P(E|H) - probability of evidence given hypothesis
false_positive (float) : P(E|~H) - probability of evidence given hypothesis is false
Returns: Posterior probability P(H|E)
streak_prob(win_rate, streak_length)
Calculates probability of N consecutive wins given win rate
Parameters:
win_rate (float) : Single-trade win probability
streak_length (simple int) : Number of consecutive wins
Returns: Probability of streak
losing_streak_prob(win_rate, streak_length)
Calculates probability of experiencing N consecutive losses
Parameters:
win_rate (float) : Single-trade win probability
streak_length (simple int) : Number of consecutive losses
Returns: Probability of losing streak
drawdown_prob(src, length, dd_threshold)
Estimates probability of drawdown exceeding threshold
Parameters:
src (float) : Source series (returns)
length (simple int) : Lookback period
dd_threshold (float) : Drawdown threshold (as positive decimal, e.g., 0.10 = 10%)
Returns: Historical probability of exceeding drawdown threshold
prob_to_odds(prob)
Calculates odds from probability
Parameters:
prob (float) : Probability (0-1)
Returns: Odds (prob / (1 - prob))
odds_to_prob(odds)
Calculates probability from odds
Parameters:
odds (float) : Odds ratio
Returns: Probability (0-1)
implied_prob(decimal_odds)
Calculates implied probability from decimal odds (betting)
Parameters:
decimal_odds (float) : Decimal odds (e.g., 2.5 means $2.50 return per $1 bet)
Returns: Implied probability
logit(prob)
Calculates log-odds (logit) from probability
Parameters:
prob (float) : Probability (must be in (0, 1))
Returns: Log-odds
inv_logit(log_odds)
Calculates probability from log-odds (inverse logit / sigmoid)
Parameters:
log_odds (float) : Log-odds value
Returns: Probability (0-1)
linreg_slope(src, length)
Calculates linear regression slope
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 2)
Returns: Slope coefficient (change per bar)
linreg_intercept(src, length)
Calculates linear regression intercept
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 2)
Returns: Intercept (predicted value at oldest bar in window)
linreg_value(src, length)
Calculates predicted value at current bar using linear regression
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Predicted value at current bar (end of regression line)
linreg_forecast(src, length, offset)
Forecasts value N bars ahead using linear regression
Parameters:
src (float) : Source series
length (simple int) : Lookback period for regression
offset (simple int) : Bars ahead to forecast (positive = future)
Returns: Forecasted value
linreg_channel(src, length, mult)
Calculates linear regression channel with bands
Parameters:
src (float) : Source series
length (simple int) : Lookback period
mult (simple float) : Standard deviation multiplier for bands
Returns: Tuple:
r_squared(src, length)
Calculates R-squared (coefficient of determination)
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: R² value where 1 = perfect linear fit
adj_r_squared(src, length)
Calculates adjusted R-squared (accounts for sample size)
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Adjusted R² value
std_error(src, length)
Calculates standard error of estimate (residual standard deviation)
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Standard error
residual(src, length)
Calculates residual at current bar
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Residual (actual - predicted)
residuals(src, length)
Returns array of all residuals in lookback window
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Array of residuals
t_statistic(src, length)
Calculates t-statistic for slope coefficient
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: T-statistic (slope / standard error of slope)
slope_pvalue(src, length)
Approximates p-value for slope t-test (two-tailed)
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Approximate p-value
is_significant(src, length, alpha)
Tests if regression slope is statistically significant
Parameters:
src (float) : Source series
length (simple int) : Lookback period
alpha (simple float) : Significance level (default: 0.05)
Returns: true if slope is significant at alpha level
trend_strength(src, length)
Calculates normalized trend strength based on R² and slope
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Trend strength where sign indicates direction
trend_angle(src, length)
Calculates trend angle in degrees
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Angle in degrees (positive = uptrend, negative = downtrend)
linreg_acceleration(src, length)
Calculates trend acceleration (second derivative)
Parameters:
src (float) : Source series
length (simple int) : Lookback period for each regression
Returns: Acceleration (change in slope)
linreg_deviation(src, length)
Calculates deviation from regression line in standard error units
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Deviation in standard error units (like z-score)
quadreg_coefficients(src, length)
Fits quadratic regression and returns coefficients
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 4)
Returns: Tuple: for y = a*x² + b*x + c
quadreg_value(src, length)
Calculates quadratic regression value at current bar
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Predicted value from quadratic fit
correlation(x, y, length)
Calculates Pearson correlation coefficient between two series
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period (must be >= 3)
Returns: Correlation coefficient
covariance(x, y, length)
Calculates sample covariance between two series
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period (must be >= 2)
Returns: Covariance value
beta(asset, benchmark, length)
Calculates beta coefficient (slope of regression of y on x)
Parameters:
asset (float) : Asset returns series
benchmark (float) : Benchmark returns series
length (simple int) : Lookback period
Returns: Beta coefficient
@description Beta = Cov(asset, benchmark) / Var(benchmark)
alpha(asset, benchmark, length, risk_free)
Calculates alpha (Jensen's alpha / intercept)
Parameters:
asset (float) : Asset returns series
benchmark (float) : Benchmark returns series
length (simple int) : Lookback period
risk_free (float) : Risk-free rate (default: 0)
Returns: Alpha value (excess return not explained by beta)
spearman(x, y, length)
Calculates Spearman rank correlation coefficient
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period (must be >= 3)
Returns: Spearman correlation
@description More robust to outliers than Pearson correlation
kendall_tau(x, y, length)
Calculates Kendall's tau rank correlation (simplified)
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period (must be >= 3)
Returns: Kendall's tau
correlation_change(x, y, length, change_period)
Calculates change in correlation over time
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period for correlation
change_period (simple int) : Period over which to measure change
Returns: Change in correlation
correlation_regime(x, y, length, ma_length)
Detects correlation regime based on level and stability
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period for correlation
ma_length (simple int) : Moving average length for smoothing
Returns: Regime: -1 = negative, 0 = uncorrelated, 1 = positive
correlation_stability(x, y, length, stability_length)
Calculates correlation stability (inverse of volatility)
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback for correlation
stability_length (simple int) : Lookback for stability calculation
Returns: Stability score where 1 = perfectly stable
relative_strength(asset, benchmark, length)
Calculates relative strength of asset vs benchmark
Parameters:
asset (float) : Asset price series
benchmark (float) : Benchmark price series
length (simple int) : Smoothing period
Returns: Relative strength ratio (normalized)
tracking_error(asset, benchmark, length)
Calculates tracking error (standard deviation of excess returns)
Parameters:
asset (float) : Asset returns
benchmark (float) : Benchmark returns
length (simple int) : Lookback period
Returns: Tracking error (annualize by multiplying by sqrt(252) for daily data)
information_ratio(asset, benchmark, length)
Calculates information ratio (risk-adjusted excess return)
Parameters:
asset (float) : Asset returns
benchmark (float) : Benchmark returns
length (simple int) : Lookback period
Returns: Information ratio
capture_ratio(asset, benchmark, length, up_capture)
Calculates up/down capture ratio
Parameters:
asset (float) : Asset returns
benchmark (float) : Benchmark returns
length (simple int) : Lookback period
up_capture (simple bool) : If true, calculate up capture; if false, down capture
Returns: Capture ratio
autocorrelation(src, length, lag)
Calculates autocorrelation at specified lag
Parameters:
src (float) : Source series
length (simple int) : Lookback period
lag (simple int) : Lag for autocorrelation (default: 1)
Returns: Autocorrelation at specified lag
partial_autocorr(src, length)
Calculates partial autocorrelation at lag 1
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: PACF at lag 1 (equals ACF at lag 1)
autocorr_test(src, length, max_lag)
Tests for significant autocorrelation (Ljung-Box inspired)
Parameters:
src (float) : Source series
length (simple int) : Lookback period
max_lag (simple int) : Maximum lag to test
Returns: Sum of squared autocorrelations (higher = more autocorrelation)
cross_correlation(x, y, length, lag)
Calculates cross-correlation at specified lag
Parameters:
x (float) : First series
y (float) : Second series (lagged)
length (simple int) : Lookback period
lag (simple int) : Lag to apply to y (positive = y leads x)
Returns: Cross-correlation at specified lag
cross_correlation_peak(x, y, length, max_lag)
Finds lag with maximum cross-correlation
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period
max_lag (simple int) : Maximum lag to search (both directions)
Returns: Tuple:
Monte Carlo Simulation Bands
Monte Carlo Simulation v2.4.2
Plots a one-bar-ahead price distribution band built from many simulated paths. The green band shows empirical percentiles of simulated final prices—these are distribution bounds, not a confidence interval of the mean.
What It Does
Simulates many one-bar price paths using a directional random walk with volatility scaling (uniform shocks, not Gaussian GBM).
Plots Mean Forecast, Median Forecast, and configurable percentile bounds (default 5th/95th).
Optional rolling HTF-days mean line (yellow) for trend context.
Optional labels and forward projection lines.
Alerts when the confirmed close breaks above or below the percentile band.
Non-Repainting & HTF Behavior (Fail-Closed)
All calculations are gated to confirmed bars only via explicit no_repaint_ok gate (barstate.isconfirmed).
If you select an HTF Resolution, the script uses a strict request.security(..., lookahead_off, gaps_off) pipeline.
If HTF data is unavailable, outputs are na—no silent fallback to chart timeframe.
A separate "HTF Alignment (lagged)" plot shows the prior HTF close (htf_price ) as visual proof of no look-ahead.
Volatility Source & Scaling
If "Use Historical Volatility" is enabled, volatility is estimated from log returns on the selected resolution (HTF if set, otherwise chart).
Annualization adapts to session type:
Equities: 6.5 hours/day, 252 trading days/year
Crypto: 24 hours/day, 365 days/year
Substeps increase path smoothness within the same one-bar horizon—they do not extend the forecast to multiple bars.
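To make the mechanics concrete, the heavily simplified sketch below runs a single-step version of the uniform-shock simulation on each confirmed bar and extracts the 5th/95th percentiles of the simulated closes; the published indicator adds substeps, drift from the probability inputs, HTF gating, and the dashboard logic on top of this idea.
//@version=6
indicator("One-bar Monte Carlo (sketch)", overlay=true)
sims = input.int(100, "Simulations")
probUp = input.float(0.5, "Prob Up")
vol = ta.stdev(math.log(close / close[1]), 20)    // per-bar log-return volatility
var finals = array.new_float()
if barstate.isconfirmed and not na(vol)
    array.clear(finals)
    for i = 1 to sims
        dir = math.random() < probUp ? 1.0 : -1.0 // direction drawn from Prob Up
        shock = dir * math.random() * vol         // uniform (not Gaussian) shock
        array.push(finals, close * math.exp(shock))
upper = array.size(finals) > 0 ? array.percentile_linear_interpolation(finals, 95) : na
lower = array.size(finals) > 0 ? array.percentile_linear_interpolation(finals, 5) : na
plot(upper, "Upper percentile", color.green)
plot(lower, "Lower percentile", color.green)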
Key Inputs
• Prob Up / Prob Down — Must satisfy Prob Up + Prob Down ≤ 1.0. If violated, simulation is skipped and table shows "✗ PROB>1".
• # Simulations / # Substeps — Higher = smoother/more stable, but slower. Default 100×100 is a good balance.
• Lower/Upper Percentile — Define the band width (e.g., 5 and 95 for a 90% distribution band).
• Run On Last Bar Only — Performance mode (recommended). Skips historical computation; updates on each new confirmed bar.
• Resolution (HTF) — Leave blank for chart timeframe, or set to Weekly/Monthly for HTF-aligned simulation.
• Crypto 24/7 Session? — Enable for crypto markets to use correct annualization (365d, 24h).
How to Use (Quickstart)
Start with defaults and keep Run On Last Bar Only = true for speed.
Set Prob Up and Prob Down so their sum ≤ 1.0 (e.g., 0.5 + 0.5 = 1.0 for neutral).
Enable "Use Historical Volatility" and set a Volatility Lookback (e.g., 20 bars) for data-driven vol.
Set Resolution (HTF) if you want the model to run on higher timeframe data (e.g., 1W). Expect updates only when a new HTF interval starts.
Choose percentiles (e.g., 5 and 95) to define your distribution band width.
Enable alerts for "Price Above Upper Percentile" or "Price Below Lower Percentile" to get notified of breakouts.
Limitations & Disclosures
Forecast horizon is one bar only. Substeps do not create a multi-bar forecast.
Model uses uniform shocks with direction chosen from Prob Up/Down. This is not Geometric Brownian Motion (GBM) and is not calibrated to any option-implied distribution.
Bounds are percentiles of final simulated prices, not a statistical confidence interval of the mean.
HTF mode updates at the start of a new HTF interval (first chart bar where the HTF timestamp changes), so the band appears "step-like" in realtime.
Historical volatility requires enough bars for the selected lookback; until then, values may be na.
Performance depends on Sims × Substeps; extreme settings (e.g., 500×500) can be slow.
This indicator does not predict direction—it shows a probabilistic range based on your inputs.
Options Visualizer: Smart Money Barriers [V6]
Options Visualizer: Institutional Barriers & Expected Move
The Options Visualizer is an analysis tool designed for traders who want to gain an edge by monitoring the "Smart Money" (options market makers and institutional hedgers). This script helps you visualize key option market dynamics directly on your chart, allowing you to see statistical support/resistance levels and massive "walls" of liquidity.
Key Features
1. Institutional Walls (Manual Mode)
Input high Open Interest (OI) data from exchanges like Deribit or Coinglass.
Call Wall (Resistance): The strike price with the highest concentration of Call options. Market makers often defend these levels to prevent paying out buyers.
Put Wall (Support): The strike price with the highest concentration of Put options, acting as a "floor" for price action.
2. Auto-Probability Mode (Statistical Barriers)
Enable Auto Mode to calculate theoretical barriers based on a 2-Standard Deviation (95% Probability) model.
This visualizes the "extreme" ends of market expectations, where a reversal or significant resistance is mathematically likely.
3. Expected Move (68% Range Box)
The blue dotted box represents the 1-Standard Deviation (68% probability) move.
Historically, 68% of the time, the price at expiration will settle within this range. Staying outside this box signals an "over-extended" market.
The Math Behind the Magic
The script utilizes the standard Expected Move formula used by professional floor traders:
Expected Move = Current Price * (IV / 100) * SquareRoot(Days To Expiry / 365)
68% Probability (The Blue Box): Derived from 1-Standard Deviation (1-Sigma). It assumes a normal distribution of price returns.
95% Probability (Auto Mode Walls): Derived from 2-Standard Deviations (2-Sigma). This covers the vast majority of expected market outcomes, making these levels powerful institutional-grade support and resistance zones.
Implied Volatility (IV): Unlike historical volatility, IV represents the market's forward-looking "fear gauge" based on option pricing.
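Translated directly into Pine Script, the band construction looks roughly like this, where IV and days-to-expiry are the user inputs described above:
//@version=6
indicator("Expected Move sketch", overlay=true)
iv = input.float(55.0, "Implied Volatility %")
dte = input.float(7.0, "Days Until Expiration")
em1 = close * (iv / 100) * math.sqrt(dte / 365)   // 1-sigma expected move (~68%)
em2 = 2 * em1                                     // 2-sigma (~95%) auto walls
plot(close + em1, "Upper 68%", color.blue)
plot(close - em1, "Lower 68%", color.blue)
plot(close + em2, "Upper 95%", color.red)
plot(close - em2, "Lower 95%", color.red)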
How to Use This Tool
1. Setup:
Look up the current Implied Volatility (IV) and Max Pain/Open Interest for your asset (use Coinglass or Deribit Metrics).
2. Inputs:
Enter the Days Until Expiration (e.g., if monthly options expire this Friday, enter the remaining days).
Enter the IV % (e.g., 55 for 55%).
3. Execution:
Trend Trading: If price stays within the Blue Box, the trend is "normal."
Mean Reversion: If price hits the Call/Put Wall (Red/Green dashed lines), look for exhaustion and potential reversal signals.
Breakouts: A sustained candle close outside the 95% Auto Walls suggests a "Black Swan" event or a massive short/gamma squeeze.
Why Use This Tool?
Traditional indicators (RSI, MACD) look at the past. This tool looks at current market expectations and positioning. By seeing where the "walls" are built, you can significantly improve your risk management and trading edge.
MANUAL:
Mode 1: Manual Institutional Data (Recommended for Specific Expiries)
This mode uses real-world Open Interest (OI) data, offering the most accurate view of where large institutions are actively defending their positions.
🛑 How to use the Manual Mode:
1. Disable the Enable Auto Probability Mode checkbox in the indicator settings.
2. Find the Data: Navigate to specialized crypto options analytics websites:
Coinglass Options (Look for "Open Interest by Strike")
Deribit Metrics (Look for Max Pain charts)
3. Identify Key Levels & Input them into the script settings:
Manual Call Wall Strike: Find the Highest Red Bar on the OI chart. This is the strike price with the most Call options, acting as massive institutional resistance.
Manual Put Wall Strike: Find the Highest Green Bar on the OI chart. This is the strike price with the most Put options, acting as a solid price floor (support).
Manual Max Pain Level: Locate the value labeled as Max Pain on the source website. This is the price where the most options would expire worthless for buyers.
Mode 2: Auto Probability Barriers (Statistical Mode)
If you don't want to manually input data, the Auto Mode calculates theoretical barriers based purely on math and volatility, providing highly probable, yet slightly less precise, support/resistance levels.
✅ How to use the Auto Mode:
Enable the Enable Auto Probability Mode checkbox in the indicator settings.
The script will automatically set the Call/Put Walls at the 2-Standard Deviation (95% probability) range.
You still need to update the Implied Volatility (IV) % and Days Until Expiration to ensure the calculations are accurate for today's market conditions.
US Stock Market Performance by Sector [Dots3Red]
This indicator displays the annual performance of the U.S. stock market by sector.
Selected major sectors
IND – Industrials
TECH – Technology
HTH – Healthcare
FIN – Financials
COMM – Communication Services
CONSCYC – Consumer Cyclical
CONSSTAP – Consumer Staples
ENERGY – Energy
REAL ESTATE – Real Estate
BASMAT – Basic Materials
The data is presented in a table below the main chart.
Green cell — the sector was bullish during that year
Red cell — the sector was bearish during that year
The table automatically sorts sectors by performance, placing the best-performing sector at the top for each year.
NOTE:
Annual performance is calculated starting from 2020 by default (arbitrarily chosen) and can be adjusted by the user.
Sharpe Ratio [Alpha Extract]
A sophisticated risk-adjusted return measurement system that calculates annualized Sharpe Ratio with dynamic color-coded visualization distinguishing return quality across positive and negative performance regimes. Utilizing rolling period calculations with smoothed moving average comparison, this indicator delivers institutional-grade performance assessment with overbought/oversold threshold detection for extreme risk-adjusted return conditions. The system's four-tier color classification combined with histogram fills and background highlighting provides comprehensive visual feedback on whether current returns justify their volatility risk across varying market cycles.
🔶 Advanced Sharpe Ratio Calculation Engine
Implements classic Sharpe Ratio methodology measuring mean daily return divided by return standard deviation with annualization factor for consistent interpretation. The system calculates daily percentage returns, computes rolling mean and standard deviation over configurable periods, applies square root of 365 scaling for annualized comparison, and generates unbounded ratio values where higher positive readings indicate superior risk-adjusted performance.
// Core Sharpe Ratio Framework
Daily_Return = close / close[1] - 1
Mean_Return = ta.sma(Daily_Return, Period)
StdDev_Return = ta.stdev(Daily_Return, Period)
Sharpe_Ratio = (Mean_Return / StdDev_Return) * sqrt(365)
🔶 Dynamic Four-Tier Color Classification
Features sophisticated color logic distinguishing between strong positive returns (green), weakening positive returns (yellow), weakening negative returns (orange), and strong negative returns (red) based on relationship to smoothed average. The system compares current Sharpe against SMA-smoothed baseline, applying green when positive and accelerating, yellow when positive but decelerating, orange when negative but improving, and red when negative and deteriorating for nuanced regime assessment.
🔶 Smoothed Baseline Comparison Framework
Implements SMA smoothing of Sharpe Ratio with configurable period to establish momentum reference line for trend determination within risk-adjusted returns. The system calculates simple moving average of raw Sharpe values, uses this smoothed line as directional benchmark, and determines whether current risk-adjusted performance is strengthening or weakening relative to recent average for color classification logic.
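A condensed sketch of the baseline comparison and four-tier color logic, assuming the default 150/30 periods described later in this description (the published indicator adds the histogram fill, thresholds and background shading on top):
//@version=6
indicator("Sharpe four-tier sketch")
length = input.int(150, "Sharpe Period")
smooth = input.int(30, "Smoothing")
r = close / close[1] - 1
sharpe = ta.sma(r, length) / ta.stdev(r, length) * math.sqrt(365)
sharpeSma = ta.sma(sharpe, smooth)
// Green: positive and accelerating; yellow: positive but decelerating; orange: negative but improving; red: negative and deteriorating.
col = sharpe >= 0 ? (sharpe >= sharpeSma ? color.green : color.yellow) : (sharpe >= sharpeSma ? color.orange : color.red)
plot(sharpe, "Sharpe", color = col, style = plot.style_histogram)
plot(sharpeSma, "Smoothed baseline", color = color.gray)
hline(0)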
🔶 Extreme Threshold Detection System
Provides overbought and oversold level identification with configurable upper and lower bounds marking exceptional risk-adjusted return extremes. The system defaults to +4.3 for overbought threshold (extremely favorable risk-return profile) and -2.3 for oversold threshold (severely unfavorable risk-return profile), applying dashed horizontal reference lines and background highlighting when Sharpe breaches these statistical extremes requiring attention.
🔶 Histogram Fill Visualization Architecture
Creates gradient-filled histogram between Sharpe Ratio line and zero baseline using dynamic color matching with 30% transparency for intuitive positive/negative return distinction. The system fills area above zero with bullish colors (green/yellow) and below zero with bearish colors (orange/red), providing immediate visual confirmation of whether returns are compensating for volatility risk or destroying risk-adjusted value.
🔶 Background Zone Highlighting Framework
Implements subtle background coloring when Sharpe enters extreme overbought or oversold zones, alerting traders to statistically significant risk-adjusted return conditions. The system applies semi-transparent red background when ratio exceeds +4.3 (exceptionally strong risk-adjusted returns potentially unsustainable) and green background when below -2.3 (severely poor risk-adjusted returns potentially reversionary), creating visual alerts without obscuring price action.
🔶 Annualization Methodology Integration
Utilizes standard square root of time scaling (sqrt(365)) to convert rolling period Sharpe calculations into annualized format for cross-temporal comparison. The system applies this mathematical transformation ensuring Sharpe values represent expected annual risk-adjusted returns regardless of calculation period length, enabling consistent interpretation whether using 100-day or 200-day rolling windows.
🔶 Zero-Line Reference System
Provides critical zero-line plot serving as boundary between positive risk-adjusted returns (capital allocation justified by return/risk profile) and negative risk-adjusted returns (strategy destroying value on risk-adjusted basis). The system emphasizes this threshold as decision point where values above zero suggest continuation while values below zero indicate reconsideration of exposure.
🔶 Momentum-Based Color Transitions
Implements intelligent color switching logic that considers both absolute Sharpe value and its momentum relative to smoothed average, creating four distinct regimes for granular performance assessment. The system enables identification of bullish acceleration (green), bullish deceleration (yellow), bearish improvement (orange), and bearish acceleration (red) for nuanced position management beyond simple positive/negative classification.
🔶 Configurable Period Optimization
Features adjustable calculation period and smoothing length enabling optimization across different trading timeframes and volatility regimes. The system defaults to 150-period calculation (approximately 6-7 months of daily data) with 30-period smoothing, but allows customization from short-term tactical assessment to long-term strategic evaluation based on investment horizon and strategy requirements.
🔶 Performance Optimization Framework
Employs efficient rolling calculations with streamlined daily return processing and optimized standard deviation computation for smooth real-time updates. The system includes minimal computational overhead through single-pass mean and variance calculations, enabling consistent performance across extended historical periods while maintaining accuracy of risk-adjusted return measurements.
This indicator delivers sophisticated risk-adjusted return analysis through classic Sharpe Ratio methodology with enhanced visual classification distinguishing return quality and momentum. Unlike simple return-focused indicators, Sharpe Ratio penalizes volatility ensuring traders evaluate whether returns justify the risk undertaken. The system's four-tier color coding, smoothed baseline comparison, and extreme threshold detection make it essential for portfolio managers and systematic traders seeking objective performance assessment beyond raw price gains. High positive Sharpe values indicate efficient return generation relative to volatility risk, while negative values signal value destruction on risk-adjusted basis requiring strategy reassessment. The indicator excels at identifying periods when risk-taking is rewarded (green zones) versus periods when volatility exceeds returns (red zones) across cryptocurrency, forex, and equity markets for optimal capital allocation decisions.
Smart Auto-Step Open
Indicator Name: 15m Reversal Strategy (Polymarket)
Short Description: A mean-reversion strategy designed for the 15-minute timeframe. It identifies overextended short-term trends and signals entries on the probability of a reversal candle.
Top 40 Best Performing Nasdaq Stocks with Advanced Stats Screen
Welcome to the CustomQuantLabs Advanced Stats Screener. This dashboard is designed for traders who need more than just price action: it provides a comprehensive, institutional-grade view of the "Top 40" performing assets in the Nasdaq (or any watchlist of your choice) at a single glance.
Instead of flipping through 40 different charts, this screener aggregates Performance Metrics and Advanced Statistical Risk Models into one clean, heatmap-style dashboard. It helps you instantly identify outliers, trend leaders, and potential mean-reversion setups.
Key Features
1. Multi-Timeframe Performance Heatmap
Instantly spot momentum. The dashboard tracks returns across 5 key timeframes, color-coded with a dynamic heatmap (Bright Green for leaders, Bright Red for laggards):
Week% (Short-term momentum)
Month% & Quarter% (Medium-term trend)
6M% & 12M% (Long-term secular trend)
2. Institutional Risk Metrics (Advanced Stats)
We go beyond simple percentage changes. This screener calculates complex statistical formulas for every single ticker in real-time (a minimal sketch of two of them follows this list):
Kelly Criterion (%): A money management formula used to determine optimal position size based on win probability and return ratio. A higher Kelly % suggests a statistically stronger "edge" based on recent history.
Sharpe Ratio: Measures risk-adjusted return. How much return are you getting for every unit of risk? (Values > 1.0 are generally considered good).
Sortino Ratio: Similar to Sharpe, but only penalizes downside volatility. This is crucial for distinguishing between "good volatility" (upside pumps) and "bad volatility" (crashes).
Z-Score: A mean-reversion metric. It measures how many standard deviations the current price is from its 20-day mean.
High Positive Z-Score (>2): Price may be overextended to the upside.
Low Negative Z-Score (<-2): Price may be oversold.
Volatility (%): A dynamic measure of the asset's daily range, helping you gauge the "personality" of the stock before entering.
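As a reference for how two of these metrics are conventionally defined, here is a minimal Pine-style sketch under textbook definitions (the screener's actual lookbacks, inputs, and win-rate estimation are not shown here and may differ):
// Z-Score: distance of price from its 20-day mean, in standard deviations
zScore = (close - ta.sma(close, 20)) / ta.stdev(close, 20)
// Kelly fraction: K = W - (1 - W) / R, where W = win probability, R = average win / average loss
kellyPct(float winProb, float winLossRatio) =>
    100.0 * (winProb - (1.0 - winProb) / winLossRatio)
A Kelly value at or below zero over the lookback simply means no measurable historical edge, which is consistent with the dashboard's framing of a higher Kelly % as a stronger recent edge.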
Customization & Settings
Fully Customizable Watchlist: While pre-loaded with top Nasdaq performers (like NVDA, AMD, PLTR, MU), you can easily edit the "Symbols" input in the settings to track Crypto, Forex, or your own custom stock portfolio.
Smart Theme Detection: Includes a toggle for Dark Mode (ProjectSyndicate style) and Light Mode (Clean white style).
Compact Mode: You can toggle specific columns on or off to fit the table on smaller screens.
How to Use
Add the script to your chart.
Open Settings (Gear Icon).
Paste your list of 40 tickers into the "Ticker List" text area (separated by commas).
Use the Z-Score to find overbought/oversold setups and the Relative Strength (Week/Month) to find breakout candidates.
Disclaimer: This tool is for informational purposes only. The "Top 40" list requires manual updating if the market leaders change. All statistical metrics (Kelly, Sharpe, etc.) are based on historical data and do not guarantee future performance.
Built by CustomQuantLabs.
Volume + ATR Robust Z-Score Suite (MAD)
Measures relevant volume together with high-volatility candles, providing volume-based initiative signals. It marks the relevant candle so it can be used as support or resistance.
Smart Auto-Step Open (1H Base)
The "Big Brother" to the 15m Open: While the 15m Open is perfect for scalping entries, this indicator is designed for Trend Direction & Bias. It automatically identifies the major Hourly and Daily opening levels, giving you the "Big Picture" context instantly.
🧠 Smart Auto-Step Logic: This script detects your timeframe and automatically upgrades the level to the next major resistance:
Intraday Mode (1s – 1H): Locks to the 1-Hour Open. This is your primary "Bull/Bear" line for the session.
Swing Mode (4H): Automatically switches to the 4-Hour Open.
Daily Mode (D): Automatically switches to the Daily Open.
Noise Filter: Hides automatically on intermediate frames (like 2H or 3H) to keep your chart clean.
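A hedged sketch of how this auto-step selection can be wired up in Pine (timeframe thresholds, lookahead handling, and plotting style below are assumptions, not the published source):
//@version=5
indicator("Smart auto-step open (sketch)", overlay = true)
// Anchor opens; `open` is fixed at the start of each higher-timeframe bar, so lookahead_on does not peek into the future
open1H = request.security(syminfo.tickerid, "60", open, lookahead = barmerge.lookahead_on)
open4H = request.security(syminfo.tickerid, "240", open, lookahead = barmerge.lookahead_on)
openD  = request.security(syminfo.tickerid, "D", open, lookahead = barmerge.lookahead_on)
// Map the chart timeframe to the anchor level; intermediate frames (e.g. 2H, 3H) return na and stay hidden
chartMin   = timeframe.in_seconds(timeframe.period) / 60
anchorOpen = chartMin <= 60 ? open1H : chartMin == 240 ? open4H : timeframe.isdaily ? openD : na
plot(anchorOpen, "Anchor Open", style = plot.style_linebr)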
✨ Luxury Visuals:
Floating Labels: No ugly boxes. Text floats cleanly in the right-side margin.
Custom Typography: Includes a "Luxury" setting that uses Bold Serif Unicode characters (e.g., 𝟏𝐇 𝐎𝐩𝐞𝐧) for a high-end, institutional look.
Dark Mode Optimized: Defaulted to Bright White for maximum contrast.
🚀 Key Features:
Zero-Lag Anchor: Uses time-based coordinates to ensure the line never repaints.
Smart Visibility: Works perfectly even if you are viewing the 1H chart itself (prevents the "disappearing line" bug).
Price Tags: Displays the exact price with a $ symbol.
PRO Strategy (The "Confluence" Setup): Load this indicator together with the "15m Open" version.
When Price is above the 15m Open AND the 1H Open → Strong Buy Signal.
When Price is below both → Strong Sell Signal.
Settings:
Font Style: Modern, Luxury, or Hacker.
Offset: Move the label right/left.
Color: Fully customizable.
Trend Strength [OmegaTools]
Trend Strength is a quantitative regime oscillator designed to measure directional pressure and trend quality by blending price structure, return-dependence, realized intrabar expansion, and volume participation into a single normalized signal. The goal is not to predict, but to classify market state: when price action is in an expansionary/distributionary phase versus when it is in a contractionary/accumulation phase, so you can align execution and risk with the prevailing environment.
Core concept and methodology
The indicator aggregates four components computed on stable rolling windows and mapped into comparable ranges:
1. Price location / structural positioning (100-bar range)
A normalized price-location metric (position of close within the rolling high–low range) is transformed into a non-linear “strength” profile. This emphasizes meaningful departures from the middle of the range and penalizes indecision, producing a structure-aware contribution rather than a raw oscillator.
2. Return-dependence / directional persistence (100 bars)
A correlation term measures the relationship between the current return (close − close[1]) and the prior price level (close[1]). This helps detect environments where movement is more persistent or more mean-reverting, providing a statistical component that complements pure price-location signals.
3. Realized expansion / volatility proxy (50-bar accumulation, 300-bar normalization)
Intrabar expansion is approximated via the absolute candle body relative to the full range, aggregated over a short window to represent realized “effort” and then normalized over a longer window. This captures whether price is moving with meaningful body expansion versus compressing and stalling.
4. Volume participation (11-bar accumulation, 300-bar normalization)
A rolling volume sum is normalized over a longer window to quantify participation. This helps separate “thin” moves from moves supported by broader activity, without relying on exchange-specific volume assumptions.
The final oscillator is a weighted blend of these four normalized components, scaled for readability. The output is intentionally centered around two actionable regimes rather than a symmetric overbought/oversold framework.
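The published source is not reproduced here, but a rough Pine sketch of the four component families described above may help make the construction concrete. All names, guards, normalizations, and the equal weighting below are assumptions, so this sketch will not reproduce the indicator's actual 0 / −1.3 thresholds:
//@version=5
indicator("Trend Strength components (sketch)")
// 1. Price location inside the rolling 100-bar range (0..1)
rangeHi = ta.highest(high, 100)
rangeLo = ta.lowest(low, 100)
loc     = (close - rangeLo) / math.max(rangeHi - rangeLo, syminfo.mintick)
// 2. Return-dependence: correlation of the current return with the prior price level
dep     = ta.correlation(close - close[1], close[1], 100)
// 3. Realized expansion: body-to-range "effort" over 50 bars, normalized against its 300-bar peak
effort  = math.sum(math.abs(close - open) / math.max(high - low, syminfo.mintick), 50)
effortN = effort / ta.highest(effort, 300)
// 4. Volume participation: short volume sum relative to a longer window
part    = math.sum(volume, 11) / math.sum(volume, 300)
// Equal-weight blend purely for illustration; the real script uses its own weights and scaling
plot((loc + dep + effortN + part) / 4, "Blend (illustrative)")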
How to read the oscillator
Trend Strength is designed around two main thresholds:
- Distribution / Expansion regime (oscillator above 0)
When the oscillator is above 0, the market is classified as being in a higher-pressure expansion regime. This often corresponds to directional continuation potential, stronger impulse behavior, and reduced suitability for tight mean-reversion tactics.
- Accumulation / Contraction regime (oscillator below −1.3)
When the oscillator is below −1.3, the market is classified as being in a contraction/accumulation regime. This frequently corresponds to compression, rotation, and lower directional efficiency, where breakouts may be more fragile and mean-reversion tactics may be more appropriate (depending on instrument and session conditions).
Values between 0 and −1.3 are treated as transitional/neutral, where the market is not clearly committing to either regime.
Continuous Mode vs Standard Mode
Trend Strength includes an optional Continuous Mode to improve interpretability during regime transitions:
- Standard Mode colors only when the oscillator is firmly in one of the two regimes (above 0 or below −1.3). Neutral zones remain uncolored, keeping the display conservative.
- Continuous Mode adds persistence logic: once a regime is confirmed, intermediate values are rendered with a lighter shade of the last confirmed regime until the opposite regime is confirmed. This reduces visual noise, helps maintain a consistent directional bias framework, and is particularly useful for intraday execution and session trend management.
Visual design and bar coloring
The oscillator line is color-coded:
- Purple: distribution / expansion regime
- Orange: accumulation / contraction regime
Neutral/transitional values are displayed in grey (or lightly shaded in Continuous Mode based on last confirmed regime).
Optionally, the indicator can color price bars using the same regime logic, allowing rapid at-a-glance regime recognition directly on the chart.
Practical use cases
- Regime filter for strategies: enable trend-following logic only in expansion regimes; enable mean-reversion or range logic in contraction regimes.
- Risk adjustment: increase/decrease position sizing or tighten/widen stops based on regime classification.
- Confirmation layer: combine with structure tools (market structure, VWAP, key levels) to validate whether conditions support continuation or imply compression.
- Session management: identify when a session is behaving as a trend day versus a rotational day, improving trade selection and reducing overtrading.
Notes
Trend Strength is a regime classifier and contextual tool. It does not guarantee future direction and should be integrated into a complete decision process (risk management, market structure, session context, and instrument-specific behavior).
© OmegaTools
Sigmoid Allocation Indicator & Dashboard
TL;DR This sigmoid-based allocation indicator tells you what percentage of your portfolio to invest based on how much the market has dropped.
Market at all-time high? → Stay defensive, invest less (e.g., 30%)
Market crashed hard? → Get aggressive, invest more (e.g., 100%)
The "sigmoid" part just means the transition between these two extremes follows a smooth S-shaped curve.
Description
This indicator is a sigmoid-based allocation system that dynamically adjusts portfolio exposure based on market drawdown.
It compares multiple steepness curves (K values) to find your optimal risk profile for leveraged ETF strategies, but it can also be used to scale in and out of stocks or crypto, and to decide whether to use leverage at all.
The Sigmoid Allocation Dashboard helps you dynamically adjust portfolio allocation based on how much a market has dropped from its all-time high.
I've implemented it using a sigmoid (S-curve) function that dynamically calculates the optimal allocation percentages. Depending on market conditions, the S-curves transition between defensive and aggressive allocations.
The Math Behind It (if you are a geek like me)
This indicator uses the sigmoid function to create smooth S-curve transitions:
α(D) = α_min + (α_max - α_min) × σ(k × (D - D_mid))
Where:
σ(x) = 1 / (1 + e^(-x)) ← Standard sigmoid function
You can also check it here:
// Sigmoid function: σ(x) = 1 / (1 + e^(-x))
sigmoid(float x) =>
    1.0 / (1.0 + math.exp(-x))
// Alpha calculation: α(D) = α_min + (α_max - α_min) × σ(k × (D - D_mid))
calcAlpha(float drawdown, float k, float a_min, float a_max, float d_midpoint) =>
    sig_input = k * (drawdown - d_midpoint) / 100.0
    a_min + (a_max - a_min) * sigmoid(sig_input)
User parameters (you can tweak these):
Allocation Min (%): Your baseline allocation when markets are at ATH (default: 30%)
Allocation Max (%): Your maximum allocation during deep drawdowns (default: 100%)
D_mid (%): The drawdown level where you want to be at the midpoint (default: 25%)
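Using calcAlpha from the snippet above with these defaults (30 / 100 / 25), two quick sanity checks (values hand-computed and purely illustrative):
// At the midpoint drawdown of 25%: σ(0) = 0.5, so allocation = 30 + 70 * 0.5 = 65%
a_mid = calcAlpha(25.0, 10.0, 30.0, 100.0, 25.0)   // ≈ 65.0
// At an all-time high (0% drawdown) with K = 5: σ(-1.25) ≈ 0.22, so allocation ≈ 46%
a_ath = calcAlpha(0.0, 5.0, 30.0, 100.0, 25.0)     // ≈ 45.6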
Why do I like the sigmoid and not a straight line?
Unlike linear models, the sigmoid creates "floors" and "ceilings" for your allocation. It transitions smoothly, no sudden jumps, and you never exceed your defined min/max bounds.
Understand the K Values (Steepness)
The K parameter controls how quickly your allocation shifts from defensive to aggressive.
A lower K (for example K=5) will give you a gradual transition, but at 0% drawdown you are already at a 46% allocation.
A higher K (like K=40) will give you a sharp transition, but at 0% drawdown you are close to the minimum allocation. On the other hand, a higher K will give close to 100% allocation when the markets are at new lows.
The example below illustrates this well, when the S&P 500 reached new lows in October 2022:
Different K values affect the sigmoid curves (and your allocations) differently. The chart below illustrates how K shapes the sigmoid curves:
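To make the steepness trade-off concrete, the same calcAlpha helper gives these values at the deep-drawdown end with the default 30 / 100 / 25 settings (hand-computed, purely illustrative):
// At a 50% drawdown: K = 5 reaches only ~84%, while K = 40 is effectively at the 100% cap
a_k5_deep  = calcAlpha(50.0, 5.0, 30.0, 100.0, 25.0)    // ≈ 84.4
a_k40_deep = calcAlpha(50.0, 40.0, 30.0, 100.0, 25.0)   // ≈ 100.0
// At 0% drawdown, K = 40 sits at ≈ 30% (the defined minimum), versus ≈ 46% for K = 5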
Read the Dashboard
The main dashboard shows:
Current drawdown from ATH
Allocation % for each K value
Suggested action (Defensive → MAX LONG)
Use the Reference Chart
The static reference panel shows what your allocation would be at various drawdown levels (0%, 10%, 20%, 30%, 40%, 50%), helping you plan ahead.
Identify Zones
The color-coded chart background shows:
- 🟢 Green Zone: Aggressive positioning - "Buy the Dip"
- 🟡 Yellow Zone: Transition zone - Scaling in/out
- 🔴 Red Zone: Defensive positioning - Protect ya gains
Use Cases
Use case 1: Leveraged ETF Portfolio Management (this is my main use case)
When holding leveraged ETFs like TQQQ or UPRO, volatility makes it important to:
- Reduce exposure near all-time highs (when crashes hurt most)
- Increase exposure during drawdowns (when recovery potential is highest)
Example Strategy:
- At ATH: Hold 30% TQQQ, 70% cash/bonds or other uncorrelated assets
- At 25% drawdown: Hold 65% TQQQ, 35% cash/bonds
- At 40%+ drawdown: Hold 100% TQQQ
Use case 2: Diversified Leveraged Portfolio
Compare different K values for different assets:
- Use K = 10 for broad market (QQQ/SPY exposure via TQQQ/UPRO)
- Use K = 25 for sector bets (TECL, SOXL, TMF) that you want to scale into faster
Use case 3: Systematic Rebalancing Signals
Use the alerts to trigger rebalancing:
- Alert when K3 allocation crosses above 90% (time to add)
- Alert when drawdown exceeds your D_mid threshold
- Alert when market returns to within 5% of ATH
Tips for Best Results
It works best in longer time frames
Adjust the ATR lookback window
Match your risk tolerance level
I use this for index investing and stocks and haven't tried with crypto
Thanks for using the indicator and let me know if you have any feedback :)
- Henrique Centieiro
world market Zones (IST) + Prev Day S/R + Pivot
🧠 PART 1 — SESSION VOLATILITY ENGINE (SCRIPT 1)
This part maps time-based market behavior; it is not a price indicator.
✅ What it Detects
All times are locked to IST (Asia/Kolkata):
Zone | Purpose | Why it matters
London (13:00–17:30) | EU money flow | Trend initiations often start here
NY (18:30–23:30) | US volatility | Expansion + reversals
Overlap (17:30–21:30) | Highest liquidity window | Breakouts + fakeouts
EIA (Wed 20:30–21:30) | Crude inventory release | Explosive oil moves
Important for analysing session-start shock points.
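A hedged Pine sketch of how these IST windows can be detected (session strings mirror the table above; colors and the day code for the EIA window are assumptions):
//@version=5
indicator("IST session zones (sketch)", overlay = true)
// time() returns na outside the session, so a na-check flags each window; all sessions in Asia/Kolkata
inLondon  = not na(time(timeframe.period, "1300-1730", "Asia/Kolkata"))
inNewYork = not na(time(timeframe.period, "1830-2330", "Asia/Kolkata"))
inOverlap = not na(time(timeframe.period, "1730-2130", "Asia/Kolkata"))
inEIA     = not na(time(timeframe.period, "2030-2130:4", "Asia/Kolkata"))   // ":4" = Wednesday
sessColor = inEIA ? color.new(color.red, 80) : inOverlap ? color.new(color.orange, 85) : inLondon ? color.new(color.blue, 90) : inNewYork ? color.new(color.purple, 90) : na
bgcolor(sessColor)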
🧠 What this section REALLY gives you
You now see:
When liquidity enters
When algos reset
When news shock candles form
Where false breakouts happen (often at session flips)
This is behavioral timing, not lagging math.
Not suitable for:
1D+ charts (session logic loses meaning)
Assets without clear London/NY behavior
🏆 What type of trader this script is for
This is NOT indicator trading.
This is for traders who:
✔ Trade liquidity sweeps
✔ Watch session opens
✔ Understand dealer positioning
✔ Trade crude, indices, forex
It’s basically a smart money timing + institutional level combo.
HAPPY TRADING
LiveTracker by N&M
LiveTracker is a real-time trade execution and accounting engine built on top of statistically validated backtest states.
It mirrors live trading conditions with precise fee modeling, partial take-profits, trailing stops, and liquidation logic.
Each trade is tracked with both mark-to-market PnL and “net if closed now” metrics for full transparency.
Designed as a modular Pine Script® library, it enables reliable, state-driven live execution without repainting.
Z-Score STDEMA Bands
Z-Score STDEMA Bands is a mean-reversion and regime-strength indicator built on normalized price deviation.
The indicator converts price into a Z-Score, measuring how many standard deviations the current price is from its moving average over a configurable lookback. This makes signals comparable across assets and timeframes.
On top of the Z-Score, the script applies an EMA of the Z-Score and dynamically builds upper and lower STDEMA bands using the rolling standard deviation of the Z-Score itself. These bands adapt to volatility in deviation, not price.
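A minimal sketch of that construction (lookbacks, the EMA length, and the band multiplier are assumptions; the published defaults may differ):
//@version=5
indicator("Z-Score STDEMA Bands (sketch)")
len  = input.int(100, "Z-Score Lookback")
z    = (close - ta.sma(close, len)) / ta.stdev(close, len)   // deviation from the mean, in standard deviations
zEMA = ta.ema(z, 20)                                         // EMA of the Z-Score
zDev = ta.stdev(z, len)                                      // volatility of the deviation itself, not of price
plot(z, "Z-Score", color = color.orange)
plot(zEMA + 2 * zDev, "Upper STDEMA band", color = color.green)
plot(zEMA - 2 * zDev, "Lower STDEMA band", color = color.red)
hline(2, "+2σ")
hline(-2, "-2σ")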
How to read it:
Z-Score (orange line): Distance from mean in standard deviations.
Horizontal levels (±1, ±2, ±3): Statistical extremes and mean-reversion zones.
Green/Red bands: EMA-based dynamic deviation envelopes.
Blue bars: Strong positive deviation (bullish expansion beyond statistical expectation).
Yellow bars: Strong negative deviation (bearish expansion beyond statistical expectation).
Use cases:
Identify overextended price conditions in a normalized framework.
Detect trend strength vs. mean-reversion (expansion outside bands).
Filter trades by statistical significance, not raw price movement.
P/E Ratio (TTM)
This indicator plots the trailing P/E ratio (TTM) using GAAP EPS (TTM) sourced directly from TradingView's fundamental data. It includes valuation-zone color coding, yearly labels, and a clean, compressed visual layout suitable for most equities.
The goal is to provide a fast, intuitive view of how expensive or cheap a stock is relative to its historical earnings power.
Note:
The indicator caps P/E values around 120 for visual clarity.
Negative P/E ratios are intentionally excluded, since P/E is undefined when EPS is negative.
You can adjust the cap or remove it entirely if you prefer a full‑range view.
This tool is especially useful for identifying periods when a company is trading at historically elevated or discounted valuation levels.
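For reference, a hedged sketch of the core ratio; the exact fundamental field the published script requests is not shown here, so the EARNINGS_PER_SHARE_BASIC / TTM pair below is an assumption:
//@version=5
indicator("P/E Ratio (TTM) sketch")
epsTTM = request.financial(syminfo.tickerid, "EARNINGS_PER_SHARE_BASIC", "TTM")   // assumed field name
pe     = epsTTM > 0 ? close / epsTTM : na    // P/E is left undefined when EPS is negative
plot(math.min(pe, 120), "P/E (capped at 120 for visual clarity)")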