[Sumit Ingole] 200-EMA SUMIT INGOLE
Indicator Name: 200 EMA Strategy Pro
Overview
The 200-period Exponential Moving Average (EMA) is widely regarded as the "Golden Line" by professional traders and institutional investors. This indicator is a powerful tool designed to identify the long-term market trend and filter out short-term market noise.
By giving more weight to recent price data than a simple moving average, this EMA reacts more fluidly to market shifts while remaining a rock-solid trend confirmation tool.
Key Features
Trend Filter: Instantly distinguish between a Bull market and a Bear market.
Price above 200 EMA: Bullish Bias
Price below 200 EMA: Bearish Bias
Dynamic Support & Resistance: Acts as a psychological floor or ceiling where major institutions often place buy or sell orders.
Institutional Benchmark: Since many hedge funds and banks track this specific level, price reactions near the 200 EMA are often highly significant.
Reduced Lag: Exponential weighting responds to new prices faster than a simple moving average of the same length, so trend shifts appear with less delay than on comparable lagging averages.
How to Trade with 200 EMA
Trend Confirmation: Only look for "Buy" setups when the price is trading above the 200 EMA to ensure you are trading with the primary trend.
Mean Reversion: When the price stretches too far away from the 200 EMA, it often acts like a magnet, pulling the price back toward it.
The "Death Cross" & "Golden Cross": Use this in conjunction with shorter EMAs (like the 50 EMA) to identify major trend reversals.
Exit Strategy: Can be used as a trailing stop-loss for long-term positional trades.
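The indicator's source is not reproduced here, but the trend-filter and crossover ideas listed above can be sketched in a few lines of Pine Script v6; the 50 EMA pairing, colors, and shapes are illustrative choices, not the published settings:
//@version=6
indicator("200 EMA Trend Filter (illustrative)", overlay = true)
ema200 = ta.ema(close, 200)                  // long-term trend line
ema50  = ta.ema(close, 50)                   // shorter EMA for cross signals
bullishBias = close > ema200                 // above the 200 EMA = bullish bias
goldenCross = ta.crossover(ema50, ema200)    // 50 EMA crosses above 200 EMA
deathCross  = ta.crossunder(ema50, ema200)   // 50 EMA crosses below 200 EMA
plot(ema200, "200 EMA", color = bullishBias ? color.green : color.red, linewidth = 2)
plotshape(goldenCross, title = "Golden Cross", style = shape.triangleup, location = location.belowbar, color = color.green)
plotshape(deathCross, title = "Death Cross", style = shape.triangledown, location = location.abovebar, color = color.red)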
Best Used On:
Timeframes: Daily (1D), 4-Hour (4H), and Weekly (1W) for maximum accuracy.
Assets: Highly effective for Stocks, Forex (Major pairs), and Crypto (BTC/ETH).
Disclaimer: This tool is for educational and analytical purposes only. Trading involves risk, and it is recommended to use this indicator alongside other technical analysis tools for better confirmation.
Indicators and strategies
PineStats
█ OVERVIEW
PineStats is a comprehensive statistical analysis library for Pine Script v6, providing 104 functions across 6 modules. Built for quantitative traders, researchers, and indicator developers who need professional-grade statistics without reinventing the wheel.
Use it for building mean-reversion strategies, analyzing return distributions, measuring correlations, or testing for market regimes.
█ MODULES
CORE STATISTICS (20 functions)
• Central tendency: mean, median, WMA, EMA
• Dispersion: variance, stdev, MAD, range
• Standardization: z-score, robust z-score, normalize, percentile
• Distribution shape: skewness, kurtosis
PROBABILITY DISTRIBUTIONS (17 functions)
• Normal: PDF, CDF, inverse CDF (quantile function)
• Power-law: Hill estimator, MLE alpha, survival function
• Exponential: PDF, CDF, rate estimation
• Normality testing: Jarque-Bera test
ENTROPY (9 functions)
• Shannon entropy (information theory)
• Tsallis entropy (non-extensive, fat-tail sensitive)
• Permutation entropy (ordinal patterns)
• Approximate entropy (regularity measure)
• Entropy-based regime detection
PROBABILITY (21 functions)
• Win rates and expected value
• First passage time estimation
• TP/SL probability analysis
• Conditional probability and Bayes updates
• Streak and drawdown probabilities
REGRESSION (19 functions)
• Linear regression: slope, intercept, forecast
• Goodness of fit: R², adjusted R², standard error
• Statistical tests: t-statistic, p-value, significance
• Trend analysis: strength, angle, acceleration
• Quadratic regression
CORRELATION (18 functions)
• Pearson, Spearman, Kendall correlation
• Covariance, beta, alpha (Jensen's)
• Rolling correlation analysis
• Autocorrelation and cross-correlation
• Information ratio, tracking error
█ QUICK START
import HenriqueCentieiro/PineStats/1 as stats
// Z-score for mean reversion
z = stats.zscore(close, 20)
// Test if returns are normally distributed
returns = (close - close[1]) / close[1]
isGaussian = stats.is_normal(returns, 100, 0.05)
// Regression channel
[mid, upper, lower] = stats.linreg_channel(close, 50, 2.0)   // tuple destructuring; variable names illustrative
// Correlation with benchmark
spyReturns = request.security("SPY", timeframe.period, close / close[1] - 1)
beta = stats.beta(returns, spyReturns, 60)
█ USE CASES
✓ Mean Reversion — z-scores, percentiles, Bollinger-style analysis
✓ Regime Detection — entropy measures, correlation regimes
✓ Risk Analysis — drawdown probability, VaR via quantiles
✓ Strategy Evaluation — expected value, win rates, R:R analysis
✓ Distribution Analysis — normality tests, fat-tail detection
✓ Multi-Asset — beta, alpha, correlation, relative strength
█ NOTES
• All functions return `na` on invalid inputs
• Designed for Pine Script v6
• Fully documented in the library header
• Part of the Pine ecosystem: PineStats, PineQuant, PineCriticality, PineWavelet
█ REFERENCES
• Abramowitz & Stegun — Normal CDF approximation
• Acklam's algorithm — Inverse normal CDF
• Hill estimator — Power-law tail estimation
• Tsallis statistics — Non-extensive entropy
Full documentation in the library header.
mean(src, length)
Calculates the arithmetic mean (simple moving average) over a lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Arithmetic mean of the last `length` values, or `na` if inputs invalid
wma_custom(src, length)
Calculates weighted moving average with linearly decreasing weights
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Weighted moving average, or `na` if inputs invalid
ema_custom(src, length)
Calculates exponential moving average
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Exponential moving average, or `na` if inputs invalid
median(src, length)
Calculates the median value over a lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Median value, or `na` if inputs invalid
variance(src, length)
Calculates population variance over a lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Population variance, or `na` if inputs invalid
stdev(src, length)
Calculates population standard deviation over a lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Population standard deviation, or `na` if inputs invalid
mad(src, length)
Calculates Median Absolute Deviation (MAD) - robust dispersion measure
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: MAD value, or `na` if inputs invalid
data_range(src, length)
Calculates the range (highest - lowest) over a lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Range value, or `na` if inputs invalid
zscore(src, length)
Calculates z-score (number of standard deviations from mean)
Parameters:
src (float) : Source series
length (simple int) : Lookback period for mean and stdev calculation (must be >= 2)
Returns: Z-score, or `na` if inputs invalid or stdev is zero
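A short usage sketch for a z-score mean-reversion filter, in the same style as the Quick Start above; the ±2 and 0.25 thresholds are illustrative assumptions, not library defaults:
import HenriqueCentieiro/PineStats/1 as stats
z = stats.zscore(close, 20)          // distance from the 20-bar mean in stdev units
stretchedLow  = z < -2.0             // unusually far below the mean (long candidate)
stretchedHigh = z >  2.0             // unusually far above the mean (short candidate)
backToMean    = math.abs(z) < 0.25   // candidate exit zone near the mean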
zscore_robust(src, length)
Calculates robust z-score using median and MAD (resistant to outliers)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 2)
Returns: Robust z-score, or `na` if inputs invalid or MAD is zero
normalize(src, length)
Normalizes value to the [0, 1] range using min-max scaling
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Normalized value in [0, 1], or `na` if inputs invalid or range is zero
percentile(src, length)
Calculates percentile rank of current value within lookback window
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Percentile rank (0 to 100), or `na` if inputs invalid
winsorize(src, length, lower_pct, upper_pct)
Winsorizes values by clamping to percentile bounds (reduces outlier impact)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
lower_pct (simple float) : Lower percentile bound (0-100, e.g., 5 for 5th percentile)
upper_pct (simple float) : Upper percentile bound (0-100, e.g., 95 for 95th percentile)
Returns: Winsorized value clamped to bounds
skewness(src, length)
Calculates sample skewness (measure of distribution asymmetry)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 3)
Returns: Skewness value (negative = left tail, positive = right tail), or `na` if invalid
kurtosis(src, length)
Calculates excess kurtosis (measure of distribution tail heaviness)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 4)
Returns: Excess kurtosis (>0 = heavy tails, <0 = light tails), or `na` if invalid
count_valid(src, length)
Counts non-na values in lookback window (useful for data quality checks)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Count of valid (non-na) values
sum(src, length)
Calculates sum over lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Sum of values, or `na` if inputs invalid
cumsum(src)
Calculates cumulative sum (running total from first bar)
Parameters:
src (float) : Source series
Returns: Cumulative sum
change(src, length)
Returns the change (difference) from n bars ago
Parameters:
src (float) : Source series
length (simple int) : Number of bars to look back (must be >= 1)
Returns: Current value minus value from `length` bars ago
roc(src, length)
Calculates Rate of Change (percentage change from n bars ago)
Parameters:
src (float) : Source series
length (simple int) : Number of bars to look back (must be >= 1)
Returns: Percentage change as decimal (0.05 = 5%), or `na` if invalid
normal_pdf_standard(x)
Calculates the standard normal probability density function (PDF)
Parameters:
x (float) : The value to evaluate
Returns: PDF value at x for standard normal N(0,1)
normal_pdf(x, mu, sigma)
Calculates the normal probability density function (PDF)
Parameters:
x (float) : The value to evaluate
mu (float) : Mean of the distribution (default: 0)
sigma (float) : Standard deviation (default: 1, must be > 0)
Returns: PDF value at x for normal N(mu, sigma²)
normal_cdf_standard(x)
Calculates the standard normal cumulative distribution function (CDF)
Parameters:
x (float) : The value to evaluate
Returns: Probability P(X <= x) for standard normal N(0,1)
@description Uses Abramowitz & Stegun approximation (formula 7.1.26), accurate to ~1.5e-7
normal_cdf(x, mu, sigma)
Calculates the normal cumulative distribution function (CDF)
Parameters:
x (float) : The value to evaluate
mu (float) : Mean of the distribution (default: 0)
sigma (float) : Standard deviation (default: 1, must be > 0)
Returns: Probability P(X <= x) for normal N(mu, sigma²)
normal_inv_standard(p)
Calculates the inverse standard normal CDF (quantile function)
Parameters:
p (float) : Probability value (must be in (0, 1))
Returns: x such that P(X <= x) = p for standard normal N(0,1)
@description Uses Acklam's algorithm, accurate to ~1.15e-9
normal_inv(p, mu, sigma)
Calculates the inverse normal CDF (quantile function)
Parameters:
p (float) : Probability value (must be in (0, 1))
mu (float) : Mean of the distribution
sigma (float) : Standard deviation (must be > 0)
Returns: x such that P(X <= x) = p for normal N(mu, sigma²)
power_law_alpha(src, length, tail_pct)
Estimates power-law exponent (alpha) using Hill estimator
Parameters:
src (float) : Source series (typically absolute returns or drawdowns)
length (simple int) : Lookback period (must be >= 10 for reliable estimates)
tail_pct (simple float) : Percentage of data to use for tail estimation (default: 0.1 = top 10%)
Returns: Estimated alpha (tail index), typically 2-4 for financial data
@description Alpha < 2 indicates infinite variance (very heavy tails)
@description Alpha < 3 indicates infinite kurtosis
@description Alpha > 4 suggests near-Gaussian behavior
power_law_alpha_mle(src, length, x_min)
Estimates power-law alpha using maximum likelihood (Clauset method)
Parameters:
src (float) : Source series (positive values expected)
length (simple int) : Lookback period (must be >= 20)
x_min (float) : Minimum threshold for power-law behavior
Returns: Estimated alpha using MLE
power_law_pdf(x, alpha, x_min)
Calculates power-law probability density (Pareto Type I)
Parameters:
x (float) : Value to evaluate (must be >= x_min)
alpha (float) : Power-law exponent (must be > 1)
x_min (float) : Minimum value / scale parameter (must be > 0)
Returns: PDF value
power_law_survival(x, alpha, x_min)
Calculates power-law survival function P(X > x)
Parameters:
x (float) : Value to evaluate (must be >= x_min)
alpha (float) : Power-law exponent (must be > 1)
x_min (float) : Minimum value / scale parameter (must be > 0)
Returns: Probability of exceeding x
power_law_ks(src, length, alpha, x_min)
Tests if data follows power-law using simplified Kolmogorov-Smirnov
Parameters:
src (float) : Source series
length (simple int) : Lookback period
alpha (float) : Estimated alpha from power_law_alpha()
x_min (float) : Threshold value
Returns: KS statistic (lower = better fit, typically < 0.1 for good fit)
is_power_law(src, length, tail_pct, ks_threshold)
Simple test if distribution appears to follow power-law
Parameters:
src (float) : Source series
length (simple int) : Lookback period
tail_pct (simple float) : Tail percentage for alpha estimation
ks_threshold (simple float) : Maximum KS statistic for acceptance (default: 0.1)
Returns: true if KS test suggests power-law fit
exp_pdf(x, lambda)
Calculates exponential probability density function
Parameters:
x (float) : Value to evaluate (must be >= 0)
lambda (float) : Rate parameter (must be > 0)
Returns: PDF value
exp_cdf(x, lambda)
Calculates exponential cumulative distribution function
Parameters:
x (float) : Value to evaluate (must be >= 0)
lambda (float) : Rate parameter (must be > 0)
Returns: Probability P(X <= x)
exp_lambda(src, length)
Estimates exponential rate parameter (lambda) using MLE
Parameters:
src (float) : Source series (positive values)
length (simple int) : Lookback period
Returns: Estimated lambda (1/mean)
jarque_bera(src, length)
Calculates Jarque-Bera test statistic for normality
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 10)
Returns: JB statistic (higher = more deviation from normality)
@description Under normality, JB ~ chi-squared(2). JB > 6 suggests non-normality at 5% level
is_normal(src, length, significance)
Tests if distribution is approximately normal
Parameters:
src (float) : Source series
length (simple int) : Lookback period
significance (simple float) : Significance level (default: 0.05)
Returns: true if Jarque-Bera test does not reject normality
shannon_entropy(src, length, n_bins)
Calculates Shannon entropy from a probability distribution
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 10)
n_bins (simple int) : Number of histogram bins for discretization (default: 10)
Returns: Shannon entropy in bits (log base 2)
@description Higher entropy = more randomness/uncertainty, lower = more predictability
shannon_entropy_norm(src, length, n_bins)
Calculates normalized Shannon entropy
Parameters:
src (float) : Source series
length (simple int) : Lookback period
n_bins (simple int) : Number of histogram bins
Returns: Normalized entropy where 0 = perfectly predictable, 1 = maximum randomness
tsallis_entropy(src, length, q, n_bins)
Calculates Tsallis entropy with q-parameter
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 10)
q (float) : Entropic index (q=1 recovers Shannon entropy)
n_bins (simple int) : Number of histogram bins
Returns: Tsallis entropy value
@description q < 1: emphasizes rare events (fat tails)
@description q = 1: equivalent to Shannon entropy
@description q > 1: emphasizes common events
optimal_q(src, length)
Estimates optimal q parameter from kurtosis
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Estimated q value that best captures the distribution's tail behavior
@description Uses relationship: q ≈ (5 + kurtosis) / (3 + kurtosis) for kurtosis > 0
tsallis_q_gaussian(x, q, beta)
Calculates Tsallis q-Gaussian probability density
Parameters:
x (float) : Value to evaluate
q (float) : Tsallis q parameter (must be < 3)
beta (float) : Width parameter (inverse temperature, must be > 0)
Returns: q-Gaussian PDF value
@description q=1 recovers standard Gaussian
permutation_entropy(src, length, order)
Calculates permutation entropy (ordinal pattern complexity)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 20)
order (simple int) : Embedding dimension / pattern length (2-5, default: 3)
Returns: Normalized permutation entropy
@description Measures complexity of temporal ordering patterns
@description 0 = perfectly predictable sequence, 1 = random
approx_entropy(src, length, m, r)
Calculates Approximate Entropy (ApEn) - regularity measure
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 50)
m (simple int) : Embedding dimension (default: 2)
r (simple float) : Tolerance as fraction of stdev (default: 0.2)
Returns: Approximate entropy value (higher = more irregular/complex)
@description Lower ApEn indicates more self-similarity and predictability
entropy_regime(src, length, q, n_bins)
Detects market regime based on entropy level
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback period
q (float) : Tsallis q parameter (use optimal_q() or default 1.5)
n_bins (simple int) : Number of histogram bins
Returns: Regime indicator: -1 = trending (low entropy), 0 = transition, 1 = ranging (high entropy)
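A hedged sketch of entropy-based regime coloring using optimal_q() and entropy_regime(); the lookbacks, bin count, and color mapping are illustrative choices:
//@version=6
indicator("Entropy regime (illustrative)")
import HenriqueCentieiro/PineStats/1 as stats
returns = (close - close[1]) / close[1]
q = stats.optimal_q(returns, 200)                    // tail-aware entropic index
regime = stats.entropy_regime(returns, 100, q, 10)   // -1 trending, 0 transition, 1 ranging
bgcolor(regime == -1 ? color.new(color.green, 85) : regime == 1 ? color.new(color.orange, 85) : na)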
entropy_risk(src, length)
Calculates entropy-based risk indicator
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback period
Returns: Risk score in [0, 1], where 1 = maximum divergence from Gaussian
hit_rate(src, length)
Calculates hit rate (probability of positive outcome) over lookback
Parameters:
src (float) : Source series (positive values count as hits)
length (simple int) : Lookback period
Returns: Hit rate as decimal
hit_rate_cond(condition, length)
Calculates hit rate for custom condition over lookback
Parameters:
condition (bool) : Boolean series (true = hit)
length (simple int) : Lookback period
Returns: Hit rate as decimal
expected_value(src, length)
Calculates expected value of a series
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Expected value (mean)
expected_value_trade(win_prob, take_profit, stop_loss)
Calculates expected value for a trade with TP and SL levels
Parameters:
win_prob (float) : Probability of hitting TP (0-1)
take_profit (float) : Take profit in price units or %
stop_loss (float) : Stop loss in price units or % (positive value)
Returns: Expected value per trade
@description EV = (win_prob * TP) - ((1 - win_prob) * SL)
breakeven_winrate(take_profit, stop_loss)
Calculates breakeven win rate for given TP/SL ratio
Parameters:
take_profit (float) : Take profit distance
stop_loss (float) : Stop loss distance
Returns: Required win rate for breakeven (EV = 0)
reward_risk_ratio(take_profit, stop_loss)
Calculates the reward-to-risk ratio
Parameters:
take_profit (float) : Take profit distance
stop_loss (float) : Stop loss distance
Returns: R:R ratio
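A small worked example for the three helpers above, assuming a 2:1 reward-to-risk setup with a 40% win rate; the numbers in the comments follow from the documented EV formula:
import HenriqueCentieiro/PineStats/1 as stats
winProb = 0.40
tp = 2.0   // take-profit distance, in R units
sl = 1.0   // stop-loss distance, in R units
ev = stats.expected_value_trade(winProb, tp, sl)   // 0.40*2.0 - 0.60*1.0 = +0.20 per trade
be = stats.breakeven_winrate(tp, sl)               // EV = 0 at SL/(TP+SL) = 1/3 ≈ 0.333
rr = stats.reward_risk_ratio(tp, sl)               // 2.0 / 1.0 = 2.0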
fpt_probability(src, length, target, max_bars)
Estimates probability of price reaching target within N bars
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback for volatility estimation
target (float) : Target move (in same units as src, e.g., % return)
max_bars (simple int) : Maximum bars to consider
Returns: Probability of reaching target within max_bars
@description Based on random walk with drift approximation
fpt_mean(src, length, target)
Estimates mean first passage time to target level
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback for volatility estimation
target (float) : Target move
Returns: Expected number of bars to reach target (can be infinite)
fpt_historical(src, length, target)
Counts historical bars to reach target from each point
Parameters:
src (float) : Source series (typically price or returns)
length (simple int) : Lookback period
target (float) : Target move from each starting point
Returns: Array of first passage times (na if target not reached within lookback)
tp_probability(src, length, tp_distance, sl_distance)
Estimates probability of hitting TP before SL
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback for estimation
tp_distance (float) : Take profit distance (positive)
sl_distance (float) : Stop loss distance (positive)
Returns: Probability of TP being hit first
trade_probability(src, length, tp_pct, sl_pct)
Calculates complete trade probability and EV analysis
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback period
tp_pct (float) : Take profit percentage
sl_pct (float) : Stop loss percentage
Returns: Tuple:
cond_prob(condition_a, condition_b, length)
Calculates conditional probability P(B|A) from historical data
Parameters:
condition_a (bool) : Condition A (the given condition)
condition_b (bool) : Condition B (the outcome)
length (simple int) : Lookback period
Returns: P(B|A) = P(A and B) / P(A)
bayes_update(prior, likelihood, false_positive)
Updates probability using Bayes' theorem
Parameters:
prior (float) : Prior probability P(H)
likelihood (float) : P(E|H) - probability of evidence given hypothesis
false_positive (float) : P(E|~H) - probability of evidence given hypothesis is false
Returns: Posterior probability P(H|E)
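A worked example for bayes_update() with illustrative numbers, assuming the standard Bayes formula posterior = prior * likelihood / (prior * likelihood + (1 - prior) * false_positive):
import HenriqueCentieiro/PineStats/1 as stats
prior = 0.30           // P(H): base rate of the setup working
likelihood = 0.60      // P(E|H): evidence appears when the setup works
falsePositive = 0.20   // P(E|~H): evidence appears when it does not
posterior = stats.bayes_update(prior, likelihood, falsePositive)
// 0.30*0.60 / (0.30*0.60 + 0.70*0.20) = 0.18 / 0.32 ≈ 0.5625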
streak_prob(win_rate, streak_length)
Calculates probability of N consecutive wins given win rate
Parameters:
win_rate (float) : Single-trade win probability
streak_length (simple int) : Number of consecutive wins
Returns: Probability of streak
losing_streak_prob(win_rate, streak_length)
Calculates probability of experiencing N consecutive losses
Parameters:
win_rate (float) : Single-trade win probability
streak_length (simple int) : Number of consecutive losses
Returns: Probability of losing streak
drawdown_prob(src, length, dd_threshold)
Estimates probability of drawdown exceeding threshold
Parameters:
src (float) : Source series (returns)
length (simple int) : Lookback period
dd_threshold (float) : Drawdown threshold (as positive decimal, e.g., 0.10 = 10%)
Returns: Historical probability of exceeding drawdown threshold
prob_to_odds(prob)
Calculates odds from probability
Parameters:
prob (float) : Probability (0-1)
Returns: Odds (prob / (1 - prob))
odds_to_prob(odds)
Calculates probability from odds
Parameters:
odds (float) : Odds ratio
Returns: Probability (0-1)
implied_prob(decimal_odds)
Calculates implied probability from decimal odds (betting)
Parameters:
decimal_odds (float) : Decimal odds (e.g., 2.5 means $2.50 return per $1 bet)
Returns: Implied probability
logit(prob)
Calculates log-odds (logit) from probability
Parameters:
prob (float) : Probability (must be in (0, 1))
Returns: Log-odds
inv_logit(log_odds)
Calculates probability from log-odds (inverse logit / sigmoid)
Parameters:
log_odds (float) : Log-odds value
Returns: Probability (0-1)
linreg_slope(src, length)
Calculates linear regression slope
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 2)
Returns: Slope coefficient (change per bar)
linreg_intercept(src, length)
Calculates linear regression intercept
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 2)
Returns: Intercept (predicted value at oldest bar in window)
linreg_value(src, length)
Calculates predicted value at current bar using linear regression
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Predicted value at current bar (end of regression line)
linreg_forecast(src, length, offset)
Forecasts value N bars ahead using linear regression
Parameters:
src (float) : Source series
length (simple int) : Lookback period for regression
offset (simple int) : Bars ahead to forecast (positive = future)
Returns: Forecasted value
linreg_channel(src, length, mult)
Calculates linear regression channel with bands
Parameters:
src (float) : Source series
length (simple int) : Lookback period
mult (simple float) : Standard deviation multiplier for bands
Returns: Tuple:
r_squared(src, length)
Calculates R-squared (coefficient of determination)
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: R² value where 1 = perfect linear fit
adj_r_squared(src, length)
Calculates adjusted R-squared (accounts for sample size)
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Adjusted R² value
std_error(src, length)
Calculates standard error of estimate (residual standard deviation)
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Standard error
residual(src, length)
Calculates residual at current bar
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Residual (actual - predicted)
residuals(src, length)
Returns array of all residuals in lookback window
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Array of residuals
t_statistic(src, length)
Calculates t-statistic for slope coefficient
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: T-statistic (slope / standard error of slope)
slope_pvalue(src, length)
Approximates p-value for slope t-test (two-tailed)
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Approximate p-value
is_significant(src, length, alpha)
Tests if regression slope is statistically significant
Parameters:
src (float) : Source series
length (simple int) : Lookback period
alpha (simple float) : Significance level (default: 0.05)
Returns: true if slope is significant at alpha level
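A hedged sketch combining the fit and significance helpers above to flag bars where the 50-bar trend is both reasonably linear and statistically significant; the 0.6 R² cutoff is an arbitrary illustration:
import HenriqueCentieiro/PineStats/1 as stats
slope = stats.linreg_slope(close, 50)     // price change per bar over the window
fit   = stats.r_squared(close, 50)        // how linear the move is (0..1)
pval  = stats.slope_pvalue(close, 50)     // approximate two-tailed p-value
strongUptrend = stats.is_significant(close, 50, 0.05) and fit > 0.6 and slope > 0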
trend_strength(src, length)
Calculates normalized trend strength based on R² and slope
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Trend strength where sign indicates direction
trend_angle(src, length)
Calculates trend angle in degrees
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Angle in degrees (positive = uptrend, negative = downtrend)
linreg_acceleration(src, length)
Calculates trend acceleration (second derivative)
Parameters:
src (float) : Source series
length (simple int) : Lookback period for each regression
Returns: Acceleration (change in slope)
linreg_deviation(src, length)
Calculates deviation from regression line in standard error units
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Deviation in standard error units (like z-score)
quadreg_coefficients(src, length)
Fits quadratic regression and returns coefficients
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 4)
Returns: Tuple: [a, b, c] for y = a*x² + b*x + c
quadreg_value(src, length)
Calculates quadratic regression value at current bar
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Predicted value from quadratic fit
correlation(x, y, length)
Calculates Pearson correlation coefficient between two series
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period (must be >= 3)
Returns: Correlation coefficient
covariance(x, y, length)
Calculates sample covariance between two series
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period (must be >= 2)
Returns: Covariance value
beta(asset, benchmark, length)
Calculates beta coefficient (slope of regression of y on x)
Parameters:
asset (float) : Asset returns series
benchmark (float) : Benchmark returns series
length (simple int) : Lookback period
Returns: Beta coefficient
@description Beta = Cov(asset, benchmark) / Var(benchmark)
alpha(asset, benchmark, length, risk_free)
Calculates alpha (Jensen's alpha / intercept)
Parameters:
asset (float) : Asset returns series
benchmark (float) : Benchmark returns series
length (simple int) : Lookback period
risk_free (float) : Risk-free rate (default: 0)
Returns: Alpha value (excess return not explained by beta)
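A sketch of measuring beta and Jensen's alpha against a benchmark, following the pattern from the Quick Start; the SPY benchmark and 60-bar lookback are illustrative:
import HenriqueCentieiro/PineStats/1 as stats
assetRet = (close - close[1]) / close[1]
benchRet = request.security("SPY", timeframe.period, close / close[1] - 1)
b = stats.beta(assetRet, benchRet, 60)         // Cov(asset, benchmark) / Var(benchmark)
a = stats.alpha(assetRet, benchRet, 60, 0.0)   // excess return not explained by beta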
spearman(x, y, length)
Calculates Spearman rank correlation coefficient
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period (must be >= 3)
Returns: Spearman correlation
@description More robust to outliers than Pearson correlation
kendall_tau(x, y, length)
Calculates Kendall's tau rank correlation (simplified)
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period (must be >= 3)
Returns: Kendall's tau
correlation_change(x, y, length, change_period)
Calculates change in correlation over time
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period for correlation
change_period (simple int) : Period over which to measure change
Returns: Change in correlation
correlation_regime(x, y, length, ma_length)
Detects correlation regime based on level and stability
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period for correlation
ma_length (simple int) : Moving average length for smoothing
Returns: Regime: -1 = negative, 0 = uncorrelated, 1 = positive
correlation_stability(x, y, length, stability_length)
Calculates correlation stability (inverse of volatility)
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback for correlation
stability_length (simple int) : Lookback for stability calculation
Returns: Stability score where 1 = perfectly stable
relative_strength(asset, benchmark, length)
Calculates relative strength of asset vs benchmark
Parameters:
asset (float) : Asset price series
benchmark (float) : Benchmark price series
length (simple int) : Smoothing period
Returns: Relative strength ratio (normalized)
tracking_error(asset, benchmark, length)
Calculates tracking error (standard deviation of excess returns)
Parameters:
asset (float) : Asset returns
benchmark (float) : Benchmark returns
length (simple int) : Lookback period
Returns: Tracking error (annualize by multiplying by sqrt(252) for daily data)
information_ratio(asset, benchmark, length)
Calculates information ratio (risk-adjusted excess return)
Parameters:
asset (float) : Asset returns
benchmark (float) : Benchmark returns
length (simple int) : Lookback period
Returns: Information ratio
capture_ratio(asset, benchmark, length, up_capture)
Calculates up/down capture ratio
Parameters:
asset (float) : Asset returns
benchmark (float) : Benchmark returns
length (simple int) : Lookback period
up_capture (simple bool) : If true, calculate up capture; if false, down capture
Returns: Capture ratio
autocorrelation(src, length, lag)
Calculates autocorrelation at specified lag
Parameters:
src (float) : Source series
length (simple int) : Lookback period
lag (simple int) : Lag for autocorrelation (default: 1)
Returns: Autocorrelation at specified lag
partial_autocorr(src, length)
Calculates partial autocorrelation at lag 1
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: PACF at lag 1 (equals ACF at lag 1)
autocorr_test(src, length, max_lag)
Tests for significant autocorrelation (Ljung-Box inspired)
Parameters:
src (float) : Source series
length (simple int) : Lookback period
max_lag (simple int) : Maximum lag to test
Returns: Sum of squared autocorrelations (higher = more autocorrelation)
cross_correlation(x, y, length, lag)
Calculates cross-correlation at specified lag
Parameters:
x (float) : First series
y (float) : Second series (lagged)
length (simple int) : Lookback period
lag (simple int) : Lag to apply to y (positive = y leads x)
Returns: Cross-correlation at specified lag
cross_correlation_peak(x, y, length, max_lag)
Finds lag with maximum cross-correlation
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period
max_lag (simple int) : Maximum lag to search (both directions)
Returns: Tuple:
Bias Daily 5.0
Bias Daily Indicator with Breakout Alerts
Plots bullish, bearish, and consolidation bias levels based on previous daily candles. Alerts trigger when price breaks the previous daily high, low, or either, and bias lines show key levels on the chart.
5% D/ID or 15%W Drop
Can be used to trigger alerts for 5% daily or intra-day drops, or for 15% drops over the past 5 days. Useful for selling puts.
Bias Daily 3.0
Bias Daily Indicator with Breakout Alerts
This indicator plots bullish, bearish, and consolidation bias levels based on previous daily candles. It draws horizontal lines at prior candle highs and lows and lets you track momentum shifts visually.
It also includes flexible alerts:
Break previous candle high
Break previous candle low
Break either high or low
Perfect for spotting key breakout levels and identifying market bias across all intraday and higher timeframes. Fully customizable line colors, styles, and number of previous levels displayed.
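The published script is closed-source, but the core mechanism described above (alerting on a break of the previous daily candle's high or low) can be sketched roughly as follows; this is an assumption about the approach, not the author's code:
//@version=6
indicator("Previous daily high/low break (illustrative)", overlay = true)
prevHigh = request.security(syminfo.tickerid, "D", high[1], lookahead = barmerge.lookahead_on)
prevLow  = request.security(syminfo.tickerid, "D", low[1],  lookahead = barmerge.lookahead_on)
plot(prevHigh, "Prev day high", color = color.green, style = plot.style_linebr)
plot(prevLow,  "Prev day low",  color = color.red,   style = plot.style_linebr)
brokeHigh = ta.crossover(close, prevHigh)
brokeLow  = ta.crossunder(close, prevLow)
alertcondition(brokeHigh, "Break previous high", "Price broke the previous daily high")
alertcondition(brokeLow,  "Break previous low",  "Price broke the previous daily low")
alertcondition(brokeHigh or brokeLow, "Break either", "Price broke the previous daily high or low")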
MONSTER KHAN GOLED KILLER
This indicator works based on several strategies. It works on gold, BTC, and US oil, but performs best on gold and BTC. Use the M5 to M15 timeframes.
Bias Daily (with Alerts)
This indicator draws bullish/bearish bias lines from prior candles and sends alerts when price breaks the previous candle’s high or low. It’s non-repainting, works on all timeframes, and helps you spot momentum shifts and breakouts early.
NY Open 60-Min VarBox + Pure ICT FVG V8
This is a small indicator that marks the NY Stock Exchange opening candles with a vertical line and a label. It works across different time scales and also finds bullish FVGs. It is a good tool for those who follow the exchange open.
Multi-Timeframe EMA50 Structure + ATR Sniper System
This indicator is a comprehensive trend-following and risk management system designed for swing traders who focus on high-probability structural entries.
It combines three core concepts into one visual tool:
Multi-Timeframe Structure: Tracks the EMA50 across key swing timeframes (2D, 3D, 1W, 2W, 1M).
Momentum Health: Detects MACD Divergences on these timeframes to warn of potential reversals.
ATR Sniper Zone (Risk Control): Visually defines the "Buy Zone" and "Hard Stop Level" based on volatility (Daily ATR), preventing FOMO and ensuring consistent Risk/Reward ratios.
US 3H HARDCORE SCALPING ALGO (FINAL) UTC+9
English: User Manual
1. Overview
This indicator is a high-intensity scalping tool designed to capture volatility during the first 3 hours of the US market session. It combines trend filtering, value-based entries, and volume validation to identify high-probability trade setups.
2. Key Components
Trend Filter (EMA 200): Determines the long-term market direction. Only buy signals are generated above the EMA 200, and only sell signals below it.
Value Area (VWAP): Represents the Volume Weighted Average Price. It acts as a "magnet" for price and a baseline for fair value.
Session Focus (KST 23:00 - 02:00): Highlights the US session opening hours in Korea Standard Time (red background). It automatically calculates the 3-hour window regardless of the chart timeframe.
Volume Filter: Ensures that signals are only generated when trading activity is higher than the 20-period average, filtering out "fake" breakouts.
3. Entry Conditions
Long (Buy) Signal:
Time: Must be within the Red Focus Zone.
Trend: Price is above EMA 200 (Close > EMA 200).
Value: Price is above VWAP.
Reaction: The bar's low touches or dips below VWAP, but the bar closes back above it (pullback recovery).
Volume: Current volume is higher than the 20-period Volume SMA.
Short (Sell) Signal:
Time: Must be within the Red Focus Zone.
Trend: Price is below EMA 200 (Close < EMA 200).
Value: Price is below VWAP.
Reaction: The bar's high touches or rises above VWAP, but the bar closes back below it (rejection recovery).
Volume: Current volume is higher than the 20-period Volume SMA.
4. Visual Elements
Yellow Line: EMA 200.
Aqua Line: VWAP.
Red Background: US 3-hour focus window.
Information Label (Top Right): Real-time display of current trend, VWAP position, and session status.
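A hedged Pine sketch of the long-entry conditions described above; the published script is more involved, and the session handling, short side, and visuals are simplified here:
//@version=6
indicator("US 3H scalp - long condition sketch", overlay = true)
ema200  = ta.ema(close, 200)
volAvg  = ta.sma(volume, 20)
inFocus = not na(time(timeframe.period, "2300-0200", "Asia/Seoul"))   // KST 23:00-02:00 window
longSetup = inFocus and close > ema200 and close > ta.vwap and low <= ta.vwap and volume > volAvg
plotshape(longSetup, title = "Long setup", style = shape.triangleup, location = location.belowbar, color = color.green)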
Phantom Support & Resistance Auto [PT-IND-SR.001]Overview
Phantom Support & Resistance Auto is a context-focused support and resistance indicator designed to visualize price interaction zones derived from multiple market behaviors.
The script does not generate buy or sell signals.
Instead, it provides a structured map of potential reaction areas, allowing traders to better understand where price has historically reacted, consolidated, or extended liquidity.
This indicator is intended to be used as a decision-support and contextual analysis tool, not as a standalone trading system.
How the Script Works
The indicator combines several independent but complementary methods of identifying support and resistance.
Each method captures a different type of market behavior, and all components can be enabled or disabled independently.
1) High / Low Zones (Range Extremes)
This module tracks the highest high and lowest low over a configurable lookback period.
These levels represent recent range boundaries, which often act as reaction zones during consolidation or pullbacks.
They are visualized as extended horizontal levels to preserve historical context.
2) Pivot Zones (Filtered & Merged Levels)
Pivot zones are derived from confirmed pivot highs and lows.
To avoid excessive and overlapping levels, the script applies a merge tolerance based on either:
ATR distance, or Percentage distance from price
Nearby pivot levels are merged into a single zone, and each zone tracks how many times price has interacted with it.
This interaction count adjusts visual strength, creating a relative importance hierarchy rather than treating all levels equally.
An optional higher-timeframe source can be used to project structurally significant levels onto lower timeframes.
3) Wick Liquidity Zones
This module detects candles with disproportionately large wicks relative to their bodies.
Such candles often indicate liquidity grabs, stop runs, or rejection areas.
Detected wick levels are extended forward to highlight areas where liquidity was previously absorbed.
This component focuses on price rejection behavior, not trend direction.
4) PR Levels (Volatility-Adjusted Predicted Ranges)
PR levels are derived from a volatility-adjusted average price model.
Using ATR as a normalization factor, the script calculates a central average along with upper and lower projected zones.
These levels are adaptive, expanding and contracting with volatility, and are intended to represent probabilistic price ranges, not fixed targets.
5) MACD-Based Support & Resistance (Heikin Ashi Source)
This module derives dynamic support and resistance levels based on MACD momentum shifts, calculated from Heikin Ashi price data to reduce noise.
When MACD momentum transitions occur, recent highs and lows are captured and projected as potential reaction zones.
This component focuses on momentum-driven structural changes, rather than static price levels.
Why These Components Are Combined
Each component captures a different dimension of market behavior:
High / Low zones → Range extremes
Pivot zones → Structural reaction points
Wick zones → Liquidity and rejection behavior
PR levels → Volatility-normalized price ranges
MACD S&R → Momentum-based structural shifts
By combining these sources, the indicator provides a layered view of support and resistance, allowing traders to evaluate confluence, alignment, or divergence between different types of levels instead of relying on a single method.
The script does not assume all levels are equal; visual weighting helps distinguish structural levels from situational ones.
Visualization & Outputs
Color-coded horizontal zones with strength-based opacity
Optional glow effects for visual clarity
Independent toggles for each S&R source
A table showing percentage distances between projected PR levels, helping users contextualize price location within its current range
All visual components are configurable and can be selectively disabled to reduce chart clutter.
How to Use
Use this indicator as a context and mapping tool
Observe areas where multiple zone types align for higher contextual significance
Combine with your own entry logic, confirmations, and trade management rules
Suitable for multi-timeframe analysis and market structure studies
Risk Management Notice
This indicator should always be used as part of a well-defined risk management plan.
Support and resistance zones represent areas of potential interaction, not guaranteed reactions.
Users are responsible for applying appropriate:
Position sizing
Stop placement
Risk-to-reward rules
The indicator does not manage risk automatically and should not replace proper risk management practices.
What This Script Is NOT
It is not a buy/sell signal generator
It does not predict future price direction
It does not guarantee reactions at every level
It should not be used as a standalone trading strategy
Originality & Purpose
The originality of this script lies in its structured integration of multiple support and resistance methodologies, each preserved as a distinct analytical layer rather than blended into a single opaque output.
The purpose is to help traders understand where price has interacted with liquidity, structure, and volatility, not to automate trade decisions.
Futures Trend Signaler Final Version
Futures Trend Signaler is a compact, multi-timeframe EMA “trend dashboard” built for intraday futures/index trading.
It displays a clean table (1m + two lower timeframes you choose, e.g., 15s and 1s) that shows:
EMA 9 vs EMA 21 (short-term momentum / immediate trend direction)
EMA 21 vs EMA 50 (trend “sustainability” / broader continuation bias)
Price vs 1m EMA 9 (LTF/Ultra price position relative to the 1-minute momentum line)
Each cell is color-coded (green = bullish, red = bearish, gray = neutral/na) so you can read bias at a glance. When a new EMA crossover occurs, the table also flags it (and tracks the most recent bull/bear cross) so you can quickly see if momentum just flipped—without cluttering the chart with overlapping markers.
Fully customizable table position and text size. Designed to stay lightweight by using minimal higher/lower timeframe requests.
Disclaimer: This indicator is for informational/educational purposes only and is not financial advice. Always use proper risk management.
Pivot Points with Support/Resistance
A) Pivot Resistance Levels (R1, R2, R3…)
Resistance pivots are projected upside levels where price often pauses, rejects, or reverses. They are commonly used as profit targets for long trades and areas to watch for short setups when buyers show weakness.
B) Pivot Support Levels (S1, S2, S3…)
Support pivots are projected downside levels where price often stabilizes or bounces. They are commonly used as profit targets for short trades and areas to watch for long setups when sellers lose momentum.
C) Role in Market Structure
S/R pivots map out probable intraday supply and demand zones based on the prior session’s price action. They help define the day’s trading range and highlight high-probability reaction areas.
D) Trading Interpretation
Acceptance above resistance → bullish continuation
Rejection at resistance → potential pullback or reversal
Acceptance below support → bearish continuation
Rejection at support → potential bounce
Best used with trend context, volume, and confluence (CPR, VAH/VAL, Camarilla)
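The script's exact formulas are not shown, but classic floor-trader pivots derived from the prior session are commonly computed as below (a sketch under that assumption, not the published source):
//@version=6
indicator("Floor pivots (illustrative)", overlay = true)
[pH, pL, pC] = request.security(syminfo.tickerid, "D", [high[1], low[1], close[1]], lookahead = barmerge.lookahead_on)
pp = (pH + pL + pC) / 3        // central pivot from the prior session
r1 = 2 * pp - pL               // first resistance
s1 = 2 * pp - pH               // first support
r2 = pp + (pH - pL)
s2 = pp - (pH - pL)
plot(pp, "PP", color = color.gray)
plot(r1, "R1", color = color.red)
plot(s1, "S1", color = color.green)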
Pivot Points (PP/BC/TC)
A) Central Pivot (CP)
The Central Pivot is the main equilibrium level for the session. It represents fair value where buyers and sellers are balanced. Price above CP shows bullish bias; price below CP shows bearish bias.
B) Top Central (TC)
The Top Central is the upper boundary of the CPR. It acts as short-term resistance in normal conditions and support in strong bullish trends. Acceptance above TC suggests upside continuation.
C) Bottom Central (BC)
The Bottom Central is the lower boundary of the CPR. It acts as short-term support in normal conditions and resistance in strong bearish trends. Acceptance below BC suggests downside continuation.
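The Central Pivot Range behind these three levels is conventionally computed from the prior day's high, low, and close; a minimal sketch under that assumption, not the published source:
//@version=6
indicator("CPR (illustrative)", overlay = true)
[pH, pL, pC] = request.security(syminfo.tickerid, "D", [high[1], low[1], close[1]], lookahead = barmerge.lookahead_on)
cp = (pH + pL + pC) / 3   // Central Pivot
bc = (pH + pL) / 2        // Bottom Central
tc = 2 * cp - bc          // Top Central (BC mirrored around CP)
// on some days tc computes below bc; many implementations swap the two so TC stays on top
plot(cp, "CP", color = color.blue)
plot(tc, "TC", color = color.green)
plot(bc, "BC", color = color.red)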
Dav1zoN Scalp
This script is a 5-minute scalping setup built around SuperTrend.
Entries are taken on SuperTrend flips on the 5-minute chart
Direction is confirmed with the 15-minute SMA200
Above SMA200 → only BUY trades
Below SMA200 → only SELL trades
This helps avoid sideways markets and low-quality signals
SuperTrend adapts to market volatility, while the higher-timeframe SMA200 keeps trades aligned with the main trend.
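A hedged sketch of the described logic: SuperTrend flips on the chart timeframe filtered by the 15-minute SMA200. The SuperTrend factor and ATR length are illustrative, not the author's settings:
//@version=6
indicator("SuperTrend + 15m SMA200 filter (illustrative)", overlay = true)
[st, dir] = ta.supertrend(3.0, 10)                                   // factor, ATR length (illustrative)
sma200_15 = request.security(syminfo.tickerid, "15", ta.sma(close, 200))
flipUp   = dir < 0 and dir[1] >= 0   // SuperTrend flipped to uptrend (direction < 0 is up)
flipDown = dir > 0 and dir[1] <= 0   // SuperTrend flipped to downtrend
longOK   = flipUp and close > sma200_15    // only BUY above the 15m SMA200
shortOK  = flipDown and close < sma200_15  // only SELL below the 15m SMA200
plotshape(longOK,  title = "Buy",  style = shape.triangleup,   location = location.belowbar, color = color.green)
plotshape(shortOK, title = "Sell", style = shape.triangledown, location = location.abovebar, color = color.red)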
Baby ICT Simple Asia H/L + Sweeps + FVG + Alerts + Do-Nothing
Baby ICT Simple+ is a lightweight, rules-based TradingView indicator designed to help traders visualize key ICT-style concepts without complexity or signal-chasing. It focuses on Asia session liquidity, the sweeps that follow it, and fair value gaps.
This tool is intentionally simple and is meant to be used alongside session timing, price action, and risk management — not as a buy/sell signal generator.
🔍 What This Indicator Displays
Asia Session High & Low
Automatically tracks and plots the Asia session high and low
Fully customizable line colors and width
These levels often act as liquidity pools before London and New York sessions
Liquidity Sweeps (Post-Asia)
Identifies the first time price takes liquidity above or below the Asia range
Sweep detection can be based on wicks or closes
Optional sweep labels help highlight potential stop-run behavior
Asia Break & Sweep Alerts
Alerts when price breaks the Asia high or low after the Asia session ends
Optional alerts for the first sweep only, helping traders focus on high-quality context
Fair Value Gaps (FVGs)
Detects classic 3-candle price imbalances on the active timeframe
Optional filter to show only FVGs that form after a liquidity sweep
Bullish and bearish FVGs are fully customizable with separate fill and border colors
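A minimal sketch of the classic three-candle imbalance check mentioned above; the sweep filter, drawing, and coloring options are omitted:
//@version=6
indicator("3-candle FVG check (illustrative)", overlay = true)
bullFVG = low > high[2]    // current low gaps above the high from two bars back
bearFVG = high < low[2]    // current high gaps below the low from two bars back
plotshape(bullFVG, title = "Bullish FVG", style = shape.circle, location = location.belowbar, color = color.teal, size = size.tiny)
plotshape(bearFVG, title = "Bearish FVG", style = shape.circle, location = location.abovebar, color = color.maroon, size = size.tiny)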
“Do Nothing” Discipline Labels
Optional warning labels during a user-defined kill zone
Designed to discourage over-trading when:
No liquidity has been taken
Price is stuck mid-range
A sweep occurred but no clean displacement or fresh FVG followed
🧠 Intended Use
This indicator supports a “Baby ICT” approach, emphasizing:
Waiting for liquidity to be taken before looking for entries
Using Fair Value Gaps as entry zones, not signals
Avoiding mid-range and low-probability environments
Trading primarily during active sessions (London / New York)
Best used on:
5-minute charts
Index futures (ES, NQ) or liquid FX pairs
With session-based execution and strict risk control
🚫 What This Indicator Is NOT
❌ Not a buy/sell signal tool
❌ Not an automated trading strategy
❌ Not predictive or guaranteed
All trade decisions remain the responsibility of the trader.
⚠️ Risk Disclaimer
Trading involves risk. This indicator is provided for educational and informational purposes only and does not constitute financial advice. Always manage risk responsibly and test any tool thoroughly before using it in live markets.
✨ Final Notes
If you are looking for a clean, non-hype way to visualize:
Where liquidity is likely taken
Where price may rebalance
When it’s best to stand aside
Baby ICT Simple+ was built for that purpose.
Apexflow PRO: Anchored Fair Value + Regime Readiness [v6]
## Apexflow PRO — Anchored Fair Value Cloud + Regime Readiness (Non-Repaint Signals)
**Apexflow PRO** is an overlay indicator built to answer one core trading question:
**“Is price currently cheap, fair, or expensive — and is the market in a regime where that matters?”**
Instead of throwing random signals at you, Apexflow PRO combines **anchored fair value**, **market regime detection**, **flow participation**, and **optional cross-market confirmation** into a single, easy-to-read system with a **Readiness Score (0–100)** and clean, non-spam alerts.
---
# What you see on the chart
### 1) Anchored Fair Value Cloud (the “tunnel”)
This is the heart of the indicator.
* **Midline = Anchored VWAP fair value**
* Resets by **Day / Week / Month** (you choose).
* **Cloud = 3-layer adaptive tunnel**
* **Core (blue)** = “fair pricing zone”
* **Upper red layers** = increasingly stretched/expensive
* **Lower teal layers** = increasingly stretched/cheap
**Interpretation (beginner-simple):**
* **Inside blue core** → “priced fairly”
* **Below the tunnel** → “cheap / discounted”
* **Above the tunnel** → “expensive / premium”
* **Outer layers** → “extreme stretch” zones (higher snap-back risk)
### 2) Regime label (context filter)
Apexflow labels the market environment as:
* **TRENDING**
* **CHOP/RANGE**
* **VOLATILE**
* **BREAKOUT**
This prevents “using the right tool in the wrong market.”
Example: mean reversion works better in chop; trend continuation works better in trend/breakout regimes.
### 3) Readiness Score (0–100)
This is the **strength of confluence** between the engines.
* Low score = mixed signals / noise
* High score = alignment / higher-quality conditions
### 4) BUY / SELL signals (non-spam)
Signals trigger only when:
* **Readiness crosses above your threshold** (first bar only)
* **Regime filter agrees**
* **Structure agrees** (reclaim midline / lose midline OR location within the tunnel)
* **Cooldown** prevents rapid repeats
---
# What’s behind it (advanced, but readable)
Apexflow PRO uses four engines:
## A) Anchored Fair Value Engine (core)
A true anchored VWAP-style accumulator:
* Aggregates **price × volume** and **volume**
* Resets on your chosen anchor period
* Produces a stable “fair value spine”
### Stable Mode (important)
When **Stable Mode = ON**, Apexflow **does not drift intrabar** on live candles.
The anchored midline and tunnel update on confirmed bar closes to avoid the classic “wiggling anchor” problem.
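The tunnel layers and weighting are proprietary, but the anchored fair-value spine itself can be sketched as a volume-weighted accumulator that resets on the chosen anchor period; a minimal illustration of the idea, not the indicator's source:
//@version=6
indicator("Anchored VWAP spine (illustrative)", overlay = true)
var float pv  = 0.0
var float vol = 0.0
if timeframe.change("W")   // weekly anchor reset; "D" or "M" work the same way
    pv  := 0.0
    vol := 0.0
pv  += hlc3 * volume
vol += volume
fairValue = vol > 0 ? pv / vol : na
plot(fairValue, "Anchored fair value", color = color.blue, linewidth = 2)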
## B) Regime Engine (Trend/Chop/Breakout/Volatile)
Uses multiple independent measures (not just one):
* **ADX** = trend strength
* **Efficiency Ratio (ER)** = trend efficiency vs chop
* **BB Width %Rank** = compression / squeeze context
* **ATR %Rank** = volatility regime context
This produces both a **regime label** and a **regime confidence score** used in the composite.
## C) Flow Engine (participation + intent proxy)
A blended participation model:
* **Signed candle pressure** (body vs range scaled by volume)
* **Wick rejection bias** (rejection strength)
* **RVOL** (relative volume lift)
This helps distinguish “real moves” from low-quality drifts.
## D) Cross-Market Confirmation (optional)
A light macro filter to reduce false positives:
* **Equities:** VIX (inverse risk)
* **Forex:** DXY (inverse USD pressure)
* **Crypto:** BTC.D (risk tone proxy)
If the cross-market symbol is unavailable, the script **falls back gracefully** and automatically reduces its weight.
---
# How to use (simple rules)
## Trend Following mode (default)
Best when you want to ride directional moves.
**BUY idea:**
* Readiness crosses above threshold
* Regime is **TRENDING** or **BREAKOUT**
* Price is reclaiming the midline OR is pushing up from the lower half of the tunnel
**SELL idea:**
* Same logic in reverse (lose midline / upper half)
**Practical beginner rule:**
> In Trend mode, treat the midline like “bias.”
> If price is above the midline and score is strong → favor longs.
> If below and score is strong → favor shorts.
## Reversion mode
Best in chop/range markets.
* Signals are biased toward **mean reversion**
* Use tunnel extremes as “stretch zones”
* Targets often gravitate back toward the **midline / inner bands**
---
# Best settings & timeframes (starting points)
These are practical defaults (not magic):
### Crypto
* 15m / 1H / 4H
* Anchor Reset: **Week**
* Threshold: **60–70**
### Equities / Indices
* 5m / 15m / 1H
* Anchor Reset: **Day or Week**
* Threshold: **60–75**
### Forex
* 15m / 1H
* Anchor Reset: **Day**
* Threshold: **60–75**
If signals feel too frequent: raise **Threshold** or increase **Cooldown**.
If signals feel too rare: lower **Threshold** slightly or reduce **Cooldown**.
---
# Alerts
Included:
* **Apexflow PRO Long**
* **Apexflow PRO Short**
These fire only when the indicator triggers a confirmed, threshold-cross event (designed for clean alerting).
---
# Notes & limitations (honest)
* This is not a “predict the future” tool — it’s a **context + fair value + confluence** system.
* Cross-market filters are helpful, but not universal. If you trade niche assets, consider turning cross-market OFF or customizing the reference symbol.
* Always use risk management. A strong score is not a guarantee.
Price Highlights
This script shows you price highlights that you define. You can choose what price interval and how many to show above and below the current price. I made this to help me choose a strike price quickly when trading options but also found it useful for visualizing price targets for quick futures scalps.
Dav1zoN PRO: MACD + RSI + ADX
This indicator is a momentum and trend-strength tool designed to stay clear and readable on all timeframes, especially lower TFs where most indicators become noisy or flat.
It combines MACD Histogram, RSI, and ADX into a single adaptive system, with automatic scaling and smoothing, so values stay proportional without using static horizontal levels.
SPX 0.5% Move + Volume Filter
Detects a 0.5%+ move in SPX within a 2-minute candle. Use it to create an alert for a webhook trigger or a basic alert.
RSI Dav1zoN
The RSI Grid is a multi-timeframe momentum dashboard designed to give a quick, structured view of market bias across several timeframes at once.
Instead of checking RSI on each timeframe manually, the grid shows direction, RSI value, and projected price levels in one place.