LAX Murrey Math Levels
Specify a starting price, an interval, and a lookback window.
The indicator will then plot a moving window of Murrey Math levels.
T7 JNSAR
JNSAR stands for Just Nifty Stop & Reverse. This is a trend-following daily-bar trading system for NIFTY. The original idea belongs to ILLANGO; I coded the Pine version of this system based on a request from @stocksonfire. Use it at your own risk after validation at your end. Neither I nor my company is responsible for any losses you may incur using this system. Hope you like this system and enjoy trading it!!!
Updated V3 code for the T7 JNSAR system, published earlier here (V2) and here (V1).
The following updates have been made to the code:
1. Added a 22-period simple moving average filter over and above the standard JNSAR value for generating trading signals. This simple filter drastically reduces whipsaw trades, with similar improvements in the max drawdown and overall profitability of the system. The SMA filter is turned ON by default but can be turned OFF by the user through the settings window.
2. Backtest option is now turned ON by default.
I am also republishing the trading rules here with some modifications:
1. Go long when the daily close is above the JNSAR line. Go short when the daily close is below the JNSAR line. The JNSAR line is the varying green line overlaid on the price chart. Once a signal comes at market close, enter in the direction of the signal at market price at the next day's market open.
2. Trade only the NIFTY Index. This system was developed and backtested only for the NIFTY Index, so trade its futures or options, as you deem fit. My recommendation is to choose futures for simplicity. If you want to reduce the trading cost and go with options, trade deep in-the-money options, preferably 2 strikes away from the spot price.
3. Trade all signals. Markets trend only 30-35% of the time, and hence the system is only accurate to that extent. But the system tends to make enough money in this small trending window to keep the overall profitability in good health. One never knows when a big trend may come, and when it comes it is absolutely imperative that you take it. To ensure that, trade all signals and don't be choosy about which signals you are going to trade. Also, I wouldn't recommend using your own analysis to trade this system. Too many drivers will crash the car.
4. Like all trend-following systems, this system will have many whipsaws during flat markets, along with large trade and account drawdowns. Some months and even years may not be profitable. But to trade this system profitably, it is necessary to take these in one's stride and keep trading. As the backtester results from 1990 to 2017 prove, this system is profitable overall thus far. Take confidence from that objective fact.
5. Trade with only the amount of money you can afford to lose. The initial capital needed to trade one lot of NIFTY should be at least (margin money required to take and hold a 1-lot position + maximum drawdown amount per lot) * 1.2. Be prepared to add more if need be, but the above formula will give a rough idea of what you need to start trading and stay in the game.
6. Place an after-market order at market price with your broker after market close so that the trade executes the next trading day at market open, capturing a price close to the daily open price seen on the chart. This execution mode gives you the best chance to minimize slippage and mimic the backtester results as closely as practically possible.
7. Follow all six rules above religiously, as if your life depends on it. If you can't, then don't trade this system; you will certainly lose money.
Happy Trading!!! As always, I look forward to your valuable feedback.
QTA
Library "QTA"
This is a simple library of basic quantitative technical analysis functions for retail investors. One example of it being used can be seen here ().
calculateKellyRatio(returns)
Parameters:
returns (array) : An array of floats representing the returns from bets.
Returns: The calculated Kelly Ratio, which indicates the optimal bet size based on winning and losing probabilities.
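For context, the classic Kelly fraction is f* = W - (1 - W)/R, where W is the probability of winning and R is the ratio of the average win to the average loss. The sketch below re-derives that textbook formula from a returns array; it is illustrative only, not the library's internal code, and the rolling window size is an arbitrary assumption.

```pine
//@version=5
indicator("Kelly fraction sketch")

// Classic Kelly fraction f* = W - (1 - W) / R, estimated from an array of
// per-trade returns. Illustrative only; not the QTA library's internal code.
kellyFromReturns(returnsArr) =>
    wins   = array.new_float()
    losses = array.new_float()
    if array.size(returnsArr) > 0
        for i = 0 to array.size(returnsArr) - 1
            r = array.get(returnsArr, i)
            if r > 0
                array.push(wins, r)
            else if r < 0
                array.push(losses, math.abs(r))
    float kelly = na
    if array.size(wins) > 0 and array.size(losses) > 0
        winProb = 1.0 * array.size(wins) / array.size(returnsArr)
        payoff  = array.avg(wins) / array.avg(losses)
        kelly := winProb - (1 - winProb) / payoff
    kelly

// Feed the estimator a rolling window of simple percent returns (window size is arbitrary).
var returns = array.new_float()
array.push(returns, nz(close / close[1] - 1))
if array.size(returns) > 100
    array.shift(returns)
plot(kellyFromReturns(returns), title = "Kelly fraction")
```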
calculateAdjustedKellyFraction(kellyRatio, riskTolerance, fedStance)
Parameters:
kellyRatio (float) : The calculated Kelly Ratio.
riskTolerance (float) : A float representing the risk tolerance level.
fedStance (string) : A string indicating the Federal Reserve's stance ("dovish", "hawkish", or neutral).
Returns: The adjusted Kelly Fraction, constrained within the bounds of .
calculateStdDev(returns)
Parameters:
returns (array) : An array of floats representing the returns.
Returns: The standard deviation of the returns, or 0 if insufficient data.
calculateMaxDrawdown(returns)
Parameters:
returns (array) : An array of floats representing the returns.
Returns: The maximum drawdown as a percentage.
calculateEV(avgWinReturn, winProb, avgLossReturn)
Parameters:
avgWinReturn (float) : The average return from winning bets.
winProb (float) : The probability of winning a bet.
avgLossReturn (float) : The average return from losing bets.
Returns: The calculated Expected Value of the bet.
calculateTailRatio(returns)
Parameters:
returns (array) : An array of floats representing the returns.
Returns: The Tail Ratio, or na if the 5th percentile is zero to avoid division by zero.
calculateSharpeRatio(avgReturn, riskFreeRate, stdDev)
Parameters:
avgReturn (float) : The average return of the investment.
riskFreeRate (float) : The risk-free rate of return.
stdDev (float) : The standard deviation of the investment's returns.
Returns: The calculated Sharpe Ratio, or na if standard deviation is zero.
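For reference, the Sharpe Ratio here is the standard (average return - risk-free rate) / standard deviation of returns. A minimal sketch using a rolling window (the 14-bar window and zero risk-free rate are placeholder assumptions):

```pine
//@version=5
indicator("Sharpe ratio sketch")

// Sharpe = (average return - risk-free rate) / standard deviation of returns.
// The window length and zero risk-free rate are placeholder assumptions.
len = 14
riskFree = 0.0
ret = close / close[1] - 1
avgRet = ta.sma(ret, len)
sd = ta.stdev(ret, len)
plot(sd != 0 ? (avgRet - riskFree) / sd : na, title = "Sharpe")
```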
calculateDownsideDeviation(returns)
Parameters:
returns (array) : An array of floats representing the returns.
Returns: The standard deviation of the downside returns, or 0 if no downside returns exist.
calculateSortinoRatio(avgReturn, downsideDeviation)
Parameters:
avgReturn (float) : The average return of the investment.
downsideDeviation (float) : The standard deviation of the downside returns.
Returns: The calculated Sortino Ratio, or na if downside deviation is zero.
calculateVaR(returns, confidenceLevel)
Parameters:
returns (array) : An array of floats representing the returns.
confidenceLevel (float) : A float representing the confidence level (e.g., 0.95 for 95% confidence).
Returns: The Value at Risk at the specified confidence level.
calculateCVaR(returns, varValue)
Parameters:
returns (array) : An array of floats representing the returns.
varValue (float) : The Value at Risk threshold.
Returns: The average Conditional Value at Risk, or na if no returns are below the threshold.
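As background, historical VaR at confidence level c is typically taken as the (1 - c) percentile of the return distribution, and CVaR is the average of the returns at or below that threshold, which matches the parameters above. A rough array-based sketch under that reading (the rolling window size and confidence level are illustrative assumptions):

```pine
//@version=5
indicator("Historical VaR / CVaR sketch")

// Historical VaR: the (1 - confidence) percentile of the return distribution.
// CVaR: the average of the returns at or below the VaR threshold.
confidence = 0.95
var returns = array.new_float()
array.push(returns, nz(close / close[1] - 1))
if array.size(returns) > 250
    array.shift(returns)

varValue = array.size(returns) > 20 ? array.percentile_linear_interpolation(returns, (1 - confidence) * 100) : na

// Average of the returns at or below the threshold; na if none qualify.
cvar(returnsArr, threshold) =>
    tail = array.new_float()
    for i = 0 to array.size(returnsArr) - 1
        r = array.get(returnsArr, i)
        if r <= threshold
            array.push(tail, r)
    float result = na
    if array.size(tail) > 0
        result := array.avg(tail)
    result

plot(varValue, title = "VaR")
plot(cvar(returns, varValue), title = "CVaR")
```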
calculateExpectedPriceRange(currentPrice, ev, stdDev, confidenceLevel)
Parameters:
currentPrice (float) : The current price of the asset.
ev (float) : The expected value (in percentage terms).
stdDev (float) : The standard deviation (in percentage terms).
confidenceLevel (float) : The confidence level for the price range (e.g., 1.96 for 95% confidence).
Returns: A tuple containing the minimum and maximum expected prices.
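One plausible reading of these parameters is that the range equals the current price shifted by the expected value plus or minus confidenceLevel standard deviations, with ev and stdDev expressed in percent. The sketch below is written under that assumption and is not necessarily how the library combines them:

```pine
//@version=5
indicator("Expected price range sketch", overlay = true)

// Assumed interpretation: range = price * (1 + (ev ± z * stdDev) / 100),
// with ev and stdDev in percent and z the confidence multiplier (e.g. 1.96).
expectedPriceRange(currentPrice, ev, stdDev, z) =>
    minPrice = currentPrice * (1 + (ev - z * stdDev) / 100)
    maxPrice = currentPrice * (1 + (ev + z * stdDev) / 100)
    [minPrice, maxPrice]

// Illustrative inputs: 14-bar mean and stdev of percent returns.
retPct = (close / close[1] - 1) * 100
[minP, maxP] = expectedPriceRange(close, ta.sma(retPct, 14), ta.stdev(retPct, 14), 1.96)
plot(minP, title = "Expected min", color = color.red)
plot(maxP, title = "Expected max", color = color.green)
```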
calculateRollingStdDev(returns, window)
Parameters:
returns (array) : An array of floats representing the returns.
window (int) : An integer representing the rolling window size.
Returns: An array of floats representing the rolling standard deviation of returns.
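The rolling helpers that follow all share the same mechanic: slide a fixed-size window across the returns array and apply a statistic to each slice. A sketch of the standard-deviation case (illustrative pattern only, not the library's code):

```pine
//@version=5
indicator("Rolling stdev sketch")

// Slide a window of size `window` across the array and collect the stdev of
// each slice. Illustrative pattern only; not the QTA library's code.
rollingStdev(returnsArr, window) =>
    out = array.new_float()
    if array.size(returnsArr) >= window
        for endIdx = window - 1 to array.size(returnsArr) - 1
            tmp = array.new_float()
            for j = endIdx - window + 1 to endIdx
                array.push(tmp, array.get(returnsArr, j))
            array.push(out, array.stdev(tmp))
    out

var returns = array.new_float()
array.push(returns, nz(close / close[1] - 1))
if array.size(returns) > 100
    array.shift(returns)

rolled = rollingStdev(returns, 20)
float latest = na
if array.size(rolled) > 0
    latest := array.get(rolled, array.size(rolled) - 1)
plot(latest, title = "Latest rolling stdev")
```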
calculateRollingVariance(returns, window)
Parameters:
returns (array) : An array of floats representing the returns.
window (int) : An integer representing the rolling window size.
Returns: An array of floats representing the rolling variance of returns.
calculateRollingMean(returns, window)
Parameters:
returns (array) : An array of floats representing the returns.
window (int) : An integer representing the rolling window size.
Returns: An array of floats representing the rolling mean of returns.
calculateRollingCoefficientOfVariation(returns, window)
Parameters:
returns (array) : An array of floats representing the returns.
window (int) : An integer representing the rolling window size.
Returns: An array of floats representing the rolling coefficient of variation of returns.
calculateRollingSumOfPercentReturns(returns, window)
Parameters:
returns (array) : An array of floats representing the returns.
window (int) : An integer representing the rolling window size.
Returns: An array of floats representing the rolling sum of percent returns.
calculateRollingCumulativeProduct(returns, window)
Parameters:
returns (array) : An array of floats representing the returns.
window (int) : An integer representing the rolling window size.
Returns: An array of floats representing the rolling cumulative product of returns.
calculateRollingCorrelation(priceReturns, volumeReturns, window)
Parameters:
priceReturns (array) : An array of floats representing the price returns.
volumeReturns (array) : An array of floats representing the volume returns.
window (int) : An integer representing the rolling window size.
Returns: An array of floats representing the rolling correlation.
calculateRollingPercentile(returns, window, percentile)
Parameters:
returns (array) : An array of floats representing the returns.
window (int) : An integer representing the rolling window size.
percentile (int) : An integer representing the desired percentile (0-100).
Returns: An array of floats representing the rolling percentile of returns.
calculateRollingMaxMinPercentReturns(returns, window)
Parameters:
returns (array) : An array of floats representing the returns.
window (int) : An integer representing the rolling window size.
Returns: A tuple containing two arrays: rolling max and rolling min percent returns.
calculateRollingPriceToVolumeRatio(price, volData, window)
Parameters:
price (array) : An array of floats representing the price data.
volData (array) : An array of floats representing the volume data.
window (int) : An integer representing the rolling window size.
Returns: An array of floats representing the rolling price-to-volume ratio.
determineMarketRegime(priceChanges)
Parameters:
priceChanges (array) : An array of floats representing the price changes.
Returns: A string indicating the market regime ("Bull", "Bear", or "Neutral").
determineVolatilityRegime(price, window)
Parameters:
price (array) : An array of floats representing the price data.
window (int) : An integer representing the rolling window size.
Returns: An array of floats representing the calculated volatility.
classifyVolatilityRegime(volatility)
Parameters:
volatility (array) : An array of floats representing the calculated volatility.
Returns: A string indicating the volatility regime ("Low" or "High").
method percentPositive(thisArray)
Returns the percentage of positive non-na values in this array.
This method calculates the percentage of positive values in the provided array, ignoring NA values.
Namespace types: array
Parameters:
thisArray (array)
_candleRange()
_PreviousCandleRange(barsback)
Parameters:
barsback (int) : An integer representing how far back you want to get a range
redCandle()
greenCandle()
_WhiteBody()
_BlackBody()
HighOpenDiff()
OpenLowDiff()
_isCloseAbovePreviousOpen(length)
Parameters:
length (int)
_isCloseBelowPrevious()
_isOpenGreaterThanPrevious()
_isOpenLessThanPrevious()
BodyHigh()
BodyLow()
_candleBody()
_BodyAvg(length)
_BodyAvg function.
Parameters:
length (simple int) : Required (recommended is 6).
_SmallBody(length)
Parameters:
length (simple int) : Length of the slow EMA
Returns: a series of bools indicating whether the candle body was smaller than the body average.
_LongBody(length)
Parameters:
length (simple int)
bearWick()
bearWick() function.
Returns: a series of floats. Checks whether the candle has a black body (open > close); if it does, returns the difference between the high and the open, otherwise the difference between the high and the close.
bullWick()
barlength()
sumbarlength()
sumbull()
sumbear()
bull_vol()
bear_vol()
volumeFightMA()
volumeFightDelta()
weightedAVG_BullVolume()
weightedAVG_BearVolume()
VolumeFightDiff()
VolumeFightFlatFilter()
avg_bull_vol(userMA)
avg_bull_vol(int) function.
Parameters:
userMA (int)
avg_bear_vol(userMA)
avg_bear_vol(int) function.
Parameters:
userMA (int)
diff_vol(userMA)
diff_vol(int) function.
Parameters:
userMA (int)
vol_flat(userMA)
vol_flat(int) function.
Parameters:
userMA (int)
_isEngulfingBullish()
_isEngulfingBearish()
dojiup()
dojidown()
EveningStar()
MorningStar()
ShootingStar()
Hammer()
InvertedHammer()
BearishHarami()
BullishHarami()
BullishBelt()
BullishKicker()
BearishKicker()
HangingMan()
DarkCloudCover()
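The candlestick helpers above are listed without documentation, but most follow standard definitions. A few of the simpler ones written out under those usual definitions (the library's exact rules and thresholds may differ):

```pine
//@version=5
indicator("Candle helper sketch", overlay = true)

// Common candlestick building blocks under their usual definitions.
// The library's exact rules and thresholds may differ.
greenCandle() => close > open
bodyHigh()    => math.max(open, close)
bodyLow()     => math.min(open, close)
candleBody()  => bodyHigh() - bodyLow()

// Typical bullish engulfing test: a green body that fully covers the
// previous candle's red body.
bullEngulf = greenCandle() and close[1] < open[1] and close >= open[1] and open <= close[1]

// Typical "small body" test: body smaller than its 6-bar average.
smallBody = candleBody() < ta.sma(candleBody(), 6)

plotshape(bullEngulf and not smallBody, title = "Bullish engulfing", style = shape.triangleup, location = location.belowbar, color = color.green)
```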
GaussianDistribution
Library "GaussianDistribution"
This library defines a custom type `distr` representing a Gaussian (or other statistical) distribution.
It provides methods to calculate key statistical moments and scores, including mean, median, mode, standard deviation, variance, skewness, kurtosis, and Z-scores.
This library is useful for analyzing probability distributions in financial data.
Disclaimer:
I am not a mathematician, but I have implemented this library to the best of my understanding and capacity. Please be indulgent as I tried to translate statistical concepts into code as accurately as possible. Feedback, suggestions, and corrections are welcome to improve the reliability and robustness of this library.
mean(source, length)
Calculate the mean (average) of the distribution
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
Returns: Mean (μ)
stdev(source, length)
Calculate the standard deviation (σ) of the distribution
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
Returns: Standard deviation (σ)
skewness(source, length, mean, stdev)
Calculate the skewness (γ₁) of the distribution
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
mean (float) : the mean (average) of the distribution
stdev (float) : the standard deviation (σ) of the distribution
@return Skewness (γ₁)
skewness(source, length)
Overloaded skewness to calculate from source and length
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
@return Skewness (γ₁)
mode(mean, stdev, skewness)
Estimate mode - Most frequent value in the distribution (approximation based on skewness)
Parameters:
mean (float) : the mean (average) of the distribution
stdev (float) : the standard deviation (σ) of the distribution
skewness (float) : the skewness (γ₁) of the distribution
@return Mode
mode(source, length)
Overloaded mode to calculate from source and length
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
@return Mode
median(mean, stdev, skewness)
Estimate median - Middle value of the distribution (approximation)
Parameters:
mean (float) : the mean (average) of the distribution
stdev (float) : the standard deviation (σ) of the distribution
skewness (float) : the skewness (γ₁) of the distribution
@return Median
median(source, length)
Overloaded median to calculate from source and length
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
@return Median
variance(stdev)
Calculate variance (σ²) - Square of the standard deviation
Parameters:
stdev (float) : the standard deviation (σ) of the distribution
@return Variance (σ²)
variance(source, length)
Overloaded variance to calculate from source and length
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
@return Variance (σ²)
kurtosis(source, length, mean, stdev)
Calculate kurtosis (γ₂) - Degree of "tailedness" in the distribution
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
mean (float) : the mean (average) of the distribution
stdev (float) : the standard deviation (σ) of the distribution
@return Kurtosis (γ₂)
kurtosis(source, length)
Overloaded kurtosis to calculate from source and length
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
@return Kurtosis (γ₂)
normal_score(source, mean, stdev)
Calculate Z-score (standard score) assuming a normal distribution
Parameters:
source (float) : Distribution source (typically a price or indicator series)
mean (float) : the mean (average) of the distribution
stdev (float) : the standard deviation (σ) of the distribution
@return Z-Score
normal_score(source, length)
Overloaded normal_score to calculate from source and length
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
@return Z-Score
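The normal Z-score itself is simply (source - mean) / stdev. A minimal sketch using the documented parameters (the 30-bar window mirrors the library's note about meaningful statistics):

```pine
//@version=5
indicator("Z-score sketch")

// Z = (source - mean) / stdev over a rolling window.
len = 30
mu = ta.sma(close, len)
sigma = ta.stdev(close, len)
plot(sigma != 0 ? (close - mu) / sigma : na, title = "Z-score")
hline(0)
```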
non_normal_score(source, mean, stdev, skewness, kurtosis)
Calculate adjusted Z-score considering skewness and kurtosis
Parameters:
source (float) : Distribution source (typically a price or indicator series)
mean (float) : the mean (average) of the distribution
stdev (float) : the standard deviation (σ) of the distribution
skewness (float) : the skewness (γ₁) of the distribution
kurtosis (float) : the "tailedness" in the distribution
@return Z-Score
non_normal_score(source, length)
Overloaded non_normal_score to calculate from source and length
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
@return Z-Score
method init(this)
Initialize all statistical fields of the `distr` type
Namespace types: distr
Parameters:
this (distr)
method init(this, source, length)
Overloaded initializer to set source and length
Namespace types: distr
Parameters:
this (distr)
source (float)
length (int)
distr
Custom type to represent a Gaussian distribution
Fields:
source (series float) : Distribution source (typically a price or indicator series)
length (series int) : Window length for the distribution (must be >= 30 for meaningful statistics)
mode (series float) : Most frequent value in the distribution
median (series float) : Middle value separating the greater and lesser halves of the distribution
mean (series float) : μ (1st central moment) - Average of the distribution
stdev (series float) : σ or standard deviation (square root of the variance) - Measure of dispersion
variance (series float) : σ² (2nd central moment) - Squared standard deviation
skewness (series float) : γ₁ (3rd central moment) - Asymmetry of the distribution
kurtosis (series float) : γ₂ (4th central moment) - Degree of "tailedness" relative to a normal distribution
normal_score (series float) : Z-score assuming normal distribution
non_normal_score (series float) : Adjusted Z-score considering skewness and kurtosis
FunctionDiscreteCosineTransform
Library "FunctionDiscreteCosineTransform"
Discrete Cosine Transform (DCT)
The Discrete Cosine Transform (DCT) is a mathematical algorithm that converts a series of samples of a signal, typically in the time domain, into another domain called the frequency or spectral domain. It's commonly used for data compression and image/video coding applications such as JPEG and MPEG standards.
The DCT works by multiplying the input sequence with specific cosine functions that are pre-defined and then summing up these products to obtain a new series of values, which represent the frequency components of the original signal. The main advantage of the DCT over other transforms like Fourier Transform is its ability to handle non-stationary signals (i.e., signals with varying statistical properties) more effectively due to its localized basis functions.
In simple terms, the DCT can be thought of as a way to break down an image or video into different frequency components and then compress them without losing too much information. This compression technique is essential for efficient transmission and storage of digital media files over the internet or on devices with limited memory capacity.
~Mixtral4x7b
___
Reference:
lcamtuf.substack.com
dct(data, len)
Discrete Cosine Transform.
Parameters:
data (array) : Data source.
len (int) : Length of the sampling window.
Returns: List with frequency domain transformed information.
dct(data, len)
Discrete Cosine Transform.
Parameters:
data (float) : Data source.
len (int) : Length of the sampling window.
Returns: List with frequency domain transformed information.
idct(data, len)
Inverse Discrete Cosine Transform.
Parameters:
data (array) : Data source.
len (int) : Length of the sampling window.
Returns: List with time domain transformed information.
idct(data, len)
Inverse Discrete Cosine Transform.
Parameters:
data (float) : Data source.
len (int) : Length of the sampling window.
Returns: List with time domain transformed information.
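For reference, the type-II DCT that such implementations typically use maps N samples x[n] to coefficients X[k] = Σ x[n]·cos(π/N·(n + ½)·k). The sketch below writes out that textbook formula over an array; it is illustrative, and its scaling conventions may differ from this library's:

```pine
//@version=5
indicator("DCT-II sketch")

// Textbook DCT-II: X[k] = sum over n of x[n] * cos(pi / N * (n + 0.5) * k).
// Illustrative only; normalization/scaling may differ from the library's.
dct2(x) =>
    n = array.size(x)
    out = array.new_float(n, 0.0)
    for k = 0 to n - 1
        acc = 0.0
        for i = 0 to n - 1
            acc += array.get(x, i) * math.cos(math.pi / n * (i + 0.5) * k)
        array.set(out, k, acc)
    out

// Example: transform the last 16 closes and plot the first non-DC coefficient.
var samples = array.new_float()
array.push(samples, close)
if array.size(samples) > 16
    array.shift(samples)
coeffs = dct2(samples)
float c1 = na
if array.size(coeffs) > 1
    c1 := array.get(coeffs, 1)
plot(c1, title = "DCT coefficient k=1")
```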
Supertrend Advance Pullback Strategy
Handbook for the Supertrend Advance Strategy
1. Introduction
Purpose of the Handbook:
The main purpose of this handbook is to serve as a comprehensive guide for traders and investors who are looking to explore and harness the potential of the Supertrend Advance Strategy. In the rapidly changing financial market, having the right tools and strategies at one's disposal is crucial. Whether you're a beginner hoping to dive into the world of trading or a seasoned investor aiming to optimize and diversify your portfolio, this handbook offers the insights and methodologies you need. By the end of this guide, readers should have a clear understanding of how the Supertrend Advance Strategy works, its benefits, potential pitfalls, and practical application in various trading scenarios.
Overview of the Supertrend Advance Pullback Strategy:
At its core, the Supertrend Advance Strategy is an evolution of the popular Supertrend Indicator. Designed to generate buy and sell signals in trending markets, the Supertrend Indicator has been a favorite tool for many traders around the world. The Advance Strategy, however, builds upon this foundation by introducing enhanced mechanisms, filters, and methodologies to increase precision and reduce false signals.
1. Basic Concept:
The Supertrend Advance Strategy relies on a combination of price action and volatility to determine the potential trend direction. By assessing the average true range (ATR) in conjunction with specific price points, this strategy aims to highlight the potential starting and ending points of market trends.
2. Methodology:
Unlike the traditional Supertrend Indicator, which primarily focuses on closing prices and ATR, the Advance Strategy integrates other critical market variables, such as volume, momentum oscillators, and perhaps even fundamental data, to validate its signals. This multidimensional approach ensures that the generated signals are more reliable and are less prone to market noise.
3. Benefits:
One of the main benefits of the Supertrend Advance Strategy is its ability to filter out false breakouts and minor price fluctuations, which can often lead to premature exits or entries in the market. By waiting for a confluence of factors to align, traders using this advanced strategy can increase their chances of entering or exiting trades at optimal points.
4. Practical Applications:
The Supertrend Advance Strategy can be applied across various timeframes, from intraday trading to swing trading and even long-term investment scenarios. Furthermore, its flexible nature allows it to be tailored to different asset classes, be it stocks, commodities, forex, or cryptocurrencies.
In the subsequent sections of this handbook, we will delve deeper into the intricacies of this strategy, offering step-by-step guidelines on its application, case studies, and tips for maximizing its efficacy in the volatile world of trading.
As you journey through this handbook, we encourage you to approach the Supertrend Advance Strategy with an open mind, testing and tweaking it as per your personal trading style and risk appetite. The ultimate goal is not just to provide you with a new tool but to empower you with a holistic strategy that can enhance your trading endeavors.
2. Getting Started
Navigating the financial markets can be a daunting task without the right tools. This section is dedicated to helping you set up the Supertrend Advance Strategy on one of the most popular charting platforms, TradingView. By following the steps below, you'll be able to integrate this strategy into your charts and start leveraging its insights in no time.
Setting up on TradingView:
TradingView is a web-based platform that offers a wide range of charting tools, social networking, and market data. Before you can apply the Supertrend Advance Strategy, you'll first need a TradingView account. If you haven't set one up yet, here's how:
1. Account Creation:
• Visit TradingView's official website.
• Click on the "Join for free" or "Sign up" button.
• Follow the registration process, providing the necessary details and setting up your login credentials.
2. Navigating the Dashboard:
• Once logged in, you'll be taken to your dashboard. Here, you'll see a variety of tools, including watchlists, alerts, and the main charting window.
• To begin charting, type in the name or ticker of the asset you're interested in the search bar at the top.
3. Configuring Chart Settings:
• Before integrating the Supertrend Advance Strategy, familiarize yourself with the chart settings. This can be accessed by clicking the 'gear' icon on the top right of the chart window.
• Adjust the chart type, time intervals, and other display settings to your preference.
Integrating the Strategy into a Chart:
Now that you're set up on TradingView, it's time to integrate the Supertrend Advance Strategy.
1. Accessing the Pine Script Editor:
• Located at the top-center of your screen, you'll find the "Pine Editor" tab. Click on it.
• This is where custom strategies and indicators are scripted or imported.
2. Loading the Supertrend Advance Strategy Script:
• Depending on whether you have the script or need to find it, there are two paths:
• If you have the script: Copy the Supertrend Advance Strategy script, and then paste it into the Pine Editor.
• If searching for the script: Click on the “Indicators” icon (looks like a flame) at the top of your screen, and then type “Supertrend Advance Strategy” in the search bar. If available, it will show up in the list. Simply click to add it to your chart.
3. Applying the Strategy:
• After pasting or selecting the Supertrend Advance Strategy in the Pine Editor, click on the “Add to Chart” button located at the top of the editor. This will overlay the strategy onto your main chart window.
4. Configuring Strategy Settings:
• Once the strategy is on your chart, you'll notice a small settings ('gear') icon next to its name in the top-left of the chart window. Click on this to access settings.
• Here, you can adjust various parameters of the Supertrend Advance Strategy to better fit your trading style or the specific asset you're analyzing.
5. Interpreting Signals:
• With the strategy applied, you'll now see buy/sell signals represented on your chart. Take time to familiarize yourself with how these look and behave over various timeframes and market conditions.
3. Strategy Overview
What is the Supertrend Advance Strategy?
The Supertrend Advance Strategy is a refined version of the classic Supertrend Indicator, which was developed to aid traders in spotting market trends. The strategy utilizes a combination of data points, including average true range (ATR) and price momentum, to generate buy and sell signals.
In essence, the Supertrend Advance Strategy can be visualized as a line that moves with the price. When the price is above the Supertrend line, it indicates an uptrend and suggests a potential buy position. Conversely, when the price is below the Supertrend line, it hints at a downtrend, suggesting a potential selling point.
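Pine Script provides a built-in for exactly this calculation, so the line described above can be reproduced directly with ta.supertrend(). In this sketch the factor of 3 and ATR period of 10 are common defaults, not necessarily this strategy's tuned values:

```pine
//@version=5
indicator("Supertrend line sketch", overlay = true)

// ta.supertrend() returns the Supertrend level and the trend direction;
// in this built-in, direction < 0 means uptrend and direction > 0 means downtrend.
factor = 3.0
atrPeriod = 10
[st, direction] = ta.supertrend(factor, atrPeriod)
plot(st, title = "Supertrend", color = direction < 0 ? color.green : color.red)
```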
Strategy Goals and Objectives:
1. Trend Identification: At the core of the Supertrend Advance Strategy is the goal to efficiently and consistently identify prevailing market trends. By recognizing these trends, traders can position themselves to capitalize on price movements in their favor.
2. Reducing Noise: Financial markets are often inundated with 'noise' - short-term price fluctuations that can mislead traders. The Supertrend Advance Strategy aims to filter out this noise, allowing for clearer decision-making.
3. Enhancing Risk Management: With clear buy and sell signals, traders can set more precise stop-loss and take-profit points. This leads to better risk management and potentially improved profitability.
4. Versatility: While primarily used for trend identification, the strategy can be integrated with other technical tools and indicators to create a comprehensive trading system.
Type of Assets/Markets to Apply the Strategy:
1. Equities: The Supertrend Advance Strategy is highly popular among stock traders. Its ability to capture long-term trends makes it particularly useful for those trading individual stocks or equity indices.
2. Forex: Given the 24-hour nature of the Forex market and its propensity for trends, the Supertrend Advance Strategy is a valuable tool for currency traders.
3. Commodities: Whether it's gold, oil, or agricultural products, commodities often move in extended trends. The strategy can help in identifying and capitalizing on these movements.
4. Cryptocurrencies: The volatile nature of cryptocurrencies means they can have pronounced trends. The Supertrend Advance Strategy can aid crypto traders in navigating these often tumultuous waters.
5. Futures & Options: Traders and investors in derivative markets can utilize the strategy to make more informed decisions about contract entries and exits.
It's important to note that while the Supertrend Advance Strategy can be applied across various assets and markets, its effectiveness might vary based on market conditions, timeframe, and the specific characteristics of the asset in question. As always, it's recommended to use the strategy in conjunction with other analytical tools and to backtest its effectiveness in specific scenarios before committing to trades.
4. Input Settings
Understanding and correctly configuring input settings is crucial for optimizing the Supertrend Advance Strategy for any specific market or asset. These settings, when tweaked correctly, can drastically impact the strategy's performance.
Grouping Inputs:
Before diving into individual input settings, it's important to group similar inputs. Grouping can simplify the user interface, making it easier to adjust settings related to a specific function or indicator.
Strategy Choice:
This input allows traders to select from various strategies that incorporate the Supertrend indicator. Options might include "Supertrend with RSI," "Supertrend with MACD," etc. By choosing a strategy, the associated input settings for that strategy become available.
Supertrend Settings:
1. Multiplier: Typically, a default value of 3 is used. This multiplier is used in the ATR calculation. Increasing it makes the Supertrend line further from prices, while decreasing it brings the line closer.
2. Period: The number of bars used in the ATR calculation. A common default is 7.
EMA Settings (Exponential Moving Average):
1. Period: Defines the number of previous bars used to calculate the EMA. Common periods are 9, 21, 50, and 200.
2. Source: Allows traders to choose which price (Open, Close, High, Low) to use in the EMA calculation.
RSI Settings (Relative Strength Index):
1. Length: Determines how many periods are used for RSI calculation. The standard setting is 14.
2. Overbought Level: The threshold at which the asset is considered overbought, typically set at 70.
3. Oversold Level: The threshold at which the asset is considered oversold, often at 30.
MACD Settings (Moving Average Convergence Divergence):
1. Short Period: The shorter EMA, usually set to 12.
2. Long Period: The longer EMA, commonly set to 26.
3. Signal Period: Defines the EMA of the MACD line, typically set at 9.
CCI Settings (Commodity Channel Index):
1. Period: The number of bars used in the CCI calculation, often set to 20.
2. Overbought Level: Typically set at +100, denoting overbought conditions.
3. Oversold Level: Usually set at -100, indicating oversold conditions.
SL/TP Settings (Stop Loss/Take Profit):
1. SL Multiplier: Defines the multiplier for the average true range (ATR) to set the stop loss.
2. TP Multiplier: Defines the multiplier for the average true range (ATR) to set the take profit.
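In practice, these two multipliers translate into exit prices offset from the entry by a multiple of the ATR. A strategy-style sketch of that mechanic (the entry condition, ATR length, and multiplier values are placeholders, not this strategy's defaults):

```pine
//@version=5
strategy("ATR SL/TP sketch", overlay = true)

// Stop loss and take profit placed a multiple of the ATR away from the entry.
// ATR length, multipliers, and the entry condition are placeholders.
atrLen = 10
slMult = 1.5
tpMult = 3.0
atrValue = ta.atr(atrLen)

longSignal = ta.crossover(ta.ema(close, 9), ta.ema(close, 21))
if longSignal and strategy.position_size == 0
    strategy.entry("Long", strategy.long)
    strategy.exit("Long exit", from_entry = "Long", stop = close - slMult * atrValue, limit = close + tpMult * atrValue)
```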
Filtering Conditions:
This section allows traders to set conditions to filter out certain signals. For example, one might only want to take buy signals when the RSI is below 30, ensuring they buy during oversold conditions.
Trade Direction and Backtest Period:
1. Trade Direction: Allows traders to specify whether they want to take long trades, short trades, or both.
2. Backtest Period: Specifies the time range for backtesting the strategy. Traders can choose from options like 'Last 6 months,' 'Last 1 year,' etc.
It's essential to remember that while default settings are provided for many of these tools, optimal settings can vary based on the market, timeframe, and trading style. Always backtest new settings on historical data to gauge their potential efficacy.
5. Understanding Strategy Conditions
Developing an understanding of the conditions set within a trading strategy is essential for traders to maximize its potential. Here, we delve deep into the logic behind these conditions, using the Supertrend Advance Strategy as our focal point.
Basic Logic Behind Conditions:
Every strategy is built around a set of conditions that provide buy or sell signals. The conditions are based on mathematical or statistical methods and are rooted in the study of historical price data. The fundamental idea is to recognize patterns or behaviors that have been profitable in the past and might be profitable in the future.
Buy and Sell Conditions:
1. Buy Conditions: Usually formulated around bullish signals or indicators suggesting upward price momentum.
2. Sell Conditions: Centered on bearish signals or indicators indicating downward price momentum.
Simple Strategy:
The simple strategy could involve using just the Supertrend indicator. Here:
• Buy: When price closes above the Supertrend line.
• Sell: When price closes below the Supertrend line.
Pullback Strategy:
This strategy capitalizes on price retracements:
• Buy: When the price retraces to the Supertrend line after a bullish signal and is supported by another bullish indicator.
• Sell: When the price retraces to the Supertrend line after a bearish signal and is confirmed by another bearish indicator.
Indicators Used:
EMA (Exponential Moving Average):
• Logic: EMA gives more weight to recent prices, making it more responsive to current price movements. A shorter-period EMA crossing above a longer-period EMA can be a bullish sign, while the opposite is bearish.
RSI (Relative Strength Index):
• Logic: RSI measures the magnitude of recent price changes to analyze overbought or oversold conditions. Values above 70 are typically considered overbought, and values below 30 are considered oversold.
MACD (Moving Average Convergence Divergence):
• Logic: MACD assesses the relationship between two EMAs of a security’s price. The MACD line crossing above the signal line can be a bullish signal, while crossing below can be bearish.
CCI (Commodity Channel Index):
• Logic: CCI compares a security's average price change with its average price variation. A CCI value above +100 may mean the price is overbought, while below -100 might signify an oversold condition.
And others...
As the strategy expands or contracts, more indicators might be added or removed. The crucial point is to understand the core logic behind each, ensuring they align with the strategy's objectives.
Logic Behind Each Indicator:
1. EMA: Emphasizes recent price movements; provides dynamic support and resistance levels.
2. RSI: Indicates overbought and oversold conditions based on recent price changes.
3. MACD: Showcases momentum and direction of a trend by comparing two EMAs.
4. CCI: Measures the difference between a security's price change and its average price change.
Understanding strategy conditions is not just about knowing when to buy or sell but also about comprehending the underlying market dynamics that those conditions represent. As you familiarize yourself with each condition and indicator, you'll be better prepared to adapt and evolve with the ever-changing financial markets.
6. Trade Execution and Management
Trade execution and management are crucial aspects of any trading strategy. Efficient execution can significantly impact profitability, while effective management can preserve capital during adverse market conditions. In this section, we'll explore the nuances of position entry, exit strategies, and various Stop Loss (SL) and Take Profit (TP) methodologies within the Supertrend Advance Strategy.
Position Entry:
Effective trade entry revolves around:
1. Timing: Enter at a point where the risk-reward ratio is favorable. This often corresponds to confirmatory signals from multiple indicators.
2. Volume Analysis: Ensure there's adequate volume to support the movement. Volume can validate the strength of a signal.
3. Confirmation: Use multiple indicators or chart patterns to confirm the entry point. For instance, a buy signal from the Supertrend indicator can be confirmed with a bullish MACD crossover.
Position Exit Strategies:
A successful exit strategy will lock in profits and minimize losses. Here are some strategies:
1. Fixed Time Exit: Exiting after a predetermined period.
2. Percentage-based Profit Target: Exiting after a certain percentage gain.
3. Indicator-based Exit: Exiting when an indicator gives an opposing signal.
Percentage-based SL/TP:
• Stop Loss (SL): Set a fixed percentage below the entry price to limit potential losses.
• Example: A 2% SL on an entry at $100 would trigger a sell at $98.
• Take Profit (TP): Set a fixed percentage above the entry price to lock in gains.
• Example: A 5% TP on an entry at $100 would trigger a sell at $105.
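The arithmetic above maps directly onto fixed exit prices: for a long entry, SL = entry price × (1 − sl%) and TP = entry price × (1 + tp%). A small sketch using the 2% / 5% figures from the example (the entry condition itself is a placeholder):

```pine
//@version=5
strategy("Percentage SL/TP sketch", overlay = true)

// For a long at entryPrice: stop = entry * (1 - slPct), target = entry * (1 + tpPct).
// These are the 2% / 5% figures from the example; the entry condition is a placeholder.
slPct = 0.02
tpPct = 0.05

longSignal = ta.crossover(ta.ema(close, 9), ta.ema(close, 21))
if longSignal and strategy.position_size == 0
    entryPrice = close
    strategy.entry("Long", strategy.long)
    strategy.exit("Long exit", from_entry = "Long", stop = entryPrice * (1 - slPct), limit = entryPrice * (1 + tpPct))
```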
Supertrend-based SL/TP:
• Stop Loss (SL): Position the SL at the Supertrend line. If the price breaches this line, it could indicate a trend reversal.
• Take Profit (TP): One could set the TP at a point where the Supertrend line flattens or turns, indicating a possible slowdown in momentum.
Swing high/low-based SL/TP:
• Stop Loss (SL): For a long position, set the SL just below the recent swing low. For a short position, set it just above the recent swing high.
• Take Profit (TP): For a long position, set the TP near a recent swing high or resistance. For a short position, near a swing low or support.
And other methods...
1. Trailing Stop Loss: This dynamic SL adjusts with the price movement, locking in profits as the trade moves in your favor.
2. Multiple Take Profits: Divide the position into segments and set multiple TP levels, securing profits in stages.
3. Opposite Signal Exit: Exit when another reliable indicator gives an opposite signal.
Trade execution and management are as much an art as they are a science. They require a blend of analytical skill, discipline, and intuition. Regularly reviewing and refining your strategies, especially in light of changing market conditions, is crucial to maintaining consistent trading performance.
7. Visual Representations
Visual tools are essential for traders, as they simplify complex data into an easily interpretable format. Properly analyzing and understanding the plots on a chart can provide actionable insights and a more intuitive grasp of market conditions. In this section, we’ll delve into various visual representations used in the Supertrend Advance Strategy and their significance.
Understanding Plots on the Chart:
Charts are the primary visual aids for traders. The arrangement of data points, lines, and colors on them tell a story about the market's past, present, and potential future moves.
1. Data Points: These represent individual price actions over a specific timeframe. For instance, a daily chart will have data points showing the opening, closing, high, and low prices for each day.
2. Colors: Used to indicate the nature of price movement. Commonly, green is used for bullish (upward) moves and red for bearish (downward) moves.
Trend Lines:
Trend lines are straight lines drawn on a chart that connect a series of price points. Their significance:
1. Uptrend Line: Drawn along the lows, representing support. A break below might indicate a trend reversal.
2. Downtrend Line: Drawn along the highs, indicating resistance. A break above might suggest the start of a bullish trend.
Filled Areas:
These represent a range between two values on a chart, usually shaded or colored. For instance:
1. Bollinger Bands: The area between the upper and lower band is filled, giving a visual representation of volatility.
2. Volume Profile: Can show a filled area representing the amount of trading activity at different price levels.
Stop Loss and Take Profit Lines:
These are horizontal lines representing pre-determined exit points for trades.
1. Stop Loss Line: Indicates the level at which a trade will be automatically closed to limit losses. Positioned according to the trader's risk tolerance.
2. Take Profit Line: Denotes the target level to lock in profits. Set according to potential resistance (for long trades) or support (for short trades) or other technical factors.
Trailing Stop Lines:
A trailing stop is a dynamic form of stop loss that moves with the price. On a chart:
1. For Long Trades: Starts below the entry price and moves up with the price but remains static if the price falls, ensuring profits are locked in.
2. For Short Trades: Starts above the entry price and moves down with the price but remains static if the price rises.
Visual representations offer traders a clear, organized view of market dynamics. Familiarity with these tools ensures that traders can quickly and accurately interpret chart data, leading to more informed decision-making. Always ensure that the visual aids used resonate with your trading style and strategy for the best results.
8. Backtesting
Backtesting is a fundamental process in strategy development, enabling traders to evaluate the efficacy of their strategy using historical data. It provides a snapshot of how the strategy would have performed in past market conditions, offering insights into its potential strengths and vulnerabilities. In this section, we'll explore the intricacies of setting up and analyzing backtest results and the caveats one must be aware of.
Setting Up Backtest Period:
1. Duration: Determine the timeframe for the backtest. It should be long enough to capture various market conditions (bullish, bearish, sideways). For instance, if you're testing a daily strategy, consider a period of several years.
2. Data Quality: Ensure the data source is reliable, offering high-resolution and clean data. This is vital to get accurate backtest results.
3. Segmentation: Instead of a continuous period, sometimes it's helpful to backtest over distinct market phases, like a particular bear or bull market, to see how the strategy holds up in different environments.
Analyzing Backtest Results:
1. Performance Metrics: Examine metrics like the total return, annualized return, maximum drawdown, Sharpe ratio, and others to gauge the strategy's efficiency.
2. Win Rate: It's the ratio of winning trades to total trades. A high win rate doesn't always signify a good strategy; it should be evaluated in conjunction with other metrics.
3. Risk/Reward: Understand the average profit versus the average loss per trade. A strategy might have a low win rate but still be profitable if the average gain far exceeds the average loss.
4. Drawdown Analysis: Review the periods of losses the strategy could incur and how long it takes, on average, to recover.
9. Tips and Best Practices
Successful trading requires more than just knowing how a strategy works. It necessitates an understanding of when to apply it, how to adjust it to varying market conditions, and the wisdom to recognize and avoid common pitfalls. This section offers insightful tips and best practices to enhance the application of the Supertrend Advance Strategy.
When to Use the Strategy:
1. Market Conditions: Ideally, employ the Supertrend Advance Strategy during trending market conditions. This strategy thrives when there are clear upward or downward trends. It might be less effective during consolidative or sideways markets.
2. News Events: Be cautious around significant news events, as they can cause extreme volatility. It might be wise to avoid trading immediately before and after high-impact news.
3. Liquidity: Ensure you are trading in assets/markets with sufficient liquidity. High liquidity ensures that the price movements are more reflective of genuine market sentiment and not due to thin volume.
Adjusting Settings for Different Markets/Timeframes:
1. Markets: Each market (stocks, forex, commodities) has its own characteristics. It's essential to adjust the strategy's parameters to align with the market's volatility and liquidity.
2. Timeframes: Shorter timeframes (like 1-minute or 5-minute charts) tend to have more noise. You might need to adjust the settings to filter out false signals. Conversely, for longer timeframes (like daily or weekly charts), you might need to be more responsive to genuine trend changes.
3. Customization: Regularly review and tweak the strategy's settings. Periodic adjustments can ensure the strategy remains optimized for the current market conditions.
10. Frequently Asked Questions (FAQs)
Given the complexities and nuances of the Supertrend Advance Strategy, it's only natural for traders, both new and seasoned, to have questions. This section addresses some of the most commonly asked questions regarding the strategy.
1. What exactly is the Supertrend Advance Strategy?
The Supertrend Advance Strategy is an evolved version of the traditional Supertrend indicator. It's designed to provide clearer buy and sell signals by incorporating additional indicators like EMA, RSI, MACD, CCI, etc. The strategy aims to capitalize on market trends while minimizing false signals.
2. Can I use the Supertrend Advance Strategy for all asset types?
Yes, the strategy can be applied to various asset types like stocks, forex, commodities, and cryptocurrencies. However, it's crucial to adjust the settings accordingly to suit the specific characteristics and volatility of each asset type.
3. Is this strategy suitable for day trading?
Absolutely! The Supertrend Advance Strategy can be adjusted to suit various timeframes, making it versatile for both day trading and long-term trading. Remember to fine-tune the settings to align with the timeframe you're trading on.
4. How do I deal with false signals?
No strategy is immune to false signals. However, by combining the Supertrend with other indicators and adhering to strict risk management protocols, you can minimize the impact of false signals. Always use stop-loss orders and consider filtering trades with additional confirmation signals.
5. Do I need any prior trading experience to use this strategy?
While the Supertrend Advance Strategy is designed to be user-friendly, having a foundational understanding of trading and market analysis can greatly enhance your ability to employ the strategy effectively. If you're a beginner, consider pairing the strategy with further education and practice on demo accounts.
6. How often should I review and adjust the strategy settings?
There's no one-size-fits-all answer. Some traders adjust settings weekly, while others might do it monthly. The key is to remain responsive to changing market conditions. Regular backtesting can give insights into potential required adjustments.
7. Can the Supertrend Advance Strategy be automated?
Yes, many traders use algorithmic trading platforms to automate their strategies, including the Supertrend Advance Strategy. However, always monitor automated systems regularly to ensure they're operating as intended.
8. Are there any markets or conditions where the strategy shouldn't be used?
The strategy might generate more false signals in markets that are consolidative or range-bound. During significant news events or times of unexpected high volatility, it's advisable to tread with caution or stay out of the market.
9. How important is backtesting with this strategy?
Backtesting is crucial as it allows traders to understand how the strategy would have performed in the past, offering insights into potential profitability and areas of improvement. Always backtest any new setting or tweak before applying it to live trades.
10. What if the strategy isn't working for me?
No strategy guarantees consistent profits. If it's not working for you, consider reviewing your settings, seeking expert advice, or complementing the Supertrend Advance Strategy with other analysis methods. Remember, continuous learning and adaptation are the keys to trading success.
Other comments
Value of combining several indicators in this script and how they work together
Diversification of Signals: Just as diversifying an investment portfolio can reduce risk, using multiple indicators can offer varied perspectives on potential price movements. Each indicator can capture a different facet of the market, ensuring that traders are not overly reliant on a single data point.
Confirmation & Reduced False Signals: A common challenge with many indicators is the potential for false signals. By requiring confirmation from multiple indicators before acting, the chances of acting on a false signal can be significantly reduced.
Flexibility Across Market Conditions: Different indicators might perform better under different market conditions. For example, while moving averages might excel in trending markets, oscillators like RSI might be more useful during sideways or range-bound conditions. A mashup strategy can potentially adapt better to varying market scenarios.
Comprehensive Analysis: With multiple indicators, traders can gauge trend strength, momentum, volatility, and potential market reversals all at once, providing a holistic view of the market.
How do the different indicators in the Supertrend Advance Strategy work together?
Supertrend: This is primarily a trend-following indicator. It provides traders with buy and sell signals based on the volatility of the price. When combined with other indicators, it can filter out noise and give more weight to strong, confirmed trends.
EMA (Exponential Moving Average): EMA gives more weight to recent price data. It can be used to identify the direction and strength of a trend. When the price is above the EMA, it's generally considered bullish, and vice versa.
RSI (Relative Strength Index): An oscillator that measures the magnitude of recent price changes to evaluate overbought or oversold conditions. By cross-referencing with other indicators like EMA or MACD, traders can spot potential reversals or confirmations of a trend.
MACD (Moving Average Convergence Divergence): This indicator identifies changes in the strength, direction, momentum, and duration of a trend in a stock's price. When the MACD line crosses above the signal line, it can be a bullish sign, and when it crosses below, it can be bearish. Pairing MACD with Supertrend can provide dual confirmation of a trend.
CCI (Commodity Channel Index): Initially developed for commodities, CCI can indicate overbought or oversold conditions. It can be used in conjunction with other indicators to determine entry and exit points.
In essence, the synergy of these indicators provides a balanced, comprehensive approach to trading. Each indicator offers its unique lens into market conditions, and when they align, it can be a powerful indication of a trading opportunity. This combination not only reduces the potential drawbacks of each individual indicator but leverages their strengths, aiming for more consistent and informed trading decisions.
Backtesting and Default Settings
• This indicator has been optimized for 1-hour charts. However, the underlying principles of this strategy are supply and demand in the financial markets, and the strategy can be applied to all timeframes. Day traders can use the 1-minute or 5-minute charts; swing traders can use the daily charts.
• This strategy has been designed to identify the most promising, highest probability entries and trades for each stock or other financial security.
• The combination of the qualifiers results in a highly selective strategy which only considers the most promising swing-trading entries. As a result, you will normally find only a low number of trades per year for each stock or other financial security if you apply this strategy to the daily charts. Shorter timeframes will result in a higher number of trades per year.
• Consequently, traders need to apply this strategy for a full watchlist rather than just one financial security.
• Default properties: RSI on (length 14, RSI buy level 50, sell level 50), EMA, RSI, MACD on, type of strategy pullback, SL/TP type: ATR (length 10, factor 3), trade direction both, quantity 5, take profit swing hl 5.1, highest / lowest lookback 2, enable ATR trail (ATR length 10, SL ATR multiplier 1.4, TP multiplier 2.1, lookback = 4, trade direction = both).
StatMetrics
Library "StatMetrics"
A utility library for common statistical indicators and ratios used in technical analysis.
Includes Z-Score, correlation, PLF, SRI, Sharpe, Sortino, Omega ratios, and normalization tools.
zscore(src, len)
Calculates the Z-score of a series
Parameters:
src (float) : The input price or series (e.g., close)
len (simple int) : The lookback period for mean and standard deviation
Returns: Z-score: number of standard deviations the input is from the mean
corr(x, y, len)
Computes Pearson correlation coefficient between two series
Parameters:
x (float) : First series
y (float) : Second series
len (simple int) : Lookback period
Returns: Correlation coefficient between -1 and 1
plf(src, longLen, shortLen, smoothLen)
Calculates the Price Lag Factor (PLF) as the difference between long and short Z-scores, normalized and smoothed
Parameters:
src (float) : Source series (e.g., close)
longLen (simple int) : Long Z-score period
shortLen (simple int) : Short Z-score period
smoothLen (simple int) : Hull MA smoothing length
Returns: Smoothed and normalized PLF oscillator
sri(signal, len)
Computes the Statistical Reliability Index (SRI) based on trend persistence
Parameters:
signal (float) : A price or signal series (e.g., smoothed PLF)
len (simple int) : Lookback period for smoothing and deviation
Returns: Normalized trend reliability score
sharpe(src, len)
Calculates the Sharpe Ratio over a period
Parameters:
src (float) : Price series (e.g., close)
len (simple int) : Lookback period
Returns: Sharpe ratio value
sortino(src, len)
Calculates the Sortino Ratio over a period, using only downside volatility
Parameters:
src (float) : Price series
len (simple int) : Lookback period
Returns: Sortino ratio value
omega(src, len)
Calculates the Omega Ratio as the ratio of upside to downside return area
Parameters:
src (float) : Price series
len (simple int) : Lookback period
Returns: Omega ratio value
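As described, the Omega ratio compares the area of returns above a threshold (zero here) to the area below it. A compact sketch of that computation over a rolling window (the window length is an arbitrary choice):

```pine
//@version=5
indicator("Omega ratio sketch")

// Omega = sum of returns above the threshold / |sum of returns below it|,
// with the threshold taken as zero. The window length is an arbitrary choice.
len = 50
ret = close / close[1] - 1
upside = math.sum(math.max(ret, 0), len)
downside = math.sum(math.max(-ret, 0), len)
plot(downside != 0 ? upside / downside : na, title = "Omega")
```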
beta(asset, benchmark, len)
Calculates beta coefficient of asset vs benchmark using rolling covariance
Parameters:
asset (float) : Series of the asset (e.g., close)
benchmark (float) : Series of the benchmark (e.g., SPX close)
len (simple int) : Lookback window
Returns: Beta value (slope of linear regression)
alpha(asset, benchmark, len)
Calculates rolling alpha of an asset relative to a benchmark
Parameters:
asset (float) : Series of the asset (e.g., close)
benchmark (float) : Series of the benchmark (e.g., SPX close)
len (simple int) : Lookback window
Returns: Alpha value (excess return not explained by Beta exposure)
skew(x, len)
Computes skewness of a return series
Parameters:
x (float) : Input series (e.g., returns)
len (simple int) : Lookback period
Returns: Skewness value
kurtosis(x, len)
Computes kurtosis of a return series
Parameters:
x (float) : Input series (e.g., returns)
len (simple int) : Lookback period
Returns: Kurtosis value
cv(x, len)
Calculates Coefficient of Variation
Parameters:
x (float) : Input series (e.g., returns or prices)
len (simple int) : Lookback period
Returns: CV value
autocorr(x, len)
Calculates autocorrelation with 1-lag
Parameters:
x (float) : Series to test
len (simple int) : Lookback window
Returns: Autocorrelation at lag 1
stderr(x, len)
Calculates rolling standard error of a series
Parameters:
x (float) : Input series
len (simple int) : Lookback window
Returns: Standard error (std dev / sqrt(n))
info_ratio(asset, benchmark, len)
Calculates the Information Ratio
Parameters:
asset (float) : Asset price series
benchmark (float) : Benchmark price series
len (simple int) : Lookback period
Returns: Information ratio (alpha / tracking error)
tracking_error(asset, benchmark, len)
Measures deviation from benchmark (Tracking Error)
Parameters:
asset (float) : Asset return series
benchmark (float) : Benchmark return series
len (simple int) : Lookback window
Returns: Tracking error value
max_drawdown(x, len)
Computes maximum drawdown over a rolling window
Parameters:
x (float) : Price series
len (simple int) : Lookback window
Returns: Rolling max drawdown percentage (as a negative value)
zscore_signal(z, ob, os)
Converts Z-score into a 3-level signal
Parameters:
z (float) : Z-score series
ob (float) : Overbought threshold
os (float) : Oversold threshold
Returns: -1, 0, or 1 depending on signal state
r_squared(x, y, len)
Calculates rolling R-squared (coefficient of determination)
Parameters:
x (float) : Asset returns
y (float) : Benchmark returns
len (simple int) : Lookback window
Returns: R-squared value (0 to 1)
entropy(x, len)
Approximates Shannon entropy using log returns
Parameters:
x (float) : Price series
len (simple int) : Lookback period
Returns: Approximate entropy
zreversal(z)
Detects Z-score reversals to the mean
Parameters:
z (float) : Z-score series
Returns: +1 on upward reversal, -1 on downward
momentum_rank(x, len)
Calculates relative momentum strength
Parameters:
x (float) : Price series
len (simple int) : Lookback window
Returns: Proportion of lookback where current price is higher
normalize(x, len)
Normalizes a series to a 0–1 range over a period
Parameters:
x (float) : The input series
len (simple int) : Lookback period
Returns: Normalized value between 0 and 1
composite_score(score1, score2, score3)
Combines multiple normalized scores into a composite score
Parameters:
score1 (float)
score2 (float)
score3 (float)
Returns: Average composite score
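For context on how a Pine Script library like this is typically consumed, here is a minimal usage sketch. The import path (publisher name and version number) is a placeholder, so replace it with the actual path shown on the library's page; the function calls follow the signatures documented above.

```
//@version=5
indicator("StatMetrics usage (sketch)")

// Placeholder import path: substitute the real publisher name and version from the library page.
import PublisherName/StatMetrics/1 as stat

z    = stat.zscore(close, 50)            // standardized price over 50 bars
sig  = stat.zscore_signal(z, 2.0, -2.0)  // -1 / 0 / +1 signal state
shrp = stat.sharpe(close, 100)           // rolling Sharpe ratio

plot(z, "Z-score")
plot(sig, "Signal", color = color.orange)
plot(shrp, "Sharpe", color = color.teal)
```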
ICT Killzones and Sessions W/ Silver Bullet + Macros
Forex and Equity Session Tracker with Killzones, Silver Bullet, and Macro Times
This Pine Script indicator is a comprehensive timekeeping tool designed specifically for ICT traders using any time-based strategy. It helps you visualize and keep track of forex and equity session times, kill zones, macro times, and silver bullet hours.
Features:
Session and Killzone Lines:
Green: London Open (LO)
White: New York (NY)
Orange: Australian (AU)
Purple: Asian (AS)
Includes AM and PM session markers.
Dotted/Striped Lines indicate overlapping kill zones within the session timeline.
Customization Options:
Display sessions and killzones in collapsed or full view.
Hide specific sessions or killzones based on your preferences.
Customize colors, texts, and sizes.
Option to hide drawings older than the current day.
Automatic Updates:
The indicator draws all lines and boxes at the start of a new day.
Automatically adjusts time-based boxes according to the New York timezone.
Killzone Time Windows (for indices):
London KZ: 02:00 - 05:00
New York AM KZ: 07:00 - 10:00
New York PM KZ: 13:30 - 16:00
Silver Bullet Times:
03:00 - 04:00
10:00 - 11:00
14:00 - 15:00
Macro Times:
02:33 - 03:00
04:03 - 04:30
08:50 - 09:10
09:50 - 10:10
10:50 - 11:10
11:50 - 12:50
Latest Update:
January 15:
Added option to automatically change text coloring based on the chart.
Included additional optional macro times per user request:
12:50 - 13:10
13:50 - 14:15
14:50 - 15:10
15:50 - 16:15
ICT Sessions and Kill Zones
What They Are:
ICT Sessions: These are specific times during the trading day when market activity is expected to be higher, such as the London Open, New York Open, and the Asian session.
Kill Zones: These are specific time windows within these sessions where the probability of significant price movements is higher. For example, the New York AM Kill Zone is typically from 8:30 AM to 11:00 AM EST.
How to Use Them:
Identify the Session: Determine which trading session you are in (London, New York, or Asian).
Focus on Kill Zones: Within that session, focus on the kill zones for potential trade setups. For instance, during the New York session, look for setups between 8:30 AM and 11:00 AM EST.
Silver Bullets
What They Are:
Silver Bullets: These are specific, high-probability trade setups that occur within the kill zones. They are designed to be "one shot, one kill" trades, meaning they aim for precise and effective entries and exits.
How to Use Them:
Time-Based Setup: Look for these setups within the designated kill zones. For example, between 10:00 AM and 11:00 AM for the New York AM session.
Chart Analysis: Start with higher time frames like the 15-minute chart and then refine down to 5-minute and 1-minute charts to identify imbalances or specific patterns.
Macros
What They Are:
Macros: These are broader market conditions and trends that influence your trading decisions. They include understanding the overall market direction, seasonal tendencies, and the Commitment of Traders (COT) reports.
How to Use Them:
Understand Market Conditions: Be aware of the macroeconomic factors and market conditions that could affect price movements.
Seasonal Tendencies: Know the seasonal patterns that might influence the market direction.
COT Reports: Use the Commitment of Traders reports to understand the positioning of large traders and commercial hedgers.
Putting It All Together
Preparation: Understand the macro conditions and review the COT reports.
Session and Kill Zone: Identify the trading session and focus on the kill zones.
Silver Bullet Setup: Look for high-probability setups within the kill zones using refined chart analysis.
Execution: Execute the trade with precision, aiming for a "one shot, one kill" outcome.
By following these steps, you can effectively use ICT sessions, kill zones, silver bullets, and macros to enhance your trading strategy.
Usage:
To maximize your experience, shrink the pane where the script is drawn. This minimizes distractions while keeping the essential time markers visible. The script is designed to help traders by clearly annotating key trading periods without overwhelming their charts.
Originality and Justification:
This indicator uniquely integrates various time-based strategies essential for ICT traders. Unlike other indicators, it consolidates session times, kill zones, macro times, and silver bullet hours into one comprehensive tool. This allows traders to have a clear and organized view of critical trading periods, facilitating better decision-making.
Credits:
This script incorporates open-source elements with significant improvements to enhance functionality and user experience. All credit goes to itradesize for the SB + Macro boxes
TASC 2024.03 Rate of Directional Change
█ OVERVIEW
This script implements the Rate of Directional Change (RODC) indicator introduced by Richard Poster in the "Taming The Effects Of Whipsaw" article featured in the March 2024 edition of TASC's Traders' Tips.
█ CONCEPTS
In his article, Richard Poster discusses an approach to potentially reduce false trend-following strategy entry signals due to whipsaws in forex data. The RODC indicator is central to this approach. The idea behind RODC is that one can characterize market whipsaw as alternating up and down ZigZag segments. By counting the number of up and down segments within a lookback window, the RODC indicator aims to identify if the window contains a significant whipsaw pattern:
RODC = 100 * Segments / Window Size (bars)
Larger RODC values suggest elevated whipsaw in the calculation window, while smaller values signify trending price activity.
█ CALCULATIONS
• For each price bar, the script iterates through the lookback window to identify up and down segments.
• If the price change between subsequent bars within the window is in the direction opposite to the current segment and exceeds the specified threshold, the calculation interprets the condition as a reversal point and the start of a new segment.
• The script uses the number of segments within the window to calculate RODC according to the above formula.
• Finally, the script applies a simple moving average to smoothen the RODC data.
Users can change the length of the lookback window, the threshold value, and the smoothing length in the "Inputs" tab of the script's settings.
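As a rough illustration of the segment-counting logic described above (this is not the published TASC code, and the default values are arbitrary), a Pine sketch could look like this:

```
//@version=5
// Rough sketch of the RODC idea described above; not Richard Poster's published code.
indicator("RODC (sketch)")

winLen    = input.int(30, "Lookback window (bars)")
threshold = input.float(1.0, "Reversal threshold (price units)")
smoothLen = input.int(5, "Smoothing length")

rodc() =>
    dir = 0        // current segment direction: +1 up, -1 down, 0 not yet set
    segments = 0
    for i = winLen - 1 to 1 by -1          // walk the window from oldest to newest bar
        chg = close[i - 1] - close[i]      // change from the older bar to the newer bar
        if dir == 0
            if math.abs(chg) > threshold
                dir := chg > 0 ? 1 : -1
                segments += 1
        else if chg * dir < 0 and math.abs(chg) > threshold
            dir := -dir                    // move against the current segment -> new segment
            segments += 1
    100.0 * segments / winLen

rodcRaw  = bar_index > winLen ? rodc() : na
smoothed = ta.sma(rodcRaw, smoothLen)
plot(smoothed, "RODC", color = color.orange)
```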
Machine Learning: Anchored Gaussian Process Regression [LuxAlgo]
Machine Learning: Anchored Gaussian Process Regression is an anchored version of Machine Learning: Gaussian Process Regression.
It implements Gaussian Process Regression (GPR), a popular machine-learning method capable of estimating underlying trends in prices as well as forecasting them. Users can set a Training Window by choosing 2 points. GPR will be calculated for the data between these 2 points.
Do remember that forecasting trends in the market is challenging; do not use this tool as a standalone basis for your trading decisions.
🔶 USAGE
When adding the indicator to the chart, users will be prompted to select a starting and ending point for the calculations, click on your chart to select those points.
The start & end points are named 'Anchor 1' & 'Anchor 2'; the Training Window is located between these 2 points. Once both points are positioned, the Training Window is set, after which the Gaussian Process Regression (GPR) is calculated using the data between both Anchors.
The blue line is the GPR fit, the red line is the GPR prediction, derived from the data within the Training Window.
Two user settings controlling the trend estimate are available, Smooth and Sigma.
Smooth determines the smoothness of our estimate, with higher values returning smoother results suitable for longer-term trend estimates.
Sigma controls the amplitude of the forecast, with values closer to 0 returning results with a higher amplitude.
One of the advantages of the anchoring process is the ability for the user to evaluate the accuracy of forecasts and further understand how settings affect their accuracy.
The publication also shows the mean average (faint silver line), which indicates the average of the prices within the calculation window (between the anchors). This can be used as a reference point for the forecast, seeing how it deviates from the training window average.
🔶 DETAILS
🔹 Limited Training Window
The Training Window is limited due to matrix.new() limitations in size.
When the 2 points are too far from each other (as in the latter example), the line will end at the maximum limit, without giving a size error.
The red forecasted line is always given priority.
🔹 Positioning Anchors
Typically Anchor 1 is located further back in history than Anchor 2; however, placing Anchor 2 before Anchor 1 is perfectly possible and won't cause issues.
🔶 SETTINGS
Anchor 1 / Anchor 2: both points will form the Training Window.
Forecasting Length: Forecasting horizon, determines how many bars in the 'future' are forecasted.
Smooth: Controls the degree of smoothness of the model fit.
Sigma: Noise variance. Controls the amplitude of the forecast, lower values will make it more sensitive to outliers.
Profitunity - Beginner [TC]
This indicator aggregates the knowledge of the first level of the Trading Chaos approach by Bill Williams. It uses the Market Facilitation Index (MFI) in conjunction with the type of bar (candle) to generate strong long and strong short signals.
General information
Bars numeration
All bars (candles) can be numbered with the following algorithm. Divide the candle into 3 equal parts from high to low: the highest third is number 1, the middle third number 2, and the lowest third number 3. The first digit of a candle's number is the third in which the price opened, and the second digit is the third in which it closed. For example, if the price opened in the highest third and closed in the lowest one, the candle has the number 13.
Trend defining
Candles can also be divided into three groups according to the trend condition: uptrend, downtrend, and sideways. If the middle of the candle's trading range is above the high of the previous candle, it is an uptrend candle; if it is below the low of the previous candle, it is a downtrend candle; otherwise it is a sideways candle.
Profitunity windows
According to Bill Williams, the MFI has 4 windows: fake, green, fade and squat. I am not going to describe the full MFI methodology here, but you should know that the most valuable windows are green and squat. The green state is an indication of a true move in the market. Squat is a sign that the increase in volume has not triggered trend continuation and a reversal is about to happen.
How to use?
You can use this script as a helper for automatically defining the candle type. The indicator shows only the green (green candle color) and squat (red candle color) MFI states. Add the script to any chart and timeframe to see the labels.
The "strong long" label flashes when 3 conditions are met:
1. Squat candle
2. Candle number 13
3. Downtrend candle
"Strong short" label flashes when:
1. Squat candle
2. Candle number 31
3. Uptrend candle
This indicator helps to find trend reversal points and can be used in conjunction with other TA tools to find entry points.
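For readers who prefer code, here is a rough Pine sketch of the rules described above: the Bill Williams MFI states, the bar numbering by thirds, and the strong long / strong short conditions. It is an illustration of the logic, not the published script.

```
//@version=5
// Illustration of the rules above: BW Market Facilitation Index states,
// candle numbering by thirds, and the strong long / strong short conditions.
indicator("Profitunity rules (sketch)", overlay = true)

mfi   = (high - low) / volume
squat = mfi < mfi[1] and volume > volume[1]   // squat: volume up, MFI down
green = mfi > mfi[1] and volume > volume[1]   // green: volume up, MFI up

// Which third of the bar's range a price sits in: 1 = highest, 2 = middle, 3 = lowest
third(price) =>
    rng = high - low
    rng == 0 ? 1 : price >= high - rng / 3 ? 1 : price >= low + rng / 3 ? 2 : 3
candleNum = third(open) * 10 + third(close)   // e.g. open in top third, close in bottom third -> 13

// Trend type from the midpoint of the bar's range vs the previous bar
mid     = math.avg(high, low)
upBar   = mid > high[1]
downBar = mid < low[1]

strongLong  = squat and candleNum == 13 and downBar
strongShort = squat and candleNum == 31 and upBar

barcolor(squat ? color.red : green ? color.green : na)
plotshape(strongLong,  title = "Strong long",  style = shape.labelup,   location = location.belowbar, color = color.green, text = "SL")
plotshape(strongShort, title = "Strong short", style = shape.labeldown, location = location.abovebar, color = color.red,   text = "SS")
```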
FVG 9:31–10:00 AM ET
FVG 9:31–10:00 AM ET - Script Description
What This Script Does
This indicator finds **Fair Value Gaps (FVGs)** that form during the first 29 minutes of the U.S. stock market (9:31 AM to 10:00 AM Eastern Time). A Fair Value Gap is a price imbalance where there's a gap between candles that often becomes an important support or resistance level.
Key Features:
- **Time Window**: Only looks for FVGs between 9:31-10:00 AM ET (most important opening period)
- **One Per Day**: Finds only the first FVG that forms in this time window each day
- **Visual Display**: Draws a purple box around the gap with a clear "FVG" label
- **Price Tracking**: Monitors when price comes back to test the gap level
- **Alert System**: Sends notifications when price returns to the FVG zone
How FVGs Are Detected:
- **Bullish FVG**: When there's a gap up (low of middle candle is above high of 3rd candle back)
- **Bearish FVG**: When there's a gap down (high of middle candle is below low of 3rd candle back)
The 9:31-10:00 AM window is chosen because this is when institutions and algorithms create their biggest price moves right after market open, making these gaps very reliable.
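A minimal Pine sketch of the three-candle detection rule and the session filter described above (a simplified illustration, not the published indicator; box styling, the extension setting, and alerting are reduced to the basics):

```
//@version=5
// Simplified sketch of the detection rules above; not the published indicator.
indicator("FVG 9:31-10:00 (sketch)", overlay = true)

inWindow = not na(time(timeframe.period, "0931-1000", "America/New_York"))
newDay   = nz(ta.change(time("D"))) != 0
var bool found = false
if newDay
    found := false   // only the first FVG of each day is marked

// Three-candle imbalance: a gap between the bar two candles back and the current bar
bullFVG = low > high[2]
bearFVG = high < low[2]

if bar_index > 2 and inWindow and not found and (bullFVG or bearFVG)
    found := true
    top = bullFVG ? low     : low[2]
    bot = bullFVG ? high[2] : high
    box.new(bar_index - 2, top, bar_index, bot, bgcolor = color.new(color.purple, 80), border_color = color.purple)
    label.new(bar_index, top, "FVG", style = label.style_label_down, color = color.purple, textcolor = color.white)
```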
Customization Options
User Settings
Extend FVG Box (Bars)
- **What it does**: Makes the purple box longer to the right
- **Default**: 0 (box ends right after the gap forms)
- **Options**: Any number from 0 to 100+
- **When to use**:
- Keep at 0 for clean historical view
- Set to 10-20 to track the gap during the current session
- Set higher for longer reference
Code Settings (Can Be Changed)
Time Window
- **Start**: 9:31 AM Eastern Time
- **End**: 10:00 AM Eastern Time
- **Can modify**: Change the hour/minute numbers in the code
Visual Style
- **Color**: Purple with see-through background
- **Label**: Shows "FVG" text in white
- **Can modify**: Change colors and transparency in the code
How to Use:
Setup
Chart Settings
1. Use 1-minute, 5-minute, or 15-minute charts (works best on these timeframes)
2. Apply to liquid markets like ES, NQ, major stocks, or forex pairs
3. Set the "Extend FVG Box" to your preference (start with 0 or 10)
What You'll See
- A purple box appears when an FVG forms during 9:31-10:00 AM
- Box shows the exact price levels of the gap
- "FVG" label appears on the box
- Only one FVG per day will be marked
Trading Strategies
Basic FVG Trading
1. **Wait for Formation**: Let the purple box appear during 9:31-10:00 AM
2. **Watch Price Movement**: See if price moves away from the gap
3. **Enter on Retest**: When price comes back to the purple box area, consider entering
4. **Trade Direction**:
- Bullish FVG = look for long opportunities when price retests
- Bearish FVG = look for short opportunities when price retests
Entry Methods
- **Bounce Play**: Enter when price touches the FVG box and bounces away
- **Break Play**: Enter if price strongly breaks through the FVG box
- **Rejection Play**: Enter opposite direction if price gets rejected at the FVG
Risk Management
Stop Losses
- Place stops just outside the FVG box (a few ticks beyond the gap)
- If trading a bounce, stop goes on opposite side of the gap
- If trading a break, stop goes back inside the gap
Position Sizing
- Start small until you understand how FVGs work in your market
- Bigger gaps = smaller position size (more risk)
- Smaller gaps = can use larger position size
Profit Targets
- Take profits at obvious levels like round numbers, previous highs/lows
- Consider taking half profits at 1:1 risk/reward ratio
- Let some position run if the move is strong
Best Practices
When It Works Best
- High-volume stocks and futures (ES, NQ work great)
- Normal market days without major news during the 9:31-10:00 window
- When there's clear institutional activity in the opening period
When to Be Careful
- Low-volume stocks or markets
- Major economic news releases during the time window
- Market holidays when volume is low
- Very choppy or sideways days
Alert Usage
- The script will alert you when price comes back to test the FVG
- Don't trade the alert blindly - always check the current market situation
- Use the alert as a heads-up to start watching the setup more closely
Tips for Success
- The earlier the FVG forms in the 9:31-10:00 window, often the more significant it is
- FVGs that form with high volume are usually more reliable
- Always consider the overall market direction - don't fight the main trend
- Practice on paper first to understand how FVGs behave in your chosen market
🔗 Works Best With:
✅ Liquidity Levels — Smart Swing Lows: Spot key structural lows that can fuel stop hunts and reversals.
✅ ICT Turtle Soup — Liquidity Reversal: Add a classic reversal pattern to your toolkit to catch fakeouts cleanly.
✅ ICT SMC Liquidity Grabs and OBs- Liquidity Grabs, Order Block Zones, and Fibonacci OTE Levels, allowing traders to identify institutional entry models with clean, rule-based visual signals.
This script is most valuable for day traders who want to catch institutional moves right after market open, but it can also help swing traders identify important intraday levels.
✅ ICT Macro Zones (Grey Box Version)- It tracks real-time highs and lows for each Silver Bullet session.
✅ Weekly Opening Gap (cryptonnnite)
Bilateral Filter For Loop [BackQuant]
Bilateral Filter For Loop
The Bilateral Filter For Loop is an advanced technical indicator designed to filter out market noise and smooth out price data, thus improving the identification of underlying market trends. It employs a bilateral filter, which is a sophisticated non-linear filter commonly used in image processing and price time series analysis. By considering both spatial and range differences between price points, this filter is highly effective at preserving significant trends while reducing random fluctuations, ultimately making it suitable for dynamic trend-following strategies.
Please take the time to read the following:
Key Features
1. Bilateral Filter Calculation:
The bilateral filter is the core of this indicator and works by applying a weight to each data point based on two factors: spatial distance and price range difference. This dual weighting process allows the filter to preserve important price movements while reducing the impact of less relevant fluctuations. The filter uses two primary parameters:
Spatial Sigma (σ_d): This parameter adjusts the weight applied based on the distance of each price point from the current price. A larger spatial sigma means more smoothing, as further away values will contribute more heavily to the result.
Range Sigma (σ_r): This parameter controls how much weight is applied based on the difference in price values. Larger price differences result in smaller weights, while similar price values result in larger weights, thereby preserving the trend while filtering out noise.
The output of this filter is a smoothed version of the original price series, which eliminates short-term fluctuations, helping traders focus on longer-term trends. The bilateral filter is applied over a rolling window, adjusting the level of smoothing dynamically based on both the distance between values and their relative price movements.
2. For Loop Calculation for Trend Scoring:
A for-loop is used to calculate the trend score based on the filtered price data. The loop compares the current value to previous values within the specified window, scoring the trend as follows:
+1 for upward movement (when the filtered value is greater than the previous value).
-1 for downward movement (when the filtered value is less than the previous value).
The cumulative result of this loop gives a continuous trend score, which serves as a directional indicator for the market's momentum. By summing the scores over the window period, the loop provides an aggregate value that reflects the overall trend strength. This score helps determine whether the market is experiencing a strong uptrend, downtrend, or sideways movement.
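A compact Pine sketch of the two calculations just described, the bilateral weighting and the for-loop score, is shown below. The parameter names and defaults are illustrative and are not the script's actual inputs.

```
//@version=5
// Illustrative sketch of the bilateral filter + for-loop score described above.
indicator("Bilateral filter for-loop (sketch)")

window = input.int(14, "Window")
sigmaD = input.float(5.0, "Spatial sigma")
sigmaR = input.float(2.0, "Range sigma")

// Bilateral filter: weight each past value by its distance in time (spatial)
// and by how far its price is from the current price (range)
bilateral(src) =>
    num = 0.0
    den = 0.0
    for i = 0 to window - 1
        wSpatial = math.exp(-(i * i) / (2.0 * sigmaD * sigmaD))
        wRange   = math.exp(-math.pow(src[i] - src, 2) / (2.0 * sigmaR * sigmaR))
        w = wSpatial * wRange
        num += w * src[i]
        den += w
    den == 0.0 ? src : num / den

filt = bilateral(close)

// For-loop trend score: +1 for each past filtered value below the current one, -1 otherwise
score = 0.0
for i = 1 to window
    score += filt > filt[i] ? 1 : -1

plot(score, "Trend score", color = score > 0 ? color.green : color.red)
hline(0)
```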
3. Long and Short Conditions:
Once the trend score has been calculated, it is compared against predefined threshold levels:
A long signal is generated when the trend score exceeds the upper threshold, indicating that the market is in a strong uptrend.
A short signal is generated when the trend score crosses below the lower threshold, signaling a potential downtrend or trend reversal.
These conditions provide clear signals for potential entry points, and the color-coding helps traders quickly identify market direction:
Long signals are displayed in green.
Short signals are displayed in red.
These signals are designed to provide high-confidence entries for trend-following strategies, helping traders capture profitable movements in the market.
4. Trend Background and Bar Coloring:
The script offers customizable visual settings to enhance the clarity of the trend signals. Traders can choose to:
Color the bars based on the trend direction: Bars are colored green for long signals and red for short signals.
Change the background color to provide additional context: The background will be shaded green for a bullish trend and red for a bearish trend. This visual feedback helps traders to stay aligned with the prevailing market sentiment.
These features offer a quick visual reference for understanding the market's direction, making it easier for traders to identify when to enter or exit positions.
5. Threshold Lines for Visual Feedback:
Threshold lines are plotted on the chart to represent the predefined long and short levels. These lines act as clear markers for when the market reaches a critical threshold, triggering a potential buy (long) or sell (short) signal. By showing these threshold lines on the chart, traders can quickly gauge the strength of the market and assess whether the trend is strong enough to warrant action.
These thresholds can be adjusted based on the trader's preferences, allowing them to fine-tune the indicator for different market conditions or asset behaviors.
6. Customizable Parameters for Flexibility:
The indicator offers several parameters that can be adjusted to suit individual trading preferences:
Window Period (Bilateral Filter): The window size determines how many past price values are used to calculate the bilateral filter. A larger window increases smoothing, while a smaller window results in more responsive, but noisier, data.
Spatial Sigma (σ_d) and Range Sigma (σ_r): These values control how sensitive the filter is to price changes and the distance between data points. Fine-tuning these parameters allows traders to adjust the degree of noise reduction applied to the price series.
Threshold Levels: The upper and lower thresholds determine when the trend score crosses into long or short territory. These levels can be customized to better match the trader's risk tolerance or asset characteristics.
Visual Settings: Traders can customize the appearance of the chart, including the line width of trend signals, bar colors, and background shading, to make the indicator more readable and aligned with their charting style.
7. Alerts for Trend Reversals:
The indicator includes alert conditions for real-time notifications when the market crosses the defined thresholds. Traders can set alerts to be notified when:
The trend score crosses the long threshold, signaling an uptrend.
The trend score crosses the short threshold, signaling a downtrend.
These alerts provide timely information, allowing traders to take immediate action when the market shows a significant change in direction.
Final Thoughts
The Bilateral Filter For Loop indicator is a robust tool for trend-following traders who wish to reduce market noise and focus on the underlying trend. By applying the bilateral filter and calculating trend scores, this indicator helps traders identify strong uptrends and downtrends, providing reliable entry signals with minimal market noise. The customizable parameters, visual feedback, and alerting system make it a versatile tool for traders seeking to improve their timing and capture profitable market movements.
Thus following all of the key points here are some sample backtests on the 1D Chart
Disclaimer: Backtests are based off past results, and are not indicative of the future.
INDEX:BTCUSD
INDEX:ETHUSD
CRYPTO:SOLUSD
Autofib Extensions | DTD
Hello trader community!
I'm introducing another script that is part of my main day-trading strategy. We all know that, regardless of the strategy we use, we need to know which levels offer the least amount of risk to our trade entry, and a great tool for anticipating how far a move might go, or what level a move may retrace to, is the Fibonacci Retracement and Extension set. This indicator combines both together, but with a twist.
The main elements of the script are:
1. Multiple Session Highs and Lows | Developing my first script led me to understand that measuring key times during each session provides insight into the market's continuity. I have provided 3 "sessions" a user can define in CST time; the script saves the high and low of each session window and produces the retracements and extensions from those plots. Currently, the levels are always plotted from low to high (with the 0 mark being the high), with negative values provided so the levels stay consistent. You can toggle each session on or off.
2. Coloring Key Retracements / Extensions | I use a dark background for my charts, so the default colors help me distinguish this from another indicator I use. Feel free to adjust the colors to your preference. I use 3 different colors because of their significance. Retracements that you want to see continue fall back into the .50 to .618 zone (this I consider the "Golden Zone"), while basic Elliott Wave Theory states a wave is completed near the 1.618 level (this I consider a "Major Extension"). Everything else isn't noise, but minor levels in a larger sequence.
______________
Script Limitations
All of my scripts are made with the help of ChatGPT, so there are going to be limitations. One current limitation, which I have made progress on but not fully resolved, occurs when you are viewing a timeframe where the candle doesn't start when a session window starts. On smaller timeframes like the 7-minute this is not an issue. However, on the hourly, if your session window starts on the half hour, which the 3rd session's default window does, the lines will not be produced. I will hopefully have this rectified in the near future. I will keep the script open source, since none of this work is original in nature and I would love to see how others can create a better product. Also, this is mainly a futures trading tool. If you are using it on stocks, you will find it less useful if the session window is too wide, since the script waits until the session window closes to calculate the extension values.
Cheers,
DTD
Hybrid Adaptive Double Exponential Smoothing
🙏🏻 This is HADES (Hybrid Adaptive Double Exponential Smoothing): a fully data-driven & adaptive exponential smoothing method that gains all the necessary info directly from data in the most natural way and needs no subjective parameters & no optimizations. It gets applied to data itself -> to fit residuals & one-point forecast errors, all at O(1) algo complexity. I designed it for streaming high-frequency univariate time series data, such as medical sensor readings, orderbook data, tick charts, requests generated by a backend, etc.
The HADES method is:
fit & forecast = a + b * (1 / alpha + T - 1)
T = 0 provides the in-sample fit for the current datum, and T = n provides the forecast n datapoints ahead.
y = input time series
a = y, if no previous data exists
b = 0, if no previous data exists
otherwise:
a = alpha * y + (1 - alpha) * a[1]
b = alpha * (a - a[1]) + (1 - alpha) * b[1]
where a[1] and b[1] denote the values of a and b from the previous datapoint
alpha = 1 / sqrt(len * 4)
len = min(ceil(exp(1 / sig)), available data)
sig = sqrt(Absolute net change in y / Sum of absolute changes in y)
For the start datapoint when both numerator and denominator are zeros, we define 0 / 0 = 1
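A minimal Pine sketch of this recursion, written from the formulas above (it is my reading of the description, not the published source; the cap on the exponent and the floor on sig are added numerical safeguards):

```
//@version=5
// Minimal sketch of the HADES recursion described above; not the published script.
indicator("HADES core (sketch)", overlay = true)

T = input.int(5, "Forecast horizon (datapoints)")

// Adaptive length from the efficiency measure sig, per the formulas above
var float firstSrc = na
if na(firstSrc)
    firstSrc := close
sumAbs = ta.cum(math.abs(nz(ta.change(close))))
sig    = sumAbs == 0.0 ? 1.0 : math.sqrt(math.abs(close - firstSrc) / sumAbs)   // 0/0 defined as 1
expArg = math.min(1.0 / math.max(sig, 1e-10), 20.0)                             // safeguard against overflow
len    = math.min(math.ceil(math.exp(expArg)), bar_index + 1)                   // capped at available data
alpha  = 1.0 / math.sqrt(len * 4)

// Brown-style double exponential smoothing with the adaptive alpha
var float a = na
var float b = na
aPrev = a
if na(aPrev)
    a := close
    b := 0.0
else
    a := alpha * close + (1 - alpha) * aPrev
    b := alpha * (a - aPrev) + (1 - alpha) * b

fit      = a + b * (1.0 / alpha - 1.0)        // T = 0: in-sample fit
forecast = a + b * (1.0 / alpha + T - 1.0)    // T-step-ahead forecast

plot(fit, "Fit", color = color.teal)
plot(forecast, "Forecast", color = color.red)
```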
...
The same set of operations gets applied to the data first, then to resulting fit absolute residuals to build prediction interval, and finally to absolute forecasting errors (from one-point ahead forecast) to build forecasting interval:
prediction interval = data fit +- residuals fit * k
forecasting interval = data opf +- errors fit * k
where k = a multiplier regulating the width of the intervals, and opf = the one-point forecast calculated at each time t
...
How-to:
0) Apply to your data where it makes sense, eg. tick data;
1) Use power transform to compensate for multiplicative behavior in case it's there;
2) If you have complete data or only the data you need, like the full history of adjusted close prices: go to the next step; otherwise, guided by your goal & analysis, adjust the 'start index' setting so the calculations will start from this point;
3) Use prediction interval to detect significant deviations from the process core & make decisions according to your strategy;
4) Use one-point forecast for nowcasting;
5) Use forecasting intervals to ~ understand where the next datapoints will emerge, given the data-generating process will stay the same & lack structural breaks.
I advise k = 1 or 1.5 or 4 depending on your goal, but 1 is the most natural one.
...
Why exponential smoothing at all? Why the double one? Why adaptive? Why not Holt's method?
1) It's O(1) algo complexity & recursive nature allows it to be applied in an online fashion to high-frequency streaming data; otherwise, it makes more sense to use other methods;
2) Double exponential smoothing ensures we are taking trends into account; also, in order to model more complex time series patterns such as seasonality, we need detrended data, and this method can be used to do it;
3) The goal of adaptivity is to eliminate the window size question, in cases where it doesn't make sense to use cumulative moving typical value;
4) Holt's method creates a certain interaction between level and trend components, so its results lack symmetry and similarity with other non-recursive methods such as quantile regression or linear regression. Instead, I decided to base my work on the original double exponential smoothing method published by Rob Brown in 1956, here's the original source, it's really hard to find it online. This cool dude is considered the one who dropped exponential smoothing into open access for the first time🤘🏻
R&D; log & explanations
If you wanna read this, you gotta know, you're taking a great responsibility for this long journey, and it's gonna be one hell of a trip hehe
Machine learning, apprentissage automatique, машинное обучение, digital signal processing, statistical learning, data mining, deep learning, etc., etc., etc.: all these are just artificial categories created by the local population of this wonderful world, but what really separates entities globally in the Universe is solution complexity / algorithmic complexity.
In order to get the game a lil better, it's gonna be useful to read the HTES script description first. Secondly, let me guide you through the whole R&D; process.
To discover (not to invent) the fundamental universal principle of what exponential smoothing really IS, it required the review of the whole concept, understanding that many things don't add up and don't make much sense in currently available mainstream info, and building it all from the beginning while avoiding these very basic logical & implementation flaws.
Given a time series population that is complete at time t, and yet always growing, and that can't be logically separated into subpopulations, the very first question is, 'What amount of data do we need to utilize at time t?'. Two answers: 1 and all. You can't really gain much info from 1 datum, so go for the second answer: we need the whole dataset.
So, given the sequential & incremental nature of time series, the very first and most basic thing we can do on the whole dataset is to calculate a cumulative statistic, such as a cumulative moving mean or a cumulative moving median.
Now we need to extend this logic to exponential smoothing, which doesn't use dataset length info directly, but that's all cool: it can be done via a formula that quantifies the relationship between alpha (the smoothing parameter) and length.
alpha = 1 / length
alpha = 2 / (length + 1)
The funny part starts when you realize that Cumulative Exponential Moving Averages with these 2 alpha formulas Exactly match Cumulative Moving Average and Cumulative (Linearly) Weighted Moving Average, and the same logic goes on:
alpha = 3 / (length + 1.5), matches Cumulative Weighted Moving Average with quadratic weights, and
alpha = 4 / (length + 2), matches Cumulative Weighted Moving Average with cubic weights, and so on...
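This exact-match claim is easy to verify empirically. A tiny Pine sketch, using alpha = 1/n with n = number of bars so far, plots the difference between the cumulative EMA and the cumulative moving average; it stays at zero up to floating-point error:

```
//@version=5
// Quick empirical check of the claim above: cumulative EMA with alpha = 1/n equals the cumulative mean.
indicator("Cumulative EMA vs cumulative mean")

n     = bar_index + 1
alpha = 1.0 / n

var float cema = na
cema := na(cema) ? close : alpha * close + (1 - alpha) * cema

cma = ta.cum(close) / n

plot(cema - cma, "Difference", color = color.red)   // ~0 on every bar
```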
It all just cries in your shoulder that we need to discover another, native length->alpha formula that leverages the recursive nature of exponential smoothing, because otherwise, it doesn't make sense to use it at all, since the usual CMA and CMWA can be computed incrementally at O(1) algo complexity just as exponential smoothing.
From now on I will not mention 'cumulative' or 'linearly weighted / weighted' anymore, it's gonna be implied all the time unless stated otherwise.
What we can do is to approach the thing logically and model the response with a little help from synthetic data; a sine wave would suffice. Then we can think of relationships. Based on algo complexity from lower to higher, we have this sequence: exponential smoothing @ O(1) -> parametric statistics (mean) @ O(n) -> non-parametric statistics (50th percentile / median) @ O(n log n). Based on initial response from slow to fast: mean -> median. Based on convergence with the real expected value from slow to fast: mean (infinitely approaches it) -> median (gets it quite fast).
Based on these inputs, we need to discover such a length->alpha formula so the resulting fit will have the slowest initial response out of all 3, and have the slowest convergence with expected value out of all 3. In order to do it, we need to have some non-linear transformer in our formula (like a square root) and a couple of factors to modify the response the way we need. I ended up with this formula to meet all our requirements:
alpha = sqrt(1 / (length * 2)) / 2
which simplifies to:
alpha = 1 / sqrt(len * 8)
^^ as you can see on the screenshot; where the red line is median, the blue line is the mean, and the purple line is exponential smoothing with the formulas you've just seen, we've met all the requirements.
Now we just have to do the same procedure to discover the length->alpha formula but for double exponential smoothing, which models trends as well, not just level as in single exponential smoothing. For this comparison, we need to use linear regression and quantile regression instead of the mean and median.
Quantile regression requires a non-closed form solution to be solved that you can't really implement in Pine Script, but that's ok, so I made the tests using Python & sklearn:
paste.pics
^^ on this screenshot, you can see the same relationship as on the previous screenshot, but now between the responses of quantile regression & linear regression.
I followed the same logic as before for designing alpha for double exponential smoothing (also considered the initial overshoots, but that's a little detail), and ended up with this formula:
alpha = sqrt(1 / length) / 2
which simplifies to:
alpha = 1 / sqrt(len * 4)
Btw, given the pattern you see in the resulting formulas for single and double exponential smoothing, if you ever want to do triple (not Holt & Winters) exponential smoothing, you'll need len * 2, and just len * 1 for quadruple exponential smoothing. I hope that based on this sequence, you see the hint that Maybe 4 rounds is enough.
Now since we've dealt with the length->alpha formula, we can deal with the adaptivity part.
Logically, it doesn't make sense to use a slower-than-O(1) method to generate input for an O(1) method, so it must be something universal and minimalistic: something that will help us measure consistency in our data, yet something far away from statistics and close enough to topology.
There's one perfect entity that can help us, this is fractal efficiency. The way I define fractal efficiency can be checked at the very beginning of the post, what matters is that I add a square root to the formula that is not typically added.
As explained in the description of my metric QSFS, one of the reasons for applying SQRT-transformed values of fractal efficiency in moving window mode is that they start to closely resemble a normal distribution, yet with support of (0, 1). Data with this interesting property (normally distributed yet with finite support) can be modeled with the beta distribution.
Another reason is, in infinitely expanding window mode, fractal efficiency of every time series that exhibits randomness tends to infinitely approach zero, sqrt-transform kind of partially neutralizes this effect.
Yet another reason is, the square root might better reflect the dimensional inefficiency or degree of fractal complexity, since it could balance the influence of extreme deviations from the net paths.
And finally, fractals exhibit power-law scaling -> measures like length, area, or volume scale in a non-linear way. Adding a square root acknowledges this intrinsic property, while connecting our metric with the nature of fractals.
---
I suspect that, given analogies and connections with other topics in geometry, topology, fractals and most importantly positive test results of the metric, it might be that the sqrt transform is the fundamental part of fractal efficiency that should be applied by default.
Now the last part of the ballet is to convert our fractal efficiency to length value. The part about inverse proportionality is obvious: high fractal efficiency aka high consistency -> lower window size, to utilize only the last data that contain brand new information that seems to be highly reliable since we have consistency in the first place.
The non-obvious part is now we need to neutralize the side effect created by previous sqrt transform: our length values are too low, and exponentiation is the perfect candidate to fix it since translating fractal efficiency into window sizes requires something non-linear to reflect the fractal dynamics. More importantly, using exp() was the last piece that let the metric shine, any other transformations & formulas alike I've tried always had some weird results on certain data.
That exp() in the len formula was the last piece that made it all work both on synthetic and on real data.
^^ a standalone script calculating optimal dynamic window size
Omg, THAT took time to write. Comment and/or text me if you need
...
"Versace Pip-Boy, I'm a young gun coming up with no bankroll" 👻
∞
Dynamic Score SMA [QuantAlgo]
Dynamic Score SMA 📈🌊
The Dynamic Score SMA by QuantAlgo offers a powerful trend-following approach that combines the simplicity of the Simple Moving Average (SMA) with an innovative dynamic trend scoring technique. By continuously evaluating price movement relative to the SMA over a customizable window, this indicator adapts to varying market conditions, providing traders and investors with clearer, more adaptable trend signals. With this dynamic scoring approach, the Dynamic Score SMA helps identify trend shifts, allowing for more strategic decision-making.
🌟 Conceptual Foundation and Innovation
At the core of the Dynamic Score SMA is its dynamic trend score system , which assesses price movements by comparing them to the SMA over a series of historical data points. This technique goes beyond traditional SMA indicators by offering a dynamic, probabilistic evaluation of trend strength, delivering a more responsive and nuanced view of market direction. The integration of this scoring system enables traders and investors to navigate both trending and sideway markets with greater confidence and precision.
⚙️ Technical Composition and Calculation
The Dynamic Score SMA leverages the Simple Moving Average to establish a baseline trend, with customizable SMA length to control the indicator’s sensitivity. The dynamic trend scoring technique then evaluates price behavior relative to the SMA over a specified window, generating a trend score that reflects the current market bias.
When the score crosses the designated uptrend or downtrend thresholds, the indicator signals a potential trend shift. By adjusting the SMA length, window duration, and thresholds, users can refine the indicator’s responsiveness to match their preferred trading or investing strategy, making it suitable for both volatile and steady markets.
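The description does not spell out the exact scoring rule, so the following Pine sketch is only one plausible reading: it assumes the score counts how often price closed above the SMA over the scoring window and fires signals when that count crosses the thresholds. Treat it as a hypothetical reconstruction, not QuantAlgo's code.

```
//@version=5
// Hypothetical reconstruction of the dynamic-score idea described above; not QuantAlgo's code.
indicator("Dynamic Score SMA (sketch)")

smaLen   = input.int(50, "SMA length")
window   = input.int(30, "Scoring window")
upThresh = input.float(20, "Uptrend threshold")
dnThresh = input.float(-20, "Downtrend threshold")

ma = ta.sma(close, smaLen)

// Score each of the last `window` bars: +1 if price was above the SMA, -1 otherwise
score = 0.0
for i = 0 to window - 1
    score += close[i] > ma[i] ? 1 : -1

uptrend   = ta.crossover(score, upThresh)
downtrend = ta.crossunder(score, dnThresh)

plot(score, "Trend score", color = score > 0 ? color.green : color.red)
hline(0)
plotshape(uptrend,   style = shape.triangleup,   location = location.bottom, color = color.green)
plotshape(downtrend, style = shape.triangledown, location = location.top,    color = color.red)
```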
📈 Features and Practical Applications
Customizable SMA Length: Set the length of the SMA to control how sensitive the trend is to price changes. Longer lengths produce smoother trends, while shorter lengths increase responsiveness.
Window Length for Dynamic Scoring: Adjust the window length to determine how many data points are considered in the dynamic trend score calculation, allowing for more tailored analysis of recent versus long-term trends.
Uptrend/Downtrend Thresholds: Define thresholds for triggering trend signals. Higher thresholds reduce sensitivity, providing clearer signals in volatile markets, while lower thresholds capture shorter-term movements.
Bar and Background Coloring: Visual cues, including bar coloring and background fills, provide a quick reference for current trend direction, making it easier to monitor market conditions.
Trend Confirmation: The dynamic trend scoring system verifies trend strength, offering more reliable entry and exit points by filtering out potential false signals.
⚡️ How to Use
✅ Add the Indicator: Add the Dynamic Score SMA to your favourites, then apply it to your chart. Customize the SMA length, window size, and thresholds to match your trading or investing preferences.
👀 Monitor Trend Shifts: Observe the trend in relation to the SMA and watch for signals when the score crosses key thresholds. Bar and/or background coloring will help identify the current trend direction and any shifts in momentum.
🔔 Set Alerts: Configure alerts for significant trend crossovers and reversals, enabling you to act on market changes in real-time without needing constant chart observation.
💫 Summary and Usage Tips
The Dynamic Score SMA by QuantAlgo is a sophisticated trend-following indicator that combines the familiarity of the SMA with a dynamic trend scoring system, providing a more adaptable and probabilistic approach to trend analysis. By tailoring the SMA length, scoring window, and thresholds, traders and investors can fine-tune the indicator for both short-term adjustments and long-term trend following. For optimal use, adjust sensitivity based on market volatility, and rely on the visual cues for clear trend confirmation. Whether you’re navigating choppy markets or stable trends, the Dynamic Score SMA offers a refined approach to capturing market direction with enhanced precision.
Mag7 Index
This is an index indicator based on the cumulative market value of the Magnificent 7 (AAPL, MSFT, NVDA, TSLA, META, AMZN, GOOG). Such an indicator for the famous Mag 7, against which your main security can be benchmarked, was missing from the TradingView user library.
The index bar values are calculated by taking the weighted average of the 7 stocks, relative to their market cap. Explicitly, we are multiplying each bar period's total outstanding stock amount by the OHLC of that period for each stock and dividing that value by the combined sum of outstanding stock for the 7 corporations. OHLC is taken for the extended trading session.
The index dynamically adjusts with respect to the chosen main security and the bars/line visible in the chart window; that is, the first close value is normalized to the main security's first close value. It provides recalculation of the performance in that chart window as you scroll (this isn't apparent in the demo chart above this description).
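A stripped-down Pine sketch of the cap-weighted close described above (without the chart-window normalization) might look as follows. The NASDAQ ticker prefixes and the "TOTAL_SHARES_OUTSTANDING" financial field are assumptions to verify against your data feed.

```
//@version=5
// Stripped-down sketch of a cap-weighted Mag7 close (no window normalization).
indicator("Mag7 cap-weighted close (sketch)")

capWeight(sym) =>
    sh = request.financial(sym, "TOTAL_SHARES_OUTSTANDING", "FQ", ignore_invalid_symbol = true)
    px = request.security(sym, timeframe.period, close)
    [nz(sh) * nz(px), nz(sh)]

[v1, s1] = capWeight("NASDAQ:AAPL")
[v2, s2] = capWeight("NASDAQ:MSFT")
[v3, s3] = capWeight("NASDAQ:NVDA")
[v4, s4] = capWeight("NASDAQ:TSLA")
[v5, s5] = capWeight("NASDAQ:META")
[v6, s6] = capWeight("NASDAQ:AMZN")
[v7, s7] = capWeight("NASDAQ:GOOG")

totalShares = s1 + s2 + s3 + s4 + s5 + s6 + s7
mag7 = totalShares == 0 ? na : (v1 + v2 + v3 + v4 + v5 + v6 + v7) / totalShares
plot(mag7, "Mag7 cap-weighted close")
```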
It can be useful for checking market breadth, or benchmarking price performance of the individual stock components that comprise the Magnificent 7. I prefer comparing the indicator to the Nasdaq Composite Index (IXIC) or S&P500 (SPX), but of course you can make comparisons to any security or commodity.
Settings Input Options:
1) Bar vs. Line - view as OHLC colored bars or line chart. Line chart color based on close above or below the previous period close as green or red line respectively.
2) % vs Regular - the final value for the window period as % return for that window or index value
3) Turn on/off - bottom right tile displaying window-period performance
Inspired by the simpler NQ 7 Index script by @RaenonX but with normalization to main security at start of window and additional settings input options.
Please provide feedback for additional features, e.g., if a regular/extended session option is useful.
Adaptive Fisherized Z-score
Hello Fellas,
It's time for a new adaptive fisherized indicator of mine, where I apply adaptive length and more to a classic indicator.
Today, I chose the Z-score, also called standard score, as indicator of interest.
Special Features
Advanced Smoothing: JMA, T3, Hann Window and Super Smoother
Adaptive Length Algorithms: In-Phase Quadrature, Homodyne Discriminator, Median and Hilbert Transform
Inverse Fisher Transform (IFT)
Signals: Enter Long, Enter Short, Exit Long and Exit Short
Bar Coloring: Presents the trade state as bar colors
Band Levels: Changes the band levels
Decision Making
When you create such a mod, you need to think about which concepts are the best to include. I decided to use the Inverse Fisher Transform instead of normalization to make a version which fits a fixed scale and avoids the usual distortion created by normalization.
Moreover, I chose JMA, T3, Hann Window and Super Smoother, because JMA and T3 are the bleeding-edge MA's at the moment with the best balance of lag and responsiveness. Additionally, I chose Hann Window and Super Smoother because of their extraordinary smoothing capabilities and because Ehlers favours them.
Furthermore, I decided to choose the half length of the dominant cycle instead of the full dominant cycle to make the indicator more responsive which is very important for a signal emitter like Z-score. Signal emitters always need to be faster or have the same speed as the filters they are combined with.
Usage
The Z-score is a low timeframe scalper which works best during choppy/ranging phases. The direction you should trade is determined by the last trend change. E.g. when the last trend change was from bearish market to bullish market and you are now in a choppy/ranging phase confirmed by e.g. Chop Zone or KAMA slope you want to do long trades.
Interpretation
The Z-score indicator is a momentum indicator which shows the number of standard deviations by which the value of a raw score (price/source) is above or below the mean value of what is being observed or measured. Easily explained, it is almost the same as Bollinger Bands with another visual representation form.
Signals
B -> Buy -> Z-score crosses above lower band
S -> Short -> Z-score crosses below upper band
BE -> Buy Exit -> Z-score crosses above 0
SE -> Sell Exit -> Z-score crosses below 0
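A bare-bones Pine sketch of the core pipeline (fixed-length Z-score, Inverse Fisher Transform, band crosses) is shown below. The adaptive length and the JMA/T3/Hann/Super Smoother smoothing options are deliberately omitted, and the 0.5 band level is an illustrative choice, not necessarily the script's default.

```
//@version=5
// Bare-bones sketch: Z-score -> Inverse Fisher Transform -> band-cross signals.
// Adaptive length and the JMA / T3 / Hann / Super Smoother options are left out.
indicator("Fisherized Z-score (sketch)")

len  = input.int(20, "Length")
band = 0.5   // illustrative band level

z   = (close - ta.sma(close, len)) / ta.stdev(close, len)
ift = (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)   // inverse Fisher transform, bounded to (-1, 1)

buy      = ta.crossover(ift, -band)    // B
short    = ta.crossunder(ift, band)    // S
buyExit  = ta.crossover(ift, 0)        // BE
sellExit = ta.crossunder(ift, 0)       // SE

plot(ift, "Fisherized Z-score")
hline(band)
hline(-band)
hline(0)
plotshape(buy,      title = "B",  style = shape.triangleup,   location = location.bottom, color = color.green)
plotshape(short,    title = "S",  style = shape.triangledown, location = location.top,    color = color.red)
plotshape(buyExit,  title = "BE", style = shape.circle,       location = location.top,    color = color.teal)
plotshape(sellExit, title = "SE", style = shape.circle,       location = location.bottom, color = color.orange)
```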
If you were reading till here, thank you already. Now, follows a bunch of knowledge for people who don't know the concepts I talk about.
T3
The T3 moving average, short for "Tim Tillson's Triple Exponential Moving Average," is a technical indicator used in financial markets and technical analysis to smooth out price data over a specific period. It was developed by Tim Tillson, a software project manager at Hewlett-Packard, with expertise in Mathematics and Computer Science.
The T3 moving average is an enhancement of the traditional Exponential Moving Average (EMA) and aims to overcome some of its limitations. The primary goal of the T3 moving average is to provide a smoother representation of price trends while minimizing lag compared to other moving averages like Simple Moving Average (SMA), Weighted Moving Average (WMA), or EMA.
To compute the T3 moving average, it involves a triple smoothing process using exponential moving averages. Here's how it works:
Calculate the first exponential moving average (EMA1) of the price data over a specific period 'n.'
Calculate the second exponential moving average (EMA2) of EMA1 using the same period 'n.'
Calculate the third exponential moving average (EMA3) of EMA2 using the same period 'n.'
The formula for the T3 moving average is as follows:
T3 = 3 * (EMA1) - 3 * (EMA2) + (EMA3)
By applying this triple smoothing process, the T3 moving average is intended to offer reduced noise and improved responsiveness to price trends. It achieves this by incorporating multiple time frames of the exponential moving averages, resulting in a more accurate representation of the underlying price action.
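For reference, here is the triple-smoothing recipe exactly as described in the steps above, as a small Pine sketch. Note that this particular 3 * EMA1 - 3 * EMA2 + EMA3 combination is also known as TEMA; Tillson's full T3 additionally applies a volume-factor weighting, which is not shown here.

```
//@version=5
// The triple-EMA combination exactly as described in the steps above.
// Note: this combination is also known as TEMA; Tillson's full T3 additionally
// applies a volume-factor weighting, which is omitted here.
indicator("Triple EMA smoothing (sketch)", overlay = true)

n = input.int(14, "Length")

ema1 = ta.ema(close, n)
ema2 = ta.ema(ema1, n)
ema3 = ta.ema(ema2, n)

t3 = 3 * ema1 - 3 * ema2 + ema3
plot(t3, "Smoothed", color = color.orange)
```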
JMA
The Jurik Moving Average (JMA) is a technical indicator used in trading to predict price direction. Developed by Mark Jurik, it’s a type of weighted moving average that gives more weight to recent market data rather than past historical data.
JMA is known for its superior noise elimination. It’s a causal, nonlinear, and adaptive filter, meaning it responds to changes in price action without introducing unnecessary lag. This makes JMA a world-class moving average that tracks and smooths price charts or any market-related time series with surprising agility.
In comparison to other moving averages, such as the Exponential Moving Average (EMA), JMA is known to track fast price movement more accurately. This allows traders to apply their strategies to a more accurate picture of price action.
Inverse Fisher Transform
The Inverse Fisher Transform is a transform used in DSP to alter the Probability Distribution Function (PDF) of a signal or in our case of indicators.
The result of using the Inverse Fisher Transform is that the output has a very high probability of being either +1 or –1. This bipolar probability distribution makes the Inverse Fisher Transform ideal for generating an indicator that provides clear buy and sell signals.
Hann Window
The Hann function (aka Hann Window) is named after the Austrian meteorologist Julius von Hann. It is a window function used to perform Hann smoothing.
Super Smoother
The Super Smoother uses a special mathematical process for the smoothing of data points.
The Super Smoother is a technical analysis indicator designed to be smoother and with less lag than a traditional moving average.
Adaptive Length
Length based on the dominant cycle length measured by a "dominant cycle measurement" algorithm.
Happy Trading!
Best regards,
simwai
---
Credits to
@cheatcountry
@everget
@loxx
@DasanC
@blackcat1402
Educational Indicators - Ichimoku Cloud
This indicator is part of the Indicator Educational Series, intended to help newer traders understand and interact with various indicators. The goal is to allow users to gain a stronger understanding of an indicator's underlying philosophy, and to visually see how changes to an indicator's parameters affect the trades suggested by that indicator.
The scripts in this series are all open source, with the code broken up into logical sections and annotated so beginner users can also understand some PineScript fundamentals.
Please understand that no indicator presented in and of itself constitutes a complete trading strategy. Rather, this series is to help users determine which indicators make sense to them, and which ones to combine to create their own trading strategy. All material presented is purely for educational purposes.
Presented here is the Ichimoku Cloud.
The Ichimoku Cloud was developed by Goichi Hosoda and first published in the late 1960s. It is used by traders to understand price momentum and to help forecast future price movements.
The indicator at its core can be understood from four component parts:
The Conversion Line - An average of the highest and lowest price in a given window. Typically, this is a "fast" average, and as such, this line has the lowest period
The Base Line - An average of the highest and lowest price in a given window. This is a "slower" average than the Conversion Line, and as such should have a larger period than the Conversion Line
Leading Span A - The average of the Conversion Line and the Base Line
Leading Span B - An average of the highest and lowest price in a given window. This is the "slowest" average of all three, and as such should have the largest period
When plotted, the Conversion Line (orange by default), Base Line (purple by default), Leading Span A (blue by default), and Leading Span B (red by default) are all drawn on the chart along with the price candles. The area between the Leading Span A and Leading Span B lines is also shaded depending on which of the two lines is greater: whenever Leading Span A is greater, the area is shaded positively (blue by default); whenever Leading Span B is greater, the area is shaded negatively (red by default).
One interesting feature of the Ichimoku Cloud is that it is drawn a certain number of candles forward. This means that the cloud shown at any point on the chart reflects prices that occurred a number of candles in the past. This is done intentionally to help traders see how the current price is moving in relation to historical price movements on the asset.
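To make the construction concrete, here is a minimal Pine Script™ sketch of the four components and the forward-displaced cloud, using the commonly seen 9/26/52/26 periods as assumed defaults (the script's own inputs and colors may differ):
```
//@version=5
indicator("Ichimoku Cloud sketch", overlay=true)
convLen  = input.int(9,  "Conversion Line Period")
baseLen  = input.int(26, "Base Line Period")
spanBLen = input.int(52, "Leading Span B Period")
offset   = input.int(26, "Cloud Offset (candles forward)")

// Average of the highest high and lowest low over a window
donchian(len) => math.avg(ta.highest(high, len), ta.lowest(low, len))

conversion = donchian(convLen)            // fastest average
base       = donchian(baseLen)            // slower average
spanA      = math.avg(conversion, base)   // average of the two lines above
spanB      = donchian(spanBLen)           // slowest average

plot(conversion, "Conversion Line", color.orange)
plot(base, "Base Line", color.purple)
pA = plot(spanA, "Leading Span A", color.blue, offset = offset)   // drawn `offset` candles forward
pB = plot(spanB, "Leading Span B", color.red,  offset = offset)
fill(pA, pB, spanA > spanB ? color.new(color.blue, 80) : color.new(color.red, 80))
```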
See below for how the indicators look in their default colors on the chart.
These indicators can then be used to start analyzing the price movement, and making trade decisions.
The first inference we can make is the momentum of the price. Since the lines are drawn from averages of varying speeds, the shaded area between the Leading Span lines can tell us whether the momentum is bullish (up) or bearish (down).
Whenever Leading Span A, the faster of the two lines, is above Leading Span B, that means that price is moving upward faster than it typically has, ergo we are in Bullish Momentum. On the chart, this is indicated in two ways:
The area is shaded positively (blue by default)
A green upward triangle is added to the chart to indicate where the momentum first turned Bullish
Whenever Leading Span A is below Leading Span B, that means that price is moving downward faster than it typically has, ergo we are in Bearish Momentum. On the chart, this is indicated in two ways:
The area is shaded negatively (red by default)
A red downward triangle is added to the chart to indicate where the momentum first turned Bearish
The next inference we can make is possible trading points. When we're in a period of momentum, as determined above, we know that price is going up or down, depending on the momentum we're in. We can then use the Conversion Line, Base Line, and the Price itself to confirm a good trade price.
When the asset is in Bullish Momentum, and the Conversion Line, our fastest average, is above the Base Line, our mid speed average, we know that the price is coming up quickly in the short term. When the Base Line and current Price are also above the cloud, then we have triple confirmation that price is going up, and we should enter a Long position. On the chart, this point is indicated with a green flag.
When the asset is in Bearish Momentum, and the Conversion Line is below the Base Line, we know that the price is going down quickly in the short term. When the Base Line and current Price are also below the cloud, then we have triple confirmation that price is going down, and we should enter a Short position. On the chart, this point is indicated with a red flag.
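Putting the two rules together, here is a hedged sketch of how these flag conditions might be expressed in Pine, as a reconstruction of the description above rather than the published code; the periods and the cloud displacement are assumed:
```
//@version=5
indicator("Ichimoku signal sketch", overlay=true)
donchian(len) => math.avg(ta.highest(high, len), ta.lowest(low, len))
conversion = donchian(9)
base       = donchian(26)
spanA      = math.avg(conversion, base)
spanB      = donchian(52)
offset     = 26                        // the cloud is displaced this many candles forward

// The cloud visible at the current bar was computed `offset` candles ago
cloudTop    = math.max(spanA[offset], spanB[offset])
cloudBottom = math.min(spanA[offset], spanB[offset])

bullMomentum = spanA > spanB           // positively shaded cloud
bearMomentum = spanA < spanB           // negatively shaded cloud

longEntry  = bullMomentum and conversion > base and base > cloudTop    and close > cloudTop
shortEntry = bearMomentum and conversion < base and base < cloudBottom and close < cloudBottom

plotshape(longEntry,  "Long",  style = shape.flag, location = location.belowbar, color = color.green)
plotshape(shortEntry, "Short", style = shape.flag, location = location.abovebar, color = color.red)
```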
The script presented here also allows users to customize the various parameters of the Ichimoku Cloud, and visually see how analysis is affected by these changes. This is designed to allow users to modify parameters as they see fit, within certain constraints, to find the best set for them. The lines, cloud, and chart indicators will all update automatically with the users' inputs.
Paytience DistributionPaytience Distribution Indicator User Guide
Overview:
The Paytience Distribution indicator is designed to visualize the distribution of any chosen data source. By default, it visualizes the distribution of a built-in Relative Strength Index (RSI). This guide provides details on its functionality and settings.
Distribution Explanation:
A distribution in statistics and data analysis represents the way the values in a set of data are spread out over a range. The distribution can show where values are concentrated, where they are absent or infrequent, and any other patterns. Visualizing distributions helps users understand underlying patterns and tendencies in the data.
Settings and Parameters:
Main Settings:
Window Size
- Description: This dictates the amount of data used to calculate the distribution.
- Options: A whole number (integer).
- Tooltip: A window size of 0 means it uses all the available data.
Scale
- Description: Adjusts the height of the distribution visualization.
- Options: Any integer between 20 and 499.
Round Source
- Description: Rounds the chosen data source to a specified number of decimal places.
- Options: Any whole number (integer).
Minimum Value
- Description: Specifies the minimum value you wish to account for in the distribution.
- Options: Any integer from 0 to 100.
- Tooltip: 0 being the lowest and 100 being the highest.
Smoothing
- Description: Applies a smoothing function to the distribution visualization to simplify its appearance.
- Options: Any integer between 1 and 20.
Include 0
- Description: Dictates whether zero should be included in the distribution visualization.
- Options: True (include) or False (exclude).
Standard Deviation
- Description: Enables the visualization of standard deviation, which measures the amount of variation or dispersion in the chosen data set.
- Tooltip: This is best suited for a source that has a vaguely Gaussian (bell-curved) distribution.
- Options: True (enable) or False (disable).
Color Options
- High Color and Low Color: Specifies colors for high and low data points.
- Standard Deviation Color: Designates a color for the standard deviation lines.
Example Settings:
Example Usage RSI
- Description: Enables the use of RSI as the data source.
- Options: True (enable) or False (disable).
RSI Length
- Description: Determines the period over which the RSI is calculated.
- Options: Any integer greater than 1.
Using an External Source:
To visualize the distribution of an external source:
Select the "Move to" option in the dropdown menu for the Paytience Distribution indicator on your chart.
Set it to the existing panel where your external data source is placed.
Navigate to "Pin to Scale" and pin the indicator to the same scale as your external source.
Indicator Logic and Functions:
Sinc Function: Used in signal processing, the sinc function helps eliminate aliasing effects.
Sinc Filter: A filtering mechanism that uses the sinc function to provide estimates of the data.
Weighted Mean & Standard Deviation: These are statistical measures used to capture the central tendency and variability in the data, respectively.
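As a rough illustration of the last point, here is a minimal sketch of a weighted mean and weighted standard deviation over the last `len` closes, using simple linear weights as an assumption; the script's actual sinc-based weighting is more involved:
```
//@version=5
indicator("Weighted mean / std dev sketch", overlay=true)
len = input.int(20, "Length", minval=2)

float wSum = 0.0
float mean = 0.0
for i = 0 to len - 1
    w = len - i                 // hypothetical linear weights: recent bars count more
    mean += w * close[i]
    wSum += w
mean /= wSum

float varSum = 0.0
for i = 0 to len - 1
    w = len - i
    d = close[i] - mean
    varSum += w * d * d
stdev = math.sqrt(varSum / wSum)

plot(mean, "Weighted Mean", color.blue)
plot(mean + stdev, "+1 SD", color.gray)
plot(mean - stdev, "-1 SD", color.gray)
```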
Output and Visualization:
The indicator visualizes the distribution as a series of colored boxes, with the intensity of the color indicating the frequency of the data points in that range. Additionally, lines representing the standard deviation from the mean can be displayed if the "Standard Deviation" setting is enabled.
The example RSI, if enabled, is plotted along with its common threshold lines at 70 (upper) and 30 (lower).
Understanding the Paytience Distribution Indicator
1. What is a Distribution?
A distribution represents the spread of data points across different values, showing how frequently each value occurs. For instance, if you're looking at a stock's closing prices over a month, you may find that the stock closed most frequently around $100, occasionally around $105, and rarely around $110. Graphically visualizing this distribution can help you see the central tendencies, variability, and shape of your data distribution. This visualization can be essential in determining key trading points, understanding volatility, and getting an overview of the market sentiment.
2. The Rounding Mechanism
Every asset and dataset is unique. Some assets, especially cryptocurrencies or forex pairs, might have values that go up to many decimal places. Rounding these values is essential to generate a more readable and manageable distribution.
Why is Rounding Needed? If every unique value from a high-precision dataset was treated distinctly, the resulting distribution would be sparse and less informative. By rounding off, the values are grouped, making the distribution more consolidated and understandable.
Adjusting Rounding: The `Round Source` input allows users to determine the number of decimal places they'd like to consider. If you're working with an asset with many decimal places, adjust this setting to get a meaningful distribution. If the rounding is set too low for high precision assets, the distribution could lose its utility.
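In Pine terms, a minimal sketch of this bucketing might look like the following; `decimals` is a hypothetical input name used here for illustration, not necessarily the one in the script:
```
//@version=5
indicator("Round Source sketch", overlay=true)
decimals   = input.int(2, "Round Source (decimal places)", minval=0)
roundedSrc = math.round(close, decimals)   // e.g. 1.23456 -> 1.23 when decimals = 2
plot(roundedSrc, "Rounded Source", color.orange)
```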
3. Standard Deviation and Oscillators
Standard deviation is a measure of the amount of variation or dispersion of a set of values. In the context of this indicator:
Use with Oscillators: When using oscillators like RSI, the standard deviation can provide insights into the oscillator's range. This means you can determine how much the oscillator typically deviates from its average value.
Setting Bounds: By understanding this deviation, traders can better set reasonable upper and lower bounds, identifying overbought or oversold conditions in relation to the oscillator's historical behavior.
4. Resampling
Resampling is the process of adjusting the time frame or value buckets of your data. In the context of this indicator, resampling ensures that the distribution is manageable and visually informative.
Resample Size vs. Window Size: The `Resample Resolution` dictates the number of bins or buckets the distribution will be divided into. On the other hand, the `Window Size` determines how much of the recent data will be considered. It's crucial to ensure that the resample size is smaller than the window size, or else the distribution will not accurately reflect the data's behavior.
Why Use Resampling? Especially for price-based sources, setting the window size around 500 (instead of 0) ensures that the distribution doesn't become too overloaded with data. When set to 0, the window size uses all available data, which may not always provide an actionable insight.
5. Uneven Sample Bins and Gaps
You might notice that the width of sample bins in the distribution is not uniform, and there can be gaps.
Reason for Uneven Widths: This happens because the indicator uses a 'resampled' distribution. The width represents the range of values in each bin, which might not be constant across bins. Some value ranges might have more data points, while others might have fewer.
Gaps in Distribution: Sometimes, there might be no data points in certain value ranges, leading to gaps in the distribution. These gaps are not flaws but indicate ranges where no values were observed.
In conclusion, the Paytience Distribution indicator offers a robust mechanism to visualize the distribution of data from various sources. By understanding its intricacies, users can make better-informed trading decisions based on the distribution and behavior of their chosen data source.
Rolling MACDThis indicator displays a Rolling Moving Average Convergence Divergence. Contrary to MACD indicators which use a fixed time segment, RMACD calculates using a moving window defined by a time period (not a simple number of bars), so it produces smoother, less jumpy results.
This indicator is inspired by and uses the Close & Inventory Bar Retracement Price Line to create a MACD across different timeframes.
█ CONCEPTS
If you are not already familiar with MACD, the Help Center will get you started: www.tradingview.com
The typical MACD, short for moving average convergence/divergence, is a trading indicator used in technical analysis of stock prices, created by Gerald Appel in the late 1970s. It is designed to reveal changes in the strength, direction, momentum, and duration of a trend in a stock's price.
The MACD indicator(or "oscillator") is a collection of three time series calculated from historical price data, most often the closing price. These three series are: the MACD series proper, the "signal" or "average" series, and the "divergence" series which is the difference between the two. The MACD series is the difference between a "fast" (short period) exponential moving average (EMA), and a "slow" (longer period) EMA of the price series. The average series is an EMA of the MACD series itself.
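For reference, the classic fixed-length MACD described above can be sketched in a few lines of Pine, with the standard 12/26/9 defaults assumed; this is not the rolling, time-window version:
```
//@version=5
indicator("Standard MACD sketch")
fastLen   = input.int(12, "Fast EMA Length")
slowLen   = input.int(26, "Slow EMA Length")
signalLen = input.int(9,  "Signal Length")

macdLine   = ta.ema(close, fastLen) - ta.ema(close, slowLen)   // MACD series proper
signalLine = ta.ema(macdLine, signalLen)                       // "signal" / "average" series
divergence = macdLine - signalLine                             // "divergence" (histogram) series

plot(macdLine, "MACD", color.blue)
plot(signalLine, "Signal", color.orange)
plot(divergence, "Divergence", color.gray, style = plot.style_columns)
```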
Because RMACD uses a moving window, it does not exhibit the jumpiness of MACD plots. You can see the more jagged MACD on the chart above. I think both can be useful to traders; up to you to decide which flavor works for you.
█ HOW TO USE IT
Load the indicator on an active chart (see the Help Center if you don't know how).
Time period
By default, the script uses an auto-stepping mechanism to adjust the time period of its moving window to the chart's timeframe. The following table shows chart timeframes and the corresponding time period used by the script. When the chart's timeframe is less than or equal to the timeframe in the first column, the second column's time period is used to calculate RMACD:
Chart timeframe 🠆 Time period
1min 🠆 1H
5min 🠆 4H
1H 🠆 1D
4H 🠆 3D
12H 🠆 1W
1D 🠆 1M
1W 🠆 3M
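For illustration, one way such an auto-stepping rule could be sketched is shown below. This is a conceptual reconstruction of the table above, not the published implementation; the period is returned in hours:
```
//@version=5
indicator("Auto-step time period sketch")
// Map the chart timeframe to the moving-window time period from the table above
autoPeriodHours() =>
    tfSec = timeframe.in_seconds(timeframe.period)
    int hoursVal = 2160             // default: 3M (approx. 90 days) for 1W charts and above
    if tfSec <= 60
        hoursVal := 1               // <= 1min -> 1H
    else if tfSec <= 5 * 60
        hoursVal := 4               // <= 5min -> 4H
    else if tfSec <= 60 * 60
        hoursVal := 24              // <= 1H   -> 1D
    else if tfSec <= 4 * 3600
        hoursVal := 72              // <= 4H   -> 3D
    else if tfSec <= 12 * 3600
        hoursVal := 168             // <= 12H  -> 1W
    else if tfSec <= 24 * 3600
        hoursVal := 720             // <= 1D   -> 1M (approx. 30 days)
    hoursVal
plot(autoPeriodHours(), "Window time period (hours)")
```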
You can use the script's inputs to specify a fixed time period, which you can express in any combination of days, hours and minutes.
By default, the time period currently used is displayed in the lower-right corner of the chart. The script's inputs allow you to hide the display or change its size and location.
Minimum Window Size
This input field determines the minimum number of values to keep in the moving window, even if these values are outside the prescribed time period. This mitigates situations where a large time gap between two bars would cause the time window to be empty, which can occur in non-24x7 markets where large time gaps may separate contiguous chart bars, namely across holidays or trading sessions. For example, if you were using a 1D time period and there is a two-day gap between two bars, then no chart bars would fit in the moving window after the gap. The default value is 10 bars.
//
This indicator should make trading easier and improve analysis. Nothing is worse than indicators that give confusingly different signals.
I hope you enjoy my new ideas
best regards
Chervolino
Adaptive Average Vortex Index [lastguru]As a longtime fan of ADX, looking at the Vortex Indicator I often wondered: where is the third line? I have rarely seen anybody calculate it. So, here it is: Average Vortex Index - an ADX calculated from the Vortex Indicator. I interpret it similarly to the ADX indicator: higher values show a stronger trend. If you discover another interpretation or have suggestions, comments are welcome.
Both VI+ and VI- lines are also drawn. As I use adaptive length calculation in my other scripts (based on the libraries I've developed and published), I have also included the possibility to have an adaptive length here, so if you hate the idea of calculating ADX from VI, you can disable that line and just look at the adaptive Vortex Indicator.
Note that as with all my oscillators, all the lines here are renormalized to -1..1 range unlike the original Vortex Indicator computation. To do that for VI+ and VI- lines, I subtract 1 from their values. It does not change the shape or the amplitude of the lines.
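To show the idea of the "third line", here is a minimal, non-adaptive sketch: the classic VI+ / VI- computation, renormalized by subtracting 1, plus an ADX-style smoothed line built from them. This is a plausible reconstruction of the concept, not the published adaptive code:
```
//@version=5
indicator("Average Vortex Index sketch")
len = input.int(14, "Length", minval=2)

sumVMP = math.sum(math.abs(high - low[1]), len)    // +VM: distance from today's high to yesterday's low
sumVMM = math.sum(math.abs(low - high[1]), len)    // -VM: distance from today's low to yesterday's high
sumTR  = math.sum(ta.tr(true), len)                // true range over the same window

viPlusRaw  = sumVMP / sumTR
viMinusRaw = sumVMM / sumTR

// ADX-style "average" line: smooth the normalized spread between VI+ and VI-
dx  = math.abs(viPlusRaw - viMinusRaw) / (viPlusRaw + viMinusRaw)
avx = ta.rma(dx, len)

plot(viPlusRaw - 1,  "VI+ (renormalized)", color.green)   // subtract 1 to renormalize, as described above
plot(viMinusRaw - 1, "VI- (renormalized)", color.red)
plot(avx, "Average Vortex Index", color.blue)
```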
Adaptation algorithms are roughly subdivided into two categories: classic Length Adaptations and Cycle Estimators (they are also implemented in separate libraries); all are selected in the Adaptation dropdown. Length Adaptations, used in the Adaptive Moving Averages and the Adaptive Oscillators, try to follow price movements and accelerate/decelerate accordingly (usually quite rapidly and over a huge range). Cycle Estimators, on the other hand, try to measure the cycle period of the current market, which does not reflect price movement or the rate of change (the rate of change may also differ depending on the cycle phase, but the cycle period itself usually changes slowly).
VIDYA - based on VIDYA algorithm. The period oscillates from the Lower Bound up (slow)
VIDYA-RS - based on Vitali Apirine's modification of VIDYA algorithm (he calls it Relative Strength Moving Average). The period oscillates from the Upper Bound down (fast)
Kaufman Efficiency Scaling - based on Efficiency Ratio calculation originally used in KAMA
Fractal Adaptation - based on FRAMA by John F. Ehlers
MESA MAMA Cycle - based on MESA Adaptive Moving Average by John F. Ehlers
Pearson Autocorrelation* - based on Pearson Autocorrelation Periodogram by John F. Ehlers
DFT Cycle* - based on Discrete Fourier Transform Spectrum estimator by John F. Ehlers
Phase Accumulation* - based on Dominant Cycle from Phase Accumulation by John F. Ehlers
Length Adaptations usually take two parameters: Bound From (lower bound) and To (upper bound). These are the limits for Adaptation values. Note that the Cycle Estimators marked with asterisks (*) are very computationally intensive, so the bounds should not be set much higher than 50, otherwise you may receive a timeout error (it also does not seem to be a useful thing to do, but you may correct me if I'm wrong).
The Cycle Estimators marked with asterisks (*) also have three checkboxes: HP (Highpass Filter), SS (Super Smoother) and HW (Hann Window). These enable or disable their internal prefilters, which are recommended by their author, John F. Ehlers. I do not know which combination works best, so you can experiment.
If no Adaptation is selected ( None option), you can set Length directly. If an Adaptation is selected, then Cycle multiplier can be set.
The oscillator also has the option to configure the internal smoothing function with the Window setting. By default, RMA is used (as in the ADX calculation). The Fast Default option uses half the length for smoothing. The Triangle, Hamming and Hann Window algorithms are some better smoothers suggested by John F. Ehlers.
After the oscillator a Moving Average can be applied. The following Moving Averages are included: SMA , RMA, EMA , HMA , VWMA , 2-pole Super Smoother, 3-pole Super Smoother, Filt11, Triangle Window, Hamming Window, Hann Window, Lowpass, DSSS.
Postfilter options are applied last:
Stochastic - Stochastic
Super Smooth Stochastic - Super Smooth Stochastic (part of MESA Stochastic ) by John F. Ehlers
Inverse Fisher Transform - Inverse Fisher Transform
Noise Elimination Technology - a simplified Kendall correlation algorithm "Noise Elimination Technology" by John F. Ehlers
Momentum - momentum (derivative)
Except for the Inverse Fisher Transform, all Postfilter algorithms can have a Length parameter. If it is not specified (set to 0), then the calculated Slow MA Length is used. If Filter/MA Length is less than 2, or Postfilter Length is less than 1, they are calculated as a multiplier of the calculated oscillator length.
More information on the algorithms is given in the code for the libraries used. I am also very grateful to other TradingView community members (they are also mentioned in the library code) without whom this script would not have been possible.
Rolling VWAP█ OVERVIEW
This indicator displays a Rolling Volume-Weighted Average Price. Contrary to VWAP indicators which reset at the beginning of a new time segment, RVWAP calculates using a moving window defined by a time period (not a simple number of bars), so it never resets.
█ CONCEPTS
If you are not already familiar with VWAP, our Help Center will get you started.
The typical VWAP is designed to be used on intraday charts, as it resets at the beginning of the day. Such VWAPs cannot be used on daily, weekly or monthly charts. Instead, this rolling VWAP uses a time period that automatically adjusts to the chart's timeframe. You can thus use RVWAP on any chart that includes volume information in its data feed.
Because RVWAP uses a moving window, it does not exhibit the jumpiness of VWAP plots that reset. You can see the more jagged VWAP on the chart above. We think both can be useful to traders; up to you to decide which flavor works for you.
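To illustrate the concept (not the actual implementation, which relies on the ConditionalAverages library), here is a rough array-based sketch of a VWAP computed over a moving time window that always keeps a minimum number of bars:
```
//@version=5
indicator("Rolling VWAP concept sketch", overlay=true)
windowHours = input.int(24, "Window (hours)", minval=1)
minBars     = input.int(10, "Minimum window size (bars)", minval=1)
windowMs    = windowHours * 60 * 60 * 1000

var float[] pv    = array.new_float()   // price * volume per bar
var float[] vols  = array.new_float()   // volume per bar
var int[]   times = array.new_int()     // bar open times (ms)

array.push(pv, hlc3 * volume)
array.push(vols, volume)
array.push(times, time)

// Drop bars that fall outside the time window, but always keep at least `minBars` values
while array.size(times) > minBars and time - array.get(times, 0) > windowMs
    array.shift(pv)
    array.shift(vols)
    array.shift(times)

rvwap = array.sum(vols) > 0 ? array.sum(pv) / array.sum(vols) : na
plot(rvwap, "Rolling VWAP", color.blue, 2)
```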
█ HOW TO USE IT
Load the indicator on an active chart (see the Help Center if you don't know how).
Time period
By default, the script uses an auto-stepping mechanism to adjust the time period of its moving window to the chart's timeframe. The following table shows chart timeframes and the corresponding time period used by the script. When the chart's timeframe is less than or equal to the timeframe in the first column, the second column's time period is used to calculate RVWAP:
Chart timeframe 🠆 Time period
1min 🠆 1H
5min 🠆 4H
1H 🠆 1D
4H 🠆 3D
12H 🠆 1W
1D 🠆 1M
1W 🠆 3M
You can use the script's inputs to specify a fixed time period, which you can express in any combination of days, hours and minutes.
By default, the time period currently used is displayed in the lower-right corner of the chart. The script's inputs allow you to hide the display or change its size and location.
Minimum Window Size
This input field determines the minimum number of values to keep in the moving window, even if these values are outside the prescribed time period. This mitigates situations where a large time gap between two bars would cause the time window to be empty, which can occur in non-24x7 markets where large time gaps may separate contiguous chart bars, namely across holidays or trading sessions. For example, if you were using a 1D time period and there is a two-day gap between two bars, then no chart bars would fit in the moving window after the gap. The default value is 10 bars.
█ NOTES
If you are interested in VWAP indicators, you may find the VWAP Auto Anchored built-in indicator worth a try.
For Pine Script™ coders
The heart of this script's calculations uses the `totalForTimeWhen()` function from the ConditionalAverages library published by PineCoders . It works by maintaining an array of values included in a time period, but without a for loop requiring a lookback from the current bar, so it is much more efficient.
We write our Pine Script™ code using the recommendations in the User Manual's Style Guide .
Look first. Then leap.