Kernels
© 2024, GoemonYae; copied from @jdehorty's "KernelFunctions" on 2024-03-09 to ensure future dependency compatibility. Will also add more functions to this script.
Library "KernelFunctions"
This library provides non-repainting kernel functions for Nadaraya-Watson estimator implementations. This allows for easy substitution/comparison of different kernel functions for one another in indicators. Furthermore, kernels can easily be combined with other kernels to create newer, more customized kernels.
rationalQuadratic(_src, _lookback, _relativeWeight, startAtBar)
Rational Quadratic Kernel - An infinite sum of Gaussian Kernels of different length scales.
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_relativeWeight (simple float) : Relative weighting of time frames. Smaller values result in a more stretched-out curve, and larger values will result in a more wiggly curve. As this value approaches zero, the longer time frames will exert more influence on the estimation. As this value approaches infinity, the behavior of the Rational Quadratic Kernel will become identical to the Gaussian kernel.
startAtBar (simple int) : Bar index on which to start regression. The first bars of a chart are often highly volatile, and omitting these initial bars often leads to a better overall fit.
Returns: yhat The estimated values according to the Rational Quadratic Kernel.
gaussian(_src, _lookback, startAtBar)
Gaussian Kernel - A weighted average of the source series. The weights are determined by the Radial Basis Function (RBF).
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
startAtBar (simple int) : Bar index on which to start regression. The first bars of a chart are often highly volatile, and omitting these initial bars often leads to a better overall fit.
Returns: yhat The estimated values according to the Gaussian Kernel.
periodic(_src, _lookback, _period, startAtBar)
Periodic Kernel - The periodic kernel (derived by David Mackay) allows one to model functions which repeat themselves exactly.
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_period (simple int) : The distance between repetitions of the function.
startAtBar (simple int) : Bar index on which to start regression. The first bars of a chart are often highly volatile, and omitting these initial bars often leads to a better overall fit.
Returns: yhat The estimated values according to the Periodic Kernel.
locallyPeriodic(_src, _lookback, _period, startAtBar)
Locally Periodic Kernel - The locally periodic kernel is a periodic function that slowly varies with time. It is the product of the Periodic Kernel and the Gaussian Kernel.
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_period (simple int) : The distance between repetitions of the function.
startAtBar (simple int) : Bar index on which to start regression. The first bars of a chart are often highly volatile, and omitting these initial bars often leads to a better overall fit.
Returns: yhat The estimated values according to the Locally Periodic Kernel.
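To make the math above concrete, here is a minimal Python sketch of the non-repainting weighting scheme behind the rationalQuadratic and gaussian functions. It is not the library's Pine Script source: the function names are mine, and looping over roughly startAtBar + lookback closed bars is an assumption made for illustration.

```python
import numpy as np

def rational_quadratic_weight(i, lookback, relative_weight):
    # (1 + i^2 / (2 * alpha * h^2))^(-alpha): an infinite sum of Gaussian kernels of
    # different length scales; it converges to the Gaussian kernel as alpha -> infinity
    return (1.0 + i ** 2 / (2.0 * relative_weight * lookback ** 2)) ** (-relative_weight)

def gaussian_weight(i, lookback):
    # Radial Basis Function weight for a bar that lies i bars in the past
    return np.exp(-(i ** 2) / (2.0 * lookback ** 2))

def kernel_estimate(src, lookback, start_at_bar, relative_weight=None):
    """Non-repainting estimate (yhat) for the most recent bar of `src`.

    `src` is ordered oldest -> newest; only already-closed bars are used, so the
    value does not change once the bar it was computed on has closed.
    """
    src = np.asarray(src, dtype=float)
    n = len(src)
    window = min(n, start_at_bar + lookback)   # assumed span; the Pine loop bounds differ slightly
    num, den = 0.0, 0.0
    for i in range(window):                    # i = number of bars back from the current bar
        w = (rational_quadratic_weight(i, lookback, relative_weight)
             if relative_weight is not None else gaussian_weight(i, lookback))
        num += src[n - 1 - i] * w
        den += w
    return num / den

# Example: a small relativeWeight stretches the curve out, a large one approaches the Gaussian
prices = np.cumsum(np.random.randn(500)) + 100.0
yhat_rq = kernel_estimate(prices, lookback=8, start_at_bar=25, relative_weight=1.0)
yhat_g = kernel_estimate(prices, lookback=16, start_at_bar=25)
```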
Search scripts for "curve"
TrendLine Scythes
Trendline Scythes is a script designed to automatically detect and draw special curved trendlines, resembling scythes or blades, based on pivotal points in price action. These trendlines adapt to the volatility of the market, providing a unique perspective on trend dynamics.
🔲 Methodology
Traditional trendlines connect consecutive pivot points on a price chart, providing a linear representation of trend direction. However, this script employs a distinctive methodology by automatically detecting price pivots and then calculating special curved trendlines based on the Average True Range (ATR) of the price. This introduces a curvature to the trendlines, resembling scythes, offering a unique way to interpret market trends.
🔲 Auto Breakout and Target Detection
Trendline Scythes includes features for automatic breakout detection, signaling potential trend changes. Additionally, the script assists in target detection, helping traders set realistic and data-driven profit-taking levels based on market volatility and user adjustment.
🔲 Utility
Trend Confirmation - Use Trendline Scythes to confirm existing trends by observing how price interacts with the curved trendlines.
Breakout Signals - Auto-detection of breakouts adds a proactive element to your trading strategy, helping you stay ahead of potential trend reversals.
Target Setting - Utilize the script to set profit-taking targets based on volatility, aligning with the current market conditions.
🔲 Settings
Pivot Length - Swing detection length
Scythe Length - Adjusts the length of the scythe's blade
Sensitivity - Controls how restrained the target calculation is; higher values will result in tighter targets.
🔲 Alerts
Breakout
Breakdown
Target Reached
Target Invalidated
There is also the option to trigger an 'any alert' call.
Trendline Scythes is a versatile tool combining the benefits of traditional trendlines with the dynamic adaptability of curved lines for a unique approach to trend analysis.
Relative Daily Change% by SUMIT
"Relative Daily Change%" Indicator (RDC)
The "Relative Daily Change%" indicator compares a stock's average daily price change percentage over the last 200 days with a chosen index.
It plots a colored curve. If the stock's change% is higher than the index, the curve is green, indicating it's doing better. Red means the stock is under-performing.
This indicator is designed to compare the performance of a stock with specific index (as selected) for last 200 candles.
I use this during a breakout to see whether the stock is performing well in comparison to its index. As I marked in the chart, there was a range zone (red box); we got a breakout with good volume, the price is sustaining above the 50 and 200 EMA, and the RDC color is green, so as per my indicator it is performing well. This is how I fine-tune my analysis for a breakout strategy.
You can select Index from the list available in input
**Line Color Green = Avg Change% per day of the stock is more than the Selected Index
**Line Color White = Avg Change% per day of the stock is less than the Selected Index
If you want details of stocks for all index you can ask for it.
Disclaimer: **This is for educational purposes only. It is not any kind of trade recommendation or tip.
Zero Lag Moving Average with Gaussian weights
Introduction
The Zero Lag Moving Average (ZLMA) is a powerful technical indicator that aims to eliminate the lag inherent in traditional moving averages. This post provides a comprehensive exploration of the ZLMA with Gaussian Weights (GWMA) indicator, discussing the concepts, the calculations, and its application in trading.
Concepts
Zero Lag Moving Average (ZLMA): A ZLMA is an advanced moving average designed to reduce the lag in price movements associated with conventional moving averages. This reduction in lag enables traders to make more informed decisions based on the most recent price data.
Gaussian Weights: Gaussian weights are derived from the Gaussian function, which is a mathematical function used to calculate probabilities in a normal distribution. The Gaussian function is smooth, symmetric, and has a bell-shaped curve. In this context, Gaussian weights are used to calculate the weighted average of a series of data points.
Why Gaussian Weights are Beneficial
Gaussian Weights offer several advantages in comparison to traditional moving averages. One of the main reasons for using Gaussian Weights is to address the issue of lag, which is commonly associated with simple and exponential moving averages. By reducing lag, traders can make more informed decisions based on up-to-date information.
Another advantage of Gaussian Weights is their mathematical foundation, which is rooted in the Gaussian function. This function describes the normal distribution in probability theory and statistics. The smooth and symmetric bell-shaped curve of Gaussian Weights enables a more refined approach to handling data points, resulting in a more responsive and accurate moving average.
While exponential moving averages (EMAs) also assign more weight to recent data points, they can still exhibit some lag. Gaussian Weights, on the other hand, offer a smoother and more adaptive solution to different market conditions. By adjusting the smoothing period, traders can tailor the Gaussian Weights to their specific needs, making them a versatile tool for various trading strategies.
In summary, Gaussian Weights provide a valuable alternative to traditional moving averages due to their ability to reduce lag, their strong mathematical foundation, and their adaptability to different market conditions. These benefits make Gaussian Weights a worthwhile consideration for traders looking to enhance their trading strategies.
Calculations
The ZLMA with GWMA consists of two main calculations:
Gaussian Weight Calculation: The Gaussian weight for a given 'k' and 'smooth_per' is calculated using the standard deviation (sigma) and the exponent part of the Gaussian function.
Zero-Lag GWMA Calculation: The zero-lag GWMA is calculated using a source buffer, a Gaussian weighted moving average (gwma1), and an output array. The source buffer stores the input data, the gwma1 array stores the first Gaussian weighted moving average, and the output array stores the final zero-lag moving average.
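Below is a rough Python sketch of the two calculations just described: Gaussian weights built from the exponent part of the Gaussian function, and a zero-lag pass built on top of the resulting GWMA. Treat smooth_per acting as sigma and the 2*GWMA - GWMA(GWMA) zero-lag construction as assumptions; the script's exact buffer arrangement may differ.

```python
import numpy as np

def gaussian_weights(length, smooth_per):
    # Bell-shaped weights from the exponent part of the Gaussian function,
    # with smooth_per playing the role of the standard deviation (sigma)
    k = np.arange(length, dtype=float)              # 0 = current bar, length-1 = oldest bar in the window
    return np.exp(-0.5 * (k / smooth_per) ** 2)

def gwma(src, length, smooth_per):
    """Gaussian-weighted moving average over the most recent `length` bars at each index."""
    src = np.asarray(src, dtype=float)
    w = gaussian_weights(length, smooth_per)
    out = np.full(len(src), np.nan)
    for t in range(length - 1, len(src)):
        window = src[t - length + 1: t + 1][::-1]   # most recent bar first, matching the weights
        out[t] = np.dot(window, w) / w.sum()
    return out

def zero_lag_gwma(src, length, smooth_per):
    # One common zero-lag construction: push the average back toward price by
    # subtracting the "average of the average" (2*GWMA(src) - GWMA(GWMA(src))).
    g1 = gwma(src, length, smooth_per)
    g2 = gwma(g1, length, smooth_per)
    return 2.0 * g1 - g2
```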
Application in Trading
The ZLMA with GWMA indicator can be used to identify trends and potential entry/exit points in trading:
Trend Identification: When the ZLMA is above the price, it indicates a bearish trend, and when it is below the price, it indicates a bullish trend.
Entry/Exit Points: Traders can use crossovers between the ZLMA and price to identify potential entry and exit points. A long position could be taken when the price crosses above the ZLMA, and a short position could be taken when the price crosses below the ZLMA.
Conclusion
The Zero Lag Moving Average with Gaussian Weights is a powerful and versatile indicator that can be used in various trading strategies. By minimizing the lag associated with traditional moving averages, the ZLMA with GWMA provides traders with more accurate and timely information about price trends and potential trade opportunities.
Gaussian Moving Average (GA)
The Gaussian moving average (GA) is a technical analysis tool that is used to smooth out price data and identify trends. It is similar to a simple moving average (SMA), but instead of using equal weights for each value in the calculation, it uses a Gaussian distribution to assign weights. This means that the values at the edges of the calculation window have lower weights and are given less importance in the moving average calculation, while the values at the center of the window have higher weights and are given more importance. This helps to reduce the impact of noisy or outlying data points on the moving average and make it more responsive to changes in the underlying trend.
To calculate the GA, the script first defines the standard deviation of the Gaussian distribution. This is a measure of how spread out the values in the distribution are and can be adjusted to change the shape of the curve. The default value in the script is set to one quarter of the length of the calculation window, which gives a bell-shaped curve with a peak at the center of the window.
Next, the script generates an array of indices from 1 to the length of the calculation window. This is used to calculate the weights for each value in the moving average calculation. The weights are calculated using the Gaussian distribution, with the indices as the input values and the standard deviation as a parameter. This produces a set of weights that are highest at the center of the window and decrease towards the edges.
Finally, the script calculates the weighted sum of the values in the calculation window using the weights. This is divided by the sum of the weights to give the moving average value. The resulting moving average is smoother and more responsive to changes in the underlying trend than a simple moving average, making it a useful tool for technical analysis.
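A compact Python sketch of that calculation follows. The sigma of one quarter of the window length and the window-centered weights come straight from the description above; everything else (names, normalization by the weight sum) is illustrative rather than the script's source.

```python
import numpy as np

def gaussian_ma(src, length):
    """Gaussian moving average with weights that peak at the center of the window.

    sigma defaults to one quarter of the window length, as described above.
    """
    src = np.asarray(src, dtype=float)
    sigma = length / 4.0
    idx = np.arange(1, length + 1, dtype=float)          # indices 1..length
    center = (length + 1) / 2.0
    w = np.exp(-0.5 * ((idx - center) / sigma) ** 2)     # bell-shaped, highest in the middle
    w /= w.sum()                                         # divide by the sum of the weights
    out = np.full(len(src), np.nan)
    for t in range(length - 1, len(src)):
        out[t] = np.dot(src[t - length + 1: t + 1], w)
    return out
```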
Overall, this script is useful for analyzing financial data and identifying trends in the data. By using the Gaussian moving average, the script can smooth out fluctuations in the data and make trends more apparent, which can help traders make more informed decisions.
Coppock Unchanged
An implementation of the "Coppock Unchanged" plot concept by Tom McClellan.
Simply put, for each bar, assume an alternative close that leaves the Coppock plot unchanged, i.e. a close that generates a flat Coppock curve.
This coppock unchanged plot can be used to:
1) identify a start of a trend on a long timescale (monthly) when the price goes above the coppock unchanged plot after a major correction
2) potentially identify an end of a trend when the prices goes below the coppock unchanged plot
See Tom McClellan's article 'Coppock Curve Still Working On a Major Bottom Signal' for a full explanation...
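For readers who want to see the idea in code, here is a hedged Python sketch that solves for the "unchanged" close using the standard Coppock construction (a 10-period WMA of the 14-period ROC plus the 11-period ROC, on monthly closes). McClellan's exact construction may differ; the parameters and function names here are assumptions.

```python
import numpy as np

def coppock_unchanged(closes, roc1=14, roc2=11, wma_len=10):
    """For each bar, the hypothetical close that would leave the Coppock curve flat.

    closes: 1-D array of (monthly) closes, oldest -> newest. Returns an array aligned
    with closes, NaN where there is not enough history.
    """
    closes = np.asarray(closes, dtype=float)
    n = len(closes)
    weights = np.arange(wma_len, 0, -1)           # 10, 9, ..., 1 (most recent bar gets 10)
    wsum = weights.sum()                          # 55 for a 10-period WMA

    def roc_sum(t):                               # ROC(14) + ROC(11) at bar t, in percent
        return (100.0 * (closes[t] / closes[t - roc1] - 1.0)
                + 100.0 * (closes[t] / closes[t - roc2] - 1.0))

    def coppock(t):                               # standard Coppock value at bar t
        return sum(w * roc_sum(t - k) for k, w in enumerate(weights)) / wsum

    out = np.full(n, np.nan)
    start = max(roc1, roc2) + wma_len             # need full ROC and WMA history
    for t in range(start, n):
        prev = coppock(t - 1)
        # Coppock_t = (w0*R_t + rest) / wsum; solve for the R_t that makes Coppock_t == prev
        rest = sum(w * roc_sum(t - k) for k, w in enumerate(weights) if k > 0)
        r_needed = (prev * wsum - rest) / weights[0]
        # R_t is linear in the close: R = 100*C*(1/C[t-14] + 1/C[t-11]) - 200
        denom = 100.0 * (1.0 / closes[t - roc1] + 1.0 / closes[t - roc2])
        out[t] = (r_needed + 200.0) / denom
    return out
```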
KernelFunctions
Library "KernelFunctions"
This library provides non-repainting kernel functions for Nadaraya-Watson estimator implementations. This allows for easy substitution/comparison of different kernel functions for one another in indicators. Furthermore, kernels can easily be combined with other kernels to create newer, more customized kernels. Compared to Moving Averages (which are really just simple kernels themselves), these kernel functions are more adaptive and afford the user an unprecedented degree of customization and flexibility.
rationalQuadratic(_src, _lookback, _relativeWeight, _startAtBar)
Rational Quadratic Kernel - An infinite sum of Gaussian Kernels of different length scales.
Parameters:
_src : The source series.
_lookback : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_relativeWeight : Relative weighting of time frames. Smaller values result in a more stretched-out curve, and larger values will result in a more wiggly curve. As this value approaches zero, the longer time frames will exert more influence on the estimation. As this value approaches infinity, the behavior of the Rational Quadratic Kernel will become identical to the Gaussian kernel.
_startAtBar : Bar index on which to start regression. The first bars of a chart are often highly volatile, and omitting these initial bars often leads to a better overall fit.
Returns: yhat The estimated values according to the Rational Quadratic Kernel.
gaussian(_src, _lookback, _startAtBar)
Gaussian Kernel - A weighted average of the source series. The weights are determined by the Radial Basis Function (RBF).
Parameters:
_src : The source series.
_lookback : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_startAtBar : Bar index on which to start regression. The first bars of a chart are often highly volatile, and omitting these initial bars often leads to a better overall fit.
Returns: yhat The estimated values according to the Gaussian Kernel.
periodic(_src, _lookback, _period, _startAtBar)
Periodic Kernel - The periodic kernel (derived by David Mackay) allows one to model functions that repeat themselves exactly.
Parameters:
_src : The source series.
_lookback : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_period : The distance between repetitions of the function.
_startAtBar : Bar index on which to start regression. The first bars of a chart are often highly volatile, and omitting these initial bars often leads to a better overall fit.
Returns: yhat The estimated values according to the Periodic Kernel.
locallyPeriodic(_src, _lookback, _period, _startAtBar)
Locally Periodic Kernel - The locally periodic kernel is a periodic function that slowly varies with time. It is the product of the Periodic Kernel and the Gaussian Kernel.
Parameters:
_src : The source series.
_lookback : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_period : The distance between repetitions of the function.
_startAtBar : Bar index on which to start regression. The first bars of a chart are often highly volatile, and omitting these initial bars often leads to a better overall fit.
Returns: yhat The estimated values according to the Locally Periodic Kernel.
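As a companion to the earlier sketch of the rational quadratic and Gaussian kernels, here is a hedged Python sketch of the periodic and locally periodic weights using their textbook forms; the loop span and the names are assumptions, not the library's Pine code.

```python
import math

def periodic_weight(i, lookback, period):
    # David MacKay's periodic kernel: structure that repeats exactly every `period` bars
    return math.exp(-2.0 * math.sin(math.pi * i / period) ** 2 / lookback ** 2)

def locally_periodic_weight(i, lookback, period):
    # Product of the periodic and Gaussian kernels: a repeating pattern that slowly
    # varies with time, because distant repetitions are down-weighted
    gaussian = math.exp(-(i ** 2) / (2.0 * lookback ** 2))
    return periodic_weight(i, lookback, period) * gaussian

def kernel_ma(src, lookback, period, start_at_bar, weight_fn):
    """Weighted average of the most recent bars of `src` (oldest -> newest)."""
    n = len(src)
    window = min(n, start_at_bar + lookback)   # assumed loop span, as in the earlier sketch
    num = den = 0.0
    for i in range(window):
        w = weight_fn(i, lookback, period)
        num += src[n - 1 - i] * w
        den += w
    return num / den
```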
MTF MA Ribbon and Bands + BB, Gaussian F. and R. VWAP with StDev
█ Multi Timeframe Moving Average Ribbon and Bands + Bollinger Bands, Gaussian Filter and Rolling Volume Weighted Average Price with Standard Deviation Bands
Up to 9 moving averages can be independently applied.
The length, type and timeframe of each moving average are configurable.
The lines, colors and background fill are customizable too.
This script can also display:
Moving Average Bands
Bollinger Bands
Gaussian Filter
Rolling VWAP and Standard Deviation Bands
Types of Moving Averages:
Simple Moving Average (SMA)
Exponential Moving Average (EMA)
Smoothed Moving Average (SMMA)
Weighted Moving Average (WMA)
Volume Weighted Moving Average (VWMA)
Least Squares Moving Average (LSMA)
Hull Moving Average (HMA)
Arnaud Legoux Moving Average (ALMA)
█ Moving Average
Moving Averages are price based, lagging (or reactive) indicators that display the average price of a security over a set period of time.
A Moving Average is a good way to gauge momentum as well as to confirm trends, and define areas of support and resistance.
█ Bollinger Bands
Bollinger Bands consist of a band of three lines which are plotted in relation to security prices.
The line in the middle is usually a Simple Moving Average (SMA) set to a period of 20 days (the type of trend line and period can be changed by the trader; a 20-day moving average is by far the most popular).
The SMA then serves as a base for the Upper and Lower Bands which are used as a way to measure volatility by observing the relationship between the Bands and price.
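As a quick reference, a minimal Python sketch of the usual Bollinger Band construction (a 20-period SMA basis with bands a fixed multiple of the standard deviation, 2 by default) is shown below; the script's own inputs may of course differ.

```python
import numpy as np

def bollinger_bands(close, length=20, mult=2.0):
    """Classic Bollinger Bands: SMA basis plus/minus `mult` standard deviations."""
    close = np.asarray(close, dtype=float)
    basis = np.full(len(close), np.nan)
    upper = np.full(len(close), np.nan)
    lower = np.full(len(close), np.nan)
    for t in range(length - 1, len(close)):
        w = close[t - length + 1: t + 1]
        m, sd = w.mean(), w.std()
        basis[t], upper[t], lower[t] = m, m + mult * sd, m - mult * sd
    return basis, upper, lower
```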
█ Gaussian Filter
Gaussian filter can be used for smoothing.
It rejects high frequencies (fast movements) better than an EMA and has lower lag.
A Gaussian filter is one whose transfer response is described by the familiar Gaussian bell-shaped curve.
In the case of low-pass filters, only the upper half of the curve describes the filter.
The use of gaussian filters is a move toward achieving the dual goal of reducing lag and reducing the lag of high-frequency components relative to the lag of lower-frequency components.
█ Rolling VWAP
The typical VWAP is designed to be used on intraday charts, as it resets at the beginning of the day.
Such VWAPs cannot be used on daily, weekly or monthly charts. Instead, this rolling VWAP uses a time period that automatically adjusts to the chart's timeframe.
You can thus use the rolling VWAP on any chart that includes volume information in its data feed.
Because the rolling VWAP uses a moving window, it does not exhibit the jumpiness of VWAP plots that reset.
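Here is a small Python sketch of a rolling VWAP with standard deviation bands over a fixed moving window. The window length stands in for the automatic timeframe-based adjustment described above, and the volume-weighted deviation is an assumption about how the bands are built.

```python
import numpy as np

def rolling_vwap_bands(price, volume, window, num_dev=1.0):
    """Rolling VWAP over a fixed moving window, with standard deviation bands."""
    price = np.asarray(price, dtype=float)
    volume = np.asarray(volume, dtype=float)
    n = len(price)
    vwap = np.full(n, np.nan)
    upper = np.full(n, np.nan)
    lower = np.full(n, np.nan)
    for t in range(window - 1, n):
        p = price[t - window + 1: t + 1]
        v = volume[t - window + 1: t + 1]
        vw = np.sum(p * v) / np.sum(v)
        # volume-weighted standard deviation around the rolling VWAP
        dev = np.sqrt(np.sum(v * (p - vw) ** 2) / np.sum(v))
        vwap[t], upper[t], lower[t] = vw, vw + num_dev * dev, vw - num_dev * dev
    return vwap, upper, lower
```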
Made with help from the scripts of adam24x, VishvaP, loxx and pmk07.
Gaussian Average Convergence Divergence
What exactly is the Ehlers Gaussian filter?
This filter is useful for smoothing. It rejects higher frequencies (fast movements) more effectively than an EMA and has less lag. John F. Ehlers published it in "Rocket Science For Traders." Dr. René Koch was the first to implement it in Wealth-Lab.
The transfer response of a Gaussian filter is described by the well-known Gaussian bell-shaped curve. Only the upper half of the curve describes the filter in the case of low-pass filters. The use of gaussian filters is a step toward achieving the dual goals of lowering lag and lowering the lag of high-frequency components relative to lower-frequency components.
From Ehlers Book: "The first objective of using smoothers is to eliminate or reduce the undesired high-frequency components in the price data. Therefore these smoothers are called low-pass filters, and they all work by some form of averaging. Butterworth low-pass filters can do this job, but nothing comes for free. A higher degree of filtering is necessarily accompanied by a larger amount of lag. We have come to see that is a fact of life."
References: John F. Ehlers, "Rocket Science For Traders, Digital Signal Processing Applications", Chapter 15: "Infinite Impulse Response Filters"
Possible RSI [Loxx]
Possible RSI is a normalized, variety second-pass normalized, Variety RSI with Dynamic Zones and optional High-Pass IIR digital filtering of the source price input. This indicator includes 7 types of RSI.
High-Pass Filter (optional)
The Ehlers Highpass Filter is a technical analysis tool developed by John F. Ehlers. Based on aerospace analog filters, this filter aims at reducing noise from price data. The Ehlers Highpass Filter eliminates wave components with periods longer than a certain value. This reduces lag and makes the oscillator zero-mean, which turns the RSI output into something more similar to Stochastic RSI, where it responds to price very quickly.
First Normalization Pass
RSI (Relative Strength Index) is already normalized. Hence, making a normalized RSI seems like nonsense... if it were not for the "flattening" property of RSI. RSI tends to become flatter and flatter as we increase the calculating period, to the extent that it becomes unusable for levels trading once the calculating period rises much above the broadly recommended period of 8 for RSI. To make the calculating period have less impact on significant levels when trading RSI this way, this version applies a sort of "raw stochastic" (min/max) normalization (sketched in code after the second-pass options below).
Second-Pass Variety Normalization Pass
There are three options to choose from:
1. Gaussian (Fisher Transform), the default: The Fisher Transform is a function created by John F. Ehlers that converts prices into a Gaussian normal distribution. The normalization helps highlight when prices have moved to an extreme, based on recent prices. This may help in spotting turning points in the price of an asset. It also helps show the trend and isolate the price waves within a trend.
2. Softmax: The softmax function, also known as softargmax or the normalized exponential function, converts a vector of K real numbers into a probability distribution of K possible outcomes. It is a generalization of the logistic function to multiple dimensions and is used in multinomial logistic regression. The softmax function is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom.
3. Regular Normalization (deviations about the mean): Converts a vector of K real numbers into a probability distribution of K possible outcomes without using the log-sigmoidal transformation used in Softmax. This is basically Softmax without the last step.
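The sketch below illustrates the first-pass min/max normalization and two of the second-pass options (Fisher Transform and Softmax) in Python. The rescaling to (-1, 1) before the Fisher Transform and the clipping bound are assumptions; the indicator's actual implementation may differ.

```python
import numpy as np

def minmax_normalize(x, lookback):
    """First pass: 'raw stochastic' (min/max) normalization of an RSI series to 0..100."""
    x = np.asarray(x, dtype=float)
    out = np.full(len(x), np.nan)
    for t in range(lookback - 1, len(x)):
        w = x[t - lookback + 1: t + 1]
        lo, hi = np.min(w), np.max(w)
        out[t] = 100.0 * (x[t] - lo) / (hi - lo) if hi > lo else 50.0
    return out

def fisher_transform(x01):
    """Second pass (default): Fisher Transform of values first rescaled from 0..100 to (-1, 1)."""
    v = np.clip(2.0 * np.asarray(x01, dtype=float) / 100.0 - 1.0, -0.999, 0.999)
    return 0.5 * np.log((1.0 + v) / (1.0 - v))

def softmax(x):
    """Second pass (alternative): convert a vector into a probability distribution."""
    x = np.asarray(x, dtype=float)
    e = np.exp(x - np.nanmax(x))   # subtract the max for numerical stability
    return e / np.nansum(e)
```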
Dynamic Zones
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones by Leo Zamansky, Ph .D., and David Stendahl"
Most indicators use a fixed zone for buy and sell signals. Here’ s a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: Enter the market only when an oscillator has moved far above or below traditional trading lev- els. However, these oscillator- driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability input P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values above and below the dynamic zones. This translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for an indicator are used in the construction of the zones. Of the values, 80% will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this happens, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i=1, to the last lookback period, build the distribution f(x) of the price during the lookback period i. Then find the value Vi1 such that the probability of the price less than or equal to Vi1 during the lookback period i is equal to Pbuy. Find the value Vi2 such that the probability of the price greater or equal to Vi2 during the lookback period i is equal to Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: Build the distribution f(x) of the price during the lookback period i. The distribution here is empirical, namely how many times a given value of x appeared during the lookback period. The problem is to find such x that the probability of a price being greater or equal to x will be equal to a probability selected by the user. Probability is the area under the distribution curve. The task is to find such a value of x that the area under the distribution curve to the right of x will be equal to the probability selected by the user. That x is the dynamic zone.
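A minimal Python sketch of the dynamic zone calculation described above, using empirical percentiles over the lookback window (function names and the percentile call are illustrative, not the indicator's source):

```python
import numpy as np

def dynamic_zones(indicator, lookback, p_buy=0.10, p_sell=0.10):
    """Empirical dynamic buy/sell zones as described above.

    For each bar, the buy zone is the value the indicator stays at or below with
    probability p_buy over the lookback window, and the sell zone is the value it
    stays at or above with probability p_sell.
    """
    x = np.asarray(indicator, dtype=float)
    n = len(x)
    buy_zone = np.full(n, np.nan)
    sell_zone = np.full(n, np.nan)
    for t in range(lookback - 1, n):
        window = x[t - lookback + 1: t + 1]
        buy_zone[t] = np.percentile(window, 100.0 * p_buy)            # P{X <= V} = p_buy
        sell_zone[t] = np.percentile(window, 100.0 * (1.0 - p_sell))  # P{X >= V} = p_sell
    return buy_zone, sell_zone
```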
7 Types of RSI
See here to understand which RSI types are included:
Included:
Bar coloring
4 signal types
Alerts
Loxx's Expanded Source Types
Loxx's Variety RSI
Loxx's Dynamic Zones
STD-Filtered, N-Pole Gaussian Filter [Loxx]
This is a Gaussian Filter with Standard Deviation Filtering that works for orders (poles) higher than the usual 4 poles originally available in Ehlers' Gaussian Filter formulas. Because of that, it is a sort of generalized Gaussian filter that can calculate an arbitrary (order) pole Gaussian Filter, which makes it a rather unique indicator. For this implementation, the practical mathematical maximum is 15 poles, after which the precision of the calculation is no longer useful: the coefficients for levels above 15 poles are so high that the precision loss actually means very little. Despite this practical limit, I've left the upper bound of poles open-ended so you can try poles of order 15 and above yourself. The default is set to 5 poles, which is 1 pole greater than the normal maximum of 4 poles.
The purpose of the standard deviation filter is to filter out noise; by default it will filter 1 standard deviation. Adjust this number and the filter selections (price, both, GMA, none) to reduce the signal noise.
What is Ehlers Gaussian filter?
This filter can be used for smoothing. It rejects high frequencies (fast movements) better than an EMA and has lower lag. It was published by John F. Ehlers in "Rocket Science For Traders".
A Gaussian filter is one whose transfer response is described by the familiar Gaussian bell-shaped curve. In the case of low-pass filters, only the upper half of the curve describes the filter. The use of gaussian filters is a move toward achieving the dual goal of reducing lag and reducing the lag of high-frequency components relative to the lag of lower-frequency components.
A gaussian filter with...
One Pole: f = alpha*g + (1-alpha)*f[1]
Two Poles: f = alpha^2*g + 2*(1-alpha)*f[1] - (1-alpha)^2*f[2]
Three Poles: f = alpha^3*g + 3*(1-alpha)*f[1] - 3*(1-alpha)^2*f[2] + (1-alpha)^3*f[3]
Four Poles: f = alpha^4*g + 4*(1-alpha)*f[1] - 6*(1-alpha)^2*f[2] + 4*(1-alpha)^3*f[3] - (1-alpha)^4*f[4]
and so on... (where g is the input and f[n] is the filter value n bars back; a generalized N-pole sketch follows below)
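Here is a hedged Python sketch of the generalized N-pole recursion, using binomial coefficients so that it reproduces the 1-4 pole formulas above. The alpha parameter is passed in directly (Ehlers derives it from the critical period and pole count), and the simple initialization and missing standard deviation filter are deliberate simplifications, not the script's source.

```python
from math import comb

def n_pole_gaussian(src, alpha, poles):
    """Generalized N-pole recursion matching the 1-4 pole formulas above.

    Coefficients are binomial: f = alpha^N * g + sum over k of
    (-1)^(k+1) * C(N, k) * (1-alpha)^k * f[k]. Initialization simply fills the
    lag buffer with the first value.
    """
    out = []
    hist = [float(src[0])] * poles             # lag buffer: f[1], f[2], ..., f[N]
    for g in src:
        f = (alpha ** poles) * g
        for k in range(1, poles + 1):
            sign = -1.0 if k % 2 == 0 else 1.0
            f += sign * comb(poles, k) * ((1.0 - alpha) ** k) * hist[k - 1]
        hist = [f] + hist[:-1]                 # shift the lag buffer
        out.append(f)
    return out

# Example: a 5-pole filter (the script's default) with a hypothetical alpha
smoothed = n_pole_gaussian([100.0, 101.5, 99.8, 102.2, 103.0, 101.1], alpha=0.3, poles=5)
```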
For an equivalent number of poles the lag of a Gaussian is about half the lag of a Butterworth filter: Lag = N*P / pi^2, where,
N is the number of poles, and
P is the critical period
Special initialization of filter stages ensures proper working in scans with as few bars as possible.
From Ehlers Book: "The first objective of using smoothers is to eliminate or reduce the undesired high-frequency components in the price data. Therefore these smoothers are called low-pass filters, and they all work by some form of averaging. Butterworth low-pass filters can do this job, but nothing comes for free. A higher degree of filtering is necessarily accompanied by a larger amount of lag. We have come to see that is a fact of life."
References: John F. Ehlers, "Rocket Science For Traders, Digital Signal Processing Applications", Chapter 15: "Infinite Impulse Response Filters"
Included
Loxx's Expanded Source Types
Signals
Alerts
Bar coloring
Related indicators
STD-Filtered, Gaussian Moving Average (GMA)
STD-Filtered, Gaussian-Kernel-Weighted Moving Average
One-Sided Gaussian Filter w/ Channels
Fisher Transform w/ Dynamic Zones
R-sqrd Adapt. Fisher Transform w/ D. Zones & Divs .
Gaussian Filter MACD [Loxx]
Gaussian Filter MACD is a MACD that uses a 1-4 pole Ehlers Gaussian Filter for its calculations. Compare this with the Ehlers Fisher Transform.
What is Ehlers Gaussian filter?
This filter can be used for smoothing. It rejects high frequencies (fast movements) better than an EMA and has lower lag. It was published by John F. Ehlers in "Rocket Science For Traders" and first implemented in Wealth-Lab by Dr. René Koch.
A Gaussian filter is one whose transfer response is described by the familiar Gaussian bell-shaped curve. In the case of low-pass filters, only the upper half of the curve describes the filter. The use of gaussian filters is a move toward achieving the dual goal of reducing lag and reducing the lag of high-frequency components relative to the lag of lower-frequency components.
A gaussian filter with...
one pole is equivalent to an EMA filter.
two poles is equivalent to EMA(EMA())
three poles is equivalent to EMA(EMA(EMA()))
and so on...
For an equivalent number of poles the lag of a Gaussian is about half the lag of a Butterworth filter: Lag = N * P / (2 * pi^2), where,
N is the number of poles, and
P is the critical period
Special initialization of filter stages ensures proper working in scans with as few bars as possible.
From Ehlers Book: "The first objective of using smoothers is to eliminate or reduce the undesired high-frequency components in the price data. Therefore these smoothers are called low-pass filters, and they all work by some form of averaging. Butterworth low-pass filters can do this job, but nothing comes for free. A higher degree of filtering is necessarily accompanied by a larger amount of lag. We have come to see that is a fact of life."
References: John F. Ehlers, "Rocket Science For Traders, Digital Signal Processing Applications", Chapter 15: "Infinite Impulse Response Filters"
Included
Loxx's Expanded Source Types
Signals, zero or signal crossing, signal crossing is very noisy
Alerts
Bar coloring
STD-Filtered, Gaussian Moving Average (GMA) [Loxx]
STD-Filtered, Gaussian Moving Average (GMA) is a 1-4 pole Ehlers Gaussian Filter with standard deviation filtering. This indicator should perform similarly to the Ehlers Fisher Transform.
The purpose of the standard deviation filter is to filter out noise; by default it will filter 1 standard deviation. Adjust this number and the filter selections (price, both, GMA, none) to reduce the signal noise.
What is Ehlers Gaussian filter?
This filter can be used for smoothing. It rejects high frequencies (fast movements) better than an EMA and has lower lag. It was published by John F. Ehlers in "Rocket Science For Traders" and first implemented in Wealth-Lab by Dr. René Koch.
A Gaussian filter is one whose transfer response is described by the familiar Gaussian bell-shaped curve. In the case of low-pass filters, only the upper half of the curve describes the filter. The use of gaussian filters is a move toward achieving the dual goal of reducing lag and reducing the lag of high-frequency components relative to the lag of lower-frequency components.
A gaussian filter with...
one pole is equivalent to an EMA filter.
two poles is equivalent to EMA(EMA())
three poles is equivalent to EMA(EMA(EMA()))
and so on...
For an equivalent number of poles the lag of a Gaussian is about half the lag of a Butterworth filter: Lag = N * P / (2 * pi^2), where,
N is the number of poles, and
P is the critical period
Special initialization of filter stages ensures proper working in scans with as few bars as possible.
From Ehlers Book: "The first objective of using smoothers is to eliminate or reduce the undesired high-frequency components in the price data. Therefore these smoothers are called low-pass filters, and they all work by some form of averaging. Butterworth low-pass filters can do this job, but nothing comes for free. A higher degree of filtering is necessarily accompanied by a larger amount of lag. We have come to see that is a fact of life."
References: John F. Ehlers, "Rocket Science For Traders, Digital Signal Processing Applications", Chapter 15: "Infinite Impulse Response Filters"
Included
Loxx's Expanded Source Types
Signals
Alerts
Bar coloring
Related indicators
STD-Filtered, Gaussian-Kernel-Weighted Moving Average
One-Sided Gaussian Filter w/ Channels
Fisher Transform w/ Dynamic Zones
R-sqrd Adapt. Fisher Transform w/ D. Zones & Divs.
STD-Filtered, Gaussian-Kernel-Weighted Moving Average [Loxx]
STD-Filtered, Gaussian-Kernel-Weighted Moving Average is a moving average that weights price by using a Gaussian kernel function to calculate data points. This indicator also allows for filtering both the source input price and the output signal using a standard deviation filter.
Purpose
The purpose of this indicator is to take the concept of kernel estimation and apply it in a way where, instead of predicting past values, the weighted function estimates the current bar value at each bar to create a moving average that is suitable for trading. Normally this method is used to create an array of past estimators to model past data, but that approach is not useful for trading because the past values will repaint. This moving average does NOT repaint; however, you must allow signals to close on the current bar before taking the signal. You can compare this to the Nadaraya-Watson Estimator, which uses the Nadaraya-Watson estimator method with a normalized kernel-weighted function to model price.
What are Kernel Functions?
How a kernel function is used as a weighting function to develop a non-parametric regression model is discussed here. In the beginning of the article, a brief discussion about the properties of kernel functions and the steps to build kernels around data points is presented.
Kernel Function
In non-parametric statistics, a kernel is a weighting function which satisfies the following properties.
A kernel function must be symmetrical. Mathematically this property can be expressed as K(-u) = K(+u). The symmetric property of a kernel function enables its maximum value, max(K(u)), to lie in the middle of the curve.
The area under the curve of the function must be equal to one. Mathematically, this property is expressed as: the integral of K(u) du from −∞ to +∞ equals 1.
The value of a kernel function cannot be negative, i.e. K(u) ≥ 0 for all −∞ < u < ∞.
Kernel Estimation
In this article, Gaussian kernel function is used to calculate kernels for the data points. The equation for Gaussian kernel is:
K(u) = (1 / sqrt(2pi)) * e^(-0.5 *(j / bw)^2)
Where xi is the observed data point, j is the value at which the kernel function is computed, and bw is called the bandwidth. Bandwidth in kernel regression is called the smoothing parameter because it controls variance and bias in the output. The effect of the bandwidth value on model prediction is discussed later in this article.
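A small Python sketch of a causal Gaussian-kernel-weighted moving average using the K(u) form above, with j measured as bars back from the current bar; the window length and normalization are assumptions for illustration, not the script's source.

```python
import numpy as np

def gaussian_kernel_wma(src, length, bandwidth):
    """Gaussian-kernel-weighted average of the last `length` bars at each bar.

    Weights follow K(j) = (1/sqrt(2*pi)) * exp(-0.5 * (j / bandwidth)^2), where j is
    the number of bars back from the current bar; a larger bandwidth means a smoother output.
    """
    src = np.asarray(src, dtype=float)
    j = np.arange(length, dtype=float)                       # 0 = current bar
    k = (1.0 / np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * (j / bandwidth) ** 2)
    out = np.full(len(src), np.nan)
    for t in range(length - 1, len(src)):
        window = src[t - length + 1: t + 1][::-1]            # most recent bar first
        out[t] = np.dot(window, k) / k.sum()
    return out
```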
Included
Loxx's Expanded Source types
Signals
Alerts
Bar coloring
VHF-Adaptive, Digital Kahler Variety RSI w/ Dynamic Zones [Loxx]
VHF-Adaptive, Digital Kahler Variety RSI w/ Dynamic Zones is an RSI indicator with adaptive inputs, Digital Kahler filtering, and Dynamic Zones. This indicator uses a Vertical Horizontal Filter for calculating the adaptive period inputs and allows the user to select from 7 different types of RSI.
What is VHF Adaptive Cycle?
Vertical Horizontal Filter (VHF) was created by Adam White to identify trending and ranging markets. VHF measures the level of trend activity, similar to ADX DI. Vertical Horizontal Filter does not, itself, generate trading signals, but determines whether signals are taken from trend or momentum indicators. Using this trend information, one is then able to derive an average cycle length.
What is Digital Kahler?
From Philipp Kahler's article for www.traders-mag.com, August 2008. "A Classic Indicator in a New Suit: Digital Stochastic"
Digital Indicators
Whenever you study the development of trading systems in particular, you will be struck in an extremely unpleasant way by the seemingly unmotivated indentations and changes in direction of each indicator. An experienced trader can recognise many false signals of the indicator on the basis of his solid background; a stupid trading system usually falls into any trap offered by the unclear indicator course. This is what motivated me to improve even further this and other indicators with the help of a relatively simple procedure. The goal of this development is to be able to use this indicator in a trading system with as few additional conditions as possible. Discretionary traders will likewise be happy about this clear course, which is not nerve-racking and makes concentrating on the essential elements of trading possible.
How Is It Done?
The digital stochastic is a child of the original indicator. We owe a debt of gratitude to George Lane for his idea to design an indicator which describes the position of the current price within the high-low range of the historical price movement. My contribution to this indicator is the changed pattern, which improves the quality of the signal without generating overly long delays in giving signals. The trick is the way this "digital" behavior of the indicator is generated. It can be used with most oscillators, like RSI or CCI.
First of all, the original is looked at. The indicator always moves between 0 and 100. The precise position of the indicator or its course relative to the trigger line is of no interest to me; I would just like to know whether the indicator is quoted below or above the value 50. This is tantamount to the question of whether the market is just trading above or below the middle of the high-low range of the past few days. If the market trades in the upper half of its high-low range, then the digital stochastic is given the value 1; if the original stochastic is below 50, then the value -1 is given. This leads to a sequence of 1/-1 values, the digital core of the new indicator. These values are subsequently smoothed by means of a short exponential moving average. This way minor false signals are eliminated and the indicator is given its typical form.
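The core of the digital treatment described above can be sketched in a few lines of Python: map the oscillator to +1/-1 around the 50 line and smooth the result with a short EMA. The EMA length used here is an assumption, not the script's setting.

```python
import numpy as np

def digital_oscillator(osc, smooth_len=3):
    """Digital Kahler-style treatment of a 0..100 oscillator (stochastic, RSI, ...).

    Each bar becomes +1 if the oscillator is above 50 and -1 otherwise; the resulting
    square wave is then smoothed with a short exponential moving average.
    """
    osc = np.asarray(osc, dtype=float)
    digital = np.where(osc > 50.0, 1.0, -1.0)
    alpha = 2.0 / (smooth_len + 1.0)
    out = np.empty_like(digital)
    ema = digital[0]
    for i, v in enumerate(digital):
        ema = alpha * v + (1.0 - alpha) * ema
        out[i] = ema
    return out
```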
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones by Leo Zamansky, Ph .D., and David Stendahl"
Most indicators use a fixed zone for buy and sell signals. Here’ s a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: Enter the market only when an oscillator has moved far above or below traditional trading lev- els. However, these oscillator- driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability input P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values above and below the dynamic zones. This translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for an indicator are used in the construction of the zones. Of the values, 80% will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this happens, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i=1, to the last lookback period, build the distribution f(x) of the price during the lookback period i. Then find the value Vi1 such that the probability of the price less than or equal to Vi1 during the lookback period i is equal to Pbuy. Find the value Vi2 such that the probability of the price greater or equal to Vi2 during the lookback period i is equal to Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: Build the distribution f(x) of the price during the lookback period i. The distribution here is empirical, namely how many times a given value of x appeared during the lookback period. The problem is to find such x that the probability of a price being greater or equal to x will be equal to a probability selected by the user. Probability is the area under the distribution curve. The task is to find such a value of x that the area under the distribution curve to the right of x will be equal to the probability selected by the user. That x is the dynamic zone.
Included:
Bar coloring
4 signal types
Alerts
Loxx's Expanded Source Types
Loxx's Moving Averages
Loxx's Variety RSI
Loxx's Dynamic Zones
CFB-Adaptive Velocity Histogram [Loxx]
CFB-Adaptive Velocity Histogram is a velocity indicator with One-More-Moving-Average Adaptive Smoothing of the input source value and Jurik's Composite-Fractal-Behavior-Adaptive Price-Trend-Period input with Dynamic Zones. All Jurik smoothing allows for both single and double Jurik smoothing passes. Velocity is adjusted to pips, but there is no input value for the user. This indicator is tuned for Forex but can be used on any time series data.
What is Composite Fractal Behavior ( CFB )?
All around you mechanisms adjust themselves to their environment. From simple thermostats that react to air temperature to computer chips in modern cars that respond to changes in engine temperature, r.p.m.'s, torque, and throttle position. It was only a matter of time before fast desktop computers applied the mathematics of self-adjustment to systems that trade the financial markets.
Unlike basic systems with fixed formulas, an adaptive system adjusts its own equations. For example, start with a basic channel breakout system that uses the highest closing price of the last N bars as a threshold for detecting breakouts on the up side. An adaptive and improved version of this system would adjust N according to market conditions, such as momentum, price volatility or acceleration.
Since many systems are based directly or indirectly on cycles, another useful measure of market condition is the periodic length of a price chart's dominant cycle, (DC), that cycle with the greatest influence on price action.
The utility of this new DC measure was noted by author Murray Ruggiero in the January '96 issue of Futures Magazine. In it, Mr. Ruggiero used it to adaptively adjust the value of N in a channel breakout system. He then simulated trading 15 years of D-Mark futures in order to compare its performance to a similar system that had a fixed optimal value of N. The adaptive version produced 20% more profit!
This DC index utilized the popular MESA algorithm (a formulation by John Ehlers adapted from Burg's maximum entropy algorithm, MEM). Unfortunately, the DC approach is problematic when the market has no real dominant cycle momentum, because the mathematics will produce a value whether or not one actually exists! Therefore, we developed a proprietary indicator that does not presuppose the presence of market cycles. It's called CFB (Composite Fractal Behavior) and it works well whether or not the market is cyclic.
CFB examines price action for a particular fractal pattern, categorizes the patterns by size, and then outputs a composite fractal size index. This index is smooth, timely and accurate.
Essentially, CFB reveals the length of the market's trending action time frame. Long trending activity produces a large CFB index and short choppy action produces a small index value. Investors have found many applications for CFB which involve scaling other existing technical indicators adaptively, on a bar-to-bar basis.
What is Jurik Volty used in the Jurik Filter?
One of the lesser known qualities of Jurik smoothing is that the Jurik smoothing process is adaptive. "Jurik Volty" (a sort of market volatility) is what makes Jurik smoothing adaptive. The Jurik Volty calculation can be used as both a standalone indicator and to smooth other indicators that you wish to make adaptive.
What is the Jurik Moving Average?
Have you noticed how moving averages add some lag (delay) to your signals? ... especially when price gaps up or down in a big move, and you are waiting for your moving average to catch up? Wait no more! JMA eliminates this problem forever and gives you the best of both worlds: low lag and smooth lines.
Ideally, you would like a filtered signal to be both smooth and lag-free. Lag causes delays in your trades, and increasing lag in your indicators typically result in lower profits. In other words, late comers get what's left on the table after the feast has already begun.
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones by Leo Zamansky, Ph .D., and David Stendahl"
Most indicators use a fixed zone for buy and sell signals. Here’ s a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: Enter the market only when an oscillator has moved far above or below traditional trading lev- els. However, these oscillator- driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability input P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values above and below the dynamic zones. This translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for an indicator are used in the construction of the zones. Of the values, 80% will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this happens, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i=1, to the last lookback period, build the distribution f(x) of the price during the lookback period i. Then find the value Vi1 such that the probability of the price less than or equal to Vi1 during the lookback period i is equal to Pbuy. Find the value Vi2 such that the probability of the price greater or equal to Vi2 during the lookback period i is equal to Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: Build the distribution f(x) of the price during the lookback period i. The distribution here is empirical, namely how many times a given value of x appeared during the lookback period. The problem is to find such x that the probability of a price being greater or equal to x will be equal to a probability selected by the user. Probability is the area under the distribution curve. The task is to find such a value of x that the area under the distribution curve to the right of x will be equal to the probability selected by the user. That x is the dynamic zone.
Included:
Bar coloring
3 signal variations w/ alerts
Divergences w/ alerts
Loxx's Expanded Source Types
CFB-Adaptive, Williams %R w/ Dynamic Zones [Loxx]
CFB-Adaptive, Williams %R w/ Dynamic Zones is a Jurik-Composite-Fractal-Behavior-Adaptive Williams % Range indicator with Dynamic Zones. These additions to the WPR calculation reduce noise and return a signal that is more viable than WPR alone.
What is Williams %R?
Williams %R, also known as the Williams Percent Range, is a type of momentum indicator that moves between 0 and -100 and measures overbought and oversold levels. The Williams %R may be used to find entry and exit points in the market. The indicator is very similar to the Stochastic oscillator and is used in the same way. It was developed by Larry Williams and it compares a stock's closing price to the high-low range over a specific period, typically 14 days or periods.
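For reference, the classic Williams %R calculation can be sketched as follows in Python (the 14-period default matches the description above; the adaptive CFB period and Dynamic Zones of this indicator are not reproduced here).

```python
import numpy as np

def williams_percent_r(high, low, close, length=14):
    """Classic Williams %R: where the close sits in the high-low range, scaled to 0..-100."""
    high = np.asarray(high, dtype=float)
    low = np.asarray(low, dtype=float)
    close = np.asarray(close, dtype=float)
    out = np.full(len(close), np.nan)
    for t in range(length - 1, len(close)):
        hh = high[t - length + 1: t + 1].max()
        ll = low[t - length + 1: t + 1].min()
        out[t] = -100.0 * (hh - close[t]) / (hh - ll) if hh > ll else 0.0
    return out
```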
What is Composite Fractal Behavior ( CFB )?
All around you mechanisms adjust themselves to their environment. From simple thermostats that react to air temperature to computer chips in modern cars that respond to changes in engine temperature, r.p.m.'s, torque, and throttle position. It was only a matter of time before fast desktop computers applied the mathematics of self-adjustment to systems that trade the financial markets.
Unlike basic systems with fixed formulas, an adaptive system adjusts its own equations. For example, start with a basic channel breakout system that uses the highest closing price of the last N bars as a threshold for detecting breakouts on the up side. An adaptive and improved version of this system would adjust N according to market conditions, such as momentum, price volatility or acceleration.
Since many systems are based directly or indirectly on cycles, another useful measure of market condition is the periodic length of a price chart's dominant cycle, (DC), that cycle with the greatest influence on price action.
The utility of this new DC measure was noted by author Murray Ruggiero in the January '96 issue of Futures Magazine. In it, Mr. Ruggiero used it to adaptively adjust the value of N in a channel breakout system. He then simulated trading 15 years of D-Mark futures in order to compare its performance to a similar system that had a fixed optimal value of N. The adaptive version produced 20% more profit!
This DC index utilized the popular MESA algorithm (a formulation by John Ehlers adapted from Burg's maximum entropy algorithm, MEM). Unfortunately, the DC approach is problematic when the market has no real dominant cycle momentum, because the mathematics will produce a value whether or not one actually exists! Therefore, we developed a proprietary indicator that does not presuppose the presence of market cycles. It's called CFB (Composite Fractal Behavior) and it works well whether or not the market is cyclic.
CFB examines price action for a particular fractal pattern, categorizes the patterns by size, and then outputs a composite fractal size index. This index is smooth, timely and accurate.
Essentially, CFB reveals the length of the market's trending action time frame. Long trending activity produces a large CFB index and short choppy action produces a small index value. Investors have found many applications for CFB which involve scaling other existing technical indicators adaptively, on a bar-to-bar basis.
What is Jurik Volty used in the Jurik Filter?
One of the lesser known qualities of Jurik smoothing is that the Jurik smoothing process is adaptive. "Jurik Volty" (a sort of market volatility) is what makes Jurik smoothing adaptive. The Jurik Volty calculation can be used as both a standalone indicator and to smooth other indicators that you wish to make adaptive.
What is the Jurik Moving Average?
Have you noticed how moving averages add some lag (delay) to your signals? ... especially when price gaps up or down in a big move, and you are waiting for your moving average to catch up? Wait no more! JMA eliminates this problem forever and gives you the best of both worlds: low lag and smooth lines.
Ideally, you would like a filtered signal to be both smooth and lag-free. Lag causes delays in your trades, and increasing lag in your indicators typically result in lower profits. In other words, late comers get what's left on the table after the feast has already begun.
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones" by Leo Zamansky, Ph.D., and David Stendahl:
Most indicators use a fixed zone for buy and sell signals. Here's a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: enter the market only when an oscillator has moved far above or below traditional trading levels. However, these oscillator-driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability inputs P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values fall above and below the dynamic zones, which translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for the indicator are used in the construction of the zones; 80% of the values will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this does happen, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i = 1 to the last lookback period, build the distribution f(x) of the price during lookback period i. Then find the value Vi1 such that the probability of the price being less than or equal to Vi1 during lookback period i equals Pbuy. Find the value Vi2 such that the probability of the price being greater than or equal to Vi2 during lookback period i equals Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: build the distribution f(x) of the price during lookback period i. The distribution here is empirical, namely how many times a given value of x appeared during the lookback period. The problem is to find an x such that the probability of a price being greater than or equal to x equals a probability selected by the user. Probability is the area under the distribution curve, so the task is to find the value of x such that the area under the distribution curve to the right of x equals the probability selected by the user. That x is the dynamic zone.
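As a rough illustration of that percentile step, a minimal Pine Script v5 sketch might look like the following. This is not the Zamansky/Stendahl code or the published script; the RSI source, the input names, and the use of the built-in percentile function as a stand-in for building f(x) explicitly are all assumptions made here for clarity.
//@version=5
indicator("Dynamic Zones sketch")
oscLen  = input.int(14, "Oscillator length")
zoneLen = input.int(70, "Dynamic zone lookback t")
pBuy    = input.float(0.10, "Pbuy",  step = 0.01)
pSell   = input.float(0.10, "Psell", step = 0.01)
osc = ta.rsi(close, oscLen)   // any oscillator could be substituted here
// Vi1: value the oscillator stays at or below with probability Pbuy  -> buy zone
buyZone  = ta.percentile_linear_interpolation(osc, zoneLen, pBuy * 100)
// Vi2: value the oscillator stays at or above with probability Psell -> sell zone
sellZone = ta.percentile_linear_interpolation(osc, zoneLen, (1 - pSell) * 100)
plot(osc, "Oscillator")
plot(buyZone,  "Dynamic buy zone",  color = color.green)
plot(sellZone, "Dynamic sell zone", color = color.red)
With this framing, the zones widen and narrow with the oscillator's own recent distribution instead of sitting at fixed levels.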
Included:
Bar coloring
3 signal variations w/ alerts
Divergences w/ alerts
Loxx's Expanded Source Types
R-sqrd Adapt. Fisher Transform w/ D. Zones & Divs. [Loxx]
The full name of this indicator is R-Squared Adaptive Fisher Transform w/ Dynamic Zones and Divergences. This is an R-squared adaptive Fisher transform with adjustable dynamic zones, signals, alerts, and divergences.
What is Fisher Transform?
The Fisher Transform is a technical indicator created by John F. Ehlers that converts prices into a Gaussian normal distribution.
The indicator highlights when prices have moved to an extreme, based on recent prices. This may help in spotting turning points in the price of an asset. It also helps show the trend and isolate the price waves within a trend.
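For reference, a plain (non-adaptive) Fisher Transform can be sketched in Pine Script v5 roughly as follows; this is only an illustration of the transform itself, not this indicator's R-squared adaptive version, and the 10-bar length is an assumed default.
//@version=5
indicator("Fisher Transform sketch")
len = input.int(10, "Length")
med = hl2
hi  = ta.highest(med, len)
lo  = ta.lowest(med, len)
var float val  = 0.0
var float fish = 0.0
// normalize price into roughly -0.5..+0.5, smooth it, and clamp before the transform
val  := 0.66 * ((med - lo) / math.max(hi - lo, syminfo.mintick) - 0.5) + 0.67 * nz(val[1])
val  := math.max(math.min(val, 0.999), -0.999)
fish := 0.5 * math.log((1 + val) / (1 - val)) + 0.5 * nz(fish[1])
plot(fish, "Fisher")
plot(nz(fish[1]), "Trigger", color = color.orange)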
What is R-squared Adaptive?
One tool available in forecasting the trendiness of the breakout is the coefficient of determination ( R-squared ), a statistical measurement.
The R-squared indicates linear strength between the security's price (the Y - axis) and time (the X - axis). The R-squared is the percentage of squared error that the linear regression can eliminate if it were used as the predictor instead of the mean value. If the R-squared were 0.99, then the linear regression would eliminate 99% of the error for prediction versus predicting closing prices using a simple moving average .
R-squared is used here to derive an R-squared value that is then modified by a user-input "factor".
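A minimal sketch of the R-squared measurement itself, in Pine Script v5 (the adaptive "factor" modification is not shown, since its exact form is specific to this script):
//@version=5
indicator("R-squared sketch")
len = input.int(20, "Regression length")
// correlation between price and bar index over the window; squared, it is the coefficient of determination
r  = ta.correlation(close, bar_index, len)
r2 = r * r
plot(r2, "R-squared")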
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones" by Leo Zamansky, Ph.D., and David Stendahl:
Most indicators use a fixed zone for buy and sell signals. Here's a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: enter the market only when an oscillator has moved far above or below traditional trading levels. However, these oscillator-driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability inputs P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values fall above and below the dynamic zones, which translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for the indicator are used in the construction of the zones; 80% of the values will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this does happen, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i = 1 to the last lookback period, build the distribution f(x) of the price during lookback period i. Then find the value Vi1 such that the probability of the price being less than or equal to Vi1 during lookback period i equals Pbuy. Find the value Vi2 such that the probability of the price being greater than or equal to Vi2 during lookback period i equals Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: build the distribution f(x) of the price during lookback period i. The distribution here is empirical, namely how many times a given value of x appeared during the lookback period. The problem is to find an x such that the probability of a price being greater than or equal to x equals a probability selected by the user. Probability is the area under the distribution curve, so the task is to find the value of x such that the area under the distribution curve to the right of x equals the probability selected by the user. That x is the dynamic zone.
Included:
Bar coloring
4 signal variations w/ alerts
Divergences w/ alerts
Loxx's Expanded Source Types
STD-Filterd, R-squared Adaptive T3 w/ Dynamic Zones [Loxx]
STD-Filterd, R-squared Adaptive T3 w/ Dynamic Zones is a standard-deviation-filtered, R-squared adaptive T3 moving average with dynamic zones.
What is the T3 moving average?
Better Moving Averages, by Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis . Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD , Momentum, Relative Strength Index ) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as a SMA ( simple moving average ) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA (n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA .
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e. use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE/2) which runs in between the two, has phase lag of n/4, but still inherits considerable noise from EPMA. IE/2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE/2.
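A rough Pine Script v5 rendering of the two benchmarks and their average follows; it is a sketch of the definitions above rather than Tillson's code, and the running-sum form of ILRS is simply one straightforward way to "integrate" the regression slope.
//@version=5
indicator("ILRS / EPMA / IE2 sketch", overlay = true)
n = input.int(20, "Length")
epma  = ta.linreg(close, n, 0)                            // endpoint of the fitted regression line
slope = ta.linreg(close, n, 0) - ta.linreg(close, n, 1)   // per-bar slope of that line
// integrate the slope, seeding the running sum with an SMA of the first window
var float ilrs = na
ilrs := nz(ilrs[1], ta.sma(close, n)) + slope
ie2 = (ilrs + epma) / 2                                   // the "IE/2" compromise
plot(ilrs, "ILRS", color = color.blue)
plot(epma, "EPMA", color = color.red)
plot(ie2,  "IE/2", color = color.green)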
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average or DEMA, popularized by Patrick Mulloy in TASC (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to the IE/2 indicator.
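As a quick illustration of the twicing identity above, the 2L-L(L) form is a two-liner in Pine Script v5 (a sketch for clarity, not part of this script):
//@version=5
indicator("Twicing / DEMA sketch", overlay = true)
n  = input.int(10, "Length")
e1 = ta.ema(close, n)
e2 = ta.ema(e1, n)
// L' = L + L(series - L), which is algebraically 2L - L(L)
dema = 2 * e1 - e2
plot(dema, "DEMA")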
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
Fixing Overshoot
An n-day EMA has smoothing constant alpha=2/(n+1) and a lag of (n-1)/2.
Thus EMA (3) has lag 1, and EMA (11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA (3) through itself 5 times than if I just take EMA (11) once.
This suggests that if EPMA and DEMA have 0 or low lag, why not run fast versions (e.g. DEMA(3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain of much greater than 1 at these frequencies when run through themselves multiple times. Figure 3 shows DEMA(7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA (n) = EMA (n) + EMA (time series - EMA (n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA . The derivative term determines how hot the moving average's response to linear trends will be. We need to simply turn down the volume to achieve our basic building block:
EMA (n) + EMA (time series - EMA (n))*.7;
This is algebraically the same as:
EMA (n)*1.7-EMA( EMA (n))*.7;
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized Dema") is:
GD (n,v) = EMA (n)*(1+v)-EMA( EMA (n))*v,
Where v ranges between 0 and 1. When v=0, GD is just an EMA , and when v=1, GD is DEMA . In between, GD is a cooler DEMA . By using a value for v less than 1 (I like .7), we cure the multiple DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average T3 that does not overshoot the data:
T3(n) = GD ( GD ( GD (n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA(n)) to correct themselves. In Technical Analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
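Putting the GD and T3 definitions above together, a minimal Pine Script v5 sketch might look like this (an illustration of Tillson's formulas, not this indicator's R-squared adaptive, STD-filtered version; GD is unrolled three times rather than written as a function to keep the EMA lengths simple):
//@version=5
indicator("Generalized DEMA / T3 sketch", overlay = true)
n = input.int(10, "Length")
v = input.float(0.7, "Volume factor", step = 0.1)
// GD(n, v) = EMA(n)*(1+v) - EMA(EMA(n))*v, applied three times: T3(n) = GD(GD(GD(n)))
e1  = ta.ema(close, n)
e2  = ta.ema(e1, n)
gd1 = e1 * (1 + v) - e2 * v
e3  = ta.ema(gd1, n)
e4  = ta.ema(e3, n)
gd2 = e3 * (1 + v) - e4 * v
e5  = ta.ema(gd2, n)
e6  = ta.ema(e5, n)
t3  = e5 * (1 + v) - e6 * v
plot(t3, "T3")
With v = 0 this collapses to a triple EMA cascade, and with v = 1 it becomes a triple DEMA; values in between trade a little lag for overshoot control, exactly as described above.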
What is R-squared Adaptive?
One tool available in forecasting the trendiness of the breakout is the coefficient of determination ( R-squared ), a statistical measurement.
The R-squared indicates linear strength between the security's price (the Y - axis) and time (the X - axis). The R-squared is the percentage of squared error that the linear regression can eliminate if it were used as the predictor instead of the mean value. If the R-squared were 0.99, then the linear regression would eliminate 99% of the error for prediction versus predicting closing prices using a simple moving average .
R-squared is used here to derive a T3 factor used to modify price before passing price through a six-pole non-linear Kalman filter.
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones" by Leo Zamansky, Ph.D., and David Stendahl:
Most indicators use a fixed zone for buy and sell signals. Here's a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: enter the market only when an oscillator has moved far above or below traditional trading levels. However, these oscillator-driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability inputs P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values fall above and below the dynamic zones, which translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for the indicator are used in the construction of the zones; 80% of the values will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this does happen, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i = 1 to the last lookback period, build the distribution f(x) of the price during lookback period i. Then find the value Vi1 such that the probability of the price being less than or equal to Vi1 during lookback period i equals Pbuy. Find the value Vi2 such that the probability of the price being greater than or equal to Vi2 during lookback period i equals Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: build the distribution f(x) of the price during lookback period i. The distribution here is empirical, namely how many times a given value of x appeared during the lookback period. The problem is to find an x such that the probability of a price being greater than or equal to x equals a probability selected by the user. Probability is the area under the distribution curve, so the task is to find the value of x such that the area under the distribution curve to the right of x equals the probability selected by the user. That x is the dynamic zone.
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
Variety RSI w/ Dynamic Zones [Loxx]
Variety RSI w/ Dynamic Zones is an indicator with 7 different RSI types with Dynamic Zones. This indicator has signal crossing options for signal, middle, and all Dynamic Zone levels.
What is RSI?
The relative strength index ( RSI ) is a momentum indicator used in technical analysis . RSI measures the speed and magnitude of a security's recent price changes to evaluate overvalued or undervalued conditions in the price of that security.
The RSI is displayed as an oscillator (a line graph) on a scale of zero to 100. The indicator was developed by J. Welles Wilder Jr. and introduced in his seminal 1978 book, New Concepts in Technical Trading Systems.
The RSI can do more than point to overbought and oversold securities. It can also indicate securities that may be primed for a trend reversal or corrective pullback in price. It can signal when to buy and sell. Traditionally, an RSI reading of 70 or above indicates an overbought situation. A reading of 30 or below indicates an oversold condition.
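For reference, the plain RSI with the traditional fixed 70/30 levels can be sketched in Pine Script v5 as follows (a generic illustration, not one of this script's 7 RSI variants):
//@version=5
indicator("RSI sketch")
len = input.int(14, "Length")
r = ta.rsi(close, len)   // Wilder's RSI of the chosen source
plot(r, "RSI")
hline(70, "Overbought")
hline(30, "Oversold")
The Dynamic Zones described below replace those fixed 70/30 lines with percentile-based levels drawn from the RSI's own recent history.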
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones" by Leo Zamansky, Ph.D., and David Stendahl:
Most indicators use a fixed zone for buy and sell signals. Here's a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: enter the market only when an oscillator has moved far above or below traditional trading levels. However, these oscillator-driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability inputs P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values fall above and below the dynamic zones, which translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for the indicator are used in the construction of the zones; 80% of the values will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this does happen, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i = 1 to the last lookback period, build the distribution f(x) of the price during lookback period i. Then find the value Vi1 such that the probability of the price being less than or equal to Vi1 during lookback period i equals Pbuy. Find the value Vi2 such that the probability of the price being greater than or equal to Vi2 during lookback period i equals Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: build the distribution f(x) of the price during lookback period i. The distribution here is empirical, namely how many times a given value of x appeared during the lookback period. The problem is to find an x such that the probability of a price being greater than or equal to x equals a probability selected by the user. Probability is the area under the distribution curve, so the task is to find the value of x such that the area under the distribution curve to the right of x equals the probability selected by the user. That x is the dynamic zone.
Included
RSI source pre-smoothing options
Bar coloring
4 types of signal crossing options
Alerts
Loxx's Expanded Source Types
Loxx's Variety RSI types
Natural Market Mirror (NMM) and NMAs w/ Dynamic Zones [Loxx]
Natural Market Mirror (NMM) and NMAs w/ Dynamic Zones is a very complex indicator derived from Sloman's Ocean Theory. This indicator contains 3 core outputs; whichever one you select to create the long/short signal will be highlighted and bound by Dynamic Zones. Pre-smoothing of the source input is available; you only need to increase the period length to greater than 1. The smoothing algorithm used here is Ehlers' Two-pole Super Smoother. This indicator should be used as you would use the popular QQE, the difference being that this indicator is multi-level momentum adaptive, while QQE is fixed and RSI-based. This indicator is multilayer adaptive.
The three core indicator calculations are as follows:
NMM = Natural Market Mirror, solid line
NMF = Natural Moving Average Fast, dashed line (white when off)
NMA = Natural Moving Average Regular, dashed line (yellow when off)
Whichever one you select as the signal output base, that line will increase in width and change color to match the trend of the input price. The Dynamic Zones will then readjust around that selected output and form a new bounding zone for signal output.
What is the Ocean Natural Market Mirror?
Created by Jim Sloman, the NMM is a momentum indicator that automatically adjusts to volatility without being programmed to do so. For more info, read his guide "Ocean Theory, an Introduction".
What is the Ocean Natural Moving Average?
Also created by Jim Sloman, the NMA is a moving average that automatically adjusts to volatility.
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones" by Leo Zamansky, Ph.D., and David Stendahl:
Most indicators use a fixed zone for buy and sell signals. Here's a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: enter the market only when an oscillator has moved far above or below traditional trading levels. However, these oscillator-driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability inputs P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values fall above and below the dynamic zones, which translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for the indicator are used in the construction of the zones; 80% of the values will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this does happen, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i = 1 to the last lookback period, build the distribution f(x) of the price during lookback period i. Then find the value Vi1 such that the probability of the price being less than or equal to Vi1 during lookback period i equals Pbuy. Find the value Vi2 such that the probability of the price being greater than or equal to Vi2 during lookback period i equals Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: build the distribution f(x) of the price during lookback period i. The distribution here is empirical, namely how many times a given value of x appeared during the lookback period. The problem is to find an x such that the probability of a price being greater than or equal to x equals a probability selected by the user. Probability is the area under the distribution curve, so the task is to find the value of x such that the area under the distribution curve to the right of x equals the probability selected by the user. That x is the dynamic zone.
Included
Bar coloring
3 types of signal output options
Alerts
Loxx's Expanded Source Types
Dynamic Zone of Bollinger Band Stops Line [Loxx]
Dynamic Zone of Bollinger Band Stops Line is a Bollinger Band indicator with Dynamic Zones. This indicator serves as both a trend indicator and a dynamic stop-loss indicator.
What are Bollinger Bands?
A Bollinger Band is a technical analysis tool defined by a set of trendlines plotted two standard deviations (positively and negatively) away from a simple moving average (SMA) of a security's price, but which can be adjusted to user preferences.
Bollinger Bands were developed and copyrighted by famous technical trader John Bollinger, designed to discover opportunities that give investors a higher probability of properly identifying when an asset is oversold or overbought.
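A minimal Pine Script v5 sketch of the standard bands (an illustration of the definition above, not this indicator's stops line):
//@version=5
indicator("Bollinger Bands sketch", overlay = true)
len  = input.int(20, "Length")
mult = input.float(2.0, "StdDev multiplier")
basis = ta.sma(close, len)            // the central simple moving average
dev   = mult * ta.stdev(close, len)   // band width in standard deviations
plot(basis, "Basis")
plot(basis + dev, "Upper")
plot(basis - dev, "Lower")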
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones" by Leo Zamansky, Ph.D., and David Stendahl:
Most indicators use a fixed zone for buy and sell signals. Here's a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: enter the market only when an oscillator has moved far above or below traditional trading levels. However, these oscillator-driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability inputs P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values fall above and below the dynamic zones, which translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for the indicator are used in the construction of the zones; 80% of the values will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this does happen, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i = 1 to the last lookback period, build the distribution f(x) of the price during lookback period i. Then find the value Vi1 such that the probability of the price being less than or equal to Vi1 during lookback period i equals Pbuy. Find the value Vi2 such that the probability of the price being greater than or equal to Vi2 during lookback period i equals Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: build the distribution f(x) of the price during lookback period i. The distribution here is empirical, namely how many times a given value of x appeared during the lookback period. The problem is to find an x such that the probability of a price being greater than or equal to x equals a probability selected by the user. Probability is the area under the distribution curve, so the task is to find the value of x such that the area under the distribution curve to the right of x equals the probability selected by the user. That x is the dynamic zone.
Included
Bar coloring
Signals
Alerts
3 types of signal smoothing
Dynamic Zones of On Chart Stochastic [Loxx]
Dynamic Zones of On Chart Stochastic is a Stochastic indicator that sits on top of the chart instead of below as an oscillator. Dynamic zone levels are included to find breakouts/breakdowns and reversals.
What is the Stochastic Oscillator?
A stochastic oscillator is a momentum indicator comparing a particular closing price of a security to a range of its prices over a certain period of time. The sensitivity of the oscillator to market movements is reducible by adjusting that time period or by taking a moving average of the result. It is used to generate overbought and oversold trading signals, utilizing a 0–100 bounded range of values.
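For reference, the classic %K/%D stochastic can be sketched in Pine Script v5 as follows (a generic illustration of the oscillator described above, not this script's on-chart rescaled version; the 14/3/3 settings are the usual defaults):
//@version=5
indicator("Stochastic sketch")
kLen    = input.int(14, "%K length")
kSmooth = input.int(3,  "%K smoothing")
dLen    = input.int(3,  "%D length")
k = ta.sma(ta.stoch(close, high, low, kLen), kSmooth)   // raw %K, smoothed
d = ta.sma(k, dLen)                                     // %D signal line
plot(k, "%K")
plot(d, "%D", color = color.orange)
hline(80, "Overbought")
hline(20, "Oversold")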
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones" by Leo Zamansky, Ph.D., and David Stendahl:
Most indicators use a fixed zone for buy and sell signals. Here's a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: enter the market only when an oscillator has moved far above or below traditional trading levels. However, these oscillator-driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability inputs P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values fall above and below the dynamic zones, which translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for the indicator are used in the construction of the zones; 80% of the values will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this does happen, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i = 1 to the last lookback period, build the distribution f(x) of the price during lookback period i. Then find the value Vi1 such that the probability of the price being less than or equal to Vi1 during lookback period i equals Pbuy. Find the value Vi2 such that the probability of the price being greater than or equal to Vi2 during lookback period i equals Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: build the distribution f(x) of the price during lookback period i. The distribution here is empirical, namely how many times a given value of x appeared during the lookback period. The problem is to find an x such that the probability of a price being greater than or equal to x equals a probability selected by the user. Probability is the area under the distribution curve, so the task is to find the value of x such that the area under the distribution curve to the right of x equals the probability selected by the user. That x is the dynamic zone.
Included
Bar coloring
Signals
Alerts
4 types of signal smoothing