IPDA Standard Deviations [DexterLab x TFO x toodegrees]
> Introduction and Acknowledgements
The IPDA Standard Deviations tool encompasses the Time and price relationship as studied by @TraderDext3r.
I am not the creator of this Theory, and I do not hold the answers to all the questions you may have; I suggest you study it through Dexter's tweets, videos, and material.
This tool was born from a collaboration between @TraderDext3r, @tradeforopp, and me, with the objective of bringing a comprehensive IPDA Standard Deviations tool to TradingView.
> Tool Description
This is purely a graphical aid that helps traders quickly determine Fractal IPDA Time Windows, and trace the potential Standard Deviations of the moves at their respective high and low extremes.
The disruptive value of this tool is that it allows traders to save Time by automatically adapting the Time Windows based on the current chart's Timeframe, as well as providing customizations to filter and focus on the appropriate Standard Deviations.
> IPDA Standard Deviations by TraderDext3r
The underlying idea is based on the Interbank Price Delivery Algorithm's lookback windows on the daily chart as taught by the Inner Circle Trader:
IPDA looks at the past three months of price action to determine how to deliver price in the future.
Additionally, the ICT concept of projecting specific manipulation moves prior to large displacement upwards/downwards is used to navigate and interpret the aforementioned displacement move. We pay attention to specific Standard Deviations based on the current environment and overall narrative.
Dexter, being one of the most prominent Inner Circle Trader students, harnessed the fractal nature of price to derive fractal IPDA Lookback Time Windows for lower Timeframes, and studied the behaviour of price at specific Deviations.
For Example:
The -1 to -2 area can initiate an algorithmic retracement before continuation.
The -2 to -2.5 area can initiate an algorithmic retracement before continuation, or a Smart Money Reversal.
The -4 area should be seen as the ultimate objective, or the level at which the displacement will slow down.
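For illustration, here is a minimal Python sketch of how such projection levels can be computed from a Time Window's anchor leg; the helper name and the exact anchoring convention are assumptions, not the published Pine source.

```python
# Minimal sketch (not the Pine source): projecting Standard Deviation levels
# from a Time Window's manipulation leg. Negative deviations extend in the
# direction of the displacement away from the anchor extremes.
def deviation_levels(leg_high: float, leg_low: float, devs=(-1, -2, -2.5, -4)):
    leg = leg_high - leg_low                       # size of the anchored leg
    down = {d: leg_low + d * leg for d in devs}    # projected below the low
    up = {d: leg_high - d * leg for d in devs}     # projected above the high
    return down, up

down, up = deviation_levels(1.2050, 1.2000)
print(down[-2])  # 1.1900: the -2 deviation of a 50-pip leg projected downward
```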
Given that these ideas stem from ICT's concepts themselves, they are to be used hand in hand with all other ICT Concepts (PD Array Matrix, PO3, Institutional Price Levels, ...).
> Fractal IPDA Time Windows
The IPDA Lookback Types identified by Dexter are as follows:
Monthly – 1D Chart: one window per Month, highlighting the past three Months.
Weekly – 4H to 8H Chart: one window per Week, highlighting the past three Weeks.
Daily – 15m to 1H Chart: one window per Day, highlighting the past three Days.
Intraday – 1m to 5m Chart: one window per 4 Hours, highlighting the past 12 Hours.
Inside these three respective Time Windows, the extreme High and Low will be identified, as well as the prior opposing short-term market structure point. These represent the anchors for the Standard Deviation Projections.
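As a quick illustration of the mapping above, a minimal Python sketch follows; the edge-inclusive intervals mirror the list, while the function itself and its labels are mine, not the script's.

```python
# Hedged sketch of the default Timeframe-to-Lookback mapping (the tool also
# lets the User redefine these intervals in the settings).
def lookback_type(tf_minutes: int) -> str:
    if tf_minutes >= 1440:            # 1D chart (and above, by assumption)
        return "Monthly"              # three 1-Month windows
    if 240 <= tf_minutes <= 480:      # 4H to 8H charts, edge-inclusive
        return "Weekly"               # three 1-Week windows
    if 15 <= tf_minutes <= 60:        # 15m to 1H charts, edge-inclusive
        return "Daily"                # three 1-Day windows
    if 1 <= tf_minutes <= 5:          # 1m to 5m charts, edge-inclusive
        return "Intraday"             # three 4-Hour windows
    return "None"                     # timeframe not covered by any Lookback

print(lookback_type(60))  # "Daily"
```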
> Tool Settings
The User is able to plot any type of Standard Deviation they want by entering them in the settings, each on its own line of the text box. They will always be plotted from the Time Windows' extremes.
As previously mentioned, the User is also able to define their own Timeframe intervals for the respective IPDA Lookback Types. The specific Timeframes on which the different Lookback Types are plotted are edge-inclusive. In case of an overlap, the higher Timeframe Lookback will be prioritized.
Finally, the User is able to filter and remove Standard Deviations in two ways:
"Remove Once Invalidated" will automatically delete a Deviation once its outer anchor extreme is traded through.
Manual Toggles allow the User to remove the Upward or Downward Deviation of each Time Window at their discretion.
Major shoutout to Dexter and TFO for their Time, it was a pleasure to collaborate and create this tool with them.
GLGT!
Search scripts for "algo"
Machine Learning Momentum Oscillator [ChartPrime]
The Machine Learning Momentum Oscillator brings together the K-Nearest Neighbors (KNN) algorithm and the predictive strength of the Tactical Sector Indicator (TSI) Momentum. This unique oscillator not only uses the insights from TSI Momentum but also taps into the power of machine learning, and is therefore designed to give traders a more comprehensive view of market momentum.
At its core, the Machine Learning Momentum Oscillator blends TSI Momentum with the capabilities of the KNN algorithm. Introducing KNN logic allows for better handling of noise in the data set. The TSI Momentum is known for gauging how strong trends are and which direction they're headed, and now, with the added layer of machine learning, we're able to offer a deeper perspective on market trends. This is a fairly classical approach when it comes to visuals and trading.
Green bars show the trader when the asset is in an uptrend. On the flip side, red bars mean things are heading down, signaling a bearish movement driven by selling pressure. These color cues make it easier to catch the sentiment and direction of the market at a glance.
Yellow boxes are also displayed by the oscillator. These boxes highlight potential turning points or peaks. When the market comes close to these points, they can provide a heads-up about the possibility of changes in momentum or even a trend reversal, helping a trader make informed choices quickly. Simply put, these can be viewed as possible reversal areas.
Settings:
Users can adjust the number of neighbours in the KNN algorithm and choose the periods they prefer for analysis. This way, the tool becomes a part of a trader's strategy, adapting to different market conditions as they see fit. Users can also adjust the smoothing used by the oscillator via the smoothing input.
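Since the script is closed-source, the exact KNN step is not public; the sketch below shows the generic idea in Python - smoothing the latest momentum reading with its k nearest historical neighbours - where the distance rule and defaults are assumptions.

```python
import numpy as np

# Generic KNN smoothing of a momentum series (an assumption-laden stand-in
# for the closed-source ChartPrime logic, shown only to illustrate the idea).
def knn_smooth(momentum: np.ndarray, k: int = 5) -> float:
    current, history = momentum[-1], momentum[:-1]
    # Distance: how close each past reading is to the current one.
    nearest = np.argsort(np.abs(history - current))[:k]
    # Output: mean of the k nearest historical readings (noise reduction).
    return float(history[nearest].mean())

mom = np.array([0.10, 0.40, 0.30, -0.20, 0.35, 0.35, 0.35, 0.32])
print(knn_smooth(mom, k=3))  # averages the three closest past readings
```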
SimilarityMeasures
Library "SimilarityMeasures"
Similarity measures are statistical methods used to quantify the distance between different data sets
or strings. There are various types of similarity measures, including those that compare:
- data points (SSD, Euclidean, Manhattan, Minkowski, Chebyshev, Correlation, Cosine, Camberra, MAE, MSE, Lorentzian, Intersection, Penrose Shape, Meehl),
- strings (Edit(Levenshtein), Lee, Hamming, Jaro),
- probability distributions (Mahalanobis, Fidelity, Bhattacharyya, Hellinger),
- sets (Kumar Hassebrook, Jaccard, Sorensen, Chi Square).
---
These measures are used in various fields such as data analysis, machine learning, and pattern recognition. They
help to compare and analyze similarities and differences between different data sets or strings, which
can be useful for making predictions, classifications, and decisions.
---
References:
en.wikipedia.org
cran.r-project.org
numerics.mathdotnet.com
github.com
github.com
github.com
Encyclopedia of Distances, doi.org
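For readers outside Pine, a few of the point-distance measures documented below can be sketched in Python as follows (the library's exact input validation and edge-case handling are not reproduced):

```python
import numpy as np

# Python equivalents of a handful of the point-distance measures in this
# library; written for clarity, not as a drop-in replacement.
def ssd(p, q):        return float(np.sum((p - q) ** 2))       # squared euclidean
def euclidean(p, q):  return float(np.sqrt(ssd(p, q)))         # straight-line
def manhattan(p, q):  return float(np.sum(np.abs(p - q)))      # absolute diffs
def chebyshev(p, q):  return float(np.max(np.abs(p - q)))      # max abs diff
def minkowski(p, q, p_value=1.0):   # generalizes manhattan (1) / euclidean (2)
    return float(np.sum(np.abs(p - q) ** p_value) ** (1.0 / p_value))
def cosine(p, q):                   # 1 - cosine similarity of the two vectors
    return float(1 - np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

p, q = np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0])
print(euclidean(p, q), minkowski(p, q, 2.0))  # identical by construction
```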
ssd(p, q)
Sum of squared difference for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of distance that calculates the squared euclidean distance.
euclidean(p, q)
Euclidean distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of distance that calculates the straight-line (or Euclidean) distance.
manhattan(p, q)
Manhattan distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of absolute differences between both points.
minkowski(p, q, p_value)
Minkowski distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
p_value (float) : `float` P value, default=1.0 (1: Manhattan, 2: Euclidean); does not support Chebyshev.
Returns: Measure of similarity in the normed vector space.
chebyshev(p, q)
Chebyshev distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of maximum absolute difference.
correlation(p, q)
Correlation distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of dissimilarity based on the correlation between the two distributions.
cosine(p, q)
Cosine distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Cosine distance between vectors `p` and `q`.
---
angiogenesis.dkfz.de
camberra(p, q)
Camberra distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Weighted measure of absolute differences between both points.
mae(p, q)
Mean absolute error is a normalized version of the sum of absolute difference (manhattan).
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Mean absolute error of vectors `p` and `q`.
mse(p, q)
Mean squared error is a normalized version of the sum of squared difference.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Mean squared error of vectors `p` and `q`.
lorentzian(p, q)
Lorentzian distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Lorentzian distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
intersection(p, q)
Intersection distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Intersection distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
penrose(p, q)
Penrose Shape distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Penrose shape distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
meehl(p, q)
Meehl distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Meehl distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
edit(x, y)
Edit (aka Levenshtein) distance for indexed strings.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
Returns: Number of deletions, insertions, or substitutions required to transform source string into target string.
---
generated description:
The Edit distance is a measure of similarity used to compare two strings. It is defined as the minimum number of
operations (insertions, deletions, or substitutions) required to transform one string into another. The operations
are performed on the characters of the strings, and the cost of each operation depends on the specific algorithm
used.
The Edit distance is widely used in various applications such as spell checking, text similarity, and machine
translation. It can also be used for other purposes like finding the closest match between two strings or
identifying the common prefixes or suffixes between them.
---
github.com
www.red-gate.com
planetcalc.com
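For reference, the classic dynamic-programming formulation can be sketched in Python as follows; the library version operates on indexed integer arrays rather than strings:

```python
# Standard two-row dynamic-programming Levenshtein distance.
def edit_distance(x: str, y: str) -> int:
    prev = list(range(len(y) + 1))         # distance from "" to each y prefix
    for i, cx in enumerate(x, start=1):
        curr = [i]                         # distance from x prefix to ""
        for j, cy in enumerate(y, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (cx != cy),  # substitution (free if equal)
            ))
        prev = curr
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # 3
```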
lee(x, y, dsize)
Distance between two indexed strings of equal length.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
dsize (int) : `int` Dictionary size.
Returns: Distance between two strings by accounting for dictionary size.
---
www.johndcook.com
hamming(x, y)
Distance between two indexed strings of equal length.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
Returns: Length of different components on both sequences.
---
en.wikipedia.org
jaro(x, y)
Distance between two indexed strings.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
Returns: Measure of two strings' similarity: the higher the value, the more similar the strings are.
The score is normalized such that `0` equates to no similarities and `1` is an exact match.
---
rosettacode.org
mahalanobis(p, q, VI)
Mahalanobis distance between two vectors with population inverse covariance matrix.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
VI (matrix) : `matrix` Inverse of the covariance matrix.
Returns: The mahalanobis distance between vectors `p` and `q`.
---
people.revoledu.com
stat.ethz.ch
docs.scipy.org
fidelity(p, q)
Fidelity distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Bhattacharyya Coefficient between vectors `p` and `q`.
---
en.wikipedia.org
bhattacharyya(p, q)
Bhattacharyya distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Bhattacharyya distance between vectors `p` and `q`.
---
en.wikipedia.org
hellinger(p, q)
Hellinger distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The hellinger distance between vectors `p` and `q`.
---
en.wikipedia.org
jamesmccaffrey.wordpress.com
kumar_hassebrook(p, q)
Kumar Hassebrook distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Kumar Hassebrook distance between vectors `p` and `q`.
---
github.com
jaccard(p, q)
Jaccard distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Jaccard distance between vectors `p` and `q`.
---
github.com
sorensen(p, q)
Sorensen distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Sorensen distance between vectors `p` and `q`.
---
people.revoledu.com
chi_square(p, q, eps)
Chi Square distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
eps (float)
Returns: The Chi Square distance between vectors `p` and `q`.
---
uw.pressbooks.pub
stats.stackexchange.com
www.itl.nist.gov
kulczynsky(p, q, eps)
Kulczynsky distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
eps (float)
Returns: The Kulczynsky distance between vectors `p` and `q`.
---
github.com
[blackcat] L1 TradingView Array and Series Conversions
Level 1
Background
It just so happens that I need some functions that can convert between the Series data type and the Array data type.
Function
Series is a unique data type of TradingView. By operating on Series data, the algorithm can be simplified, which is very convenient. However, in high-level languages, Array is a basic data type that provides great flexibility and can be used to develop advanced algorithms. This is why TradingView introduced the Array data type. This script simply demonstrates how to convert between these two data types.
s2a function: Convert a TV series into an array.
a2s function: Convert an array into a TV series.
Finally, credit to Electrified for his "Average" library:
Remarks
Feedbacks are appreciated.
Discrete Fourier Transform Overlay [wbburgin]
The discrete Fourier transform (DFT) overlay uses a discrete Fourier transform algorithm to identify trend direction. This is a simpler interpretation that only uses the magnitude of the first frequency component obtained from the DFT algorithm, but can be useful for visualization purposes. I haven't seen many Fourier scripts on TradingView that actually have the magnitude plotted on the chart (some have lines, for instance, but that makes it difficult to look into the past or to see previous lines).
About the Discrete Fourier Transform
The DFT is a mathematical transformation that decomposes a time-domain signal into its constituent frequency components. By applying the DFT to OHLC data, we can interpret the periodicities and trends present in the market. I've designed the overlay so that you can choose your source for the Fourier transform, as well as the length.
Settings and Configuration
The "Fourier Period" is the transform length of the DFT algorithm. This input indicates the number of data points considered for the DFT calculation. For example, if this input is set to 20, the DFT will be performed on the most recent 20 data points of the input series. The transform length affects the resolution and accuracy of the frequency analysis. A shorter transform length may provide a broader frequency range but with less detail, while a longer transform length can provide finer frequency resolution but may be computationally more intensive (I recommend using under 100 - anything above that might take too much time to load on the platform).
The "Fourier X Series" is the source you want the Fourier transform to be applied to. I have it set in default to the close.
"Kernel Smoothing" is the bar-start of the rational quadratic kernel used to smooth the frequency component. Think of it just like a normal moving average if you are unfamiliar with the concept, it functions similarly to the "length" value of a moving average.
Aggregate Medians [wbburgin]
This indicator recursively finds the average of all high/low medians under your chosen length. This can be very, very helpful for analyzing trends where a moving average or a normal median would produce a bunch of false signals.
Settings:
The "Length" setting is the maximum median that you want the algorithm to add into the sum. The "Start at Period" setting is the the minimum median that you want the algorithm to take into account. Starting at a higher period means that the faster, more sensitive medians of lower lengths are not included, and will smooth out your curve.
I haven't seen many recursive algorithms on TradingView so feel free to use this script as inspiration for any of your ideas. In theory, you can essentially replace the median function with any other function - a moving average, a supertrend, or anything else.
The start must be lower than the length, because this is a sum from the start to the length of all medians in between.
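A rough Python sketch of the aggregation (the hl2 midpoint and helper names are my assumptions, not the script's):

```python
import numpy as np

# Average of the rolling medians of every lookback from `start` to `length`.
def aggregate_median(high: np.ndarray, low: np.ndarray,
                     length: int = 50, start: int = 10) -> float:
    assert start < length, "start must be lower than length"
    hl2 = (high + low) / 2.0
    medians = [np.median(hl2[-n:]) for n in range(start, length + 1)]
    return float(np.mean(medians))   # one median per length, plainly averaged

gen = np.random.default_rng(1)
close = np.cumsum(gen.normal(0, 1, 300)) + 100
print(aggregate_median(close + 0.5, close - 0.5))
```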
MLExtensions
Library "MLExtensions"
normalizeDeriv(src, quadraticMeanLength)
Returns the smoothed hyperbolic tangent of the input series.
Parameters:
src : The input series (i.e., the first-order derivative for price).
quadraticMeanLength : The length of the quadratic mean (RMS).
Returns: nDeriv The normalized derivative of the input series.
normalize(src, min, max)
Rescales a source value with an unbounded range to a target range.
Parameters:
src : The input series
min : The minimum value of the unbounded range
max : The maximum value of the unbounded range
Returns: The normalized series
rescale(src, oldMin, oldMax, newMin, newMax)
Rescales a source value with a bounded range to another bounded range.
Parameters:
src : The input series
oldMin : The minimum value of the range to rescale from
oldMax : The maximum value of the range to rescale from
newMin : The minimum value of the range to rescale to
newMax : The maximum value of the range to rescale to
Returns: The rescaled series
color_green(prediction)
Assigns varying shades of the color green based on the KNN classification
Parameters:
prediction : Value (int|float) of the prediction
Returns: color
color_red(prediction)
Assigns varying shades of the color red based on the KNN classification
Parameters:
prediction : Value of the prediction
Returns: color
tanh(src)
Returns the hyperbolic tangent of the input series. The sigmoid-like hyperbolic tangent function is used to compress the input to a value between -1 and 1.
Parameters:
src : The input series (i.e., the normalized derivative).
Returns: tanh The hyperbolic tangent of the input series.
dualPoleFilter(src, lookback)
Returns the smoothed hyperbolic tangent of the input series.
Parameters:
src : The input series (i.e., the hyperbolic tangent).
lookback : The lookback window for the smoothing.
Returns: filter The smoothed hyperbolic tangent of the input series.
tanhTransform(src, smoothingFrequency, quadraticMeanLength)
Returns the tanh transform of the input series.
Parameters:
src : The input series (i.e., the result of the tanh calculation).
smoothingFrequency
quadraticMeanLength
Returns: signal The smoothed hyperbolic tangent transform of the input series.
n_rsi(src, n1, n2)
Returns the normalized RSI ideal for use in ML algorithms.
Parameters:
src : The input series (i.e., the result of the RSI calculation).
n1 : The length of the RSI.
n2 : The smoothing length of the RSI.
Returns: signal The normalized RSI.
n_cci(src, n1, n2)
Returns the normalized CCI ideal for use in ML algorithms.
Parameters:
src : The input series (i.e., the result of the CCI calculation).
n1 : The length of the CCI.
n2 : The smoothing length of the CCI.
Returns: signal The normalized CCI.
n_wt(src, n1, n2)
Returns the normalized WaveTrend Classic series ideal for use in ML algorithms.
Parameters:
src : The input series (i.e., the result of the WaveTrend Classic calculation).
n1
n2
Returns: signal The normalized WaveTrend Classic series.
n_adx(highSrc, lowSrc, closeSrc, n1)
Returns the normalized ADX ideal for use in ML algorithms.
Parameters:
highSrc : The input series for the high price.
lowSrc : The input series for the low price.
closeSrc : The input series for the close price.
n1 : The length of the ADX.
regime_filter(src, threshold, useRegimeFilter)
Parameters:
src
threshold
useRegimeFilter
filter_adx(src, length, adxThreshold, useAdxFilter)
filter_adx
Parameters:
src : The source series.
length : The length of the ADX.
adxThreshold : The ADX threshold.
useAdxFilter : Whether to use the ADX filter.
Returns: The ADX.
filter_volatility(minLength, maxLength, useVolatilityFilter)
filter_volatility
Parameters:
minLength : The minimum length of the ATR.
maxLength : The maximum length of the ATR.
useVolatilityFilter : Whether to use the volatility filter.
Returns: Boolean indicating whether or not to let the signal pass through the filter.
backtest(high, low, open, startLongTrade, endLongTrade, startShortTrade, endShortTrade, isStopLossHit, maxBarsBackIndex, thisBarIndex)
Performs a basic backtest using the specified parameters and conditions.
Parameters:
high : The input series for the high price.
low : The input series for the low price.
open : The input series for the open price.
startLongTrade : The series of conditions that indicate the start of a long trade.
endLongTrade : The series of conditions that indicate the end of a long trade.
startShortTrade : The series of conditions that indicate the start of a short trade.
endShortTrade : The series of conditions that indicate the end of a short trade.
isStopLossHit : The stop loss hit indicator.
maxBarsBackIndex : The maximum number of bars to go back in the backtest.
thisBarIndex : The current bar index.
Returns: A tuple containing backtest values
init_table()
init_table()
Returns: tbl The backtest results.
update_table(tbl, tradeStatsHeader, totalTrades, totalWins, totalLosses, winLossRatio, winrate, stopLosses)
update_table(tbl, tradeStats)
Parameters:
tbl : The backtest results table.
tradeStatsHeader : The trade stats header.
totalTrades : The total number of trades.
totalWins : The total number of wins.
totalLosses : The total number of losses.
winLossRatio : The win loss ratio.
winrate : The winrate.
stopLosses : The total number of stop losses.
Returns: Updated backtest results table.
WaveTrend 3D
█ OVERVIEW
WaveTrend 3D (WT3D) is a novel implementation of the famous WaveTrend (WT) indicator and has been completely redesigned from the ground up to address some of the inherent shortcomings associated with the traditional WT algorithm.
█ BACKGROUND
The WaveTrend (WT) indicator has become a widely popular tool for traders in recent years. WT was first ported to PineScript in 2014 by the user @LazyBear, and since then, it has ascended to become one of the Top 5 most popular scripts on TradingView.
The WT algorithm appears to have origins in a lesser-known proprietary algorithm called Trading Channel Index (TCI), created by AIQ Systems in 1986 as an integral part of their commercial software suite, TradingExpert Pro. The software’s reference manual states that “TCI identifies changes in price direction” and is “an adaptation of Donald R. Lambert’s Commodity Channel Index (CCI)”, which was introduced to the world six years earlier in 1980. Interestingly, a vestige of this early beginning can still be seen in the source code of LazyBear’s script, where the final EMA calculation is stored in an intermediate variable called “tci” in the code.
█ IMPLEMENTATION DETAILS
WaveTrend 3D is an alternative implementation of WaveTrend that directly addresses some of the known shortcomings of the indicator, including its unbounded extremes, susceptibility to whipsaw, and lack of insight into other timeframes.
In the canonical WT approach, an exponential moving average (EMA) for a given lookback window is used to assess the variability between price and two other EMAs relative to a second lookback window. Since the difference between the average price and its associated EMA is essentially unbounded, an arbitrary scaling factor of 0.015 is typically applied as a crude form of rescaling but still fails to capture 20-30% of values within the range of -100 to 100. Additionally, the trigger signal for the final EMA (i.e., TCI) crossover-based oscillator is a four-bar simple moving average (SMA), which further contributes to the net lag accumulated by the consecutive EMA calculations in the previous steps.
The core idea behind WT3D is to replace the EMA-based crossover system with modern Digital Signal Processing techniques. By assuming that price action adheres approximately to a Gaussian distribution, it is possible to sidestep the scaling nightmare associated with unbounded price differentials of the original WaveTrend method by focusing instead on the alteration of the underlying Probability Distribution Function (PDF) of the input series. Furthermore, using a signal processing filter such as a Butterworth Filter, we can eliminate the need for consecutive exponential moving averages along with the associated lag they bring.
Ideally, it is convenient to have the resulting probability distribution oscillate between the values of -1 and 1, with the zero line serving as a median. With this objective in mind, it is possible to borrow a common technique from the field of Machine Learning that uses a sigmoid-like activation function to transform our data set of interest. One such function is the hyperbolic tangent function (tanh), which is often used as an activation function in the hidden layers of neural networks due to its unique property of ensuring the values stay between -1 and 1. By taking the first-order derivative of our input series and normalizing it using the quadratic mean, the tanh function performs a high-quality redistribution of the input signal into the desired range of -1 to 1. Finally, using a dual-pole filter such as the Butterworth Filter popularized by John Ehlers, excessive market noise can be filtered out, leaving behind a crisp moving average with minimal lag.
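As a hedged illustration of that pipeline (not the WT3D source), the Python sketch below chains the three stages - first-order derivative, RMS normalization with tanh compression, and a two-pole smoothing filter in the common Ehlers style:

```python
import numpy as np

# Derivative -> RMS-normalized tanh in (-1, 1) -> two-pole low-pass filter.
# The filter coefficients follow Ehlers' well-known two-pole smoother and are
# an assumption here, not WT3D's exact Butterworth implementation.
def wt3d_like(src: np.ndarray, rms_len: int = 50, lookback: float = 27.0):
    deriv = np.diff(src, prepend=src[0])           # first-order derivative
    a1 = np.exp(-np.sqrt(2) * np.pi / lookback)    # two-pole filter setup
    b1 = 2 * a1 * np.cos(np.sqrt(2) * np.pi / lookback)
    c2, c3 = b1, -a1 * a1
    c1 = 1 - c2 - c3
    out, filt = np.zeros_like(src), np.zeros_like(src)
    for i in range(len(src)):
        window = deriv[max(0, i - rms_len + 1):i + 1]
        rms = np.sqrt(np.mean(window ** 2)) or 1.0  # quadratic mean, guarded
        out[i] = np.tanh(deriv[i] / rms)            # bounded in (-1, 1)
        filt[i] = (c1 * out[i] + c2 * filt[i - 1] + c3 * filt[i - 2]
                   if i >= 2 else out[i])
    return filt

prices = np.cumsum(np.random.default_rng(2).normal(0, 1, 500)) + 100
print(wt3d_like(prices)[-1])
```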
Furthermore, WT3D expands upon the original functionality of WT by providing:
First-class support for multi-timeframe (MTF) analysis
Kernel-based regression for trend reversal confirmation
Various options for signal smoothing and transformation
A unique mode for visualizing an input series as a symmetrical, three-dimensional waveform useful for pattern identification and cycle-related analysis
█ SETTINGS
This is a summary of the settings used in the script, listed in roughly the order in which they appear. By default, all colors are from Google's TensorFlow framework and are considered to be colorblind safe.
Source: The input series. Usually, it is the close or average price, but it can be any series.
Use Mirror: Whether to display a mirror image of the source series; for visualizing the series as a 3D waveform similar to a soundwave.
Use EMA: Whether to use an exponential moving average of the input series.
EMA Length: The length of the exponential moving average.
Use COG: Whether to use the center of gravity of the input series.
COG Length: The length of the center of gravity.
Speed to Emphasize: The target speed to emphasize.
Width: The width of the emphasized line.
Display Kernel Moving Average: Whether to display the kernel moving average of the signal. Like PCA, an unsupervised Machine Learning technique whereby neighboring vectors are projected onto the Principal Component.
Display Kernel Signal: Whether to display the kernel estimator for the emphasized line. Like the Kernel MA, it can show underlying shifts in bias within a more significant trend by the colors reflected on the ribbon itself.
Show Oscillator Lines: Whether to show the oscillator lines.
Offset: The offset of the emphasized oscillator plots.
Fast Length: The length scale factor for the fast oscillator.
Fast Smoothing: The smoothing scale factor for the fast oscillator.
Normal Length: The length scale factor for the normal oscillator.
Normal Smoothing: The smoothing scale factor for the normal frequency.
Slow Length: The length scale factor for the slow oscillator.
Slow Smoothing: The smoothing scale factor for the slow frequency.
Divergence Threshold: The number of bars for the divergence to be considered significant.
Trigger Wave Percent Size: How big the current wave should be relative to the previous wave.
Background Area Transparency Factor: Transparency factor for the background area.
Foreground Area Transparency Factor: Transparency factor for the foreground area.
Background Line Transparency Factor: Transparency factor for the background line.
Foreground Line Transparency Factor: Transparency factor for the foreground line.
Custom Transparency: Transparency of the custom colors.
Total Gradient Steps: The maximum amount of steps supported for a gradient calculation is 256.
Fast Bullish Color: The color of the fast bullish line.
Normal Bullish Color: The color of the normal bullish line.
Slow Bullish Color: The color of the slow bullish line.
Fast Bearish Color: The color of the fast bearish line.
Normal Bearish Color: The color of the normal bearish line.
Slow Bearish Color: The color of the slow bearish line.
Bullish Divergence Signals: The color of the bullish divergence signals.
Bearish Divergence Signals: The color of the bearish divergence signals.
█ ACKNOWLEDGEMENTS
@LazyBear - For authoring the original WaveTrend port on TradingView
@PineCoders - For the beautiful color gradient framework used in this indicator
@veryfid - For the inspiration of using mirrored signals for cycle analysis and using multiple lookback windows as proxies for other timeframes
Improved Chaikin Money Flow
Chaikin Money Flow is a well-known indicator for gauging buying/selling pressure. Marc Chaikin intended this to be used on the daily timeframe to capture the behavior of price action at or near the daily close, when larger-scale actors influence the market. The calculation is straightforward, as described within the built-in TradingView "CMF" indicator:
1. Period Money Flow Multiplier = ((Close - Low) - (High - Close)) / (High - Low)
2. Period Money Flow Volume = Period Money Flow Multiplier x Volume for the Period
3. Chaikin Money Flow = 21 Period Sum of Money Flow Volume / 21 Period Sum of Volume
There is, however, a problem with this algorithm: it does not account for daily gaps in price action. This leads to the indicator sometimes moving out-of-sync with price action and/or an under-emphasis of the magnitude change of the indicator relative to the change in price action. This is a significant problem for someone trying to read divergences against an underlying.
Note: I have never seen a published attempt to improve this indicator, which is why I decided that there had to be a way to do it.
In order to mitigate this issue, I have taken the basic script provided by TradingView and made a key modification. If the open of a candle is outside the range of the previous candle, then the close of the previous candle is used as the "high" for the current candle (in the case of a gap down) or the "low" for the current candle (in the case of a gap up). However, if the close of the current candle exceeds the previous close, highs and lows for the current candle are calculated as normal. I believe this accounts for gaps in price action without significantly altering the original intent of the indicator.
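My reading of that modification, sketched in Python (an interpretation of the prose above, not the published Pine source):

```python
import numpy as np

# Gap-adjusted CMF: when a candle opens outside the previous candle's range
# and its close does not take out the previous close, the previous close
# stands in for the current high (gap down) or low (gap up).
def improved_cmf(o, h, l, c, v, length: int = 21):
    h, l = h.copy(), l.copy()
    for i in range(1, len(c)):
        if o[i] < l[i - 1] and c[i] <= c[i - 1]:
            h[i] = c[i - 1]          # gap down: previous close used as high
        elif o[i] > h[i - 1] and c[i] >= c[i - 1]:
            l[i] = c[i - 1]          # gap up: previous close used as low
    mfm = ((c - l) - (h - c)) / (h - l)   # 1. Money Flow Multiplier
    mfv = mfm * v                         # 2. Money Flow Volume
    num = np.convolve(mfv, np.ones(length), "valid")   # rolling 21-bar sums
    den = np.convolve(v, np.ones(length), "valid")
    return 100 * num / den                # 3. CMF, rescaled to +/-100

gen = np.random.default_rng(7)
c = np.cumsum(gen.normal(0, 1, 100)) + 100
o = np.roll(c, 1); o[0] = c[0]
h, l = np.maximum(o, c) + 0.5, np.minimum(o, c) - 0.5
print(improved_cmf(o, h, l, c, gen.uniform(1e5, 2e5, 100))[-1])
```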
I have made four other minor tweaks:
1. Default style is color coded area above and below the Zero Line
2. Range scaled to +/-100 instead of +/-1 (displays better on graph)
3. Set timeframe to Daily (as that is the timeframe for which this indicator was intended by Chaikin)
4. Length defaults to 21 (which is what Chaikin uses)
Extreme Trend Reversal Points [HeWhoMustNotBeNamed]
Using a moving average crossover for identifying a change in trend is very common. However, this method can give lots of false signals during ranging markets. In this algorithm, we try to find the extreme trend by looking at fully aligned multi-level moving averages and only look at the moving average crossover when the market is in an extreme trend - either bullish or bearish. These points can mean a long-term downtrend or can also cause a small pullback before trend continuation. In this discussion, we will also check how to handle different scenarios.
🎲 Components
🎯 Recursive Multi Level Moving Averages
Multi-level moving average here refers to applying a moving average on top of a base moving average, over multiple levels. For example,
Level 1 SMA = SMA(source, length)
Level 2 SMA = SMA(Level 1 SMA, length)
Level 3 SMA = SMA(Level 2 SMA, length)
..
..
..
Level n SMA = SMA(Level (n-1) SMA, length)
In this script, the user can select how many levels of moving averages need to be calculated. This is achieved through a "recursive moving average" algorithm. The requirement for building such an algorithm was initially raised by @loxx.
While I was able to develop them in minimal code with the help of some of the existing libraries built on arrays and matrices, I also thought: why not extend this to find something interesting?
Note that since we are using variable levels - we will not be able to plot all the levels of moving average. (This is because plotting cannot be done in the loop). Hence, we are using lines to display the latest moving average levels in front of the last candle. Lines are color coded in such a way that least numbered levels are greener and higher levels are redder.
🎯 Finding the trend and range
Strength of fully aligned moving average is calculated based on position of each level with respect to other levels.
For example, in a complete uptrend, we can find
source > L(1)MA > L(2)MA > L(3)MA ...... > L(n-1)MA > L(n)MA
Similarly in a complete downtrend, we can find
source < L(1)MA < L(2)MA < L(3)MA ...... < L(n-1)MA < L(n)MA
Hence, the strength of the trend here is calculated based on the relative positions of the levels. Due to this, the value of strength can range from 0 to Level*(Level-1)/2
0 represents the complete downtrend
Level*(Level-1)/2 represents the complete uptrend.
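A minimal Python sketch of this alignment score; helper names are mine, and the script's exact pair-counting convention may differ slightly from the maximum shown here:

```python
import numpy as np

# Nested (recursive) SMAs: each level smooths the previous level's output.
def recursive_smas(src: np.ndarray, length: int, levels: int):
    mas, cur = [], src
    for _ in range(levels):
        cur = np.convolve(cur, np.ones(length) / length, "valid")
        mas.append(cur[-1])           # keep only the latest value per level
    return mas

# Alignment strength: one point per pair stacked in bullish order, so a full
# uptrend scores the maximum pair count and a full downtrend scores zero.
def trend_strength(src: np.ndarray, length: int = 20, levels: int = 5) -> int:
    vals = [src[-1]] + recursive_smas(src, length, levels)
    return sum(vals[i] > vals[j]
               for i in range(len(vals)) for j in range(i + 1, len(vals)))

prices = np.linspace(100, 120, 300)   # steady uptrend
print(trend_strength(prices))         # maximal score: all pairs aligned
```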
Range and Extreme Range are calculated based on the percentile from the median. The brackets are defined as per the input parameters - Range Percentile and Extreme Range Percentile - using Percentile History as the reference length.
Moving average plot is color coded to display the trend strength.
Green - Extreme Bullish
Lime - Bullish
Silver - range
Orange - Bearish
Red - Extreme Bearish
🎯 Finding the trend reversal
Possible trend reversals are when price crosses the moving average while in a complete trend with all the moving averages fully aligned. Triangle marks are placed at such locations, which can help observe probable trend reversal points. But there is a possibility of the trend overriding these levels. An example of this can be seen here:
In order to overcome this problem, we can employ a few techniques.
1. After the signal, wait for a trend reversal (moving average plot color turning silver) before placing your order.
2. Place stop orders at immediate pivot levels or support/resistance points instead of opening a market order. This way, we can also place an order in the direction of the trend. Whichever side the price breaks out will be the direction to trade.
3. Look for other confirmations such as extremely bullish and bearish candles before placing the orders.
🎯 An example of using stop orders
Let us take this scenario, where there is a signal of a possible reversal from a complete uptrend.
Create a box joining high and low pivots at a reasonable distance. You can also choose to add 1 ATR of additional distance from the pivots.
Use the top of the box as the stop-entry for a long and the bottom as the stop-entry for a short. The other ends of the box can become the stop-losses for each side.
After a few bars, we can see that a few more signals are plotted, but the price is still within the box. There are some candles which touched the top of the box, but the candlestick patterns did not represent bullishness on those instances. If you have placed stop orders, these orders would have already been filled. In that case, just wait for the position to hit either the stop or the target.
For the bullish side, targets can be placed at certain risk-reward levels. In this case, we just use 1:1 for the bullish side (trend side) and 1:1.5 for the bearish side (reversal side).
In this case, price hit the target without any issue:
Wait for next reversal signal to appear before placing another order :)
American Approximation Bjerksund & Stensland 2002 [Loxx]American Approximation Bjerksund & Stensland 2002 is an American Options pricing model. This indicator also includes numerical greeks. You can compare the output of the American Approximation to the Black-Scholes-Merton value on the output of the options panel.
The Bjerksund & Stensland (2002) Approximation
The Bjerksund and Stensland (2002) approximation divides the time to maturity into two parts, each with a separate flat exercise boundary. It is thus a straightforward generalization of the Bjerksund-Stensland 1993 algorithm. The method is fast and efficient and should be more accurate than the Barone-Adesi and Whaley (1987) and the Bjerksund and Stensland (1993b) approximations. The algorithm requires an accurate cumulative bivariate normal approximation. Several approximations that are described in the literature are not sufficiently accurate, but the Genz algorithm works.
C = alpha2*S^B - alpha2*phi(S, t1, B, I2, I2)
+ phi(S, t1, 1, I2, I2) - phi(S, t1, 1, I1, I2)
- X*phi(S, t1, 0, I2, I2) + X*phi(S, t1, 0, I1, I2)
+ alpha1*phi(S, t1, B, I1, I2) - alpha1*psi(S, T, B, I1, I2, I1, t1)
+ psi(S, T, 1, I1, I2, I1, t1) - psi(S, T, 1, X, I2, I1, t1)
- X*psi(S, T, 0, I1, I2, I1, t1) + X*psi(S, T, 0, X, I2, I1, t1)
where
alpha1 = (I1 - X)*I1^-B
alpha2 = (I2 - X)*I2^-B
B = (1/2 - b/v^2) + ((b/v^2 - 1/2)^2 + 2*(r/v^2))^0.5
The function phi(S, T, gamma, H, I) is given by
phi(S, T, gamma, H, I) = e^(lambda*T) * S^gamma * (N(-d) - (I/S)^k * N(-d2))
d = (log(S/H) + (b + (gamma - 1/2) * v^2) * T) / (v * T^0.5)
d2 = (log(I^2/(S*H)) + (b + (gamma - 1/2) * v^2) * T) / (v * T^0.5)
lambda = -r + gamma * b + 1/2 * gamma * (gamma - 1) * v^2
k = 2*b/v^2 + (2 * gamma - 1)
and the trigger price I is defined as
I1 = B0 + (B(+infi) - B0) * (1 - e^h1)
I2 = B0 + (B(+infi) - B0) * (1 - e^h2)
h1 = -(b*t1 + 2*v*t1^0.5) * (X^2 / ((B(+infi) - B0) * B0))
h2 = -(b*T + 2*v*T^0.5) * (X^2 / ((B(+infi) - B0) * B0))
t1 = 1/2 * (5^0.5 - 1) * T
B(+infi) = (B / (B - 1)) * X
B0 = max(X, (r / (r - b)) * X)
Moreover, the function psi(S, T, gamma, H, I2, I1, t1) is given by
psi(S, T, gamma, H, I2, I1, t1, r, b, v) = e^(lambda*T) * S^gamma * (M(-e1, -f1, rho) - (I2/S)^k * M(-e2, -f2, rho)
- (I1/S)^k * M(-e3, -f3, -rho) + (I1/I2)^k * M(-e4, -f4, -rho)), with rho = (t1/T)^0.5
where (see screenshot for e and f values)
b=r options on non-dividend paying stock
b=r-q options on stock or index paying a dividend yield of q
b=0 options on futures
b=r-rf currency options (where rf is the rate in the second currency)
Inputs
S = Stock price.
K = Strike price of option.
T = Time to expiration in years.
r = Risk-free rate
c = Cost of Carry
V = Variance of the underlying asset price
cnd1(x) = Cumulative Normal Distribution
cbnd3(x) = Cumulative Bivariate Normal Distribution
nd(x) = Standard Normal Density Function
convertingToCCRate(r, cmp ) = Rate compounder
Numerical Greeks or Greeks by Finite Difference
Analytical Greeks are the standard approach to estimating Delta, Gamma, etc. - that is what we typically use when we can derive them from closed-form solutions. Normally, these are well-defined and available in textbooks. Previously, we relied on closed-form solutions for the call or put formulae differentiated with respect to the Black-Scholes parameters. When Greeks formulae are difficult to develop or tease out, we can alternatively employ numerical Greeks - sometimes referred to as finite difference approximations. A key advantage of numerical Greeks relates to their estimation independent of deriving mathematical Greeks. This could be important when we examine American options, where there may not technically exist an exact closed-form solution that is straightforward to work with. (via VinegarHill FinanceLabs)
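A minimal sketch of the finite-difference idea, using a plain generalized Black-Scholes call as a stand-in pricer (the indicator differentiates the Bjerksund-Stensland 2002 value instead):

```python
import math

# Generalized Black-Scholes-Merton call with cost of carry b (stand-in pricer).
def bsm_call(S, X, T, r, b, v):
    d1 = (math.log(S / X) + (b + v * v / 2) * T) / (v * math.sqrt(T))
    d2 = d1 - v * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return S * math.exp((b - r) * T) * N(d1) - X * math.exp(-r * T) * N(d2)

# Central finite differences: bump the spot, reprice, difference the results.
def numerical_delta_gamma(price_fn, S, h=0.01, **kw):
    up, mid, dn = price_fn(S + h, **kw), price_fn(S, **kw), price_fn(S - h, **kw)
    delta = (up - dn) / (2 * h)             # first difference
    gamma = (up - 2 * mid + dn) / (h * h)   # second difference
    return delta, gamma

d, g = numerical_delta_gamma(bsm_call, 100, X=100, T=0.5, r=0.05, b=0.05, v=0.2)
print(round(d, 4), round(g, 4))
```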
Things to know
Only works on the daily timeframe and for the current source price.
You can adjust the text size to fit the screen
Nadaraya-Watson Combine
This is a combination of the Lux Algo Nadaraya-Watson Estimator and Envelope. Please note the repainting issue.
In addition, I've added a plot of the actual values of the current barstate of the Nadaraya-Watson windows as they are computed (lines 92-95). It only plots values for the current data at each time update. It is interesting to compare the trajectory of the end points of the Estimator and Envelope to the smoothing function at each time update. Due to the kernel smoothing at each update, the history is lost (repaint).
I've added a feature to allow adjustment to the kernel smoothing algorithm as suggested by thomsonraja (line 59).
The settings and usage are repeated from Lux Algo below.
Settings
Window Size: Determines the number of recent price observations to be used to fit the Nadaraya-Watson Estimator.
Bandwidth: Controls the degree of smoothness of the envelopes, with higher values returning smoother results.
Mult: Controls the envelope width.
Src: Input source of the indicator.
Kernel power: See line 59, adjusts the exponential power (powh) as suggested by thomsonraja
Kernel denominator: See line 59, adjusts the denominator (den) as suggested by thomsonraja
Usage
This tool outlines extremes made by the prices within the selected window size.
This is achieved by estimating the underlying trend in the price using kernel smoothing,
calculating the mean absolute deviations from it, and adding/subtracting it
from the estimated underlying trend.
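Conceptually, the estimator and its envelope can be sketched in Python as below; the Gaussian kernel and the mult-times-MAE bands are stand-in assumptions for the script's exact kernel (see the powh/den inputs above):

```python
import numpy as np

# Nadaraya-Watson: a kernel-weighted average at every index, with envelope
# bands offset by a multiple of the mean absolute error. Repaints by design,
# since every point's estimate uses the whole window.
def nadaraya_watson(src: np.ndarray, bandwidth: float = 8.0, mult: float = 3.0):
    idx = np.arange(len(src))
    est = np.empty(len(src))
    for i in range(len(src)):
        w = np.exp(-((idx - i) ** 2) / (2 * bandwidth ** 2))  # Gaussian kernel
        est[i] = np.sum(w * src) / np.sum(w)                  # weighted mean
    mae = mult * np.mean(np.abs(src - est))
    return est, est + mae, est - mae      # estimator, upper band, lower band

prices = np.cumsum(np.random.default_rng(3).normal(0, 1, 250)) + 100
mid, upper, lower = nadaraya_watson(prices)
print(mid[-1], upper[-1], lower[-1])
```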
I repeat Lux Algo's caution: 'we do not recommend this tool to be used alone
or solely for real time applications.'
End-Pointed SSA of Normalized Price Corridor [Loxx]
End-Pointed SSA of Normalized Price Corridor applies an end-pointed SSA to normalized input price to output a smoothed normalized oscillator of price. Corridors are added in an attempt to decipher the larger trend direction of price. These corridor trend lines are based on the highs and lows of price. Due to the SSA algorithm, this indicator takes some time to load on the chart, so be patient. You can adjust the lag parameter downward to speed up the indicator load time, but this will also degrade the signal. There are many different ways to use this indicator. It is also Renko chart friendly.
An example of emerging trends (these do not repaint)
What is Singular Spectrum Analysis (SSA)?
Singular spectrum analysis (SSA) is a technique of time series analysis and forecasting. It combines elements of classical time series analysis, multivariate statistics, multivariate geometry, dynamical systems and signal processing. SSA aims at decomposing the original series into a sum of a small number of interpretable components such as a slowly varying trend, oscillatory components and a ‘structureless’ noise. It is based on the singular value decomposition (SVD) of a specific matrix constructed upon the time series. Neither a parametric model nor stationarity-type conditions have to be assumed for the time series. This makes SSA a model-free method and hence enables SSA to have a very wide range of applicability.
For our purposes here, we are only concerned with the "Caterpillar" SSA. This methodology was developed in the former Soviet Union independently (the ‘iron curtain effect’) of the mainstream SSA. The main difference between the main-stream SSA and the "Caterpillar" SSA is not in the algorithmic details but rather in the assumptions and in the emphasis in the study of SSA properties. To apply the mainstream SSA, one often needs to assume some kind of stationarity of the time series and think in terms of the "signal plus noise" model (where the noise is often assumed to be ‘red’). In the "Caterpillar" SSA, the main methodological stress is on separability (of one component of the series from another one) and neither the assumption of stationarity nor the model in the form "signal plus noise" are required.
"Caterpillar" SSA
The basic "Caterpillar" SSA algorithm for analyzing one-dimensional time series consists of:
Transformation of the one-dimensional time series to the trajectory matrix by means of a delay procedure (this gives the name to the whole technique);
Singular Value Decomposition of the trajectory matrix;
Reconstruction of the original time series based on a number of selected eigenvectors.
This decomposition initializes forecasting procedures for both the original time series and its components. The method can be naturally extended to multidimensional time series and to image processing.
The method is a powerful and useful tool of time series analysis in meteorology, hydrology, geophysics, climatology and, according to our experience, in economics, biology, physics, medicine and other sciences; that is, where short and long, one-dimensional and multidimensional, stationary and non-stationary, almost deterministic and noisy time series are to be analyzed.
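A compact Python sketch of those three steps (the window length and component count are illustrative choices, not the indicator's defaults):

```python
import numpy as np

# "Caterpillar" SSA in three steps: delay embedding, SVD, reconstruction.
def ssa_reconstruct(series: np.ndarray, window: int = 30, rank: int = 2):
    n, k = len(series), len(series) - window + 1
    # 1. Trajectory (Hankel) matrix via the delay procedure.
    X = np.column_stack([series[i:i + window] for i in range(k)])
    # 2. Singular Value Decomposition of the trajectory matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # 3. Reconstruction from the `rank` leading eigentriples...
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # ...followed by diagonal averaging back to a one-dimensional series.
    out, counts = np.zeros(n), np.zeros(n)
    for col in range(k):
        out[col:col + window] += Xr[:, col]
        counts[col:col + window] += 1
    return out / counts

prices = np.cumsum(np.random.default_rng(4).normal(0, 1, 200)) + 100
print(ssa_reconstruct(prices)[-1])   # the end-pointed (current bar) value
```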
Included
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
End-pointed SSA of Williams %R [Loxx]
End-pointed SSA of Williams %R is an indicator that runs the Williams %R calculation through a Singular Spectrum Analysis (SSA) algorithm to derive a smoother final output. The reduction in noise from the traditional Williams %R is significant.
What is Williams %R?
Williams %R, also known as the Williams Percent Range, is a type of momentum indicator that moves between 0 and -100 and measures overbought and oversold levels. The Williams %R may be used to find entry and exit points in the market. The indicator is very similar to the Stochastic oscillator and is used in the same way. It was developed by Larry Williams and it compares a stock’s closing price to the high-low range over a specific period, typically 14 days or periods.
What is Singular Spectrum Analysis (SSA)?
Singular spectrum analysis (SSA) is a technique of time series analysis and forecasting. It combines elements of classical time series analysis, multivariate statistics, multivariate geometry, dynamical systems and signal processing. SSA aims at decomposing the original series into a sum of a small number of interpretable components such as a slowly varying trend, oscillatory components and a ‘structureless’ noise. It is based on the singular value decomposition (SVD) of a specific matrix constructed upon the time series. Neither a parametric model nor stationarity-type conditions have to be assumed for the time series. This makes SSA a model-free method and hence enables SSA to have a very wide range of applicability.
For our purposes here, we are only concerned with the "Caterpillar" SSA. This methodology was developed in the former Soviet Union independently (the ‘iron curtain effect’) of the mainstream SSA. The main difference between the main-stream SSA and the "Caterpillar" SSA is not in the algorithmic details but rather in the assumptions and in the emphasis in the study of SSA properties. To apply the mainstream SSA, one often needs to assume some kind of stationarity of the time series and think in terms of the "signal plus noise" model (where the noise is often assumed to be ‘red’). In the "Caterpillar" SSA, the main methodological stress is on separability (of one component of the series from another one) and neither the assumption of stationarity nor the model in the form "signal plus noise" are required.
"Caterpillar" SSA
The basic "Caterpillar" SSA algorithm for analyzing one-dimensional time series consists of:
Transformation of the one-dimensional time series to the trajectory matrix by means of a delay procedure (this gives the name to the whole technique);
Singular Value Decomposition of the trajectory matrix;
Reconstruction of the original time series based on a number of selected eigenvectors.
This decomposition initializes forecasting procedures for both the original time series and its components. The method can be naturally extended to multidimensional time series and to image processing.
The method is a powerful and useful tool of time series analysis in meteorology, hydrology, geophysics, climatology and, according to our experience, in economics, biology, physics, medicine and other sciences; that is, where short and long, one-dimensional and multidimensional, stationary and non-stationary, almost deterministic and noisy time series are to be analyzed.
Included:
Bar coloring
Alerts
Signals
Loxx's Expanded Source Types
Related Williams %R Indicators
Williams %R on Chart w/ Dynamic Zones
Williams %R w/ Bollinger Bands
Intermediate Williams %R w/ Discontinued Signal Lines
Related SSA Indicators
End-pointed SSA of FDASMA
End-pointed SSA of Normalized Price Oscillator
SSA of Price [Loxx]
SSA of Price is an indicator that runs an SSA calculation on price to derive the final output. This indicator also serves to introduce the concept of SSA to the Pine Coder community. The data returned from this algorithm is an array of modeled values of the past X bars. Unlike the end-pointed SSA posted previously, this version pulls the modeled data from the output array and draws a line backward from the current bar. This indicator recalculates, so past observations aren't very useful, but the current observation is, since the current bar is index 0 of the output array, which means it's the end-pointed value.
What is Singular Spectrum Analysis (SSA)?
Singular spectrum analysis (SSA) is a technique of time series analysis and forecasting. It combines elements of classical time series analysis, multivariate statistics, multivariate geometry, dynamical systems and signal processing. SSA aims at decomposing the original series into a sum of a small number of interpretable components such as a slowly varying trend, oscillatory components and a ‘structureless’ noise. It is based on the singular value decomposition (SVD) of a specific matrix constructed upon the time series. Neither a parametric model nor stationarity-type conditions have to be assumed for the time series. This makes SSA a model-free method and hence enables SSA to have a very wide range of applicability.
For our purposes here, we are only concerned with the "Caterpillar" SSA. This methodology was developed in the former Soviet Union independently (the ‘iron curtain effect’) of the mainstream SSA. The main difference between the main-stream SSA and the "Caterpillar" SSA is not in the algorithmic details but rather in the assumptions and in the emphasis in the study of SSA properties. To apply the mainstream SSA, one often needs to assume some kind of stationarity of the time series and think in terms of the "signal plus noise" model (where the noise is often assumed to be ‘red’). In the "Caterpillar" SSA, the main methodological stress is on separability (of one component of the series from another one) and neither the assumption of stationarity nor the model in the form "signal plus noise" are required.
"Caterpillar" SSA
The basic "Caterpillar" SSA algorithm for analyzing one-dimensional time series consists of:
Transformation of the one-dimensional time series to the trajectory matrix by means of a delay procedure (this gives the name to the whole technique);
Singular Value Decomposition of the trajectory matrix;
Reconstruction of the original time series based on a number of selected eigenvectors.
This decomposition initializes forecasting procedures for both the original time series and its components. The method can be naturally extended to multidimensional time series and to image processing.
The method is a powerful and useful tool of time series analysis in meteorology, hydrology, geophysics, climatology and, according to our experience, in economics, biology, physics, medicine and other sciences; that is, where short and long, one-dimensional and multidimensional, stationary and non-stationary, almost deterministic and noisy time series are to be analyzed.
Included:
Bar coloring
Alerts
Signals
Loxx's Expanded Source Types
End-pointed SSA of Normalized Price Oscillator [Loxx]
End-pointed SSA of Normalized Price Oscillator is an indicator that converts source price into a normalized oscillator and runs an SSA calculation to derive a smoother final output. This indicator also serves to introduce the concept of SSA to the Pine Coder community. The data returned from this algorithm is an array of modeled values of the past X bars. We could use this data, but it's not useful, so instead we use the end-pointed value, which is the first value of the array at index 0.
What is Singular Spectrum Analysis (SSA)?
Singular spectrum analysis (SSA) is a technique of time series analysis and forecasting. It combines elements of classical time series analysis, multivariate statistics, multivariate geometry, dynamical systems and signal processing. SSA aims at decomposing the original series into a sum of a small number of interpretable components such as a slowly varying trend, oscillatory components and a ‘structureless’ noise. It is based on the singular value decomposition (SVD) of a specific matrix constructed upon the time series. Neither a parametric model nor stationarity-type conditions have to be assumed for the time series. This makes SSA a model-free method and hence enables SSA to have a very wide range of applicability.
For our purposes here, we are only concerned with the "Caterpillar" SSA. This methodology was developed in the former Soviet Union independently (the ‘iron curtain effect’) of the mainstream SSA. The main difference between the main-stream SSA and the "Caterpillar" SSA is not in the algorithmic details but rather in the assumptions and in the emphasis in the study of SSA properties. To apply the mainstream SSA, one often needs to assume some kind of stationarity of the time series and think in terms of the "signal plus noise" model (where the noise is often assumed to be ‘red’). In the "Caterpillar" SSA, the main methodological stress is on separability (of one component of the series from another one) and neither the assumption of stationarity nor the model in the form "signal plus noise" are required.
"Caterpillar" SSA
The basic "Caterpillar" SSA algorithm for analyzing one-dimensional time series consists of:
Transformation of the one-dimensional time series to the trajectory matrix by means of a delay procedure (this gives the name to the whole technique);
Singular Value Decomposition of the trajectory matrix;
Reconstruction of the original time series based on a number of selected eigenvectors.
This decomposition initializes forecasting procedures for both the original time series and its components. The method can be naturally extended to multidimensional time series and to image processing.
The method is a powerful and useful tool of time series analysis in meteorology, hydrology, geophysics, climatology and, according to our experience, in economics, biology, physics, medicine and other sciences; that is, where short and long, one-dimensional and multidimensional, stationary and non-stationary, almost deterministic and noisy time series are to be analyzed.
Included:
Bar coloring
Alerts
Signals
Loxx's Expanded Source Types
Itakura-Saito Autoregressive Extrapolation of Price [Loxx]
Itakura-Saito Autoregressive Extrapolation of Price is an indicator that uses autoregressive analysis to predict future prices. This is a linear technique that was originally derived for speech analysis algorithms.
What is Itakura-Saito Autoregressive Analysis?
The technique of linear prediction has been available for speech analysis since the late 1960s (Itakura & Saito, 1973a, 1970; Atal & Hanauer, 1971), although the basic principles were established long before this by Wiener (1947). Linear predictive coding, which is also known as autoregressive analysis, is a time-series algorithm that has applications in many fields other than speech analysis (see, e.g., Chatfield, 1989).
Itakura and Saito developed a formulation for linear prediction analysis using a lattice form for the inverse filter. The Itakura–Saito distance (or Itakura–Saito divergence) is a measure of the difference between an original spectrum and an approximation of that spectrum. Although it is not a perceptual measure, it is intended to reflect perceptual (dis)similarity. It was proposed by Fumitada Itakura and Shuzo Saito in the 1960s while they were with NTT. The Itakura–Saito distance is a Bregman divergence, but it is not a true metric, since it is not symmetric and does not fulfil the triangle inequality. The distance is defined as follows.
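In its standard form, for a power spectrum P(ω) and its approximation P̂(ω):

D_{IS}(P, \hat{P}) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left[ \frac{P(\omega)}{\hat{P}(\omega)} - \log\frac{P(\omega)}{\hat{P}(\omega)} - 1 \right] d\omega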
read more: Selected Methods for Improving Synthesis Speech Quality Using Linear Predictive Coding: System Description, Coefficient Smoothing and Streak
Data inputs
Source Settings: Loxx's Expanded Source Types. You typically use "open", since the open price is already final on the current, still-forming bar.
LastBar - bar where to start the prediction
PastBars - how many bars back to model
LPOrder - order of linear prediction model; 0 to 1
FutBars - how many bars you want to forward predict
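As a rough illustration of how these inputs might be declared in Pine v5 (the names and defaults are hypothetical, not the published script's actual code):

//@version=5
indicator("IS-AR extrapolation inputs (sketch)", overlay=true)
src      = input.source(open, "Source")
lastBar  = input.int(0, "LastBar", minval=0)        // bars back where the prediction starts
pastBars = input.int(200, "PastBars", minval=10)    // history used to fit the model
lpOrder  = input.float(0.5, "LPOrder", minval=0, maxval=1) // model order, 0 to 1
futBars  = input.int(10, "FutBars", minval=1)       // bars to project forward
plot(src, "Source")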
Things to know
Normally, a simple moving average is calculated on source data. I've expanded this to 38 different averaging methods using Loxx's Moving Averages.
This indicator repaints
Related Indicators (linear extrapolation of price)
Levinson-Durbin Autocorrelation Extrapolation of Price
Weighted Burg AR Spectral Estimate Extrapolation of Price
Helme-Nikias Weighted Burg AR-SE Extra. of Price
APA Adaptive Fisher Transform [Loxx]
APA Adaptive Fisher Transform is an adaptive cycle Fisher Transform using Ehlers Autocorrelation Periodogram Algorithm to calculate the dominant cycle period.
What is an adaptive cycle, and what is Ehlers Autocorrelation Periodogram Algorithm?
From Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts, 2013, page 135:
"Adaptive filters can have several different meanings. For example, Perry Kaufman’s adaptive moving average ( KAMA ) and Tushar Chande’s variable index dynamic average ( VIDYA ) adapt to changes in volatility . By definition, these filters are reactive to price changes, and therefore they close the barn door after the horse is gone.The adaptive filters discussed in this chapter are the familiar Stochastic , relative strength index ( RSI ), commodity channel index ( CCI ), and band-pass filter.The key parameter in each case is the look-back period used to calculate the indicator. This look-back period is commonly a fixed value. However, since the measured cycle period is changing, it makes sense to adapt these indicators to the measured cycle period. When tradable market cycles are observed, they tend to persist for a short while.Therefore, by tuning the indicators to the measure cycle period they are optimized for current conditions and can even have predictive characteristics.
The dominant cycle period is measured using the Autocorrelation Periodogram Algorithm. That dominant cycle dynamically sets the look-back period for the indicators. I employ my own streamlined computation for the indicators that provide smoother and easier to interpret outputs than traditional methods. Further, the indicator codes have been modified to remove the effects of spectral dilation.This basically creates a whole new set of indicators for your trading arsenal."
What is Fisher Transform?
The Fisher Transform is a technical indicator created by John F. Ehlers that converts prices into a Gaussian normal distribution.
The indicator highlights when prices have moved to an extreme, based on recent prices. This may help in spotting turning points in the price of an asset. It also helps show the trend and isolate the price waves within a trend.
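For context, here is a fixed-length Pine v5 sketch of the Fisher Transform itself; the published indicator replaces the fixed len with the dominant cycle period measured by the autocorrelation periodogram:

//@version=5
indicator("Fisher Transform (fixed-length sketch)")
int len = input.int(10, "Length")
hi = ta.highest(hl2, len)
lo = ta.lowest(hl2, len)
float val = 0.0
// Normalize price into the -0.5..0.5 range, then smooth and clamp it.
val := 0.66 * ((hl2 - lo) / math.max(hi - lo, syminfo.mintick) - 0.5) + 0.67 * nz(val[1])
val := math.min(math.max(val, -0.999), 0.999)
float fish = 0.0
// The transform itself: inverse hyperbolic tangent, lightly smoothed.
fish := 0.5 * math.log((1 + val) / (1 - val)) + 0.5 * nz(fish[1])
plot(fish, "Fisher")
plot(fish[1], "Trigger")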
Included:
Zero-line and signal cross options for bar coloring
Customizable overbought/oversold thresholds
Alerts
Signals
Jurik Filtered, Composite Fractal Behavior (CFB) Channels [Loxx]
Double Jurik-Filtered Composite Fractal Behavior (CFB) Channels is a channel indicator that acts both as a baseline, similar to Donchian, and as support and resistance levels. This indicator is price-time adaptive, meaning it flexes to price volatility waves. The indicator's adaptive nature is calculated using the Composite Fractal Behavior (CFB) algorithm. The result of this adaptive calculation is then smoothed with Jurik filtering and normalized to a fixed range of values, which helps better identify trends.
What is Composite Fractal Behavior (CFB)?
All around you, mechanisms adjust themselves to their environment: from simple thermostats that react to air temperature to computer chips in modern cars that respond to changes in engine temperature, r.p.m., torque, and throttle position. It was only a matter of time before fast desktop computers applied the mathematics of self-adjustment to systems that trade the financial markets.
Unlike basic systems with fixed formulas, an adaptive system adjusts its own equations. For example, start with a basic channel breakout system that uses the highest closing price of the last N bars as a threshold for detecting breakouts on the up side. An adaptive and improved version of this system would adjust N according to market conditions, such as momentum, price volatility or acceleration.
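For reference, a Pine v5 sketch of the fixed-N breakout threshold described above; an adaptive version would replace the N input with a value driven by a market-condition measure such as CFB (not reproduced here):

//@version=5
indicator("Channel breakout threshold (sketch)", overlay=true)
int N = input.int(20, "Lookback N")
upper = ta.highest(close, N)[1]  // highest close of the prior N bars
plot(upper, "Breakout threshold")
plotshape(close > upper, "Breakout", style=shape.triangleup, location=location.belowbar)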
Since many systems are based directly or indirectly on cycles, another useful measure of market condition is the periodic length of a price chart's dominant cycle (DC), the cycle with the greatest influence on price action.
The utility of this new DC measure was noted by author Murray Ruggiero in the January '96 issue of Futures Magazine, in which Mr. Ruggiero used it to adaptively adjust the value of N in a channel breakout system. He then simulated trading 15 years of D-Mark futures in order to compare its performance to a similar system that had a fixed optimal value of N. The adaptive version produced 20% more profit!
This DC index utilized the popular MESA algorithm (a formulation by John Ehlers adapted from Burg's maximum entropy algorithm, MEM). Unfortunately, the DC approach is problematic when the market has no real dominant cycle momentum, because the mathematics will produce a value whether or not one actually exists! Therefore, we developed a proprietary indicator that does not presuppose the presence of market cycles. It's called CFB (Composite Fractal Behavior) and it works well whether or not the market is cyclic.
CFB examines price action for a particular fractal pattern, categorizes the patterns it finds by size, and then outputs a composite fractal size index. This index is smooth, timely and accurate.
Essentially, CFB reveals the length of the market's trending action time frame. Long trending activity produces a large CFB index and short choppy action produces a small index value. Investors have found many applications for CFB which involve scaling other existing technical indicators adaptively, on a bar-to-bar basis.
What is Jurik Volty used in the Jurik Filter?
One of the lesser known qualities of Jurik smoothing is that the Jurik smoothing process is adaptive. "Jurik Volty" (a sort of market volatility) is what makes Jurik smoothing adaptive. The Jurik Volty calculation can be used both as a standalone indicator and to smooth other indicators that you wish to make adaptive.
What is the Jurik Moving Average?
Have you noticed how moving averages add some lag (delay) to your signals? ... especially when price gaps up or down in a big move, and you are waiting for your moving average to catch up? Wait no more! JMA eliminates this problem forever and gives you the best of both worlds: low lag and smooth lines.
Ideally, you would like a filtered signal to be both smooth and lag-free. Lag causes delays in your trades, and increasing lag in your indicators typically results in lower profits. In other words, late comers get what's left on the table after the feast has already begun.
Adaptive, Double Jurik Filter Moving Average (AJFMA) [Loxx]
Adaptive, Double Jurik Filter Moving Average (AJFMA) is a moving average like the Jurik Moving Average, but with the addition of double smoothing and of adaptive length (Autocorrelation Periodogram Algorithm) and power/volatility (Jurik Volty) inputs to further reduce noise and identify trends.
What is Jurik Volty?
One of the lesser known qualities of Jurik smoothing is that the Jurik smoothing process is adaptive. "Jurik Volty" (a sort of market volatility) is what makes Jurik smoothing adaptive. The Jurik Volty calculation can be used both as a standalone indicator and to smooth other indicators that you wish to make adaptive.
What is the Jurik Moving Average?
Have you noticed how moving averages add some lag (delay) to your signals? ... especially when price gaps up or down in a big move, and you are waiting for your moving average to catch up? Wait no more! JMA eliminates this problem forever and gives you the best of both worlds: low lag and smooth lines.
Ideally, you would like a filtered signal to be both smooth and lag-free. Lag causes delays in your trades, and increasing lag in your indicators typically results in lower profits. In other words, late comers get what's left on the table after the feast has already begun.
That's why investors, banks and institutions worldwide ask for the Jurik Research Moving Average (JMA). You may apply it just as you would any other popular moving average. However, JMA's improved timing and smoothness will astound you.
What is an adaptive cycle, and what is Ehlers Autocorrelation Periodogram Algorithm?
From Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts, 2013, page 135:
"Adaptive filters can have several different meanings. For example, Perry Kaufman’s adaptive moving average ( KAMA ) and Tushar Chande’s variable index dynamic average ( VIDYA ) adapt to changes in volatility . By definition, these filters are reactive to price changes, and therefore they close the barn door after the horse is gone.The adaptive filters discussed in this chapter are the familiar Stochastic , relative strength index ( RSI ), commodity channel index ( CCI ), and band-pass filter.The key parameter in each case is the look-back period used to calculate the indicator. This look-back period is commonly a fixed value. However, since the measured cycle period is changing, it makes sense to adapt these indicators to the measured cycle period. When tradable market cycles are observed, they tend to persist for a short while.Therefore, by tuning the indicators to the measure cycle period they are optimized for current conditions and can even have predictive characteristics.
The dominant cycle period is measured using the Autocorrelation Periodogram Algorithm. That dominant cycle dynamically sets the look-back period for the indicators. I employ my own streamlined computation for the indicators that provide smoother and easier to interpret outputs than traditional methods. Further, the indicator codes have been modified to remove the effects of spectral dilation.This basically creates a whole new set of indicators for your trading arsenal."
Included
- Double calculation of AJFMA for even smoother results
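On the "double calculation" point, here is a sketch of the idea with EMA standing in for the proprietary Jurik filter: the filter's output is simply passed through the same filter a second time.

//@version=5
indicator("Double smoothing (sketch)", overlay=true)
int len = input.int(14, "Length")
pass1 = ta.ema(close, len)   // first smoothing pass
pass2 = ta.ema(pass1, len)   // second pass over the first pass's output
plot(pass1, "Single pass")
plot(pass2, "Double pass")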
Ehlers Adaptive Relative Strength Index (RSI) [Loxx]
Ehlers Adaptive Relative Strength Index (RSI) is an implementation of RSI that uses Ehlers' Autocorrelation Periodogram Algorithm to derive the length input for RSI. Other implementations of Ehlers' Adaptive RSI rely on the inferior Hilbert Transformer to derive the dominant cycle.
In his book Cycle Analytics for Traders: Advanced Technical Trading Concepts, John F. Ehlers describes an implementation of the Adaptive Relative Strength Index that solves for varying length inputs to the classic RSI equation.
What is an adaptive cycle, and what is the Autocorrelation Periodogram Algorithm?
From Ehlers' book mentioned above, page 135:
"Adaptive filters can have several different meanings. For example, Perry Kaufman’s adaptive moving average (KAMA) and Tushar Chande’s variable index dynamic average (VIDYA) adapt to changes in volatility. By definition, these filters are reactive to price changes, and therefore they close the barn door after the horse is gone.The adaptive filters discussed in this chapter are the familiar Stochastic, relative strength index (RSI), commodity channel index (CCI), and band-pass filter.The key parameter in each case is the look-back period used to calculate the indicator.This look-back period is commonly a fixed value. However, since the measured cycle period is changing, as we have seen in previous chapters, it makes sense to adapt these indicators to the measured cycle period. When tradable market cycles are observed, they tend to persist for a short while.Therefore, by tuning the indicators to the measure cycle period they are optimized for current conditions and can even have predictive characteristics.
The dominant cycle period is measured using the autocorrelation periodogram algorithm. That dominant cycle dynamically sets the look-back period for the indicators. I employ my own streamlined computation for the indicators that provide smoother and easier to interpret outputs than traditional methods. Further, the indicator codes have been modified to remove the effects of spectral dilation.This basically creates a whole new set of indicators for your trading arsenal."
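To illustrate the core idea of the quote above, here is a minimal Pine v5 sketch of an RSI whose look-back can vary bar to bar. domCycle is a plain input standing in for the dominant cycle that the autocorrelation periodogram would supply, and a simple (non-Wilder) average keeps the loop short:

//@version=5
indicator("Variable-length RSI (sketch)")
int domCycle = input.int(14, "Dominant cycle (stand-in)", minval=2)
f_rsiVar(_src, _len) =>
    float up = 0.0
    float dn = 0.0
    // Manual loop so the look-back can be a series value rather than fixed.
    for i = 0 to _len - 1
        chg = nz(_src[i] - _src[i + 1])
        up += chg > 0 ? chg : 0.0
        dn += chg < 0 ? -chg : 0.0
    dn == 0 ? 100.0 : 100.0 - 100.0 / (1 + up / dn)
plot(f_rsiVar(close, domCycle), "Variable-length RSI")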
What is Adaptive RSI?
From Ehlers' book mentioned above, page 137:
"The adaptive RSI starts with the computation of the dominant cycle using the autocorrelation periodogram approach. Since the objective is to use only those frequency components passed by the roofing filter, the variable "filt" is used as a data input rather than closing prices. Rather than independently taking the averages of the numerator and denominator, I chose to perform smoothing on the ratio using the SuperSmoother filter. The coefficients for the SuperSmoother filters have previously been computed in the dominant cycle measurement part of the code."
Happy trading!
Customizable Non-Repainting HTF MACD MFI Scalper Bot Strategy v2
This script was originally shared by Wunderbit as a free open source script for the community to work with. This is my second published iteration of this idea.
WHAT THIS SCRIPT DOES:
It is intended for use on an algorithmic bot trading platform but can be used for scalping and manual trading.
This strategy is based on the trend-following momentum indicator (MACD). It includes the Money Flow Index as an additional condition for entry.
This is a new and improved version geared for lower timeframes (5 to 15 minutes), but it can be run on larger ones as well. I am testing it live as my high-frequency trader.
HOW IT DOES IT:
It uses a combination of MACD and MFI indicators to create entry signals. Parameters for each indicator have been surfaced for user configurability.
Take profits are now trailing profits, and the stop loss is now fixed. Why? I found that the trailing stop loss with ATR in the previous version yields very good results in backtests but becomes very difficult to deploy live due to transaction fees. As you can see, the average trade is a higher profit percentage than in the previous version.
HOW IS MY VERSION ORIGINAL:
Now, instead of using an ATR stop loss, we have a fixed stop loss. Counterintuitive as it may seem, this performs better in live trading scenarios since it gives the strategy room to move. I noticed that the ATR trailing stop was stopping out too fast and was eating away at the balance due to transaction fees.
The take profit, on the other hand, is now a trailing profit with a customizable deviation. This ensures you can set a minimum profit to take before exiting.
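A minimal Pine v5 sketch of this exit arrangement, with illustrative percentages and a stand-in entry signal (the real strategy enters on its MACD + MFI logic):

//@version=5
strategy("Trailing TP, fixed SL (sketch)", overlay=true)
slPerc  = input.float(6.0, "Stop loss %")
tpPerc  = input.float(3.0, "Trail activation %")
devPerc = input.float(0.5, "Trail deviation %")
// Stand-in entry: EMA crossover instead of the strategy's MACD + MFI rules.
if ta.crossover(ta.ema(close, 9), ta.ema(close, 21))
    strategy.entry("Long", strategy.long)
if strategy.position_size > 0
    ep = strategy.position_avg_price
    stopPx   = ep * (1 - slPerc / 100)                 // fixed stop loss
    trailPx  = ep * (1 + tpPerc / 100)                 // price that arms the trail
    offTicks = ep * (devPerc / 100) / syminfo.mintick  // trailing deviation, in ticks
    strategy.exit("Exit", "Long", stop=stopPx, trail_price=trailPx, trail_offset=offTicks)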
I have deprecated the old ATR trailing stop as it became too confusing to have those as different options. I kept the old version for others that want to experiment with it. The source code still requires some cleanup, but it's fully functional.
I added a way to show RSI values and ATR values with a checkbox so that you can use the new and improved ATR Filter (and grab the right RSI values for the RSI Filter). This will help filter out times of very low volatility where we are unlikely to find a profitable trade. Use the "Show Data" checkbox to see what the values are on the indicator pane, then use those values to gauge what you want to filter out.
Both versions
Delayed Signals: The script has been refactored to use a timeframe dropdown. The higher timeframe can be run on a faster chart (a one-minute chart is recommended for the fastest signal confirmation and relay to the algotrading platform).
Repainting Issues: All indicators have been recoded to use a security function that checks whether the current calculation is in realtime; if it is, the previous bar is used for the calculation (see the sketch after this list). If you are still experiencing repainting issues from intended (or unintended) use, please provide a report with a screenshot and explanation so I can try to address it.
Filtering: I have added two additional filters, an ABOVE EMA Filter and a BELOW RSI Filter (both can be turned on and off).
Customizable Long and Close Messages: This allows someone to use the script for algorithmic trading without having to alter code. It also means you can use one indicator for all of the different alerts required for your bots.
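For reference, one common formulation of the non-repainting security pattern described under "Repainting Issues" above (a generic sketch, not this script's exact code):

//@version=5
indicator("Non-repainting HTF value (sketch)", overlay=true)
// On a realtime bar, request the previous (closed) higher-timeframe bar so
// the returned value cannot change as the current HTF bar keeps forming.
htf = input.timeframe("60", "Higher timeframe")
f_noRepaintSec(_sym, _tf, _src) =>
    request.security(_sym, _tf, _src[barstate.isrealtime ? 1 : 0])[barstate.isrealtime ? 0 : 1]
htfClose = f_noRepaintSec(syminfo.tickerid, htf, close)
plot(htfClose, "HTF close")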
HOW TO USE IT:
It is intended to be used in the 5-30 minute time frames, but you might be able to get a good configuration for higher time frames. I welcome feedback from other users on what they have found.
Find a pair with high volatility (example KUCOIN:ETH3LUSDT). I have found it works particularly well with 3L and 3S tokens for crypto, although the limitation is that the configurations I have found to work typically have a low R/R ratio but a very high win rate and profit factor.
Ideally, set a one-minute chart for bots, but you can use other charts for manual trading. The signal will be delayed by one bar, but I have found configurations that still test well.
Select a time frame in configuration for your indicator calculations.
Select the strategy config for the timeframe (resolution). I like to use 5 and 15 minutes for scalping scenarios, but I am interested in hearing back from other community members.
Optimize your indicator without filters: customize your settings for MACD and MFI so that they are profitable with your chart and selected timeframe calculation. Try different take profits (about 2-5%) and stop losses (about 5-8%). See if your backtest is profitable and continue to optimize.
Use the Trend, RSI, and ATR Filters to further refine your signals for entry. You will get fewer entries but you can increase your win ratio.
You can use the open and close messages for a platform integration, but I choose to set mine up on the destination platform and let the platform close it. With certain platforms you cannot be sure what your entry point actually was compared to TradingView due to slippage and timing, so I let the platform decide when it is actually profitable.
Limitations: this works rather well for the short term and forward tests well, but backtesting large data sets is a problem when switching from a very small timeframe to a large one. For instance, finding a configuration that works on a one-minute chart but then changing to a 1-hour chart means you lose some of your intra-bar calculations. There are some new features in Pine Script which might be able to address this, but I have not had a chance to work on that issue.
LinearRegressionLibrary
Library "LinearRegressionLibrary" contains functions for fitting a regression line to the time series by means of different models, as well as functions for estimating the accuracy of the fit.
Linear regression algorithms:
RepeatedMedian(y, n, lastBar) applies repeated median regression (robust linear regression algorithm) to the input time series within the selected interval.
Parameters:
y :: float series, source time series (e.g. close)
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
Output:
mSlope :: float, slope of the regression line
mInter :: float, intercept of the regression line
TheilSen(y, n, lastBar) applies the Theil-Sen estimator (robust linear regression algorithm) to the input time series within the selected interval.
Parameters:
y :: float series, source time series
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
Output:
tsSlope :: float, slope of the regression line
tsInter :: float, intercept of the regression line
OrdinaryLeastSquares(y, n, lastBar) applies the ordinary least squares regression (non-robust) to the input time series within the selected interval.
Parameters:
y :: float series, source time series
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
Output:
olsSlope :: float, slope of the regression line
olsInter :: float, intercept of the regression line
Model performance metrics:
metricRMSE(y, n, lastBar, slope, intercept) returns the Root-Mean-Square Error (RMSE) of the regression. The better the model, the lower the RMSE.
Parameters:
y :: float series, source time series (e.g. close)
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
slope :: float, slope of the evaluated linear regression line
intercept :: float, intercept of the evaluated linear regression line
Output:
rmse :: float, RMSE value
metricMAE(y, n, lastBar, slope, intercept) returns the Mean Absolute Error (MAE) of the regression. MAE is similar to RMSE but is less sensitive to outliers. The better the model, the lower the MAE.
Parameters:
y :: float series, source time series
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
slope :: float, slope of the evaluated linear regression line
intercept :: float, intercept of the evaluated linear regression line
Output:
mae :: float, MAE value
metricR2(y, n, lastBar, slope, intercept) returns the coefficient of determination (R squared) of the regression. The better the linear regression fits the data (compared to the sample mean), the closer the value of the R squared is to 1.
Parameters:
y :: float series, source time series
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
slope :: float, slope of the evaluated linear regression line
intercept :: float, intercept of the evaluated linear regression line
Output:
Rsq :: float, R-squared score
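For reference, these metrics follow their standard definitions. Writing the residuals as e_i = y_i − (intercept + slope·x_i) over n points:

\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} e_i^2}, \qquad \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} \lvert e_i \rvert, \qquad R^2 = 1 - \frac{\sum_{i=1}^{n} e_i^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}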
Usage example:
//@version=5
indicator('ExampleLinReg', overlay=true)
// import the library
import tbiktag/LinearRegressionLibrary/1 as linreg
// define the studied interval: last 100 bars
int Npoints = 100
int lastBar = bar_index
int firstBar = bar_index - Npoints
// apply repeated median regression to the closing price time series within the specified interval
[slope, intercept] = linreg.RepeatedMedian(close, Npoints, lastBar)
// calculate the root-mean-square error of the obtained linear fit
rmse = linreg.metricRMSE(close, Npoints, lastBar, slope, intercept)
// plot the line and print the RMSE value
float y1 = intercept
float y2 = intercept + slope * (Npoints - 1)
if barstate.islast
    line.new(firstBar, y1, lastBar, y2)
    label.new(lastBar, y2, text='RMSE = ' + str.format("{0,number,#.#}", rmse))