Levinson-Durbin Autocorrelation Extrapolation of Price [Loxx]
Levinson-Durbin Autocorrelation Extrapolation of Price is an indicator that uses the Levinson recursion (also known as the Levinson–Durbin recursion) algorithm to predict price moves. This method is commonly used in speech modeling and prediction engines.
What is Levinson recursion or Levinson–Durbin recursion?
It is a linear-algebra prediction analysis that is performed once per bar using the autocorrelation method within a specified asymmetric window. The autocorrelation coefficients of the window are computed and converted to LP (linear prediction) coefficients using the Levinson algorithm. The LP coefficients are then transformed to line spectrum pairs for quantization and interpolation. The interpolated quantized and unquantized filters are converted back to LP filter coefficients to construct the synthesis and weighting filters for each bar.
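As a rough illustration of the core step, here is a minimal Python sketch of the Levinson-Durbin recursion (not the script's Pine code; the toy autocorrelation and one-step extrapolation at the end are illustrative assumptions):

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: solve the Toeplitz normal equations for the
    LP coefficients a[1..order] given autocorrelations r[0..order]."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                      # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)                # prediction error shrinks each order
    return a, err

# toy usage: one-step-ahead extrapolation, x[n] ~= -sum(a[k] * x[n-k])
x = np.sin(np.linspace(0, 20, 200)) + 0.05 * np.random.randn(200)
order = 8
r = np.array([np.dot(x[: len(x) - l], x[l:]) for l in range(order + 1)])
a, err = levinson_durbin(r, order)
pred = -np.dot(a[1:], x[-1: -order - 1: -1])
```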
Data inputs
Source Settings: Loxx's Expanded Source Types. You typically use "open", since the open price is already fixed on the current active bar
LastBar - bar where to start the prediction
PastBars - how many bars back to model
LPOrder - order of linear prediction model; 0 to 1
FutBars - how many bars you want to forward predict
Things to know
Normally, a simple moving average is calculated on the source data. I've expanded this to 38 different averaging methods using Loxx's Moving Averages.
This indicator repaints
Included
Bar color muting
Further reading
Implementing the Levinson-Durbin Algorithm on the StarCore™ SC140/SC1400 Cores
LevinsonDurbin_G729 Algorithm: calculates LP coefficients from the autocorrelation coefficients. Intel® Integrated Performance Primitives for Intel® Architecture Reference Manual
APA-Adaptive, Ehlers Early Onset Trend [Loxx]
APA-Adaptive, Ehlers Early Onset Trend is Ehlers Early Onset Trend but with an Autocorrelation Periodogram Algorithm dominant cycle period input.
What is Ehlers Early Onset Trend?
The Onset Trend Detector study is a trend analyzing technical indicator developed by John F. Ehlers , based on a non-linear quotient transform. Two of Mr. Ehlers' previous studies, the Super Smoother Filter and the Roofing Filter, were used and expanded to create this new complex technical indicator. Being a trend-following analysis technique, its main purpose is to address the problem of lag that is common among moving average type indicators.
The Onset Trend Detector first applies the Ehlers Roofing Filter to the input data in order to eliminate cyclic components with periods longer than, for example, 100 bars (the default value, customizable via input parameters), as those are considered spectral dilation. Filtered data is then subjected to re-filtering by the Super Smoother Filter so that the noise (cyclic components with low length) is reduced to a minimum. A period of 10 bars is the default maximum value for a wave cycle to be considered noise; it can be customized via input parameters as well. Once the data is cleared of both noise and spectral dilation, the filter processes it with the automatic gain control algorithm, which is widely used in digital signal processing. This algorithm registers the most recent peak value and normalizes it; the normalized value slowly decays until the next peak swing. The ratio of the previously filtered value to the corresponding peak value is then quotiently transformed to provide the resulting oscillator. The quotient transform is controlled by the K coefficient: its allowed values are in the range from -1 to +1. K values close to 1 leave the ratio almost untouched, those close to -1 will translate it to around the additive inverse, and those close to zero will collapse small values of the ratio while keeping the higher values high.
Indicator values around 1 signify uptrend and those around -1, downtrend.
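As a minimal sketch of the automatic gain control step described above (Python, not the script's Pine code; the fast-attack/slow-decay form and the decay constant are assumptions on my part):

```python
def agc_normalize(values, decay=0.991):
    """Fast-attack, slow-decay automatic gain control: track a decaying
    peak of |value| and divide by it, bounding the output to [-1, 1]."""
    peak, out = 0.0, []
    for v in values:
        peak = max(abs(v), peak * decay)   # register new peaks instantly, decay slowly
        out.append(v / peak if peak else 0.0)
    return out
```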
What is an adaptive cycle, and what is Ehlers Autocorrelation Periodogram Algorithm?
From Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts by John F. Ehlers, 2013, page 135:
"Adaptive filters can have several different meanings. For example, Perry Kaufman’s adaptive moving average ( KAMA ) and Tushar Chande’s variable index dynamic average ( VIDYA ) adapt to changes in volatility . By definition, these filters are reactive to price changes, and therefore they close the barn door after the horse is gone.The adaptive filters discussed in this chapter are the familiar Stochastic , relative strength index ( RSI ), commodity channel index ( CCI ), and band-pass filter.The key parameter in each case is the look-back period used to calculate the indicator. This look-back period is commonly a fixed value. However, since the measured cycle period is changing, it makes sense to adapt these indicators to the measured cycle period. When tradable market cycles are observed, they tend to persist for a short while.Therefore, by tuning the indicators to the measure cycle period they are optimized for current conditions and can even have predictive characteristics.
The dominant cycle period is measured using the Autocorrelation Periodogram Algorithm. That dominant cycle dynamically sets the look-back period for the indicators. I employ my own streamlined computation for the indicators that provides smoother and easier-to-interpret outputs than traditional methods. Further, the indicator codes have been modified to remove the effects of spectral dilation. This basically creates a whole new set of indicators for your trading arsenal."
Jurik Composite Fractal Behavior (CFB) on EMA [Loxx]
Jurik Composite Fractal Behavior (CFB) on EMA is an exponential moving average with adaptive price trend duration inputs. The purpose of this indicator is to introduce the formulas for the calculation of Composite Fractal Behavior. As you can see from the chart above, price reacts wildly to shifts in volatility--smoothing out substantially while riding a volatility wave and cutting sharp corners when volatility drops. Notice the chop zone on BTC around August 2021; this was a time of extremely low relative volatility.
This indicator uses three previous indicators from my public scripts. These are:
JCFBaux Volatility
Jurik Filter
Jurik Volty
The CFB is also related to the following indicator
Jurik Velocity ("smoother moment")
Now let's dive in...
What is Composite Fractal Behavior (CFB)?
All around you mechanisms adjust themselves to their environment. From simple thermostats that react to air temperature to computer chips in modern cars that respond to changes in engine temperature, r.p.m.'s, torque, and throttle position. It was only a matter of time before fast desktop computers applied the mathematics of self-adjustment to systems that trade the financial markets.
Unlike basic systems with fixed formulas, an adaptive system adjusts its own equations. For example, start with a basic channel breakout system that uses the highest closing price of the last N bars as a threshold for detecting breakouts on the up side. An adaptive and improved version of this system would adjust N according to market conditions, such as momentum, price volatility or acceleration.
Since many systems are based directly or indirectly on cycles, another useful measure of market condition is the periodic length of a price chart's dominant cycle (DC), the cycle with the greatest influence on price action.
The utility of this new DC measure was noted by author Murray Ruggiero in the January '96 issue of Futures Magazine. In it, Mr. Ruggiero used it to adaptively adjust the value of N in a channel breakout system. He then simulated trading 15 years of D-Mark futures in order to compare its performance to a similar system that had a fixed optimal value of N. The adaptive version produced 20% more profit!
This DC index utilized the popular MESA algorithm (a formulation by John Ehlers adapted from Burg's maximum entropy algorithm, MEM). Unfortunately, the DC approach is problematic when the market has no real dominant cycle momentum, because the mathematics will produce a value whether or not one actually exists! Therefore, we developed a proprietary indicator that does not presuppose the presence of market cycles. It's called CFB (Composite Fractal Behavior) and it works well whether or not the market is cyclic.
CFB examines price action for a particular fractal pattern, categorizes the occurrences by size, and then outputs a composite fractal size index. This index is smooth, timely and accurate.
Essentially, CFB reveals the length of the market's trending action time frame. Long trending activity produces a large CFB index and short choppy action produces a small index value. Investors have found many applications for CFB which involve scaling other existing technical indicators adaptively, on a bar-to-bar basis.
What is Jurik Volty used in the Jurik Filter?
One of the lesser-known qualities of Jurik smoothing is that the Jurik smoothing process is adaptive. "Jurik Volty" (a sort of market volatility measure) is what makes Jurik smoothing adaptive. The Jurik Volty calculation can be used both as a standalone indicator and to smooth other indicators that you wish to make adaptive.
What is the Jurik Moving Average?
Have you noticed how moving averages add some lag (delay) to your signals? ... especially when price gaps up or down in a big move, and you are waiting for your moving average to catch up? Wait no more! JMA eliminates this problem forever and gives you the best of both worlds: low lag and smooth lines.
Ideally, you would like a filtered signal to be both smooth and lag-free. Lag causes delays in your trades, and increasing lag in your indicators typically results in lower profits. In other words, latecomers get what's left on the table after the feast has already begun.
Modifications and improvements
1. Jurik's original calculation for CFB only allowed for depth lengths of 24, 48, 96, and 192. For theoretical purposes, this indicator allows for up to 20 different depth inputs to sample volatility. These depth lengths are:
2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, 128, 192, 256, 384, 512, 768, 1024, 1536
Including these additional length inputs is arguably useless, but they are included for completeness of the algorithm.
2. The result of the CFB calculation is forced to be an integer greater than or equal to 1.
3. The result of the CFB calculation is double filtered using an advanced, (and adaptive itself) filtering algorithm called the Jurik Filter. This filter and accompanying internal algorithm are discussed above.
Relative Strength Super Smoother by lastguru
A better version of Apirine's RS EMA, using a superior MA: the Ehlers Super Smoother.
In the January 2022 edition of TASC, Vitaly Apirine introduced his Relative Strength Exponential Moving Average. The concept is not entirely new, as Tushar Chande used a similar calculation for his VIDYA moving average. Both are based on the idea of changing the EMA length depending on the absolute RSI value, so the moving average speeds up when RSI is moving up or down from the center value (when there is a significant directional price movement), and slows down when RSI returns to the center value (when there is a neutral or sideways movement). That way, EMA responsiveness increases where it matters most, but decreases where there is a high probability of whipsaw.
There are only two main differences between VIDYA and RS EMA:
RSI internal smoothing - VIDYA uses SMA, as Chande's CMO is an RSI with SMA; RS EMA uses EMA
Change direction - VIDYA sets the fastest length; RS EMA sets the slowest length
Both algorithms use EMA as the base of their calculation. As John F. Ehlers has shown in his article "Predictive and Successful Indicators" (January 2014 issue of TASC), EMA is not a very efficient filter, as it introduces a significant lag if sufficient smoothing is required. He describes a new smoothing filter called SuperSmoother, "that sharply attenuates aliasing noise while minimizing filtering lag." In other words, it provides better smoothing with lower lag than EMA.
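For reference, a minimal Python sketch of Ehlers' two-pole SuperSmoother as published (coefficients per the TASC article); this is illustrative only, not this script's Pine code:

```python
import math

def super_smoother(src, length):
    """Ehlers' two-pole SuperSmoother: better smoothing with lower lag than an EMA."""
    a1 = math.exp(-1.414 * math.pi / length)
    b1 = 2 * a1 * math.cos(1.414 * math.pi / length)
    c2, c3 = b1, -a1 * a1
    c1 = 1 - c2 - c3
    out = []
    for i, x in enumerate(src):
        prev1 = out[i - 1] if i >= 1 else x   # seed the recursion with the input
        prev2 = out[i - 2] if i >= 2 else x
        xm1 = src[i - 1] if i >= 1 else x
        out.append(c1 * (x + xm1) / 2 + c2 * prev1 + c3 * prev2)
    return out
```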
In this script, I try to get the best of all these approaches and present to you the Relative Strength Super Smoother. It uses the RS EMA algorithm to calculate the SuperSmoother length. Unlike the original RS EMA algorithm, which has an abstract "multiplier" setting to scale the period variance (without this parameter, RSI would only allow it to speed up twice; Vitaly Apirine sets the multiplier to 10 by default), my implementation has an explicit lower bound setting, so you can specify the exact range of the calculated length.
Settings:
Lower Bound - fastest SuperSmoother length (when RSI is +100 or -100)
Upper Bound - slowest SuperSmoother length (when RSI is 0)
RSI Length - underlying RSI length. Unlike the original RSI that uses RMA as an internal smoothing algorithm, Vitaly Apirine uses EMA, which is approximately twice as fast (that is needed because he uses a generally long RSI length and RMA would be too slow for this). It is the same as the Upper Bound by default (0), as in the original implementation
The original RS EMA is also shown on the chart for comparison. The default multiplier of 10 for RS EMA means that the fastest EMA period is around 4. I use the fastest period of 8 by default. It does not introduce too much of a lag in comparison, but the curve is much smoother.
This script is just an interface for my public libraries. Check them out for more information.
Bogdan Ciocoiu - Makaveli
Description
This indicator integrates the functionality of multiple volume price analysis algorithms whilst aligning their scales to fit in a single chart.
Having such indicators loaded enables traders to take advantage of potential divergences between the price action and volume related volatility.
Users will have to enable or disable alternative algorithms depending on their choice.
Uniqueness
This indicator is unique because it combines multiple algorithm-specific volume analyses with price volatility.
This indicator is also unique because it amends different algorithms to show output on a similar scale enabling traders to observe various volume-analysis tools simultaneously whilst allocating different colour codes.
Open source re-use
This indicator utilises the following open-source scripts:
Bogdan Ciocoiu - Sniper Entry
What is Sniper Entry
Sniper Entry is an indicator set that encapsulates a collection of pre-configured scripts using specific variables that enable users to extract signals by interpreting market behaviour quickly, suitable for 1-3 min scalping. This instrument is a tool that acts as a confluence for traders to make decisions concerning current market conditions. This indicator is not tied to a single asset.
What Sniper Entry is not
Sniper Entry does not interpret fundamental analysis and will also not provide out-of-the-box market signals. Instead, it provides a collection of integrated and significantly improved open-source subscripts designed to help traders speculate on market trends. Traders must apply their strategies and configure Sniper Entry accordingly to maximise the script's output.
Originality and usefulness
The collection of subscripts encapsulated in this tool makes it unique in the TradingView ecosystem. This indicator enables traders to consider entry or exit positions by comparing similar algorithms at once.
Its usefulness also emerges from the unique configurations embedded in the indicator's settings, which are different from those of the original scripts.
This indicator's originality is also reflected in how its modules are integrated, including the integration of the settings.
Open-source reuse
I used the following open-source resources, which I simplified significantly and pre-configured for short-term scalping. The source code for the scripts below is already in the public domain, including the links listed below.
www.tradingview.com (open source)
(open source and generic algorithm)
www.tradingview.com (open source)
(open source)
(open source)
www.tradingview.com (generic MA algorithm and open source)
(generic VWAP algorithm and open source)
Acrypto - Weighted Strategy
Hello traders!
I have been developing a fully customizable algo over the last year. The algorithm is based on a set of different strategies, each with its own weight (weighted strategy). I currently use a set of 5 strategies:
MACD
Stochastic RSI
RSI
Supertrend
MA crossover
Moreover, the algo includes stop-loss criteria and a take-profit strategy. The algo must be optimized for the desired asset to achieve its full potential. The 1H and 4H timeframes give good results. The algo has been tested for several assets (same timeframe, different optimization values).
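To make the weighting idea concrete, here is a hypothetical Python sketch of a weighted-strategy vote; the signal values, weights, and threshold below are illustrative assumptions, not the script's actual configuration:

```python
# Each sub-strategy emits a signal in {-1, 0, +1}; the composite position is
# the weighted sum of signals compared against a threshold.
signals = {"macd": 1, "stoch_rsi": 0, "rsi": 1, "supertrend": 1, "ma_cross": -1}
weights = {"macd": 0.30, "stoch_rsi": 0.15, "rsi": 0.15, "supertrend": 0.25, "ma_cross": 0.15}

score = sum(weights[k] * signals[k] for k in signals)
threshold = 0.5  # hypothetical cutoff for acting on the composite score
position = "long" if score >= threshold else "short" if score <= -threshold else "flat"
print(score, position)
```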
Important note:
Backtest the algorithm over different date ranges to avoid overfitting results
Best,
Alberto
FunctionArrayMaxSubKadanesAlgorithm
Library "FunctionArrayMaxSubKadanesAlgorithm"
Implements Kadane's maximum sum sub array algorithm.
size(samples) Kadane's algorithm.
Parameters:
samples : float array, sample data values.
Returns: float.
indices(samples) Kadane's algorithm with indices.
Parameters:
samples : float array, sample data values.
Returns: tuple with the maximum sum and the start and end indices.
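For readers who want the idea outside of Pine, here is a minimal Python sketch of Kadane's algorithm with indices (the (sum, start, end) return shape is my assumption for illustration):

```python
def kadane_with_indices(samples):
    """Kadane's algorithm: maximum-sum contiguous subarray in one pass.
    Returns (max_sum, start_index, end_index)."""
    best = cur = samples[0]
    best_start = best_end = cur_start = 0
    for i in range(1, len(samples)):
        if cur < 0:                      # the running sum hurts us: restart here
            cur, cur_start = samples[i], i
        else:
            cur += samples[i]
        if cur > best:
            best, best_start, best_end = cur, cur_start, i
    return best, best_start, best_end

print(kadane_with_indices([-2.0, 1.0, -3.0, 4.0, -1.0, 2.0, 1.0, -5.0, 4.0]))  # (6.0, 3, 6)
```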
MathSearchDijkstra
Library "MathSearchDijkstra"
Shortest Path Tree Search Methods using Dijkstra Algorithm.
min_distance(distances, flagged_vertices) Find the lowest cost/distance.
Parameters:
distances : float array, data set with distance costs to start index.
flagged_vertices : bool array, data set with visited vertices flags.
Returns: int, lowest cost/distance index.
dijkstra(matrix_graph, dim_x, dim_y, start) Dijkstra Algorithm: performs a greedy tree search to calculate the cost/distance to the selected start node at each vertex.
Parameters:
matrix_graph : int array, matrix holding the graph adjacency list and costs/distances.
dim_x : int, x dimension of matrix_graph.
dim_y : int, y dimension of matrix_graph.
start : int, the vertex index to start search.
Returns: int array, set with costs/distances to each vertex from the start vertex.
shortest_path(start, end, matrix_graph, dim_x, dim_y) Retrieves the shortest path between 2 vertices in a graph using Dijkstra Algorithm.
Parameters:
start : int, the vertex index to start search.
end : int, the vertex index to end search.
matrix_graph : int array, matrix holding the graph adjacency list and costs/distances.
dim_x : int, x dimension of matrix_graph.
dim_y : int, y dimension of matrix_graph.
Returns: int array, set with vertex indices to the shortest path.
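As a language-neutral reference for what the library computes, here is a minimal Python sketch of Dijkstra's algorithm (using a binary heap rather than the library's linear min_distance scan; the adjacency-list input format is an assumption for brevity):

```python
import heapq

def dijkstra(adj, start):
    """Dijkstra's shortest-path tree. adj maps vertex -> list of (neighbor, cost);
    returns (dist, prev): the cost to each reachable vertex and its predecessor."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, already relaxed
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

def shortest_path(prev, start, end):
    """Walk predecessors back from end to start (assumes end is reachable)."""
    path = [end]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# usage on a tiny graph
adj = {0: [(1, 4), (2, 1)], 2: [(1, 2), (3, 5)], 1: [(3, 1)]}
dist, prev = dijkstra(adj, 0)
print(dist[3], shortest_path(prev, 0, 3))  # 4.0 [0, 2, 1, 3]
```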
P-Square - Estimation of the Nth percentile of a series
When working with built-in functions in TradingView, we have to limit our length parameters to a maximum of 4999. In case we want to use a function on the whole available series (bar 0 all the way to the current bar), we usually cannot do this without manually creating these calculations in our code. For things like the mean or standard deviation this is quite trivial, but for things like percentiles it is usually very costly. In more complex scripts, this becomes impossible because of resource restrictions on the Pine Script execution servers.
One solution to this is to use an estimation algorithm to get close to the true percentile value. Therefore, I have ported this implementation of the P-Square algorithm to Pine Script. P-Square is a fast algorithm that does a good job of estimating percentiles in data streams. Here's the algorithm's original paper.
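For reference, here is a Python sketch of the P-Square (P²) algorithm of Jain & Chlamtac, which this script ports to Pine; it follows my reading of the published marker-update rules rather than the script's exact code:

```python
class PSquare:
    """P-Square: streaming p-quantile estimate with five markers, O(1) memory."""
    def __init__(self, p):
        self.p, self.init, self.q, self.n = p, [], [], []
        self.dn = [0.0, p / 2, p, (1 + p) / 2, 1.0]   # desired-position increments
        self.np_ = []

    def add(self, x):
        if len(self.init) < 5:                        # bootstrap on first 5 points
            self.init.append(x)
            if len(self.init) == 5:
                self.q = sorted(self.init)
                self.n = [1, 2, 3, 4, 5]
                self.np_ = [1, 1 + 2 * self.p, 1 + 4 * self.p, 3 + 2 * self.p, 5]
            return
        if x < self.q[0]:                             # locate the cell, clamp extremes
            self.q[0], k = x, 0
        elif x >= self.q[4]:
            self.q[4], k = x, 3
        else:
            k = next(i for i in range(4) if self.q[i] <= x < self.q[i + 1])
        for i in range(k + 1, 5):
            self.n[i] += 1
        for i in range(5):
            self.np_[i] += self.dn[i]
        for i in (1, 2, 3):                           # adjust interior markers
            d = self.np_[i] - self.n[i]
            if (d >= 1 and self.n[i + 1] - self.n[i] > 1) or \
               (d <= -1 and self.n[i - 1] - self.n[i] < -1):
                s = 1 if d > 0 else -1
                qp = self.q[i] + s / (self.n[i + 1] - self.n[i - 1]) * (
                    (self.n[i] - self.n[i - 1] + s) * (self.q[i + 1] - self.q[i]) / (self.n[i + 1] - self.n[i])
                    + (self.n[i + 1] - self.n[i] - s) * (self.q[i] - self.q[i - 1]) / (self.n[i] - self.n[i - 1]))
                if not (self.q[i - 1] < qp < self.q[i + 1]):   # parabolic failed: linear
                    qp = self.q[i] + s * (self.q[i + s] - self.q[i]) / (self.n[i + s] - self.n[i])
                self.q[i], self.n[i] = qp, self.n[i] + s

    def estimate(self):
        """Middle marker is the running p-quantile estimate."""
        return self.q[2] if self.q else sorted(self.init)[len(self.init) // 2]
```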
The chart
On the chart we see:
The returns of the series (blue scatter plot)
The mean of the returns of the series (orange line)
The standard deviation of the returns of the series (yellow line)
The actual 84.1th percentile of the returns (white line)
The estimated 84.1th percentile of the returns using the P-Square algorithm (green line)
Note: We can see that the returns are not normally distributed, as one standard deviation is higher than the 84.1th percentile. The mean plus one standard deviation should equal the 84.1th percentile if the data is normally distributed.
Machine Learning: Logistic Regression
Multi-timeframe Strategy based on the Logistic Regression algorithm
Description:
This strategy uses a classic machine learning algorithm that came from statistics - Logistic Regression (LR).
The first and most important thing about logistic regression is that it is not a 'Regression' but a 'Classification' algorithm. The name itself is somewhat misleading. Regression gives a continuous numeric output, but most of the time we need the output in classes (i.e. categorical, discrete). For example, we want to classify emails into 'spam' or 'not spam', classify treatment into 'success' or 'failure', classify a statement into 'right' or 'wrong', classify election data into 'fraudulent vote' or 'non-fraudulent vote', classify a market move into 'long' or 'short' and so on. These are examples of logistic regression having a binary output (also called dichotomous).
You can also think of logistic regression as a special case of linear regression when the outcome variable is categorical, where we are using log of odds as dependent variable. In simple words, it predicts the probability of occurrence of an event by fitting data to a logit function.
Basically, the theory behind Logistic Regression is very similar to the one from Linear Regression, where we seek to draw a best-fitting line over data points, but in Logistic Regression, we don’t directly fit a straight line to our data like in linear regression. Instead, we fit a S shaped curve, called Sigmoid, to our observations, that best SEPARATES data points. Technically speaking, the main goal of building the model is to find the parameters (weights) using gradient descent.
In this script the LR algorithm is retrained on each new bar, trying to classify it into one of the two categories. This is done via the logistic_regression function by updating the weights w in a loop that continues for 'iterations' number of times. In the end the weights are passed through the sigmoid function, yielding a prediction.
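A minimal Python sketch of that training loop (batch gradient descent on the log-loss, with a sigmoid output); the learning rate, iteration count, and toy features are illustrative assumptions, not the script's values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_regression(X, y, lr=0.1, iterations=1000):
    """Batch gradient descent on the mean log-loss; returns the weight vector w."""
    w = np.zeros(X.shape[1])
    for _ in range(iterations):
        p = sigmoid(X @ w)                 # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # gradient step
    return w

# toy usage: two normalized features per bar, 1 = up move, 0 = down move
X = np.array([[0.2, -0.1], [0.8, 0.4], [-0.5, -0.7], [0.6, 0.9]])
y = np.array([1.0, 1.0, 0.0, 1.0])
w = logistic_regression(X, y)
print(sigmoid(X @ w))                      # class probabilities for each sample
```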
Mind that some assets require modifying the script's input parameters. For instance, when used with BTCUSD and USDJPY, the 'Normalization Lookback' parameter should be set down to 4 (2,...,5..), and optionally the 'Use Price Data for Signal Generation?' parameter should be checked. The defaults were tested with EURUSD.
Note: TradingViews's playback feature helps to see this strategy in action.
Warning: Signals ARE repainting.
Style tags: Trend Following, Trend Analysis
Asset class: Equities, Futures, ETFs, Currencies and Commodities
Dataset: FX Minutes/Hours/Days
Price levels
Thanks to the developers for adding arrays to TradingView. This gives you more freedom in Pine Script coding.
I have created an algorithm that draws support and resistance levels on a chart. The algorithm can be easily customized as you need.
This algorithm can help both intuitive and system traders. Intuitive traders just look at the drawn lines. For system traders, the "levels" array stores all level values. Thus, you can use these values for algorithmic trading.
[R&D] Moving Centroid
This script utilizes this concept. Instead of weighting by volume, it weights by the amount of price action at every close price of the rolling window. I assume it can be used as an additional reference point for price mode and price antimode.
It is directly connected with Market (not volume) Profile, or TPO charts.
The algorithm:
1) takes a rolling window of, for example, 50 data points of close prices:
2) for each of this closing prices, the algorithm will check how many bars touched this close price.
3) then computes the weighted mean: sum(datapoint * weight) / sum(weights)
Since the logic is implemented in a pretty inefficient way, the script can sometimes take time to make calculations. Moreover, it calculates the centroid of a given rolling window taking into account only close prices, not every tick. That's why it's still experimental.
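The whole computation fits in a few lines; here is a Python sketch under my reading of the steps above (counting a "touch" as the close lying within a bar's high-low range):

```python
def moving_centroid(highs, lows, closes, window=50):
    """Centroid of the last `window` closes, each close weighted by how many
    bars in the window traded through it (their high-low range contains it)."""
    h, l, c = highs[-window:], lows[-window:], closes[-window:]
    weights = [sum(1 for j in range(len(c)) if l[j] <= price <= h[j]) for price in c]
    return sum(p * w for p, w in zip(c, weights)) / sum(weights)
```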
Renko
Now you can plot a "Renko" chart on any timeframe for free! As with my previous algorithm, you can plot the "Linear Break" chart on any timeframe for free!
I again decided to help TradingView programmers and wrote code that converts standard candles/bars to a "Renko" chart. The built-in renko() and security() functions for constructing a "Renko" chart work incorrectly. Do not try to write strategies based on the built-in renko() function! The developers write in the manual: "Please note that you cannot plot Renko bricks from Pine script exactly as they look. You can only get a series of numbers similar to OHLC values for Renko bars and use them in your algorithms". However, it is possible to build a "Renko" chart exactly like the "Renko" chart built into TradingView. Personally, Pine Script's functionality was enough for me.
For a complete understanding of how such a chart is built, you can refer to Steve Nison's book "BEYOND JAPANESE CANDLES" and see the instructions for creating a "Renko" chart:
Rule 1: one white brick (or series) is built when the price rises above the base price by a fixed threshold value or more.
Rule 2: one black brick (or series) is built when the price falls below the base price by a fixed threshold or more.
Rule 3: if the rise or fall of the price is less than the minimum fixed value, then new bricks are not drawn.
Rule 4: if today's closing price is higher than the maximum of the last brick (white or black) by a threshold or more, move to the column to the right and build one or more white bricks of equal height. A new brick begins with the maximum of the previous brick.
Rule 5: if today's closing price is below the minimum of the last brick (white or black) by a threshold or more, move to the column to the right and build one or more black bricks of equal height. A new brick begins with the minimum of the previous brick.
Rule 6: if the price is below the maximum and above the minimum, then new bricks are not drawn on the chart.
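A compact Python sketch of these rules (emitting only brick directions, +1 white / -1 black, with a fixed box size; base price = first close, per the note below):

```python
def renko(closes, box):
    """Traditional Renko per the rules above: fixed box size, base = first close.
    Returns +1 for each white brick and -1 for each black brick."""
    top = bottom = closes[0]            # degenerate first "brick" at the base price
    bricks = []
    for c in closes[1:]:
        while c >= top + box:           # rule 4: one or more white bricks
            bottom, top = top, top + box
            bricks.append(+1)
        while c <= bottom - box:        # rule 5: one or more black bricks
            top, bottom = bottom, bottom - box
            bricks.append(-1)
    return bricks                       # rules 3 & 6: otherwise, no new bricks
```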
So my algorithm can plot Traditional Renko with a fixed box size. I want to note that such a "Renko" chart is slightly different from the "Renko" chart built into TradingView, because as a base price I use (by default) the close of the first candle. How the developers of TradingView calculate the base price, I don't know. Personally, I do as written in Steve Nison's book.
The algorithm is very complicated and I do not want to explain it in detail. I will explain very briefly. The first part of the get_renko() function — // creating lists — creates two lists that record how many green bricks there should be and how many red bricks. The second part of the get_renko() function — // creating open and close series — creates open and close series to plot the bricks. So, this is a white box — study it!
As you understand, one green candle can create a condition under which it will be necessary to plot, for example, 10 green bricks. So the smaller the box size you make, the smaller the portion of the chart you will see.
I stuffed all the logic into a wrapper in the form of the get_renko() function, which returns a tuple of OHLC values. And these series, with the help of the plotcandle() annotation, can be converted to the "Renko" chart. I also want to note that with a large number of candles on the chart, complaints about buffer size uncertainty are thrown by the TradingView black box. Because of this, I set the value of the max_bars_back parameter in the study() annotation.
In general, use this script (for example, to write strategies)!
Coinbase_3-MIN_HFT-Strategy
This conceptual strategy trades against the short-term trend. The first position can be either long or short.
In the short term, prices fluctuate up and down on exchanges with wide spreads.
And if the price moves to one side, it tends to return momentarily to its original position.
This strategy sets a stop order. The stop price is calculated using the upper and lower shadows.
Enhanced Instantaneous Cycle Period - Dr. John Ehlers
This is my first public release of detector code entitled "Enhanced Instantaneous Cycle Period" for PSv4.0, which I built many months ago. Be forewarned, this is not an indicator; this is a detector to be used by ADVANCED developers to build futuristic indicators in Pine. The origins of this script come from a document by Dr. John Ehlers entitled "SIGNAL ANALYSIS CONCEPTS". You may find this using the NSA's reverse search engine "goggles", as I call it. John Ehlers' MESA used this measurement to establish the data window for analysis for MESA Cycle computations. So... does any developer wish to emulate MESA Cycle now??
I decided to take the instantaneous cycle period to another level of novel attainability in this public release of source code, with the following methods, if you are curious how I ENHANCED it. Firstly, I reduced the delay of accurate measurement from bar_index==0 by quite a few bars closer to the IPO. Secondly, I provided a limit of 6 for a minimum instantaneous cycle period. At bar_index==0, it would provide a period of 0, wrecking many algorithms from the start. I also increased the instantaneous cycle period's maximum value to 80 from 50, providing a window of 6-80 for the instantaneous cycle period value limits. Thirdly, I replaced the internal EMA with another algorithm. It reduces the lag while extracting a floating-point number, for algorithms that will accept one, compared to a sluggish ordinary EMA return. You will see the excessive EMA delay by adding plot(ema(ICP,7)) as it was originally designed. Lastly, it's all in one simple function for reusability, in a nice little package comprising less than 40 lines of code. I hope I explained that adequately enough and gave you, the reader, a glimpse of the "Power of Pine" combined with ingenuity.
Be forewarned again that most of Pine's built-in functions will not accept a floating-point number or dynamic integers for the "length" of their calculation. You will have to emulate the built-in functions by creating Pine-based custom functions, and I assure you, this is very possible in many cases, but not all, without array support. You may use int(ICP) to extract an integer from the smoothICP return variable, which may be favorable compared to the choppiness/ringing of ICP alone.
This is commonly what my dense intricate code looks like behind the veil. If you are wondering why there is barely any notation, that's because the notation is in the variable naming and this is intended primarily for ADVANCED developers too. It does contain lines of code that explore techniques in Pine that may be applicable in other Pine projects for those learning or wishing to excel with Pine.
Showcased in the chart below is my free to use "Enhanced Schaff Trend Cycle Indicator", having a common appeal to TV users frequently. If you do have any questions or comments regarding this indicator, I will consider your inquiries, thoughts, and ideas presented below in the comments section, when time provides it. As always, "Like" it if you simply just like it with a proper thumbs up, and also return to my scripts list occasionally for additional postings. Have a profitable future everyone!
NOTICE: Copy pasting bandits who may be having nefarious thoughts, DO NOT attempt this, because this may violate Tradingview's terms, conditions and/or house rules. "WE" are always watching the TV community vigilantly for mischievous behaviors and actions that exploit well intended authors for the purpose of increasing brownie points in reputation scores. Hiding behind a "protected" wall may not protect you from investigation and account penalization by TV staff. Be respectful, and don't just throw an ma() in there branding it as "your" gizmo. Fair enough? Alrighty then... I firmly believe in "innovating" future state-of-the-art indicators, and please contact me if you wish to do so.
Cluster Reversal Zones
📌 Cluster Reversal Zones – Smart Market Turning Point Detector
📌 Category : Public (Restricted/Closed-Source) Indicator
📌 Designed for : Traders looking for high-accuracy reversal zones based on price clustering & liquidity shifts.
🔍 Overview
The Cluster Reversal Zones Indicator is an advanced market reversal detection tool that helps traders identify key turning points using a combination of price clustering, order flow analysis, and liquidity tracking. Instead of relying on static support and resistance levels, this tool dynamically adjusts to live market conditions, ensuring traders get the most accurate reversal signals possible.
📊 Core Features:
✅ Real-Time Reversal Zone Mapping – Detects high-probability market turning points using price clustering & order flow imbalance.
✅ Liquidity-Based Support/Resistance Detection – Identifies strong rejection zones based on real-time liquidity shifts.
✅ Order Flow Sensitivity for Smart Filtering – Filters out weak reversals by detecting real market participation behind price movements.
✅ Momentum Divergence for Confirmation – Aligns reversal zones with momentum divergences to increase accuracy.
✅ Adaptive Risk Management System – Adjusts risk parameters dynamically based on volatility and trend state.
🔒 Justification for Mashup
The Cluster Reversal Zones Indicator contains custom-built methodologies that extend beyond traditional support/resistance indicators:
✔ Smart Price Clustering Algorithm: Instead of plotting fixed support/resistance lines, this system analyzes historical price clustering to detect active reversal areas.
✔ Order Flow Delta & Liquidity Shift Sensitivity: The tool tracks real-time order flow data, identifying price zones with the highest accumulation or distribution levels.
✔ Momentum-Based Reversal Validation: Unlike traditional indicators, this tool requires a momentum shift confirmation before validating a potential reversal.
✔ Adaptive Reversal Filtering Mechanism: Uses a combination of historical confluence detection + live market validation to improve accuracy.
🛠️ How to Use:
• Works well for reversal traders, scalpers, and swing traders seeking precise turning points.
• Best combined with VWAP, Market Profile, and Delta Volume indicators for confirmation.
• Suitable for Forex, Indices, Commodities, Crypto, and Stock markets.
🚨 Important Note:
For educational & analytical purposes only.
Ehlers Maclaurin Ultimate Smoother [CT]
Ehlers Maclaurin Ultimate Smoother
Introduction
The Ehlers Maclaurin Ultimate Smoother is an innovative enhancement of the classic Ehlers SuperSmoother. By leveraging advanced Maclaurin series approximations, this indicator offers superior market analysis and signal generation.
The indicator combines Ehlers' Ultimate Smoother with Maclaurin series approximations to create a more efficient and accurate smoothing mechanism:
Input price data passes through the initial smoothing phase
Maclaurin series approximates trigonometric functions
Enhanced high-pass filter removes market noise
Final smoothing phase produces the output signal
Why the Maclaurin Approach?
The Maclaurin series is a special form of the Taylor series, centered around 0. It provides an efficient way to approximate complex functions using polynomial terms. In this indicator, we use the Maclaurin approach to improve the sine and cosine functions, resulting in:
Faster Calculations: By using polynomial approximations, we significantly reduce computational complexity.
Improved Stability: The approximation provides a more stable numerical basis for calculations.
Preservation of Precision: Despite the approximation, we maintain the precision needed for price smoothing.
Calculations
The indicator employs several key mathematical components:
Maclaurin Series Approximation:
sin(x) ≈ x - x³/3! + x⁵/5! - x⁷/7! + x⁹/9!
cos(x) ≈ 1 - x²/2! + x⁴/4! - x⁶/6! + x⁸/8!
Smoothing Algorithm:
Uses exponential smoothing with optimized coefficients
Implements high-pass filtering for noise reduction
Applies dynamic weighting based on market conditions
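To make the approximation concrete, here is a Python sketch of the 5-term Maclaurin series for sine and cosine stated above (matching those formulas term for term; the error check at the end is illustrative):

```python
from math import factorial, pi, sin, cos

def mac_sin(x, terms=5):
    """5-term Maclaurin approximation: x - x^3/3! + x^5/5! - x^7/7! + x^9/9!"""
    return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1) for k in range(terms))

def mac_cos(x, terms=5):
    """5-term Maclaurin approximation: 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8!"""
    return sum((-1) ** k * x ** (2 * k) / factorial(2 * k) for k in range(terms))

x = pi / 4
print(mac_sin(x) - sin(x), mac_cos(x) - cos(x))  # errors are tiny for small x
```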
Mathematical Foundation
Utilizes Maclaurin series for trigonometric approximation
Implements Ehlers' smoothing principles
Incorporates advanced filtering techniques
Technical Advantages
Signal Processing:
Lag Reduction: Faster signal detection with less delay.
Noise Filtration: Effective elimination of high-frequency noise.
Precision Enhancement: Preservation of critical price movements.
Adaptive Processing: Dynamic response to market volatility.
Visual Enhancements:
Smart color intensity mapping.
Real-time visualization of trend strength.
Adaptive opacity based on movement significance.
Implementation
Core Configuration:
Plot Type: Choose between the original and the Maclaurin enhanced version.
Length: Default set to 30, optimal for daily timeframes.
hpLength: Default set to 10 for enhanced noise reduction.
Advanced Parameters:
The indicator offers advanced control with:
Dual processing modes (Original/Maclaurin).
Dynamic color intensity system.
Customizable smoothing parameters.
Professional Analysis Tools:
Accurate trend reversal identification.
Advanced support/resistance detection.
Superior performance in volatile markets.
Technical Specifications
Maclaurin Series Implementation:
The indicator employs a 5-term Maclaurin series approximation for both sine and cosine, ensuring efficient and accurate computation.
Performance Metrics
Improved processing efficiency.
Reduced memory utilization.
Increased signal accuracy.
Licensing & Attribution
© 2024 Mupsje aka CasaTropical
Professional Credits
Original Ultimate and SuperSmoother concept: John F. Ehlers
Maclaurin enhancement: Casa Tropical (CT)
www.mathsisfun.com
True Amplitude Envelopes (TAE)
The True Envelopes indicator is an adaptation of the True Amplitude Envelope (TAE) method, based on the research paper "Improved Estimation of the Amplitude Envelope of Time Domain Signals Using True Envelope Cepstral Smoothing" by Caetano and Rodet. This indicator aims to create an asymmetric price envelope with strong predictive power, closely following the methodology outlined in the paper.
Due to the inherent limitations of Pine Script, the indicator utilizes a Kernel Density Estimator (KDE) in place of the original Cepstral Smoothing technique described in the paper. While this approach was chosen out of necessity rather than superiority, the resulting method is designed to be as effective as possible within the constraints of the Pine environment.
This indicator is ideal for traders seeking an advanced tool to analyze price dynamics, offering insights into potential price movements while working within the practical constraints of Pine Script. Whether used in dynamic mode or with a static setting, the True Envelopes indicator helps in identifying key support and resistance levels, making it a valuable asset in any trading strategy.
Key Features:
Dynamic Mode: The indicator dynamically estimates the fundamental frequency of the price, optimizing the envelope generation process in real-time to capture critical price movements.
High-Pass Filtering: Uses a high-pass filtered signal to identify and smoothly interpolate price peaks, ensuring that the envelope accurately reflects significant price changes.
Kernel Density Estimation: Although implemented as a workaround, the KDE technique allows for flexible and adaptive smoothing of the envelope, aimed at achieving results comparable to the more sophisticated methods described in the original research.
Symmetric and Asymmetric Envelopes: Provides options to select between symmetric and asymmetric envelopes, accommodating various trading strategies and market conditions.
Smoothness Control: Features adjustable smoothness settings, enabling users to balance between responsiveness and the overall smoothness of the envelopes.
The True Envelopes indicator comes with a variety of input settings that allow traders to customize the behavior of the envelopes to match their specific trading needs and market conditions. Understanding each of these settings is crucial for optimizing the indicator's performance.
Main Settings
Source: This is the data series on which the indicator is applied, typically the closing price (close). You can select other price data like open, high, low, or a custom series to base the envelope calculations.
History: This setting determines how much historical data the indicator should consider when calculating the envelopes. A value of 0 will make the indicator process all available data, while a higher value restricts it to the most recent n bars. This can be useful for reducing the computational load or focusing the analysis on recent market behavior.
Iterations: This parameter controls the number of iterations used in the envelope generation algorithm. More iterations will typically result in a smoother envelope, but can also increase computation time. The optimal number of iterations depends on the desired balance between smoothness and responsiveness.
Kernel Style: The smoothing kernel used in the Kernel Density Estimator (KDE). Available options include Sinc, Gaussian, Epanechnikov, Logistic, and Triangular. Each kernel has different properties, affecting how the smoothing is applied. For example, Gaussian provides a smooth, bell-shaped curve, while Epanechnikov is more efficient computationally with a parabolic shape.
Envelope Style: This setting determines whether the envelope should be Static or Dynamic. The Static mode applies a fixed period for the envelope, while the Dynamic mode automatically adjusts the period based on the fundamental frequency of the price data. Dynamic mode is typically more responsive to changing market conditions.
High Q: This option controls the quality factor (Q) of the high-pass filter. Enabling this will increase the Q factor, leading to a sharper cutoff and more precise isolation of high-frequency components, which can help in better identifying significant price peaks.
Symmetric: This setting allows you to choose between symmetric and asymmetric envelopes. Symmetric envelopes maintain an equal distance from the central price line on both sides, while asymmetric envelopes can adjust differently above and below the price line, which might better capture market conditions where upside and downside volatility are not equal.
Smooth Envelopes: When enabled, this setting applies additional smoothing to the envelopes. While this can reduce noise and make the envelopes more visually appealing, it may also decrease their responsiveness to sudden market changes.
Dynamic Settings
Extra Detrend: This setting toggles an additional high-pass filter that can be applied when using a long filter period. The purpose is to further detrend the data, ensuring that the envelope focuses solely on the most recent price oscillations.
Filter Period Multiplier: This multiplier adjusts the period of the high-pass filter dynamically based on the detected fundamental frequency. Increasing this multiplier will lengthen the period, making the filter less sensitive to short-term price fluctuations.
Filter Period (Min) and Filter Period (Max): These settings define the minimum and maximum bounds for the high-pass filter period. They ensure that the filter period stays within a reasonable range, preventing it from becoming too short (and overly sensitive) or too long (and too sluggish).
Envelope Period Multiplier: Similar to the filter period multiplier, this adjusts the period for the envelope generation. It scales the period dynamically to match the detected price cycles, allowing for more precise envelope adjustments.
Envelope Period (Min) and Envelope Period (Max): These settings establish the minimum and maximum bounds for the envelope period, ensuring the envelopes remain adaptive without becoming too reactive or too slow.
Static Settings
Filter Period: In static mode, this setting determines the fixed period for the high-pass filter. A shorter period will make the filter more responsive to price changes, while a longer period will smooth out more of the price data.
Envelope Period: This setting specifies the fixed period used for generating the envelopes in static mode. It directly influences how tightly or loosely the envelopes follow the price action.
TAE Smoothing: This controls the degree of smoothing applied during the TAE process in static mode. Higher smoothing values result in more gradual envelope curves, which can be useful in reducing noise but may also delay the envelope’s response to rapid price movements.
Visual Settings
Top Band Color: This setting allows you to choose the color for the upper band of the envelope. This band represents the resistance level in the price action.
Bottom Band Color: Similar to the top band color, this setting controls the color of the lower band, which represents the support level.
Center Line Color: This is the color of the central price line, often referred to as the carrier. It represents the detrended price around which the envelopes are constructed.
Line Width: This determines the thickness of the plotted lines for the top band, bottom band, and center line. Thicker lines can make the envelopes more visible, especially when overlaid on price data.
Fill Alpha: This controls the transparency level of the shaded area between the top and bottom bands. A lower alpha value will make the fill more transparent, while a higher value will make it more opaque, helping to highlight the envelope more clearly.
The envelopes generated by the True Envelopes indicator are designed to provide a more precise and responsive representation of price action compared to traditional methods like Bollinger Bands or Keltner Channels. The core idea behind this indicator is to create a price envelope that smoothly interpolates the significant peaks in price action, offering a more accurate depiction of support and resistance levels.
One of the critical aspects of this approach is the use of a high-pass filtered signal to identify these peaks. The high-pass filter serves as an effective method of detrending the price data, isolating the rapid fluctuations in price that are often lost in standard trend-following indicators. By filtering out the lower frequency components (i.e., the trend), the high-pass filter reveals the underlying oscillations in the price, which correspond to significant peaks and troughs. These oscillations are crucial for accurately constructing the envelope, as they represent the most responsive elements of the price movement.
The algorithm works by first applying the high-pass filter to the source price data, effectively detrending the series and isolating the high-frequency price changes. This filtered signal is then used to estimate the fundamental frequency of the price movement, which is essential for dynamically adjusting the envelope to current market conditions. By focusing on the peaks identified in the high-pass filtered signal, the algorithm generates an envelope that is both smooth and adaptive, closely following the most significant price changes without overfitting to transient noise.
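To illustrate the detrending step, here is a Python sketch of a two-pole high-pass filter in the style of Ehlers' roofing-filter front end; this is my assumption of a representative filter, and the script's exact filter may differ:

```python
import math

def highpass_2pole(src, period):
    """Two-pole high-pass filter: attenuates cycle components with periods
    longer than `period` bars, detrending the series so that the fast
    oscillations (the peaks and troughs of interest) remain."""
    w = 0.707 * 2 * math.pi / period
    a = (math.cos(w) + math.sin(w) - 1) / math.cos(w)
    out = [0.0, 0.0]
    for i in range(2, len(src)):
        hp = ((1 - a / 2) ** 2 * (src[i] - 2 * src[i - 1] + src[i - 2])
              + 2 * (1 - a) * out[i - 1] - (1 - a) ** 2 * out[i - 2])
        out.append(hp)
    return out
```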
Compared to traditional envelopes and bands, such as Bollinger Bands and Keltner Channels, the True Envelopes indicator offers several advantages. Bollinger Bands, which are based on standard deviations, and Keltner Channels, which use the average true range (ATR), both tend to react to price volatility but do not necessarily follow the peaks and troughs of the price with precision. As a result, these traditional methods can sometimes lag behind or fail to capture sudden shifts in price momentum, leading to either false signals or missed opportunities.
In contrast, the True Envelopes indicator, by using a high-pass filtered signal and a dynamic period estimation, adapts more quickly to changes in price behavior. The envelopes generated by this method are less prone to the lag that often affects standard deviation or ATR-based bands, and they provide a more accurate representation of the price's immediate oscillations. This can result in better predictive power and more reliable identification of support and resistance levels, making the True Envelopes indicator a valuable tool for traders looking for a more responsive and precise approach to market analysis.
In conclusion, the True Envelopes indicator is a powerful tool that blends advanced theoretical concepts with practical implementation, offering traders a precise and responsive way to analyze price dynamics. By adapting the True Amplitude Envelope (TAE) method through the use of a Kernel Density Estimator (KDE) and high-pass filtering, this indicator effectively captures the most significant price movements, providing a more accurate depiction of support and resistance levels compared to traditional methods like Bollinger Bands and Keltner Channels. The flexible settings allow for extensive customization, ensuring the indicator can be tailored to suit various trading strategies and market conditions.
Hybrid Adaptive Double Exponential Smoothing
🙏🏻 This is HADES (Hybrid Adaptive Double Exponential Smoothing): a fully data-driven & adaptive exponential smoothing method that gains all the necessary info directly from data in the most natural way and needs no subjective parameters & no optimizations. It gets applied to the data itself -> to fit residuals & one-point forecast errors, all at O(1) algo complexity. I designed it for streaming high-frequency univariate time series data, such as medical sensor readings, orderbook data, tick charts, requests generated by a backend, etc.
The HADES method is:
fit & forecast = a + b * (1 / alpha + T - 1)
T = 0 provides the in-sample fit for the current datum, and T = n provides the forecast n datapoints ahead.
y = input time series
a = y, if no previous data exists
b = 0, if no previous data exists
otherwise:
a = alpha * y + (1 - alpha) * a
b = alpha * (a - a[1]) + (1 - alpha) * b, where a[1] is the previous value of a
alpha = 1 / sqrt(len * 4)
len = min(ceil(exp(1 / sig)), available data)
sig = sqrt(Absolute net change in y / Sum of absolute changes in y)
For the start datapoint when both numerator and denominator are zeros, we define 0 / 0 = 1
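Putting the recursion together, here is a direct Python transcription of the formulas above (the overflow guard on exp() is my addition for numerical safety):

```python
from math import sqrt, ceil, exp

def hades_fit(y, T=0):
    """Transcription of the recursion above: sig -> adaptive length -> alpha,
    then Brown-style double exponential smoothing. T = 0 gives the in-sample
    fit; T = n gives the n-step forecast at each point."""
    a = b = None
    total = 0.0                       # running sum of absolute changes
    out = []
    for t in range(len(y)):
        if t > 0:
            total += abs(y[t] - y[t - 1])
        net = abs(y[t] - y[0])        # absolute net change
        sig = sqrt(net / total) if total > 0 else 1.0   # 0 / 0 := 1 at the start
        inv = 1 / sig if sig > 0 else float("inf")
        length = (t + 1) if inv > 20 else min(ceil(exp(inv)), t + 1)  # overflow guard
        alpha = 1 / sqrt(length * 4)
        if a is None:
            a, b = y[t], 0.0          # no previous data
        else:
            a_prev = a
            a = alpha * y[t] + (1 - alpha) * a
            b = alpha * (a - a_prev) + (1 - alpha) * b
        out.append(a + b * (1 / alpha + T - 1))
    return out
```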
...
The same set of operations gets applied to the data first, then to resulting fit absolute residuals to build prediction interval, and finally to absolute forecasting errors (from one-point ahead forecast) to build forecasting interval:
prediction interval = data fit +- residuals fit * k
forecasting interval = data opf +- errors fit * k
where k = a multiplier regulating interval width, and opf = one-point forecasts calculated at each time t
...
How-to:
0) Apply to your data where it makes sense, eg. tick data;
1) Use power transform to compensate for multiplicative behavior in case it's there;
2) If you have complete data or only the data you need, like the full history of adjusted close prices: go to the next step; otherwise, guided by your goal & analysis, adjust the 'start index' setting so the calculations will start from this point;
3) Use prediction interval to detect significant deviations from the process core & make decisions according to your strategy;
4) Use one-point forecast for nowcasting;
5) Use forecasting intervals to ~ understand where the next datapoints will emerge, given the data-generating process will stay the same & lack structural breaks.
I advise k = 1 or 1.5 or 4 depending on your goal, but 1 is the most natural one.
...
Why exponential smoothing at all? Why the double one? Why adaptive? Why not Holt's method?
1) Its O(1) algo complexity & recursive nature allow it to be applied in an online fashion to high-frequency streaming data; otherwise, it makes more sense to use other methods;
2) Double exponential smoothing ensures we are taking trends into account; also, in order to model more complex time series patterns such as seasonality, we need detrended data, and this method can be used to do it;
3) The goal of adaptivity is to eliminate the window size question, in cases where it doesn't make sense to use cumulative moving typical value;
4) Holt's method creates a certain interaction between level and trend components, so its results lack symmetry and similarity with other non-recursive methods such as quantile regression or linear regression. Instead, I decided to base my work on the original double exponential smoothing method published by Rob Brown in 1956, here's the original source; it's really hard to find it online. This cool dude is considered the one who dropped exponential smoothing into open access for the first time🤘🏻
R&D log & explanations
If you wanna read this, you gotta know you're taking on a great responsibility for this long journey, and it's gonna be one hell of a trip hehe
Machine learning, apprentissage automatique, машинное обучение, digital signal processing, statistical learning, data mining, deep learning, etc., etc., etc.: all these are just artificial categories created by the local population of this wonderful world, but what really separates entities globally in the Universe is solution complexity / algorithmic complexity.
In order to get the game a lil better, it's gonna be useful to read the HTES script description first. Secondly, let me guide you through the whole R&D process.
To discover (not to invent) the fundamental universal principle of what exponential smoothing really IS required a review of the whole concept: understanding that many things don't add up and don't make much sense in the currently available mainstream info, and building it all from the beginning while avoiding these very basic logical & implementation flaws.
Given a complete, and yet always growing, time series population at time t that can't be logically separated into subpopulations, the very first question is, 'What amount of data do we need to utilize at time t?'. Two answers: 1 and all. You can't really gain much info from 1 datum, so go for the second answer: we need the whole dataset.
So, given the sequential & incremental nature of time series, the very first and most basic thing we can do on the whole dataset is to calculate a cumulative metric, such as a cumulative moving mean or cumulative moving median.
Now we need to extend this logic to exponential smoothing, which doesn't use dataset length info directly, but that's cool since it can be done via a formula that quantifies the relationship between alpha (the smoothing parameter) and length. The popular formulas used in the mainstream are:
alpha = 1 / length
alpha = 2 / (length + 1)
The funny part starts when you realize that Cumulative Exponential Moving Averages with these 2 alpha formulas Exactly match Cumulative Moving Average and Cumulative (Linearly) Weighted Moving Average, and the same logic goes on:
alpha = 3 / (length + 1.5), matches Cumulative Weighted Moving Average with quadratic weights, and
alpha = 4 / (length + 2), matches Cumulative Weighted Moving Average with cubic weights, and so on...
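These equivalences are easy to verify numerically; here is a quick Python sketch checking the first two claims on toy data:

```python
# alpha = 1/n reproduces the cumulative moving average, and alpha = 2/(n+1)
# reproduces the cumulative linearly weighted moving average.
ys = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]

ema1 = ema2 = ys[0]
for n, y in enumerate(ys[1:], start=2):
    ema1 = (1 / n) * y + (1 - 1 / n) * ema1
    ema2 = (2 / (n + 1)) * y + (1 - 2 / (n + 1)) * ema2

cma = sum(ys) / len(ys)
cwma = sum((i + 1) * y for i, y in enumerate(ys)) / sum(range(1, len(ys) + 1))
print(ema1 - cma, ema2 - cwma)  # both differences are ~0 (floating-point noise)
```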
It all just cries on your shoulder that we need to discover another, native length->alpha formula that leverages the recursive nature of exponential smoothing, because otherwise it doesn't make sense to use it at all, since the usual CMA and CWMA can be computed incrementally at O(1) algo complexity just like exponential smoothing.
From now on I will not mention 'cumulative' or 'linearly weighted / weighted' anymore, it's gonna be implied all the time unless stated otherwise.
What we can do is to approach the thing logically and model the response with a little help from synthetic data; a sine wave would suffice. Then we can think of relationships:
Based on algo complexity, from lower to higher: exponential smoothing @ O(1) -> parametric statistics (mean) @ O(n) -> non-parametric statistics (50th percentile / median) @ O(n log n);
Based on initial response, from slow to fast: mean -> median;
Based on convergence with the real expected value, from slow to fast: mean (infinitely approaches it) -> median (gets it quite fast).
Based on these inputs, we need to discover such a length->alpha formula so the resulting fit will have the slowest initial response out of all 3, and have the slowest convergence with expected value out of all 3. In order to do it, we need to have some non-linear transformer in our formula (like a square root) and a couple of factors to modify the response the way we need. I ended up with this formula to meet all our requirements:
alpha = sqrt(1 / (length * 2)) / 2
which simplifies to:
alpha = 1 / sqrt(len * 8)
^^ as you can see on the screenshot; where the red line is median, the blue line is the mean, and the purple line is exponential smoothing with the formulas you've just seen, we've met all the requirements.
Now we just have to do the same procedure to discover the length->alpha formula but for double exponential smoothing, which models trends as well, not just level as in single exponential smoothing. For this comparison, we need to use linear regression and quantile regression instead of the mean and median.
Quantile regression has no closed-form solution, and you can't really implement the required iterative solver in Pine Script, but that's ok, so I made the tests using Python & sklearn:
paste.pics
^^ on this screenshot you can see the same relationship as on the previous one, but now between the responses of quantile regression & linear regression.
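The test itself is roughly of this shape (a sketch of the idea, not the exact script): refit both regressions on an expanding window and compare how fast their current-bar fits respond.
```python
import numpy as np
from sklearn.linear_model import LinearRegression, QuantileRegressor

t = np.arange(400).reshape(-1, 1)
y = np.sin(2 * np.pi * t.ravel() / 100.0) + 0.002 * t.ravel()  # sine + drift

lin_fit, q_fit = [], []
for i in range(20, len(t)):
    X, Y = t[: i + 1], y[: i + 1]   # expanding window up to the current bar
    lin_fit.append(LinearRegression().fit(X, Y).predict(t[i : i + 1])[0])
    q_fit.append(QuantileRegressor(quantile=0.5, alpha=0.0).fit(X, Y).predict(t[i : i + 1])[0])
# q_fit (median regression) responds and converges faster than lin_fit,
# mirroring the median-vs-mean relationship from the previous test.
```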
I followed the same logic as before when designing alpha for double exponential smoothing (I also accounted for the initial overshoots, but that's a minor detail), and ended up with this formula:
alpha = sqrt(1 / length) / 2
which simplifies to:
alpha = 1 / sqrt(length * 4)
Btw, given the pattern in the resulting formulas for single and double exponential smoothing, if you ever want to do triple (not Holt & Winters) exponential smoothing, you'll need length * 2, and just length * 1 for quadruple exponential smoothing. I hope that, based on this sequence, you see the hint that maybe four rounds is enough.
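Since only a single alpha appears above, here's a sketch assuming Brown's single-parameter form of double exponential smoothing (if the actual script uses a different form, treat this as an approximation):
```python
import numpy as np

def brown_des_expanding(x):
    """Brown's double exponential smoothing with the expanding-window
    alpha = 1/sqrt(length * 4); a sketch assuming the single-parameter form."""
    out = np.empty(len(x))
    s1 = s2 = out[0] = x[0]
    for i in range(1, len(x)):
        a = 1.0 / np.sqrt(4.0 * (i + 1))
        s1 = a * x[i] + (1 - a) * s1   # first pass: level
        s2 = a * s1 + (1 - a) * s2     # second pass: captures trend
        out[i] = 2 * s1 - s2           # Brown's level estimate
    return out
```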
Now that we've dealt with the length->alpha formula, we can deal with the adaptivity part.
Logically, it doesn't make sense to use a slower-than-O(1) method to generate input for an O(1) method, so it must be something universal and minimalistic: something that helps us measure consistency in our data, yet something far from statistics and close to topology.
There's one perfect entity that can help us: fractal efficiency. The way I define fractal efficiency can be checked at the very beginning of the post; what matters is that I add a square root to the formula that is not typically added.
As explained in the description of my metric QSFS, one reason for sqrt-transforming fractal efficiency values in moving-window mode is that they start to closely resemble a normal distribution, yet with support on (0, 1). Data with this interesting property (normally distributed yet with finite support) can be modeled with the beta distribution.
Another reason: in infinitely expanding window mode, the fractal efficiency of every time series that exhibits randomness tends to approach zero, and the sqrt transform partially neutralizes this effect.
Yet another reason: the square root may better reflect the dimensional inefficiency, or degree of fractal complexity, since it balances the influence of extreme deviations from the net paths.
And finally, fractals exhibit power-law scaling -> measures like length, area, or volume scale non-linearly. Adding a square root acknowledges this intrinsic property while connecting our metric with the nature of fractals.
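For concreteness, here's a minimal moving-window sketch assuming the common Kaufman-style ratio |net move| / path length (the exact definition is the one given at the beginning of the post, which may differ in details):
```python
import numpy as np

def sqrt_fractal_efficiency(x, length):
    """Sqrt-transformed fractal efficiency over a moving window; a sketch
    assuming the Kaufman-style ratio, plus the sqrt transform discussed above."""
    net = np.abs(x[length:] - x[:-length])                    # net displacement
    steps = np.abs(np.diff(x))
    path = np.convolve(steps, np.ones(length), mode="valid")  # total path length
    fe = net / np.where(path == 0, np.nan, path)              # plain efficiency in (0, 1)
    return np.sqrt(fe)
```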
---
I suspect that, given the analogies and connections with other topics in geometry, topology, and fractals, and most importantly the positive test results of the metric, the sqrt transform might be a fundamental part of fractal efficiency that should be applied by default.
Now the last part of the ballet is to convert our fractal efficiency into a length value. The inverse-proportionality part is obvious: high fractal efficiency aka high consistency -> lower window size, so that we utilize only the most recent data, which carries brand-new information and seems highly reliable since we have consistency in the first place.
The non-obvious part is that we now need to neutralize a side effect of the earlier sqrt transform: our length values come out too low, and exponentiation is the perfect candidate to fix that, since translating fractal efficiency into window sizes requires something non-linear to reflect the fractal dynamics. More importantly, that exp() in the len formula was the last piece that made it all work, both on synthetic and on real data; every other transformation & formula I tried produced weird results on certain data.
^^ a standalone script calculating optimal dynamic window size
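For illustration only, one plausible shape of that FE->length mapping (the scale constant and the floor below are assumptions, not the script's actual formula):
```python
import numpy as np

def dynamic_length(fe_sqrt, scale=4.0, min_len=2):
    """Hypothetical FE -> window-size mapping: inverse proportionality
    (high efficiency -> short window) with exp() undoing the sqrt
    compression; `scale` and `min_len` are illustrative assumptions."""
    fe = np.nan_to_num(fe_sqrt, nan=0.0)   # treat undefined efficiency as zero consistency
    return np.maximum(min_len,
                      np.round(np.exp(scale * (1.0 - fe)))).astype(int)
```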
Omg, THAT took time to write. Comment and/or text me if you need anything.
...
"Versace Pip-Boy, I'm a young gun coming up with no bankroll" 👻
∞
Fourier Extrapolation of PriceThis advanced algorithm leverages Fourier analysis to predict price trends by decomposing historical price data into its frequency components. Unlike traditional algorithms that often operate in lower-dimensional spaces, this method harnesses a multidimensional approach to capture intricate market behaviors. By utilizing additional dimensions, the algorithm identifies and extrapolates subtle patterns and oscillations that are typically overlooked, providing a more robust and nuanced forecast.
Ideal for traders seeking a deeper understanding of market dynamics, this tool offers an enhanced predictive capability by aligning its calculations with the complexity of real-world financial systems.
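The published description doesn't include code, but the classic Fourier-extrapolation recipe it alludes to looks roughly like this (a sketch; `n_harmonics` is an illustrative parameter, not necessarily the script's own):
```python
import numpy as np

def fourier_extrapolate(x, n_future, n_harmonics=10):
    """Detrend, FFT, keep the dominant harmonics, extend them n_future bars."""
    n = len(x)
    t = np.arange(n)
    slope, intercept = np.polyfit(t, x, 1)       # linear detrend
    resid = x - (slope * t + intercept)
    freqs = np.fft.fftfreq(n)
    spec = np.fft.fft(resid)
    idx = np.argsort(np.abs(spec))[::-1]         # dominant harmonics by amplitude
    t_ext = np.arange(n + n_future)
    recon = np.zeros(len(t_ext))
    for k in idx[: 2 * n_harmonics]:             # conjugate pairs come together
        amp, phase = np.abs(spec[k]) / n, np.angle(spec[k])
        recon += amp * np.cos(2 * np.pi * freqs[k] * t_ext + phase)
    return recon + slope * t_ext + intercept     # re-add the trend
```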
Volume Based Price Prediction [EdgeTerminal]This indicator combines price action, volume analysis, and trend prediction to forecast potential future price movements. The indicator creates a dynamic prediction zone with confidence bands, helping you visualize possible price trajectories based on current market conditions.
Key Features
Dynamic price prediction based on volume-weighted trend analysis
Confidence bands showing potential price ranges
Volume-based candle coloring for enhanced market insight
VWAP and Moving Average overlay
Customizable prediction parameters
Real-time updates with each new bar
Technical Components:
Volume-Price Correlation: analyzes the relationship between price movements and volume, identifies stronger trends through volume confirmation, and uses the Volume-Weighted Average Price (VWAP) as a price equilibrium
Trend Strength Analysis: calculates trend direction using exponential moving averages, weights trend strength by relative volume, and incorporates momentum for improved accuracy
Prediction Algorithm: combines current price, trend, and volume metrics, projects future price levels using weighted factors, and generates confidence bands based on price volatility (see the sketch after the parameter list below)
Customizable Parameters:
Moving Average Length: Controls the smoothing period for calculations
Volume Weight Factor: Adjusts how much volume influences predictions
Prediction Periods: Number of bars to project into the future
Confidence Band Width: Controls the width of prediction bands
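A toy sketch tying these parameters together (an illustration of the described logic, not the indicator's source code; all names are assumptions):
```python
import numpy as np

def predict_with_bands(close, volume, ma_len=20, vol_weight=1.0,
                       periods=10, band_mult=2.0):
    """Trend from an EMA slope, scaled by relative volume, projected
    forward with volatility-based confidence bands."""
    alpha = 2.0 / (ma_len + 1)
    ema = close[0]
    for c in close[1:]:
        ema = alpha * c + (1 - alpha) * ema
    trend = (close[-1] - ema) / ma_len                  # per-bar drift estimate
    rel_vol = volume[-ma_len:].mean() / volume.mean()   # volume confirmation
    drift = trend * (1 + vol_weight * (rel_vol - 1))
    sigma = np.std(np.diff(close[-ma_len:]))            # recent volatility
    steps = np.arange(1, periods + 1)
    center = close[-1] + drift * steps
    half = band_mult * sigma * np.sqrt(steps)           # widening bands
    return center, center - half, center + half
```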
How to use it:
Look for strong volume confirmation with green candles
Watch for prediction-line slope changes
Use confidence bands to gauge potential volatility
Compare predictions with key support/resistance levels
Some useful tips:
Start with default settings and adjust gradually
Use wider confidence bands in volatile markets
Consider prediction lines as zones rather than exact levels
Best applications of this indicator:
Trend continuation probability assessment
Potential reversal point identification
Risk management through confidence bands
Volume-based trend confirmation
MACD Cloud with Moving Average and ATR BandsThe algorithm implements a technical analysis indicator that combines the MACD Cloud, Moving Averages (MA), and volatility bands (ATR) to provide signals on market trends and potential reversal points. It is divided into several sections:
🎨 Color Bars:
Activated based on user input.
Controls bar color display according to price relative to ATR levels and moving average (MA).
Logic:
⚫ Black: Potential bearish reversal (price above the upper ATR band).
🔵 Blue: Potential bullish reversal (price below the lower ATR band).
🟢 Green: Bullish trend (price between the MA and upper ATR band).
🔴 Red: Bearish trend (price between the lower ATR band and MA).
📊 MACD Bars:
Description:
The MACD Bars section is activated by default and can be modified based on user input.
🔴 Red: Indicates a bearish trend, shown when the MACD line is below the Signal line (Signal line is a moving average of MACD).
🔵 Blue: Indicates a bullish trend, shown when the MACD line is above the Signal line.
Matching colors between MACD Bars and MACD Cloud visually confirms trend direction.
MACD Cloud Logic: The MACD Cloud is based on Moving Average Convergence Divergence (MACD), a momentum indicator showing the relationship between two moving averages of price.
MACD and Signal Lines: The cloud visualizes the MACD line relative to the Signal line. If the MACD line is above the Signal line, it indicates a potential bullish trend, while below it suggests a potential bearish trend.
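For reference, the standard MACD construction behind the cloud looks like this (a sketch; the indicator's exact lengths and inputs may differ):
```python
import numpy as np

def ema(x, length):
    a = 2.0 / (length + 1)
    out = np.empty(len(x))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = a * x[i] + (1 - a) * out[i - 1]
    return out

def macd_cloud(close, fast=12, slow=26, signal_len=9):
    """MACD line, Signal line, and the bullish/bearish cloud state."""
    macd = ema(close, fast) - ema(close, slow)
    signal = ema(macd, signal_len)
    return macd, signal, macd > signal  # True where the cloud is bullish
```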
☁️ MA Cloud:
The MA Cloud uses three moving averages to analyze price direction:
Moving Average Relationship: Three MAs of different periods are plotted. The cloud turns green when the shorter MA is above the longer MA, indicating an uptrend, and red when below, suggesting a downtrend.
Trend Visualization: This graphical representation shows the trend direction.
📉 ATR Bands:
The ATR bands calculate overbought and oversold limits using a weighted moving average (WMA) and ATR.
Center (matr): Shows general trend; prices above suggest an uptrend, while below indicate a downtrend.
Up ATR 1: Marks the first overbought level, suggesting a potential bearish reversal if the price moves above this band.
Down ATR 1: Marks the first oversold level, suggesting a possible bullish reversal if the price moves below this band.
Up ATR 2: Extends the overbought range to an extreme, reinforcing the possibility of a bearish reversal at this level.
Down ATR 2: Extends the oversold range to an extreme, indicating a stronger bullish reversal possibility if price reaches here.
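A sketch of the band construction described above (the WMA length and the ATR multipliers here are illustrative assumptions):
```python
import numpy as np

def atr_bands(high, low, close, length=14, mult1=1.5, mult2=3.0):
    """Center = weighted moving average (WMA); bands = center +/- ATR multiples."""
    prev_close = np.concatenate(([close[0]], close[:-1]))
    tr = np.maximum(high - low,
                    np.maximum(np.abs(high - prev_close),
                               np.abs(low - prev_close)))         # true range
    w = np.arange(1, length + 1, dtype=float)
    matr = np.convolve(close, w[::-1] / w.sum(), mode="valid")    # WMA center
    atr = np.convolve(tr, np.ones(length) / length, mode="valid")
    return (matr,
            matr + mult1 * atr, matr - mult1 * atr,   # Up/Down ATR 1
            matr + mult2 * atr, matr - mult2 * atr)   # Up/Down ATR 2
```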