Stability Max Overload
Stability Max Overload was created in another script I have been working on, found below.
I have broken the code down to only display the Stability features.
What this is:
I was trying to find a way that could in some form display the Stability or Instability of the US Treasuries Bond Market. To try and help me do that, I came up with 3 values.
*Stability
*Stability Overload
*Stability Max Overload.
I started with STABILITY. This value is generated from the number of side-by-side inversions in the Bond Market. I wanted this value to range between 0 and 1, with 1 meaning all Bonds are inverted, 0 meaning no Bonds are inverted, and any number of inversions in between producing a proportional value based on the actual count.
STABILITY OVERLOAD was created based on the average of the inversion amounts.
STABILITY MAX OVERLOAD was then created based on the total of the inversion amounts.
The most stable Yield Curve would have no inversions and would therefore generate a 0 for Stability, Stability Overload and Stability Max Overload. The more inversions the Yield Curve has, the higher Stability itself reads, as Stability is weighted more per inversion. With each inversion, data is also taken from the amount by which the Yields are inverted.
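To make the three values concrete, here is a minimal Pine sketch of the idea as I read it; the yield symbols, the number of maturities, and the exact weighting are assumptions, since the full script tracks more of the curve:

//@version=5
indicator("Stability sketch")
// Hypothetical yield feeds, ordered from shortest to longest maturity; the original script's symbol list may differ.
y02 = request.security("TVC:US02Y", timeframe.period, close)
y05 = request.security("TVC:US05Y", timeframe.period, close)
y10 = request.security("TVC:US10Y", timeframe.period, close)
y30 = request.security("TVC:US30Y", timeframe.period, close)
yields = array.from(y02, y05, y10, y30)
float invCount = 0.0   // number of side-by-side inversions
float invTotal = 0.0   // sum of all side-by-side inversion amounts
for i = 0 to array.size(yields) - 2
    diff = array.get(yields, i) - array.get(yields, i + 1)
    if diff > 0        // a shorter maturity yielding more than the next one = inversion
        invCount += 1
        invTotal += diff
pairs = array.size(yields) - 1
stability            = invCount / pairs                          // 0 = no inversions, 1 = all adjacent pairs inverted
stabilityOverload    = invCount > 0 ? invTotal / invCount : 0.0  // average inversion amount
stabilityMaxOverload = invTotal                                  // total inversion amount
plot(stability, "Stability")
plot(stabilityOverload, "Stability Overload")
plot(stabilityMaxOverload, "Stability Max Overload")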
This display shows where we currently stand since Dec 2018. It's a telling story, to say the least. I do plan on continuing the script mentioned above, but again wanted to release a standalone of the data generated.
Hope you enjoy,
OpptionsOnly
ATR Rope
ATR Rope is inspired by DonovanWall's "Range Filter". It implements a similar concept of filtering out smaller market movements and adjusting only for larger moves. In addition, this indicator goes one step deeper by producing actionable zones to determine market state. (Trend vs. Consolidation)
> Background
When reading up on the Range Filter indicator, it reminded me exactly of a rope stabilization drawing tool in a program I use frequently. Rope stabilization essentially attaches a fixed-length "rope" between your cursor and an anchor point (the brush). As you move your cursor, you are pulling the brush behind it. The cursor (of course) will not pull the brush until the rope is fully extended; this behavior filters out jittery movements and is used to produce smoother drawing curves.
If compared visually side-by-side, you will notice that this indicator bears striking resemblance to its inspiration.
> Goal
Other than simply distinguishing price movements between meaningful moves and noise, this indicator strives to create a rigid structure to frame market movements and the lack thereof, such as when to anticipate trend and when to suspect consolidation.
Since the indicator works based on an ATR range, the resulting ATR Channel does well to get reactions from price at its extremes. Naturally, when consolidating, price will remain within the channel, neither pushing the channel significantly up nor down. Likewise, when trending, price will continue to push the channel in a single direction.
With the goal of keeping it quick and simple, this indicator does not do any smoothing of data feeds and is simply based on the deviation of price from the central rope, adjusting the rope when price extends past the threshold created by +/- ATR from the rope.
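As a rough illustration of that adjustment rule (not the published code; the input names and the simplified color logic here are assumptions), a minimal sketch could look like this:

//@version=5
indicator("ATR Rope sketch", overlay = true)
atrLen  = input.int(14, "ATR Length")
atrMult = input.float(1.0, "ATR Multiplier")
band = ta.atr(atrLen) * atrMult
var float rope = close
// Pull the rope only when the source moves more than one band away from it.
if close > rope + band
    rope := close - band
else if close < rope - band
    rope := close + band
// Green when the rope moved up, red when it moved down, blue when it did not move (price stayed inside the band).
ropeColor = rope > rope[1] ? color.green : rope < rope[1] ? color.red : color.blue
plot(rope, "ATR Rope", ropeColor, 2)
plot(rope + band, "Upper channel", color.new(color.gray, 60))
plot(rope - band, "Lower channel", color.new(color.gray, 60))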
> Features & Behaviors
- ATR Rope
ATR Rope is displayed as a 3 color single line.
This can be considered the center line, or the directional line, whichever you'd prefer.
The main point of the Rope display is to indicate direction; however, it is also the factual center of the current working range.
- ATR Rope Color
When the rope's value moves up, it changes to green (uptrend), when down, red (downtrend).
When the source crosses the rope, it turns blue (flat).
With these simple rules, we've formed a structure to view market movements.
- Consolidation Zones
Consolidation Zones generate from "Flat" areas and extend into subsequent trend areas. Consolidation is simply an area where price has crossed the Rope and remains inside the range. Over these periods, the upper and lower values are accumulated and averaged together to form the "Consolidation Zone" values. These zones are drawn live, so values are averaged as the flat areas progress and don't repaint; all values seen historically are as they would have appeared live.
- ATR Channel
ATR Channel displays the upper and lower bounds of the working range.
When the source moves beyond this range, the rope is adjusted based on the distance from the source to the channel. This range can be extremely useful to view, but by default it is hidden.
> Application
This indicator is not created to provide signals, or serve as a "complete" system.
(People who didn't read this far will still comment for signals. :) )
This is created to be used alongside manual interpretation and intuition. This indicator is not meant to constrain any users into a box, and I would actually encourage an open mind and idea generation, as the application of this indicator can take various forms.
> Examples
As you probably already know, price movement can come as fast impulses or as slow bleeds. In the screenshot below, we are using movements from and to consolidation zones to classify weak trend and strong trend. As you can see, there are also areas of consolidation which get broken out of and confirmed for the larger moves.
Author's Note: In each of these examples, I have outlined the start and end of each session. These examples come from 1 Min Future charts, and have specifically been framed with day trading in mind.
"Breakout Retest" or "Support/Resistance Flips" or "Structure Retests" are all generally the same thing, with different traders referring to them by different names, all of which can be seen throughout these examples.
In the next example, we have a day which started with an early reversal leading into a long, slow trend. Notice how each area throughout the trend essentially moves slightly higher, then consolidates while holding the support of the previous zone. This day had a few sharp movements; however, there was a large amount of neutrality throughout this day, with continuous higher lows.
In contrast to the previous example, next up, we have a very choppy day. Throughout which we see a significant amount of retests before fast directional movements. We also see a few examples of places where previous zones remained relevant into the future. While the zones only display into the resulting trend area, they do not become immediately meaningless once they stop drawing.
> Abstract
In the screenshot below, I have stacked 2 of these indicators, using the high as the source for one and the low as the source for the other. I've hidden the lines of the high and low channels to create a 4-line channel based on the wicks of price.
This is not necessary to use the indicator, but should help provide an idea of creative ways the simple indicator could be used to produce more complicated analysis.
If you've made it this far, I would hope it's clear to you how this indicator could provide value to your trading.
Thank you to DonovanWall for the inspiration.
Enjoy!
Triple Stochastic
Triple Stochastic Elasticity Indicator
This custom indicator leverages the power of multi-timeframe analysis by combining three Stochastic Oscillators across different timeframes to identify potential trade entries based on elasticity and divergence between momentum curves.
📊 How It Works:
The indicator plots Stochastic values from three timeframes (e.g., 5m, 15m, and 1h), allowing you to observe how momentum behaves at different scales.
It highlights moments of elasticity—where the Stochastics stretch apart and then begin to converge—potentially signaling a reversion opportunity or trend continuation.
By identifying these stretches and snapbacks in momentum alignment, you can better time your entries and exits with improved confidence.
🔍 Use Case:
Look for divergence or convergence between the Stochastics.
Ideal for trend-following entries, pullback setups, and momentum reversal spotting.
Works best when combined with price action, S/R zones, or volume confirmation.
🛠 Customization:
Timeframes for each Stochastic are fully customizable.
Options to tweak %K, %D, and smoothing values to fit your strategy.
I recommend removing the %D and setting the following values:
5 : 3 : 3
14 : 3 : 3
56 : 12 : 12
Visual alerts can be added for when certain conditions are met (e.g., all three Stochs cross overbought/oversold levels).
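For reference, a minimal Pine sketch of the multi-timeframe %K plotting, assuming the three recommended settings map to the 5m, 15m, and 1h feeds in that order (the published script exposes all of this as inputs):

//@version=5
indicator("Triple Stochastic sketch")
k5m  = request.security(syminfo.tickerid, "5",  ta.sma(ta.stoch(close, high, low, 5),  3))
k15m = request.security(syminfo.tickerid, "15", ta.sma(ta.stoch(close, high, low, 14), 3))
k1h  = request.security(syminfo.tickerid, "60", ta.sma(ta.stoch(close, high, low, 56), 12))
plot(k5m,  "%K 5m",  color.aqua)
plot(k15m, "%K 15m", color.orange)
plot(k1h,  "%K 1h",  color.purple)
hline(80, "Overbought")
hline(20, "Oversold")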
Bitcoin Polynomial Regression Model
This is the main version of the script. Click here for the Oscillator part of the script.
💡Why this model was created:
One of the key issues with most existing models, including our own Bitcoin Log Growth Curve Model, is that they often fail to realistically account for diminishing returns. As a result, they may present overly optimistic bull cycle targets (hence, we introduced alternative settings in our previous Bitcoin Log Growth Curve Model).
This new model, however, has been built from the ground up with a primary focus on incorporating the principle of diminishing returns. It directly responds to this concept, which has been briefly explored here.
📉The theory of diminishing returns:
This theory suggests that as each four-year market cycle unfolds, volatility gradually decreases, leading to more tempered price movements. It also implies that the price increase from one cycle peak to the next will decrease over time as the asset matures. The same pattern applies to cycle lows and the relationship between tops and bottoms. In essence, these price movements are interconnected and should generally follow a consistent pattern. We believe this model provides a more realistic outlook on bull and bear market cycles.
To better understand this theory, the relationships between cycle tops and bottoms are outlined below: https://www.tradingview.com/x/7Hldzsf2/
🔧Creation of the model:
For those interested in how this model was created, the process is explained here. Otherwise, feel free to skip this section.
This model is based on two separate cubic polynomial regression lines. One for the top price trend and another for the bottom. Both follow the general cubic polynomial function:
ax^3 + bx^2 + cx + d.
In this equation, x represents the weekly bar index minus an offset, while a, b, c, and d are determined through polynomial regression analysis. The input (x, y) values used for the polynomial regression analysis are as follows:
Top regression line (x, y) values:
113, 18.6
240, 1004
451, 19128
655, 65502
Bottom regression line (x, y) values:
103, 2.5
267, 211
471, 3193
676, 16255
The values above correspond to historical Bitcoin cycle tops and bottoms, where x is the weekly bar index and y is the weekly closing price of Bitcoin. The best fit is determined using metrics such as R-squared values, residual error analysis, and visual inspection. While the exact details of this evaluation are beyond the scope of this post, the following optimal parameters were found:
Top regression line parameter values:
a: 0.000202798
b: 0.0872922
c: -30.88805
d: 1827.14113
Bottom regression line parameter values:
a: 0.000138314
b: -0.0768236
c: 13.90555
d: -765.8892
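Plugging those coefficients back into the cubic gives the two curves directly. A minimal Pine sketch, assuming a weekly BTC chart and treating the bar-index offset as a user input since it is not stated above:

//@version=5
indicator("BTC polynomial regression sketch", overlay = true)
offset = input.int(0, "Weekly bar index offset")  // the offset used by the authors is an assumption here
x = bar_index - offset
topLine = 0.000202798 * math.pow(x, 3) + 0.0872922 * math.pow(x, 2) - 30.88805 * x + 1827.14113
botLine = 0.000138314 * math.pow(x, 3) - 0.0768236 * math.pow(x, 2) + 13.90555 * x - 765.8892
plot(topLine, "Top regression line", color.red)
plot(botLine, "Bottom regression line", color.green)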
📊Polynomial Regression Oscillator:
This publication also includes the oscillator version of this model, which is displayed at the bottom of the screen. The oscillator applies a logarithmic transformation to the price and the regression lines using the formula log10(x).
The log-transformed price is then normalized using min-max normalization relative to the log-transformed top and bottom regression line with the formula:
normalized price = (log(close) - log(bottom regression line)) / (log(top regression line) - log(bottom regression line))
This transformation results in a value between 0 and 1 whenever price sits between the two regression lines. The Oscillator version can be found here.
🔍Interpretation of the Model:
In general, the red area represents a caution zone, as historically, the price has often been near its cycle market top within this range. On the other hand, the green area is considered an area of opportunity, as historically, it has corresponded to the market bottom.
The top regression line serves as a signal for the absolute market cycle peak, while the bottom regression line indicates the absolute market cycle bottom.
Additionally, this model provides a predicted range for Bitcoin's future price movements, which can be used to make extrapolated predictions. We will explore this further below.
🔮Future Predictions:
Finally, let's discuss what this model actually predicts for the potential upcoming market cycle top and the corresponding market cycle bottom. In our previous post here, a cycle interval analysis was performed to predict a likely time window for the next cycle top and bottom:
In the image, it is predicted that the next top-to-top cycle interval will be 208 weeks, which translates to November 3rd, 2025. It is also predicted that the bottom-to-top cycle interval will be 152 weeks, which corresponds to October 13th, 2025. On the macro level, these two dates align quite well. For our prediction, we take the average of these two dates: October 24th 2025. This will be our target date for the bull cycle top.
Now, let's do the same for the upcoming cycle bottom. The bottom-to-bottom cycle interval is predicted to be 205 weeks, which translates to October 19th, 2026, and the top-to-bottom cycle interval is predicted to be 259 weeks, which corresponds to October 26th, 2026. We then take the average of these two dates, predicting a bear cycle bottom date target of October 19th, 2026.
Now that we have our predicted top and bottom cycle date targets, we can simply reference these two dates to our model, giving us the Bitcoin top price prediction in the range of 152,000 in Q4 2025 and a subsequent bottom price prediction in the range of 46,500 in Q4 2026.
For those interested in understanding what this specifically means for the predicted diminishing return top and bottom cycle values, the image below displays these predicted values. The new values are highlighted in yellow:
And of course, keep in mind that these targets are just rough estimates. While we've done our best to estimate these targets through a data-driven approach, markets will always remain unpredictable in nature. What are your targets? Feel free to share them in the comment section below.
Bitcoin Polynomial Regression Oscillator
This is the oscillator version of the script. Click here for the other part of the script.
💡Why this model was created:
One of the key issues with most existing models, including our own Bitcoin Log Growth Curve Model, is that they often fail to realistically account for diminishing returns. As a result, they may present overly optimistic bull cycle targets (hence, we introduced alternative settings in our previous Bitcoin Log Growth Curve Model).
This new model, however, has been built from the ground up with a primary focus on incorporating the principle of diminishing returns. It directly responds to this concept, which has been briefly explored here.
📉The theory of diminishing returns:
This theory suggests that as each four-year market cycle unfolds, volatility gradually decreases, leading to more tempered price movements. It also implies that the price increase from one cycle peak to the next will decrease over time as the asset matures. The same pattern applies to cycle lows and the relationship between tops and bottoms. In essence, these price movements are interconnected and should generally follow a consistent pattern. We believe this model provides a more realistic outlook on bull and bear market cycles.
To better understand this theory, the relationships between cycle tops and bottoms are outlined below: https://www.tradingview.com/x/7Hldzsf2/
🔧Creation of the model:
For those interested in how this model was created, the process is explained here. Otherwise, feel free to skip this section.
This model is based on two separate cubic polynomial regression lines. One for the top price trend and another for the bottom. Both follow the general cubic polynomial function:
ax^3 + bx^2 + cx + d.
In this equation, x represents the weekly bar index minus an offset, while a, b, c, and d are determined through polynomial regression analysis. The input (x, y) values used for the polynomial regression analysis are as follows:
Top regression line (x, y) values:
113, 18.6
240, 1004
451, 19128
655, 65502
Bottom regression line (x, y) values:
103, 2.5
267, 211
471, 3193
676, 16255
The values above correspond to historical Bitcoin cycle tops and bottoms, where x is the weekly bar index and y is the weekly closing price of Bitcoin. The best fit is determined using metrics such as R-squared values, residual error analysis, and visual inspection. While the exact details of this evaluation are beyond the scope of this post, the following optimal parameters were found:
Top regression line parameter values:
a: 0.000202798
b: 0.0872922
c: -30.88805
d: 1827.14113
Bottom regression line parameter values:
a: 0.000138314
b: -0.0768236
c: 13.90555
d: -765.8892
📊Polynomial Regression Oscillator:
This publication also includes the oscillator version of this model, which is displayed at the bottom of the screen. The oscillator applies a logarithmic transformation to the price and the regression lines using the formula log10(x).
The log-transformed price is then normalized using min-max normalization relative to the log-transformed top and bottom regression line with the formula:
normalized price = (log(close) - log(bottom regression line)) / (log(top regression line) - log(bottom regression line))
This transformation results in a value between 0 and 1 whenever price sits between the two regression lines.
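A minimal Pine sketch of that normalization, with the same assumptions about the bar-index offset as in the main version's description; the result is only meaningful once both regression lines are positive, since log10 of a negative value is na:

//@version=5
indicator("Polynomial regression oscillator sketch")
offset = input.int(0, "Weekly bar index offset")
x = bar_index - offset
topLine = 0.000202798 * math.pow(x, 3) + 0.0872922 * math.pow(x, 2) - 30.88805 * x + 1827.14113
botLine = 0.000138314 * math.pow(x, 3) - 0.0768236 * math.pow(x, 2) + 13.90555 * x - 765.8892
normPrice = (math.log10(close) - math.log10(botLine)) / (math.log10(topLine) - math.log10(botLine))
plot(normPrice, "Normalized price", color.blue)
hline(1.0, "Top regression line")
hline(0.0, "Bottom regression line")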
🔍Interpretation of the Model:
In general, the red area represents a caution zone, as historically, the price has often been near its cycle market top within this range. On the other hand, the green area is considered an area of opportunity, as historically, it has corresponded to the market bottom.
The top regression line serves as a signal for the absolute market cycle peak, while the bottom regression line indicates the absolute market cycle bottom.
Additionally, this model provides a predicted range for Bitcoin's future price movements, which can be used to make extrapolated predictions. We will explore this further below.
🔮Future Predictions:
Finally, let's discuss what this model actually predicts for the potential upcoming market cycle top and the corresponding market cycle bottom. In our previous post here, a cycle interval analysis was performed to predict a likely time window for the next cycle top and bottom:
In the image, it is predicted that the next top-to-top cycle interval will be 208 weeks, which translates to November 3rd, 2025. It is also predicted that the bottom-to-top cycle interval will be 152 weeks, which corresponds to October 13th, 2025. On the macro level, these two dates align quite well. For our prediction, we take the average of these two dates: October 24th 2025. This will be our target date for the bull cycle top.
Now, let's do the same for the upcoming cycle bottom. The bottom-to-bottom cycle interval is predicted to be 205 weeks, which translates to October 19th, 2026, and the top-to-bottom cycle interval is predicted to be 259 weeks, which corresponds to October 26th, 2026. We then take the average of these two dates, predicting a bear cycle bottom date target of October 19th, 2026.
Now that we have our predicted top and bottom cycle date targets, we can simply reference these two dates to our model, giving us the Bitcoin top price prediction in the range of 152,000 in Q4 2025 and a subsequent bottom price prediction in the range of 46,500 in Q4 2026.
For those interested in understanding what this specifically means for the predicted diminishing return top and bottom cycle values, the image below displays these predicted values. The new values are highlighted in yellow:
And of course, keep in mind that these targets are just rough estimates. While we've done our best to estimate these targets through a data-driven approach, markets will always remain unpredictable in nature. What are your targets? Feel free to share them in the comment section below.
DeepSignalFilterHelpers
Library "DeepSignalFilterHelpers"
filter_intraday_intensity(useIiiFilter)
Parameters:
useIiiFilter (bool)
filter_vwma(src, length, useVwmaFilter)
Parameters:
src (float)
length (int)
useVwmaFilter (bool)
filter_nvi(useNviFilter)
Parameters:
useNviFilter (bool)
filter_emv(length, emvThreshold, useEmvFilter, useMovingAvg)
EMV filter for filtering signals based on Ease of Movement
Parameters:
length (int) : The length of the EMV calculation
emvThreshold (float) : The EMV threshold
useEmvFilter (bool) : Whether to apply the EMV filter
useMovingAvg (bool) : Whether to use moving average as threshold
Returns: Filtered result indicating whether the signal should be used
filter_adi(length, threshold, useAdiFilter, useMovingAvg)
ADI filter for filtering signals based on Accumulation/Distribution Index
Parameters:
length (int) : The length of the ADI moving average calculation
threshold (float) : The ADI threshold
useAdiFilter (bool) : Whether to apply the ADI filter
useMovingAvg (bool) : Whether to use moving average as threshold
Returns: Filtered result indicating whether the signal should be used
filter_mfi(length, mfiThreshold, useMfiFilter, useMovingAvg)
MFI filter for filtering signals based on Money Flow Index
Parameters:
length (int) : The length of the MFI calculation
mfiThreshold (float) : The MFI threshold
useMfiFilter (bool) : Whether to apply the MFI filter
useMovingAvg (bool) : Whether to use moving average as threshold
Returns: Filtered result indicating whether the signal should be used
detect_obv_states(obvThresholdStrong, obvThresholdModerate, lookbackPeriod, obvMode)
detect_obv_states: Identify OBV states with three levels (Strong, Moderate, Weak) over a configurable period
Parameters:
obvThresholdStrong (float) : Threshold for strong OBV movements
obvThresholdModerate (float) : Threshold for moderate OBV movements
lookbackPeriod (int) : Number of periods to analyze OBV trends
obvMode (string) : OBV mode to filter ("Strong", "Moderate", "Weak")
Returns: OBV state ("Strong Up", "Moderate Up", "Weak Up", "Positive Divergence", "Negative Divergence", "Consolidation", "Weak Down", "Moderate Down", "Strong Down")
filter_obv(src, length, obvMode, threshold, useObvFilter, useMovingAvg)
filter_obv: Filter signals based on OBV states
Parameters:
src (float) : The source series (default: close)
length (int) : The length of the OBV moving average calculation
obvMode (string) : OBV mode to filter ("Strong", "Moderate", "Weak")
threshold (float) : Optional threshold for additional filtering
useObvFilter (bool) : Whether to apply the OBV filter
useMovingAvg (bool) : Whether to use moving average as threshold
Returns: Filtered result indicating whether the signal should be used
filter_cmf(length, cmfThreshold, useCmfFilter, useMovingAvg)
CMF filter for filtering signals based on Chaikin Money Flow
Parameters:
length (int) : The length of the CMF calculation
cmfThreshold (float) : The CMF threshold
useCmfFilter (bool) : Whether to apply the CMF filter
useMovingAvg (bool) : Whether to use moving average as threshold
Returns: Filtered result indicating whether the signal should be used
filter_vwap(useVwapFilter)
VWAP filter for filtering signals based on Volume-Weighted Average Price
Parameters:
useVwapFilter (bool) : Whether to apply the VWAP filter
Returns: Filtered result indicating whether the signal should be used
filter_pvt(length, pvtThreshold, usePvtFilter, useMovingAvg)
PVT filter for filtering signals based on Price Volume Trend
Parameters:
length (int) : The length of the PVT moving average calculation
pvtThreshold (float) : The PVT threshold
usePvtFilter (bool) : Whether to apply the PVT filter
useMovingAvg (bool) : Whether to use moving average as threshold
Returns: Filtered result indicating whether the signal should be used
filter_vo(shortLength, longLength, voThreshold, useVoFilter, useMovingAvg)
VO filter for filtering signals based on Volume Oscillator
Parameters:
shortLength (int) : The length of the short-term volume moving average
longLength (int) : The length of the long-term volume moving average
voThreshold (float) : The Volume Oscillator threshold
useVoFilter (bool) : Whether to apply the VO filter
useMovingAvg (bool) : Whether to use moving average as threshold
Returns: Filtered result indicating whether the signal should be used
filter_cho(shortLength, longLength, choThreshold, useChoFilter, useMovingAvg)
CHO filter for filtering signals based on Chaikin Oscillator
Parameters:
shortLength (int) : The length of the short-term ADI moving average
longLength (int) : The length of the long-term ADI moving average
choThreshold (float) : The Chaikin Oscillator threshold
useChoFilter (bool) : Whether to apply the CHO filter
useMovingAvg (bool) : Whether to use moving average as threshold
Returns: Filtered result indicating whether the signal should be used
filter_fi(length, fiThreshold, useFiFilter, useMovingAvg)
FI filter for filtering signals based on Force Index
Parameters:
length (int) : The length of the FI calculation
fiThreshold (float) : The Force Index threshold
useFiFilter (bool) : Whether to apply the FI filter
useMovingAvg (bool) : Whether to use moving average as threshold
Returns: Filtered result indicating whether the signal should be used
filter_garman_klass_volatility(length, useGkFilter)
Parameters:
length (int)
useGkFilter (bool)
filter_frama(src, length, useFramaFilter)
Parameters:
src (float)
length (int)
useFramaFilter (bool)
filter_bollinger_bands(src, length, stdDev, useBollingerFilter)
Parameters:
src (float)
length (int)
stdDev (float)
useBollingerFilter (bool)
filter_keltner_channel(src, length, atrMult, useKeltnerFilter)
Parameters:
src (float)
length (simple int)
atrMult (float)
useKeltnerFilter (bool)
regime_filter(src, threshold, useRegimeFilter)
Regime filter for filtering signals based on trend strength
Parameters:
src (float) : The source series
threshold (float) : The threshold for the filter
useRegimeFilter (bool) : Whether to apply the regime filter
Returns: Filtered result indicating whether the signal should be used
regime_filter_v2(src, threshold, useRegimeFilter)
Regime filter for filtering signals based on trend strength
Parameters:
src (float) : The source series
threshold (float) : The threshold for the filter
useRegimeFilter (bool) : Whether to apply the regime filter
Returns: Filtered result indicating whether the signal should be used
filter_adx(src, length, adxThreshold, useAdxFilter)
ADX filter for filtering signals based on ADX strength
Parameters:
src (float) : The source series
length (simple int) : The length of the ADX calculation
adxThreshold (int) : The ADX threshold
useAdxFilter (bool) : Whether to apply the ADX filter
Returns: Filtered result indicating whether the signal should be used
filter_volatility(minLength, maxLength, useVolatilityFilter)
Volatility filter for filtering signals based on volatility
Parameters:
minLength (simple int) : The minimum length for ATR calculation
maxLength (simple int) : The maximum length for ATR calculation
useVolatilityFilter (bool) : Whether to apply the volatility filter
Returns: Filtered result indicating whether the signal should be used
filter_ulcer(src, length, ulcerThreshold, useUlcerFilter)
Ulcer Index filter for filtering signals based on Ulcer Index
Parameters:
src (float) : The source series
length (int) : The length of the Ulcer Index calculation
ulcerThreshold (float) : The Ulcer Index threshold (default: average Ulcer Index)
useUlcerFilter (bool) : Whether to apply the Ulcer Index filter
Returns: Filtered result indicating whether the signal should be used
filter_stddev(src, length, stdDevThreshold, useStdDevFilter)
Standard Deviation filter for filtering signals based on Standard Deviation
Parameters:
src (float) : The source series
length (int) : The length of the Standard Deviation calculation
stdDevThreshold (float) : The Standard Deviation threshold (default: average Standard Deviation)
useStdDevFilter (bool) : Whether to apply the Standard Deviation filter
Returns: Filtered result indicating whether the signal should be used
filter_macdv(src, shortLength, longLength, signalSmoothing, macdVThreshold, useMacdVFilter)
MACD-V filter for filtering signals based on MACD-V
Parameters:
src (float) : The source series
shortLength (simple int) : The short length for MACD calculation
longLength (simple int) : The long length for MACD calculation
signalSmoothing (simple int) : The signal smoothing length for MACD
macdVThreshold (float) : The MACD-V threshold (default: average MACD-V)
useMacdVFilter (bool) : Whether to apply the MACD-V filter
Returns: Filtered result indicating whether the signal should be used
filter_atr(length, atrThreshold, useAtrFilter)
ATR filter for filtering signals based on Average True Range (ATR)
Parameters:
length (simple int) : The length of the ATR calculation
atrThreshold (float) : The ATR threshold (default: average ATR)
useAtrFilter (bool) : Whether to apply the ATR filter
Returns: Filtered result indicating whether the signal should be used
filter_candle_body_and_atr(length, bodyThreshold, atrThreshold, useFilter)
Candle Body and ATR filter for filtering signals
Parameters:
length (simple int) : The length of the ATR calculation
bodyThreshold (float) : The threshold for candle body size (relative to ATR)
atrThreshold (float) : The ATR threshold (default: average ATR)
useFilter (bool) : Whether to apply the candle body and ATR filter
Returns: Filtered result indicating whether the signal should be used
filter_atrp(length, atrpThreshold, useAtrpFilter)
ATRP filter for filtering signals based on ATR Percentage (ATRP)
Parameters:
length (simple int) : The length of the ATR calculation
atrpThreshold (float) : The ATRP threshold (default: average ATRP)
useAtrpFilter (bool) : Whether to apply the ATRP filter
Returns: Filtered result indicating whether the signal should be used
filter_jma(src, length, phase, useJmaFilter)
Parameters:
src (float)
length (simple int)
phase (float)
useJmaFilter (bool)
filter_cidi(src, rsiLength, shortMaLength, longMaLength, useCidiFilter)
Parameters:
src (float)
rsiLength (simple int)
shortMaLength (int)
longMaLength (int)
useCidiFilter (bool)
filter_rsi(src, length, rsiThreshold, useRsiFilter)
Parameters:
src (float)
length (simple int)
rsiThreshold (float)
useRsiFilter (bool)
filter_ichimoku_oscillator(length, threshold, useFilter)
Ichimoku Oscillator filter for filtering signals based on Ichimoku Oscillator
Parameters:
length (int) : The length of the Ichimoku Oscillator calculation
threshold (float) : The threshold for the filter (default: average Ichimoku Oscillator)
useFilter (bool) : Whether to apply the filter
Returns: Filtered result indicating whether the signal should be used
filter_cmb_composite_index(src, shortLength, longLength, threshold, useFilter)
CMB Composite Index filter for filtering signals based on CMB Composite Index
Parameters:
src (float) : The source series
shortLength (simple int) : The short length for CMB calculation
longLength (simple int) : The long length for CMB calculation
threshold (float) : The threshold for the filter (default: average CMB Composite Index)
useFilter (bool) : Whether to apply the filter
Returns: Filtered result indicating whether the signal should be used
filter_connors_rsi(src, rsiLength, rocLength, streakLength, threshold, useFilter)
Connors RSI filter for filtering signals based on Connors RSI
Parameters:
src (float) : The source series
rsiLength (simple int) : The length for RSI calculation
rocLength (int) : The length for ROC calculation
streakLength (simple int) : The length for streak calculation
threshold (float) : The threshold for the filter (default: average Connors RSI)
useFilter (bool) : Whether to apply the filter
Returns: Filtered result indicating whether the signal should be used
filter_coppock_curve(src, roc1Length, roc2Length, wmaLength, threshold, useFilter)
Coppock Curve filter for filtering signals based on Coppock Curve
Parameters:
src (float) : The source series
roc1Length (int) : The length for the first ROC calculation
roc2Length (int) : The length for the second ROC calculation
wmaLength (int) : The length for the WMA calculation
threshold (float) : The threshold for the filter (default: average Coppock Curve)
useFilter (bool) : Whether to apply the filter
Returns: Filtered result indicating whether the signal should be used
filter_pmo(src, pmoLength, smoothingLength, threshold, useFilter)
DecisionPoint Price Momentum Oscillator filter for filtering signals based on PMO
Parameters:
src (float) : The source series
pmoLength (simple int) : The length for PMO calculation
smoothingLength (simple int) : The smoothing length for PMO
threshold (float) : The threshold for the filter (default: average PMO Oscillator)
useFilter (bool) : Whether to apply the filter
Returns: Filtered result indicating whether the signal should be used
filter_macd(src, shortLength, longLength, signalSmoothing, threshold, useFilter)
MACD filter for filtering signals based on MACD
Parameters:
src (float) : The source series
shortLength (simple int) : The short length for MACD calculation
longLength (simple int) : The long length for MACD calculation
signalSmoothing (simple int) : The signal smoothing length for MACD
threshold (float) : The threshold for the filter (default: average MACD)
useFilter (bool) : Whether to apply the filter
Returns: Filtered result indicating whether the signal should be used
filter_macd_histogram(src, shortLength, longLength, signalSmoothing, threshold, useFilter)
MACD-Histogram filter for filtering signals based on MACD-Histogram
Parameters:
src (float) : The source series
shortLength (simple int) : The short length for MACD calculation
longLength (simple int) : The long length for MACD calculation
signalSmoothing (simple int) : The signal smoothing length for MACD
threshold (float) : The threshold for the filter (default: average MACD-Histogram)
useFilter (bool) : Whether to apply the filter
Returns: Filtered result indicating whether the signal should be used
filter_kst(src, r1, r2, r3, r4, sm1, sm2, sm3, sm4, signalLength, threshold, useFilter)
Pring's Know Sure Thing filter for filtering signals based on KST
Parameters:
src (float) : The source series
r1 (int) : The first ROC length
r2 (int) : The second ROC length
r3 (int) : The third ROC length
r4 (int) : The fourth ROC length
sm1 (int) : The first smoothing length
sm2 (int) : The second smoothing length
sm3 (int) : The third smoothing length
sm4 (int) : The fourth smoothing length
signalLength (int) : The signal line smoothing length
threshold (float) : The threshold for the filter (default: average KST Oscillator)
useFilter (bool) : Whether to apply the filter
Returns: Filtered result indicating whether the signal should be used
filter_special_k(src, r1, r2, r3, r4, sm1, sm2, sm3, sm4, threshold, useFilter)
Pring's Special K filter for filtering signals based on Special K
Parameters:
src (float) : The source series
r1 (int) : The first ROC length
r2 (int) : The second ROC length
r3 (int) : The third ROC length
r4 (int) : The fourth ROC length
sm1 (int) : The first smoothing length
sm2 (int) : The second smoothing length
sm3 (int) : The third smoothing length
sm4 (int) : The fourth smoothing length
threshold (float) : The threshold for the filter (default: average Special K)
useFilter (bool) : Whether to apply the filter
Returns: Filtered result indicating whether the signal should be used
filter_roc_momentum(src, rocLength, momentumLength, threshold, useFilter)
ROC and Momentum filter for filtering signals based on ROC and Momentum
Parameters:
src (float) : The source series
rocLength (int) : The length for ROC calculation
momentumLength (int) : The length for Momentum calculation
threshold (float) : The threshold for the filter (default: average ROC and Momentum)
useFilter (bool) : Whether to apply the filter
Returns: Filtered result indicating whether the signal should be used
filter_rrg_relative_strength(src, length, threshold, useFilter)
RRG Relative Strength filter for filtering signals based on RRG Relative Strength
Parameters:
src (float) : The source series
length (int) : The length for RRG Relative Strength calculation
threshold (float) : The threshold for the filter (default: average RRG Relative Strength)
useFilter (bool) : Whether to apply the filter
Returns: Filtered result indicating whether the signal should be used
filter_alligator(useFilter)
Parameters:
useFilter (bool)
filter_wyckoff(useFilter)
Parameters:
useFilter (bool)
filter_squeeze_momentum(bbLength, bbStdDev, kcLength, kcMult, useFilter)
Parameters:
bbLength (int)
bbStdDev (float)
kcLength (simple int)
kcMult (float)
useFilter (bool)
filter_atr_compression(length, atrThreshold, useFilter)
Parameters:
length (simple int)
atrThreshold (float)
useFilter (bool)
filter_low_volume(length, useFilter)
Parameters:
length (int)
useFilter (bool)
filter_nvi_accumulation(useFilter)
Parameters:
useFilter (bool)
filter_ma_slope(src, length, slopeThreshold, useFilter)
Parameters:
src (float)
length (int)
slopeThreshold (float)
useFilter (bool)
filter_adx_low(len, lensig, adxThreshold, useFilter)
Parameters:
len (simple int)
lensig (simple int)
adxThreshold (int)
useFilter (bool)
filter_choppiness_index(length, chopThreshold, useFilter)
Parameters:
length (int)
chopThreshold (float)
useFilter (bool)
filter_range_detection(length, useFilter)
Parameters:
length (int)
useFilter (bool)
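The export list above stops at the parameter types, so here is a minimal usage sketch. The import path and version are placeholders to be replaced with the library's actual publisher name and version, and the filters are assumed to return booleans:

//@version=5
indicator("DeepSignalFilterHelpers usage sketch", overlay = true)
// Placeholder import; substitute the real publisher/version of the library.
import PublisherName/DeepSignalFilterHelpers/1 as dsf
// A raw example signal, gated by two of the filters (return type assumed to be bool).
rawLong = ta.crossover(ta.ema(close, 9), ta.ema(close, 21))
longOk  = rawLong and dsf.filter_rsi(close, 14, 50.0, true) and dsf.filter_adx(close, 14, 20, true)
plotshape(longOk, "Filtered long", shape.triangleup, location.belowbar, color.green)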
Vesica Piscis Visualization-Secret Geometry-AYNET
Explanation
Customization Options:
circle_radius: Adjust the size of the circles.
line_color: Choose the color of the circles.
line_width: Adjust the thickness of the circle lines.
segments: Increase or decrease the smoothness of the circles (higher values make smoother circles but use more computational resources).
Placement:
The first circle is centered at circle1_x and the second is offset horizontally by 2 * circle_radius to ensure their centers intersect each other's circumference.
Intersection Highlight:
The intersection area is visually emphasized with a semi-transparent background (bgcolor), which can be customized or removed if unnecessary.
Smoothness:
The segments input determines how many points are used to create each circle. Higher values create smoother curves.
Adjustments
Ensure the circles fit within the visible chart area by adjusting circle1_x and circle_radius.
If needed, you can add additional features, such as drawing lines to connect the centers or labeling the Vesica Piscis region.
Let me know if you want further refinements or additional features!
DRKMetrics
Library "DRKMetrics"
TODO: add library description here
curve(disp_ind)
Call function to get a certain curve of your strategy.
Parameters:
disp_ind (string)
Returns: Returns type of curve plot.
cleaner(disp_ind, plot)
Call function to filter out your Strategy plots
Parameters:
disp_ind (string)
plot (float)
cobraTable(option, position)
Assign this function to a random variable to get the "Performance Table"
Parameters:
option (simple string)
position (simple string)
Enhanced Economic Composite with Dynamic Weight
Overview of the Indicator:
The "Enhanced Economic Composite with Dynamic Weight" is a comprehensive tool that combines multiple economic indicators, technical signals, and dynamic weighting to provide insights into market and economic health. It adjusts based on current volatility and recession risk, offering a detailed view of market conditions.
What This Indicator Does:
Tracks Economic Health: Uses key economic and market indicators to assess overall market conditions.
Dynamic Weighting: Adjusts the importance of components like stock indices, gold, and bonds based on volatility (VIX) and yield curve inversion.
Technical Signals: Identifies market momentum shifts through key crossovers like the Golden Cross, Death Cross, Silver Cross, and Hospice Cross.
Recession Shading: Marks known recessions for historical context.
Economic Factors Considered:
TIP (Treasury Inflation-Protected Securities): Reflects inflation expectations.
Gold: A safe-haven asset, increases in weight during volatility or rising momentum.
US Dollar Index (DXY): Measures USD strength, fixed weight of 10%, smoothed with EMA.
Commodities (DBC): Indicates global demand; weight increases with momentum or volatility.
Volatility Index (VIX): Reflects market risk, inversely related to market confidence.
Stock Indices (S&P 500, DJIA, NASDAQ, Russell 2000): Represent market performance, with weights reduced during high volatility or negative yield spread.
Yield Spread (10Y - 2Y Treasuries): Predicts recessions; negative spread reduces stock weighting.
Credit Spread (HYG - TLT): Indicates market risk through corporate vs. government bond yields.
How and Why Factors are Weighted:
Stock Indices get more weight in stable markets (low VIX, positive yield spread), while safe-haven assets like gold and bonds gain weight in volatile markets or during yield curve inversions. This dynamic adjustment ensures the composite reflects current market sentiment.
Technical Signals:
Golden Cross: 50 EMA crossing above 200 SMA, signaling bullish momentum.
Death Cross: 50 EMA below 200 SMA, indicating bearish momentum.
Silver Cross: 21 EMA crossing above 50 EMA, plotted only if below the 200-day SMA, signaling potential upside in downtrend conditions.
Hospice Cross: 50 EMA crosses below 21 EMA, plotted only if 21 EMA is below 200 SMA, a leading bearish signal.
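To make the four signals concrete, here is a minimal Pine sketch. Which series must sit below the 200 SMA for the Silver and Hospice crosses, and the exact crossover ordering for the Hospice Cross, are assumptions based on the bullish/bearish labels above:

//@version=5
indicator("Composite crossover sketch", overlay = true)
ema21  = ta.ema(close, 21)
ema50  = ta.ema(close, 50)
sma200 = ta.sma(close, 200)
goldenCross  = ta.crossover(ema50, sma200)
deathCross   = ta.crossunder(ema50, sma200)
// Assumed: the "below the 200 SMA" condition refers to the 21 EMA, and the Hospice Cross is the bearish 21/50 cross.
silverCross  = ta.crossover(ema21, ema50) and ema21 < sma200
hospiceCross = ta.crossunder(ema21, ema50) and ema21 < sma200
plotshape(goldenCross,  "Golden Cross",  shape.triangleup,   location.belowbar, color.yellow)
plotshape(deathCross,   "Death Cross",   shape.triangledown, location.abovebar, color.red)
plotshape(silverCross,  "Silver Cross",  shape.triangleup,   location.belowbar, color.silver)
plotshape(hospiceCross, "Hospice Cross", shape.triangledown, location.abovebar, color.maroon)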
Recession Shading:
Recession periods like the Great Recession, Early 2000s Recession, and COVID-19 Recession are shaded to provide historical context.
Benefits of Using This Indicator:
Comprehensive Analysis: Combines economic fundamentals and technical analysis for a full market view.
Dynamic Risk Adjustment: Weights shift between growth and safe-haven assets based on volatility and recession risk.
Early Signals: The Silver Cross and Hospice Cross provide early warnings of potential market shifts.
Recession Forecasting: Helps predict downturns through the yield curve and recession indicators.
Who Can Benefit:
Traders: Identify market momentum shifts early through crossovers.
Long-term Investors: Use recession warnings and dynamic adjustments to protect portfolios.
Analysts: A holistic tool for analyzing both economic trends and market movements.
This indicator helps users navigate varying market conditions by dynamically adjusting based on economic factors and providing early technical signals for market momentum shifts.
Machine Learning Signal Filter
Introducing the "Machine Learning Signal Filter," an innovative trading indicator designed to leverage the power of machine learning to enhance trading strategies. This tool combines advanced data processing capabilities with user-friendly customization options, offering traders a sophisticated yet accessible means to optimize their market analysis and decision-making processes. Importantly, this indicator does not repaint, ensuring that signals remain consistent and reliable after they are generated.
Machine Learning Integration
The "Machine Learning Signal Filter" employs machine learning algorithms to analyze historical price data and identify patterns that may not be immediately apparent through traditional technical analysis. By utilizing techniques such as regression analysis and neural networks, the indicator continuously learns from new data, refining its predictive capabilities over time. This dynamic adaptability allows the indicator to adjust to changing market conditions, potentially improving the accuracy of trading signals.
Key Features and Benefits
Dynamic Signal Generation: The indicator uses machine learning to generate buy and sell signals based on complex data patterns. This approach enables it to adapt to evolving market trends, offering traders timely and relevant insights. Crucially, the indicator does not repaint, providing reliable signals that traders can trust.
Customizable Parameters: Users can fine-tune the indicator to suit their specific trading styles by adjusting settings such as the temporal synchronization and neural pulse rate. This flexibility ensures that the indicator can be tailored to different market environments.
Visual Clarity and Usability: The indicator provides clear visual cues on the chart, including color-coded signals and optional display of signal curves. Users can also customize the table's position and text size, enhancing readability and ease of use.
Comprehensive Performance Metrics: The indicator includes a detailed metrics table that displays key performance indicators such as return rates, trade counts, and win/loss ratios. This feature helps traders assess the effectiveness of their strategies and make data-driven decisions.
How It Works
The core of the "Machine Learning Signal Filter" is its ability to process and learn from large datasets. By applying machine learning models, the indicator identifies potential trading opportunities based on historical data patterns. It uses regression techniques to predict future price movements and neural networks to enhance pattern recognition. As new data is introduced, the indicator refines its algorithms, improving its accuracy and reliability over time.
Use Cases
Trend Following: Ideal for traders seeking to capitalize on market trends, the indicator helps identify the direction and strength of price movements.
Scalping: With its ability to provide quick signals, the indicator is suitable for scalpers aiming for rapid profits in volatile markets.
Risk Management: By offering insights into trade performance, the indicator aids in managing risk and optimizing trade setups.
In summary, the "Machine Learning Signal Filter" is a powerful tool that combines the analytical strength of machine learning with the practical needs of traders. Its ability to adapt and provide actionable insights makes it an invaluable asset for navigating the complexities of financial markets.
The "Machine Learning Signal Filter" is a tool designed to assist traders by providing insights based on historical data and machine learning techniques. It does not guarantee profitable trades and should be used as part of a comprehensive trading strategy. Users are encouraged to conduct their own research and consider their financial situation before making trading decisions. Trading involves significant risk, and it is possible to lose more than the initial investment. Always trade responsibly and be aware of the risks involved.
25-Day Momentum Index
Description:
The 25-Day Momentum Index (25D MI) is a technical indicator designed to measure the strength and direction of price movements over a 25-day period. Inspired by classic momentum analysis, this indicator helps traders identify trends and potential reversal points in the market.
How It Works:
Momentum Calculation: The 25D MI calculates momentum as the difference between the current closing price and the closing price 25 days ago. This difference provides insights into the market's recent strength or weakness.
Plotting: The indicator plots the Momentum Index as a blue line, showing the raw momentum values. A zero line is also plotted in gray to serve as a reference point for positive and negative momentum.
Highlighting Zones:
Positive Momentum: When the Momentum Index is above zero, it is plotted in green, highlighting positive momentum phases.
Negative Momentum: When the Momentum Index is below zero, it is plotted in red, highlighting negative momentum phases.
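A minimal Pine sketch of the calculation and coloring described above; the original plots the raw line in blue and highlights the zones separately, whereas here they are condensed into one colored plot:

//@version=5
indicator("25-Day Momentum Index sketch")
length = input.int(25, "Momentum length (days)")
mi = close - close[length]                                  // difference vs. the close `length` bars ago
plot(mi, "Momentum Index", mi >= 0 ? color.green : color.red)
hline(0, "Zero line", color.gray)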
Usage:
A rising curve means an increase in upward momentum - if it is above the zero line. A rising curve below the zero line signifies a decrease in downward momentum. By the same token, a falling curve means an increase in downward momentum below the zero line, a decrease in upward momentum above the zero line.
This indicator is ideal for traders looking to complement their strategy with a visual tool that captures the essence of market momentum over a significant period. Use it to enhance your technical analysis and refine your trading decisions.
Relative Volume
Hello traders,
"There's nothing new on Wall Street" is an age-old saying that still shows its relevance in modern day financial markets; volume still serves as a valuable tool for any trader just as it did for those that came and succeeded before us; in order to succeed in modern day markets one has to take it up a notch and dabble in complicated topics, like math. Now I dunno about you reader but I’m not keen on sitting around all day just to watch numbers on a screen; it’s pretty important to add some color into your life before it becomes dull but how can someone add colors into their trading toolkit as an aid rather than bother? With a bit of help from 3 other amazing open-source indicators you too can become a statistics enjoyer by combining math and colors to make pattern recognition much more intuitive and offering more peace of mind when trading. “Sir but how?”, glad you didn’t ask, it helps with simplifying statistics, in this case a Gaussian bellcurve
“HUH?”, you say? Alright class, Gaussian bellcurves for math dislikers 101 is in session
- Imagine that we have a bunch of numbers that we want to graph. We could just draw a line and plot the numbers on it, but that might not be very interesting.
- Instead, we can use the shape of a bell to show how many of each number we have.
- Let's say we have a lot of people and we want to graph how tall they are. We would start by making a line from the shortest person to the tallest person, and then we would draw the bell shape around the line.
- The bell shape is called a "Gaussian Bell Curve," and it shows us how many people are a certain height.
- In the middle of the bell, where it's the widest, we have the most people who are about average height. As we move to the sides of the bell, the curve gets lower because there are fewer people who are really tall or really short.
The bell curve discussed is the main idea for the candle coloring component of this indicator as being able to analyze the distribution of an entire dataset, in this case volume, can alert us when volume/participation in the market is away from its average using color, and therefore an opportunity could be present. Fair warning, it’s important to not strictly focus on volume as volume is meant to be confluence to the current structure of the market rather than causing tunnel vision.
Why 3 indicators to combine?
It starts with the RVOL indicator by Mik3Christ3ns3n as the backbone: it calculates the average volume over a specified period of time, then compares each new volume value to this average to determine whether it is above or below it. The indicator then normalizes the volume data and calculates the z-score/standard deviation to determine whether the volume is within its normal range or is an anomaly beyond a specified threshold, which can also be set as an alert to aid in eyeing possible opportunities.
The code also includes Candle Coloring by Morty, which calculates the z-score for the size of the candle's body and then compares it to the z-score for volume to determine whether the body size is a factor in the price action.
Finally, the code plots the anomalies and the normalized volume data on the chart using the first RVOL indicator mentioned, and colors the bars of the chart based on whether they are within normal range or are anomalies, using code from veryfid's relative volume indicator for the bar coloring.
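A minimal sketch of the underlying volume z-score and anomaly coloring, assuming a simple moving average and standard deviation; the combined script's exact lengths, thresholds, and candle-body logic will differ:

//@version=5
indicator("Relative volume z-score sketch")
length    = input.int(20, "Average volume length")
threshold = input.float(2.0, "Anomaly threshold (standard deviations)")
volMean = ta.sma(volume, length)
volStd  = ta.stdev(volume, length)
volZ    = (volume - volMean) / volStd                        // how many standard deviations away from average
isAnomaly = math.abs(volZ) > threshold
plot(volZ, "Volume z-score", isAnomaly ? color.red : color.gray, style = plot.style_columns)
hline(0)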
Overall, this custom technical indicator is best used to identify unusual changes in trading volume, which may indicate potential price movements in the underlying.
How about some examples?
This first example is for my scalpers wanting to get in and out but not having much of an idea of where, let alone how: using a tool like VWAP can be great for determining the value area to execute mean-reversion trades once a speculator spots a colored candle anomaly at a standard deviation band. This works best when VWAP is flat, as it signals a lack of conviction from both bulls and bears.
This second example is for my fire-and-forget intraweek swing traders who want to execute a higher timeframe trend-following bias. A speculator starting off 2023 notices that the negative sentiment around Binance from late last year has quieted down and has conviction in upside after BTC began an uptrend: monthly VWAP (right chart) has begun sloping up, and a rally with momentum is shown with the blue colored candle, so the trader waits for a pullback for entry. On the chart to the left of the 4H, the speculator notices a pullback into the area of interest to do business, so a limit bid is left to enter for continued upside in Bitcoin through January 2023, just by keeping things simple.
That’s really the main purpose of this indicator: simplicity of statistics for confluence using volume
Volume precedes price, and price moves only for narrative to follow. Why wait for your subjective Twitter timeline to give you a biased narrative to trade when you can use objective analysis, combining statistics and colors, to allow for a cleaner execution process?
“But what about risk management?” Glad you didn’t ask reader!
One last example then: we meet our trend-following trader again feeling euphoric, so they know profit-taking season is coming soon but want to leave emotion out of it. How to go about it? Same idea as our last trend-following example: the 4H chart on the right side shows Bitcoin lose and trade back within the 2nd standard deviation of quarterly VWAP, which tells our speculator that the uptrend has broken. On top of that, they notice on the 30-minute chart on the left that aggressive market buyers have been steadily absorbed by limit sellers on multiple occasions of retesting 30,500, shown with the green colored candles and volume bars below: time to sell.
Turns out that selling was proactive risk management because price dumped thereafter
Hope this explanation gave you some useful insights on using statistics as colors from cherry-picked examples. Remember, the fact that my examples are cherry-picked doesn't invalidate these concepts at all, as the market only does two things: initiate aggressive auctions and respond passively to auctions. This tool makes it much simpler to see where that initiative, aggressive activity is happening and to deduce whether others will respond to an anomaly of such activity or whether the aggression will continue.
If there's just one thing you take from this: simplicity above all. Cheers and good luck
Fake Strategy
THIS IS A FAKE STRATEGY. PLEASE DO NOT USE THIS FOR TRADING.
Just publishing this to show how easily you can fake backtest results in strategies. However, there are ways to identify the scams. Let's discuss the major red flags in a strategy, how to identify them, and how to stay away from them.
Any strategy which proclaims a significantly high win rate (such as this one) is not practical, and such results can only be achieved via the following means:
Significantly high risk compared to reward
Trades are set up in such a way that profits are taken on small movements whereas stops are significantly farther away. By doing this, the win rate will surely increase, but you will be picking up pennies while risking plenty of capital. A general trait of such strategies can be identified by comparing the average trade and the max drawdown. These kinds of strategies will have significantly higher drawdown even though the number of losses is small; for example, one losing trade leading to a drawdown of 10+% whereas every winning trade only contributes 0.25%.
We can also see this kind of behaviour in option-selling strategies such as 0 and 1 DTE option selling. Here too the probability of winning can be pretty high (north of 90%), but on every win you make 1-2% of your capital, whereas on the remaining trades you can lose your complete capital - which leads to an overall losing position.
Inducing repainting through code
This strategy is an excellent example of how repainting can be induced via code using the request.security() method. There are plenty of ways a strategy or code can be made to repaint; the TradingView user manual has lots of information about repainting. Feel free to read through it if you have extra time. If you look at this code, it is very simple to induce repainting in a strategy to make it look like an infinite money-printing machine.
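As one illustration of the kind of pitfall described (not the exact code of this fake strategy), requesting a higher-timeframe series with lookahead enabled lets historical bars see a close that was not yet known in real time:

//@version=5
indicator("Repainting pitfall sketch", overlay = true)
// On historical bars, lookahead_on returns the daily close before that daily bar has actually finished,
// so a backtest built on this series "knows the future" and looks far better than anything achievable live.
dailyCloseWithLookahead = request.security(syminfo.tickerid, "D", close, lookahead = barmerge.lookahead_on)
plot(dailyCloseWithLookahead, "Daily close with lookahead", color.orange)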
High Leverage and lack of usage of margin
Using leverage in Pine can show false results. This is because the strategy engine will not stop when equity goes below 0%; it waits until the trade is closed. That does not happen in real life. This is why using leverage along with high-risk, low-reward trades can show false results, making the strategy look unbeatable. But when you try to use it in real time, it is likely that the account will be blown out.
To understand the leverage conditions, please have a look at the strategy property fields: Order Size, Pyramiding, Commission, Slippage, and Margin Long/Short.
Curve fitting
If the author claims that the strategy will only work on a particular instrument and a particular timeframe, then the strategy is not real; it is curve fitting. Knowingly or unknowingly, the author has moulded the strategy to fit what has happened in the past. This is a general issue even non-malicious authors go through. It is essential to test the strategy across various instruments and timeframes to understand its real capability. Use back-testing as test cases: the more test cases you have, the more bug-free your strategy will be. There are many methods to detect curve fitting and perform better testing of the strategy in hand, which authors can study and implement.
Significantly short trades: a sign of a lack of strategy
A strategy built using Pine generally works on the close of a candle, so all calculations generally happen upon the close of the candle. You can force intra-bar calculations using the bar magnifier, but that is not equivalent to tick data. For this reason, I consider any trade happening within a single bar (meaning it opens and closes within the same bar) as unreliable, because it is not possible for the strategy back-tester to know in a completely foolproof way whether the entry condition was satisfied first or the exit. The bar magnifier can help reduce this issue, but it will not eradicate the problem completely. If a strategy has lots of trades closing within the same bar, it is very likely that the backtest results are not reliable.
Hope this helps at least some people to understand the scams and stay away from them.
Multi TF Trend Indicator
...Mark Douglas, in his book Trading in the Zone, wrote:
The longer the time frame, the more significant the trend, so a trending market on a daily bar chart is more significant than a trending market on a 30-minute bar chart. Therefore, the trend on the daily bar chart would take precedence over the trend on the 30-minute bar chart and would be considered the major trend. To determine the direction of the major trend, look at what is happening on a daily bar chart. If the trend is up on the daily, you are only going to look for a sell-off or retracement down to what your edge defines as support on the 30-minute chart. That's where you will become a buyer. On the other hand, if the trend is down on the daily, you are only going to look for a rally up to what your edge defines as a resistance level to be a seller on the 30-minute chart. Your objective is to determine, in a downtrending market, how far it can rally on an intraday basis and still not violate the symmetry of the longer trend. In an up-trending market, your objective is to determine how far it can sell off on an intraday basis without violating the symmetry of the longer trend. There's usually very little risk associated with these intraday support and resistance points, because you don't have to let the market go very far beyond them to tell you the trade isn't working.
The purpose of this indicator is to show both the major and minor trend on the same chart, with no need to switch between timeframes.
Script includes
- timeframe to determine the major trend
- price curve; close price is the default, but you can pick the MA you want
- type of coloring: either curve color or background color
Implementation details
- the major trend is determined by the slope of the price curve (see the sketch below)
Further improvements
- a variation of techniques for determining the major trend (crossing MAs, pivot points, etc.)
- major trend change alerts
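A rough sketch of that idea in Pine v5 (not the published script; the MA length, the daily higher timeframe, and the coloring are assumptions for illustration): the major trend is read from the slope of a daily moving average, the minor trend from the slope of the chart-timeframe curve.

//@version=5
indicator("Multi-TF trend (sketch)", overlay = true)
maLen = input.int(50, "MA length")
// Daily MA and its previous value, requested without lookahead so historical
// bars only use data that was available at the time.
[dMa, dMaPrev] = request.security(syminfo.tickerid, "D", [ta.sma(close, maLen), ta.sma(close, maLen)[1]], lookahead = barmerge.lookahead_off)
minorMa = ta.sma(close, maLen)
majorUp = dMa > dMaPrev          // slope of the higher-timeframe price curve
minorUp = minorMa > minorMa[1]   // slope of the chart-timeframe price curve
plot(minorMa, "Minor trend MA", color = minorUp ? color.green : color.red, linewidth = 2)
bgcolor(majorUp ? color.new(color.green, 88) : color.new(color.red, 88))

Requesting both the daily MA and its previous value in one call keeps the slope test valid on intraday bars without peeking ahead.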
Thanks @loxx for pullData helper function
Many Moving Averages
A smooth-looking indicator created from a mix of ALMA and LRC curves. Includes an alternative calculation for both, which I came up with through trial and error, so a variety of combinations work to varying degrees. Just something I was playing around with that looked pretty nice in the end.
One-Sided Gaussian Filter w/ Channels [Loxx]
One-Sided Gaussian Filter w/ Channels is a Gaussian Moving Average that is calculated using a Fibonacci weighting function. Keltner channels have been added to show zones of exhaustion. A better name would be "Half Gaussian bell weighted" or "Half normal distribution weighted" indicator, since the weights for the calculation of the average (similar to a linear weighted average) are taken from a normal-distribution-like curve, but only half of the curve is used for the calculation.
Information on the Gaussian distribution can be found at en.wikipedia.org, and once you take a look at the standard normal distribution curve, it will be much clearer what exactly is done in this indicator.
After the Gaussian Filter is applied to the source input, an Ehlers' 2-Pole Super Smoother is applied to reduce noise without significant lag.
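To make the "half bell" idea concrete, here is a minimal sketch of a half-Gaussian weighted average (my own illustration, not Loxx's code; the length and sigma inputs are assumptions): the newest bar gets the peak weight and older bars get weights from one side of the bell.

//@version=5
indicator("Half-Gaussian weighted MA (sketch)", overlay = true)
len   = input.int(50, "Length")
sigma = input.float(0.5, "Sigma (fraction of length)", step = 0.1)
// Weighted average whose weights follow one half of a Gaussian bell,
// with the peak weight on the newest bar and decaying toward older bars.
halfGaussMa(src, length, sig) =>
    float num = 0.0
    float den = 0.0
    s = length * sig
    for i = 0 to length - 1
        w = math.exp(-(i * i) / (2.0 * s * s))
        num += src[i] * w
        den += w
    num / den
plot(halfGaussMa(close, len, sigma), "Half-Gaussian MA", color = color.orange)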
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
Bitcoin Power Law Bands (BTC Power Law) Indicator
█ OVERVIEW
The 'Bitcoin Power Law Bands' indicator is a set of three US dollar price trendlines and two price bands for bitcoin, indicating overall long-term trend, support and resistance levels as well as oversold and overbought conditions. The magnitude and growth of the middle (Center) line is determined by double logarithmic (log-log) regression on the entire USD price history of bitcoin. The upper (Resistance) and lower (Support) lines follow the same trajectory but multiplied by respective (fixed) factors. These two lines indicate levels where the price of bitcoin is expected to meet strong long-term resistance or receive strong long-term support. The two bands between the three lines are price levels where bitcoin may be considered overbought or oversold.
All parameters and visuals may be customized by the user as needed.
█ CONCEPTS
Long-term models
Long-term price models have many challenges, the most significant of which is getting the growth curve right overall. No one can predict how a certain market, asset class, or financial instrument will unfold over several decades. In the case of bitcoin , price history is very limited and extremely volatile, and this further complicates the situation. Fortunately for us, a few smart people already had some bright ideas that seem to have stood the test of time.
Power law
The so-called power law is the only long-term bitcoin price model that has a chance of survival for the years ahead. The idea behind the power law is very simple: over time, the rapid (exponential) initial growth cannot possibly be sustained (see The seduction of the exponential curve for a fun take on this). Year-on-year returns, therefore, must decrease over time, which leads us to the concept of diminishing returns and the power law. In this context, the power law translates to linear growth on a chart with both its axes scaled logarithmically. This is called the log-log chart (as opposed to the semilog chart you see above, on which only one of the axes - price - is logarithmic).
Log-log regression
When both price and time are scaled logarithmically, the power law leads to a linear relationship between them. This in turn allows us to apply linear regression techniques, which will find the best-fitting straight line to the data points in question. The result of performing this log-log regression (i.e. linear regression on a log-log scaled dataset) is two parameters: slope (m) and intercept (b). These parameters fully describe the relationship between price and time as follows: log(P) = m * log(T) + b, where P is price and T is time. Price is measured in US dollars , and Time is counted as the number of days elapsed since bitcoin 's genesis block.
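A rough sketch of that fit in Pine v5, assuming a BTC chart and natural logarithms (the genesis date constant and the on-the-fly fitting are illustrative; the published indicator ships with fixed, pre-computed parameters):

//@version=5
indicator("Log-log power-law fit (sketch)", overlay = true)
// Ordinary least squares on log(price) vs log(days since genesis),
// accumulated over every bar available on the chart.
genesis = timestamp(2009, 1, 3, 0, 0)
T = math.max((time - genesis) / 86400000.0, 1.0)   // days since the genesis block
x = math.log(T)
y = math.log(close)
n     = bar_index + 1
sumX  = ta.cum(x)
sumY  = ta.cum(y)
sumXY = ta.cum(x * y)
sumX2 = ta.cum(x * x)
m = (n * sumXY - sumX * sumY) / (n * sumX2 - sumX * sumX)   // slope
b = (sumY - m * sumX) / n                                   // intercept
plot(math.exp(m * x + b), "Power-law center line", color = color.gray, linewidth = 2)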
DPC model
The final piece of our puzzle is the Dynamic Power Cycle (DPC) price model of bitcoin . DPC is a long-term cyclic model that uses the power law as its foundation, to which a periodic component stemming from the block subsidy halving cycle is applied dynamically. The regression parameters of this model are re-calculated daily to ensure longevity. For the 'Bitcoin Power Law Bands' indicator, the slope and intercept parameters were calculated on publication date (March 6, 2022). The slope of the Resistance Line is the same as that of the Center Line; its intercept was determined by fitting the line onto the Nov 2021 cycle peak. The slope of the Support Line is the same as that of the Center Line; its intercept was determined by fitting the line onto the Dec 2018 trough of the previous cycle. Please see the Limitations section below on the implications of a static model.
█ FEATURES
Inputs
• Parameters
• Center Intercept (b) and Slope (m): These log-log regression parameters control the behavior of the grey line in the middle
• Resistance Intercept (b) and Slope (m): These log-log regression parameters control the behavior of the red line at the top
• Support Intercept (b) and Slope (m): These log-log regression parameters control the behavior of the green line at the bottom
• Controls
• Plot Line Fill: N/A
• Plot Opportunity Label: Controls the display of current price level relative to the Center, Resistance and Support Lines
Style
• Visuals
• Center: Control, color, opacity, thickness, price line control and line style of the Center Line
• Resistance: Control, color, opacity, thickness, price line control and line style of the Resistance Line
• Support: Control, color, opacity, thickness, price line control and line style of the Support Line
• Plots Background: Control, color and opacity of the Upper Band
• Plots Background: Control, color and opacity of the Lower Band
• Labels: N/A
• Output
• Labels on price scale: Controls the display of current Center, Resistance and Support Line values on the price scale
• Values in status line: Controls the display of current Center, Resistance and Support Line values in the indicator's status line
█ HOW TO USE
The indicator includes three price lines:
• The grey Center Line in the middle shows the overall long-term bitcoin USD price trend
• The red Resistance Line at the top is an indication of where the bitcoin USD price is expected to meet strong long-term resistance
• The green Support Line at the bottom is an indication of where the bitcoin USD price is expected to receive strong long-term support
These lines envelope two price bands:
• The red Upper Band between the Center and Resistance Lines is an area where bitcoin is considered overbought (i.e. too expensive)
• The green Lower Band between the Support and Center Lines is an area where bitcoin is considered oversold (i.e. too cheap)
The power law model assumes that the price of bitcoin will fluctuate around the Center Line, by meeting resistance at the Resistance Line and finding support at the Support Line. When the current price is well below the Center Line (i.e. well into the green Lower Band), bitcoin is considered too cheap (oversold). When the current price is well above the Center Line (i.e. well into the red Upper Band), bitcoin is considered too expensive (overbought). This idea alone is not sufficient for profitable trading, but, when combined with other factors, it could guide the user's decision-making process in the right direction.
█ LIMITATIONS
The indicator is based on a static model, and for this reason it will gradually lose its usefulness. The Center Line is the most durable of the three lines since the long-term growth trend of bitcoin seems to deviate little from the power law. However, how far price extends above and below this line will change with every halving cycle (as can be seen for past cycles). Periodic updates will be needed to keep the indicator relevant. The user is invited to adjust the slope and intercept parameters manually between two updates of the indicator.
█ RAMBLINGS
The 'Bitcoin Power Law Bands' indicator is a useful tool for users wishing to place bitcoin in a macro context. As described above, the price level relative to the three lines is a rough indication of whether bitcoin is over- or undervalued. Users wishing to gain more insight into bitcoin price trends may follow the author's periodic updates of the DPC model (contact information below).
█ NOTES
The author regularly posts on Twitter using the @DeFi_initiate handle.
█ THANKS
Many thanks to the following individuals, who - one way or another - made the 'Bitcoin Power Law Bands' indicator possible:
• TradingView user 'capriole_charles', whose open-source 'Bitcoin Power Law Corridor' script was the basis for this indicator
• Harold Christopher Burger, whose Bitcoin’s natural long-term power-law corridor of growth article (2019) was the basis for the 'Bitcoin Power Law Corridor' script
• Bitcoin Forum user "Trololo", who posted the original power law model at Logarithmic (non-linear) regression - Bitcoin estimated value (2014)
Grid Bot Auto
This script is an auto-adjusting grid bot simulator. This is an improved version of the original Grid Bot Simulator. The grid bot is best used for ranging/choppy markets. Prices are divided into grids, or trade zones, that will trigger signals each time a new zone is entered. During ranging markets, each transaction is followed by a “take profit.” As the market starts to trend, transactions are stacked (comparable to DCA) until the market consolidates. No signals are triggered above the Upper Limit or below the Lower Limit. Unlike the previous version, the upper and lower limits are calculated automatically. Grid levels are determined by four factors: Smoothing, Laziness, Elasticity, and Grid Intervals.
Smoothing:
A moving average (or linear regression) is applied to each close price as a basis. Options for smoothing are Linear Regression, Simple Moving Average, Exponential Moving Average, Volume-Weighted Moving Average, Triple-Exponential Moving Average.
Laziness:
Laziness is the percentage change required to reach the next level. If laziness is 1.5, the price must move up or down by 1.5% before the grid will change. This concept is based on Alex Grover’s Efficient Trend Step. This allows the grids to be based on even price levels, as opposed to jagged moving averages.
Elasticity:
Elasticity is the degree of “stickiness” to the current price trend. If the smoothing line remains above (or below) the current grid center without reverting but still not enough to reach the next grid level, the grid line will start to curve toward the next grid level. Elasticity is added to (or subtracted from) the gridline by a factor of minimum system ticks for the current pair. Elasticity of zero will keep the gridlines horizontal. If elasticity is too high, the grid will distort.
Grid Intervals:
Grid intervals are the percentage of space between each grid.
Laziness = 4%, Elasticity = 0. Price must move at least 4% before reaching the next level. With zero elasticity, gridlines are straight.
Laziness = 5%, Elasticity = 100. For each bar at a new grid level, the grid will start to “curve” toward the next price level (up if price is greater than the middle grid, down if less than the middle grid). Elasticity is calculated as the user-inputted “Elasticity” multiplied by the minimum tick for the current pair (ELSTX = syminfo.mintick * iELSTX).
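For intuition, here is a stripped-down sketch of a single "lazy" grid centre with elastic drift (my own simplification, not the published script; the input names and defaults are made up):

//@version=5
indicator("Lazy grid level (sketch)", overlay = true)
// The grid centre only jumps once the smoothed price has moved a full
// Laziness % away from it; otherwise it may drift toward price by a few
// ticks per bar (the "elasticity" curving).
smoothLen  = input.int(50, "Smoothing length")
laziness   = input.float(1.5, "Laziness %") / 100.0
elasticity = input.float(100.0, "Elasticity (ticks per bar)")
basis = ta.sma(close, smoothLen)
var float level = na
if na(level)
    level := basis
step = level * laziness
if basis > level + step
    level += step          // price escaped upward: jump a full grid step
else if basis < level - step
    level -= step          // price escaped downward: jump down a step
else
    level += math.sign(basis - level) * elasticity * syminfo.mintick
plot(level, "Lazy grid centre", color = color.blue, linewidth = 2)

The real script tracks several grid levels, different smoothing types, and signal logic on top of this; the sketch only shows how Laziness gates the jumps while Elasticity nudges the level by ticks.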
Try experimenting with different combinations of the Smoothing Length, Smoothing Type, Laziness, Elasticity, and Grid Intervals to find the optimum settings for each chart. Lower-priced pairs (e.g. XRP/ADA/DODGE) will require lower Elasticity. Also note that different exchanges may have different minimum tick values. For example, minimum tick for BITMEX:XBTUSD and BYBIT:BTCUSD is .5, but BINANCE:BTCUSDT and COINBASE:BTCUSD is .01.
DODGEUSDT, 5min. Laziness: 4%, Elasticity 2.5
Number of Grids: 2. Laziness: 3.75%. Elasticity: 150. Grid Interval 2%.
Settings Overview
Smoothing Length : Smoothing period
Smoothing Type : Linear Regression, Simple Moving Average, Exponential Moving Average, Volume-Weighted Moving Average, Triple-Exponential Moving Average
Laziness : Percentage required for price to move until it reaches the next level. If price does not reach the next level (up or down), the grid will remain the same as previous grid (because it’s lazy).
Elasticity : Amount of curvature toward the next grid, based on the current price trend. As elasticity increases, gridlines will curve up or down by a factor of the number of ticks since the last grid change.
Grid Interval : Percent between grid levels.
Number of Grids : Number of grids to show.
Cooldown : Number of bars to wait to prevent consecutive signals.
Grid Line Transparency : Lower transparencies brighten the gridlines; higher transparencies dim the gridlines. To hide the gridlines completely, enter 100.
Fill Transparency: Lower transparencies brighten the fill box; higher transparencies dim the fill box. To hide the fill box completely, enter 100.
Signal Size : Make signal triangles large or small.
Reset Buy/Sell Index When Grids Change : When a new grid is formed, resetting the index may prevent false signals (experimental)
Use Highs/Lows for Signals : If enabled, signals are triggered as soon as the price touches the next zone. If disabled, signals are triggered after the bar closes. Enable this for “Once Per Bar” alerts. Disable for “Once Per Bar Close” alerts.
Show Min Tick : If checked, syminfo.mintick is displayed in upper-righthand corner. Useful for estimating Laziness.
Reverse Fill Colors : Default fill for fill boxes is green after buy and red after sell. Check this box to reverse.
Note: The Grid Bot Simulator scripts are experimental works in progress. Please feel free to comment or contact me if you have suggestions/complaints.
Raff Regression Channel by DGT
Rᴀꜰꜰ Rᴇɢʀᴇꜱꜱɪᴏɴ Cʜᴀɴɴᴇʟ (RRC)
This study aims to automate Raff Regression Channel drawing either based on ZigZag Indicator or optionally User Preference
The Raff Regression Channel, developed by Gilbert Raff, is based on a linear regression, which is the least-squares line-of-best-fit for a price series, with evenly spaced trend lines above and below. The width of the channel is set by determining the high or low that is the furthest from the linear regression.
Because the channel distance is based on the largest pullback or highest peak within a trend, for effectively drawing and using a Raff Regression Channel it is recommended/required that it be applied to “mature” trends. Knowing this requirement, for better automated drawing results this study benefits from the Zig Zag Indicator, which is used to help identify price trends and changes in price trends. An option to manually adjust the lengths for drawing a Raff Regression Channel is also available.
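For reference, a bare-bones manual version of that construction in Pine v5 (an illustration of the definition above, not this study's code; a fixed lookback window replaces the ZigZag-based anchoring): fit a least-squares line over the window, then offset it by the furthest high/low distance.

//@version=5
indicator("Raff Regression Channel (manual sketch)", overlay = true, max_bars_back = 500)
len = input.int(100, "Number of bars", maxval = 500)
if barstate.islastconfirmedhistory and bar_index > len
    // least-squares fit of close over the last `len` bars, x = 0 (oldest) .. len - 1 (newest)
    float sx  = 0.0
    float sy  = 0.0
    float sxy = 0.0
    float sx2 = 0.0
    for i = 0 to len - 1
        y = close[len - 1 - i]
        sx  += i
        sy  += y
        sxy += i * y
        sx2 += i * i
    m = (len * sxy - sx * sy) / (len * sx2 - sx * sx)
    b = (sy - m * sx) / len
    // channel width = furthest high/low from the fitted line
    float w = 0.0
    for i = 0 to len - 1
        fit = m * i + b
        w := math.max(w, math.abs(high[len - 1 - i] - fit))
        w := math.max(w, math.abs(low[len - 1 - i] - fit))
    x1 = bar_index - (len - 1)
    line.new(x1, b,     bar_index, m * (len - 1) + b,     color = color.gray)
    line.new(x1, b + w, bar_index, m * (len - 1) + b + w, color = color.red)
    line.new(x1, b - w, bar_index, m * (len - 1) + b - w, color = color.green)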
Using a Raff Regression Channel
Once The Raff Regression Channel is drawn, covering an existing trend, Exᴛᴇɴꜱɪᴏɴ Lɪɴᴇꜱ are drawn to identify ᴛʜᴇ ꜱᴜᴘᴘᴏʀᴛ﹐ʀᴇꜱɪꜱᴛᴀɴᴄᴇ ᴏʀ ʀᴇᴠᴇʀꜱᴀʟ ᴘᴏɪɴᴛꜱ
The trend is up as long as prices rise within this channel. An uptrend may be reversing (not always, but likely) when price breaks below the channel extension. The trend is down as long as prices decline within the channel. Similarly, a downtrend may be reversing (not always, but likely) when price breaks above the channel extension. Moves outside the channel extensions can be an indication of a reversal, or can denote overbought or oversold conditions.
For further details please refer to education post Raff Regression Channel
█ FEATURES
- AUTO or MANUALLY adjusted Raff Regression Channel and Channel Extensions drawing
- ALERTs for the Linear Regression Line and the Raff Regression Upper and Lower Channel Extensions
- LSMA, Least Squares Moving Average, in other words the Linear Regression Curve
█ SETTINGS
Setting Lookback and Number of Bars is the most important part for the Raff Regression Channel, where:
- Lookback defines where the Raff Regression Channel starts; it is recommended to set it to the beginning of a trend
- Number of Bars defines how many bars are assumed for the calculation, or simply the end of the Raff Regression Channel drawing (not the extensions but the main channel; by default the extensions will be drawn till the last bar)
Setting of Lookback and Number of Bars is performed either automatically, based on the Zig Zag indicator, or users may prefer to set them manually. If set automatically, then
- the Deviation and Depth values of the Zig Zag indicator are used for the calculations (enabling plotting of the ZigZag lines will help to visually identify the pivot points), where:
Deviation is a multiplier that affects how much the price should deviate from the previous pivot in order for the bar to become a new pivot.
Depth affects the minimum number of bars that will be taken into account when building a pivot.
Short-term traders may wish to apply the channel to smaller waves of a trend, so they can reduce the values of Deviation and Depth.
█ OTHER CHANNEL CONCEPTS
Linear Regression Channels: what linear regression channels are, and the linear regression channel/curve/slope study
Fibonacci Channels: how to apply Fibonacci channels, and the automated Fibonacci channels study
Andrews’ Pitchfork: how to apply a pitchfork, and the automated pitchfork study
Special thanks to @Kiss66000 for his kind suggestion, thank you very much @Kiss66000
Disclaimer :
Trading success is all about following your trading strategy; indicators should fit within your trading strategy and should not be traded upon solely
The script is for informational and educational purposes only. Use of the script does not constitute professional and/or financial advice. You alone have the sole responsibility of evaluating the script output and risks associated with the use of the script. In exchange for using the script, you agree not to hold dgtrd TradingView user liable for any possible claim for damages arising from any decision you make based on use of the script
Excellent ADX
The Average Directional movement indeX (ADX) is an indicator that helps you determine the trend direction, pivot points, and much more! But it doesn't look as easy as other famous indicators. It may seem strange or even terrible, but don't be afraid. Let's understand how it works and get its power into your analysis tactics.
In the beginning, imagine a drunk man going up and down a ladder, step by step. Up, up, down, up, down, down, up...
How can we understand which direction he is going? Exactly! We can count the number of steps in each direction. In the above example, 4 in the upward direction and 3 in the downward. So, it looks like he is going upward.
The ADX indicator counts the same steps, but for price. The size of each step equals 1 ATR over "DI Length" candles. On the indicator chart, we have a green and a red line. The green line represents the number of steps upward; the red line the number downward. When the red line is above the green one, price is stepping down more often, so the trend is directed down. Later, when the green line comes back above the red one, the trend changes direction to upward. Wow? After that, you can easily detect the trend direction in the market!
But that is still not the end. On the chart we also have the fat blue line. This is the ADX line, and it represents the power of the trend. It is calculated from the distance between the green and red curves: the ADX value grows as the distance increases. If the movement is really powerful, the number of steps in one direction is much more prominent than in the opposite direction, so the blue line grows faster. But if the growth has stopped and the blue line turns back, or has already changed direction, it is a signal that the trend has ended too. It's an excellent sign to close the position (but not always). Easy? Not quite. Thresholds help you there. The indicator has two additional parameters, the upper and lower thresholds, to evaluate the strength of a trend-over signal. A U-turn of the ADX line above the upper threshold sends a strong signal; one that occurs between the two thresholds is a weaker signal. If the blue line goes below the lower threshold, it looks like there is no trend and price is moving sideways. We can also say that price is moving sideways when the ADX value gradually falls.
The Excellent ADX indicator helps you catch pivot/pullback signals based on the green, red, and blue lines. Each such signal is highlighted as a green (buy) or red (sell) dot on the plot; the size of the dot represents the strength of the signal. You can also check the position of the green and red lines relative to each other to determine the trend direction and the place where it changed. The Excellent ADX indicator helps you there too: it highlights the trend direction with the background color, so you'll never miss it. The Excellent ADX works well in combination with the Price Channel indicator built with the same length; you can use them together to always be on a trend wave!
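A minimal sketch of the underlying plumbing using Pine v5's built-in ta.dmi() (not the Excellent ADX source; the threshold defaults and colors are assumptions): +DI counts the up-steps, -DI the down-steps, and ADX measures how far apart they are.

//@version=5
indicator("ADX step counting (sketch)")
diLen   = input.int(14, "DI Length")
adxLen  = input.int(14, "ADX Smoothing")
upperTh = input.float(40.0, "Upper threshold")
lowerTh = input.float(20.0, "Lower threshold")
// Built-in directional movement: +DI counts the up-steps, -DI the down-steps,
// and ADX measures the distance between them (the power of the trend).
[diPlus, diMinus, adx] = ta.dmi(diLen, adxLen)
plot(diPlus,  "Up steps (+DI)",    color = color.green)
plot(diMinus, "Down steps (-DI)",  color = color.red)
plot(adx,     "Trend power (ADX)", color = color.blue, linewidth = 2)
hline(upperTh, "Upper threshold")
hline(lowerTh, "Lower threshold")
bgcolor(diPlus > diMinus ? color.new(color.green, 90) : color.new(color.red, 90))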
MACD Pro
Moving average convergence divergence pro.
Original MACD with new features, including:
1. Three different modes.
Basic, Logarithmic, Percent (calculates the difference of the oscillator MAs in percent; see the sketch after this list)
2. Additional moving averages for oscillator, signal and even histogram.
EMA, WMA (linearly weighted), LMA (logarithmically weighted), SMA
Volume Weighted RMA (I was asked to make a MACD with the VWEMA that I published recently, but that was too fast; this one is almost 2 times slower because it uses RMA instead of EMA)
VWRMA(s) (an alternative for VWRMA which uses the candle formation to simulate volume; can be useful when volume is not provided for the symbol or is unreliable)
And DEMA (Double Exponential MA)
3. Signal Displacement.
Adds some delay to the signal; this can help with extra confirmation of center crosses and the removal of some false ones.
4. Histogram Smoother.
For those who like the smooth curves. Can deliver a cleaner histogram even in volatile markets.
5. Bar color for more fun.
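As an illustration of the Percent mode mentioned in item 1 (a sketch under my own assumptions, not the published code): the oscillator is expressed as a percentage of the slow MA instead of an absolute price difference, which makes it comparable across symbols and timeframes.

//@version=5
indicator("Percent-mode MACD (sketch)")
fastLen = input.int(12, "Fast length")
slowLen = input.int(26, "Slow length")
sigLen  = input.int(9,  "Signal length")
fastMa = ta.ema(close, fastLen)
slowMa = ta.ema(close, slowLen)
macdPct = (fastMa - slowMa) / slowMa * 100   // oscillator as a percent of the slow MA
sigLine = ta.ema(macdPct, sigLen)
plot(macdPct - sigLine, "Histogram", style = plot.style_columns, color = color.gray)
plot(macdPct, "MACD %", color = color.blue)
plot(sigLine, "Signal", color = color.orange)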
Basic BIAS
The deviation rate (BIAS), or y-value for short, is an indicator that reflects the degree of deviation between the price and its MA over a certain period of time. It is calculated as the percentage difference between the market index or closing price and a moving average. From it we can gauge the probability that the price will reverse or rebound after deviating sharply from the moving-average trend during severe fluctuations, as well as the credibility that the price will continue the original move while it stays within its normal fluctuation range.
The deviation rate is the percentage gap between the price and the MA.
The deviation rate curve (BIAS) connects the individual BIAS values into a line, producing a wave-like curve that extends along 0 as its horizontal axis.
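The whole calculation fits in a few lines of Pine v5 (a sketch; the MA length is an assumption):

//@version=5
indicator("Basic BIAS (sketch)")
len  = input.int(20, "MA length")
ma   = ta.sma(close, len)
bias = (close - ma) / ma * 100   // percent deviation of price from its moving average
plot(bias, "BIAS %", color = color.teal)
hline(0)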
Quadratic Least Squares Moving Average - Smoothing + Forecast
Introduction
Technical analysis often makes use of classical statistical procedures, one of them being regression analysis. Since fitting polynomial functions that minimize the sum of squares can be achieved with the use of the mean, variance, covariance, etc., technical analysts only needed to replace the mean in all those calculations with a moving average; we then end up with a low-lag filter called the least squares moving average (LSMA).
The least squares moving average could be classified as a rolling linear regression; although this sounds bad, it is useful for understanding the relationship between the two methods. Both have the same form, that is ax + b, where a and b are coefficients of the model. However, in a simple linear regression a and b are constant, while the LSMA uses variables instead.
In a simple LSMA we model the relationship of the closing price (dependent variable) with a linear sequence (independent variable), therefore x = 1,2,3,4, etc. However, we can use polynomials of higher degree to model such a relationship; this is required if we want more reactivity. Therefore we can use a quadratic form, that is ax^2 + bx + c, where a, b and c are variables.
This is the quadratic least squares moving average (QLSMA), a not-so-official term, but we'll stick with it because it still represents the aim of the filter quite well. In this indicator I make the calculations of the QLSMA less troublesome, so one might understand how it works; note that in general the coefficients of a polynomial regression model are found using matrix calculus.
The Indicator
A QLSMA, unlike the classic LSMA, will fit the price better and will be more reactive. This is the advantage of using a higher degree for its calculation: we can model more complex relationships.
lsma in green, qlsma in red, with both length = 200
However, the over/undershoots are greater; I'll explain why in the next sections, but this is one of the drawbacks of using higher degrees.
The indicator allows forecasting future values; the ahead period of the forecast is determined by the forecast setting. The value of this setting should be lower than length, or else the forecasts can easily over/undershoot, which heavily damages the forecast. In order to get a view of how well the forecast is performing, you can check the option "Show past predicted values".
Of course, understanding the logic behind the forecast is important. In short, regression models fit a certain curve to the data; this curve can be a line (linear regression), a parabola (quadratic regression), and so on. The type of curve is determined by the degree of the polynomial used, here 2, which is a parabola. Let's use a linear regression model as an example:
ax + b, where x is a linear sequence 1,2,3... and a/b are constants. Our goal is to find the values of a and b that minimize the sum of squares of the line with the dependent variable y, here the closing price, so our hypothesis is that:
closing price = ax + b + ε
where ε is white noise, a component that the model couldn't forecast. The forecast of the closing price 14 steps ahead would be equal to:
closing price 14 steps ahead = a(x+14) + b
Since x is a linear sequence, we only need to add the forecasting horizon period to it; the same is done here with:
a*(n+forecast)^2 + b*(n + forecast) + c
Note that the forecast proposed in the indicator is more for teaching purposes than anything else; this indicator can't reliably forecast future values.
Low-lag filters have been used to provide noise-free crosses with slow moving averages, a bad practice in my opinion due to the ability of low-lag filters to overshoot/undershoot; a more interesting use case might be to use the QLSMA as input for other indicators.
On The Code
Some of you might know that I posted a "quadratic regression" indicator long ago. The original calculation came from a forum, but because it was ugly as hell as well as extremely inefficient (dogfood level), I had to do something about it; the name was also terribly misleading.
We can see in the code that we make heavy use of the variance and covariance, both estimated with :
VAR(x) = SMA(x^2) - SMA(x)^2
COV(x,y) = SMA(xy) - SMA(x)SMA(y)
Those elements are then combined; we can easily recognize the intercept element c, which doesn't change much from the classical LSMA.
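Putting the pieces together, here is a compact sketch of how those estimators can produce the quadratic fit and its forecast (a reconstruction in the spirit of the description, not the published source; the normal-equations step for the two regressors x and x^2 is my own addition):

//@version=5
indicator("Quadratic LSMA (sketch)", overlay = true)
length   = input.int(200, "Length")
forecast = input.int(0,   "Forecast")
// rolling covariance estimated exactly as above: COV(x,y) = SMA(xy) - SMA(x)*SMA(y)
cov(p, q) => ta.sma(p * q, length) - ta.sma(p, length) * ta.sma(q, length)
x1 = bar_index * 1.0
x2 = x1 * x1
y  = close
// two-regressor least squares (x and x^2) solved via the normal equations
d = cov(x1, x1) * cov(x2, x2) - cov(x1, x2) * cov(x1, x2)
a = (cov(x2, y) * cov(x1, x1) - cov(x1, y) * cov(x1, x2)) / d
b = (cov(x1, y) * cov(x2, x2) - cov(x2, y) * cov(x1, x2)) / d
c = ta.sma(y, length) - a * ta.sma(x2, length) - b * ta.sma(x1, length)
qlsma = a * math.pow(x1 + forecast, 2) + b * (x1 + forecast) + c
plot(qlsma, "QLSMA", color = color.red)

Setting forecast above 0 evaluates the same parabola a few bars ahead, exactly as in the a*(n+forecast)^2 + b*(n+forecast) + c expression above.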
As Digital Filter
The frequency response of the QLSMA is similar to that of the LSMA: those filters amplify certain frequencies in the passband and have ripples in the stopband. There is something interesting about those filters: using higher degrees allows a greater boost of the frequencies in the passband, which results in greater over/undershoots. Another funny thing is that the peak/valley of the ripples is equal to the peak or valley in the ripples of another LSMA of a different degree.
The transient response of those filters (that is, the impulse response, step response, etc.) is related to the degree of the polynomial used. Therefore, let's denote an LSMA of degree p as lsma(p): the impulse response of lsma(p) is a polynomial of degree p, and the step response is simply a polynomial of order p+1.
This is why it was more interesting to estimate the QLSMA using convolution; however, we can then no longer forecast future values.
Conclusion
I proposed a more usable quadratic least squares moving average, with more options, as well as cleaner and more efficient code. The process of shrinking the original code is made easier when you know about the estimations of both variance and covariance.
I hope the proposed indicator/calculation is useful.
Thx for reading !