Channel Based Zigzag [HeWhoMustNotBeNamed]
🎲 Concept
A standard zigzag is built from price and a number of offset bars. In this experiment, we instead build the zigzag from bands such as the Bollinger Band, Keltner Channel, and Donchian Channel. The process is simple (a minimal sketch of the pivot logic follows the list):
🎯 Derive bands based on input parameters.
🎯 The high of a bar is considered a pivot high only if it is at or above the upper band.
🎯 Similarly, the low of a bar is considered a pivot low only if it is at or below the lower band.
🎯 Adding pivot highs/lows follows the same logic as a regular zigzag, where a pivot high is always followed by a pivot low and vice versa.
🎯 If a new pivot has the same direction as the last pivot, the two are compared and only the more extreme one is kept (the highest in case of pivot highs, the lowest in case of pivot lows).
🎯 If a bar has both a pivot high and a pivot low, the pivot with the same direction as the previous pivot is added to the list first, followed by the pivot in the opposite direction.
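The pivot rules above can be rendered as a short sketch. This is an illustrative Python version of the described logic, not the published Pine Script; the function and series names are assumptions.

```python
# Minimal sketch of the channel-based pivot logic (illustrative only).
# `upper` and `lower` are precomputed band series (Bollinger, Keltner, or Donchian).

def channel_zigzag(highs, lows, upper, lower):
    pivots = []  # list of (index, price, direction); +1 = pivot high, -1 = pivot low

    def add_pivot(i, price, direction):
        if pivots and pivots[-1][2] == direction:
            # Same direction as the last pivot: keep only the more extreme one.
            if (direction == +1 and price >= pivots[-1][1]) or \
               (direction == -1 and price <= pivots[-1][1]):
                pivots[-1] = (i, price, direction)
        else:
            pivots.append((i, price, direction))

    for i in range(len(highs)):
        is_high = highs[i] >= upper[i]   # high at/above upper band -> pivot high
        is_low = lows[i] <= lower[i]     # low at/below lower band  -> pivot low
        if is_high and is_low and pivots:
            # Bar qualifies both ways: add the same-direction pivot first.
            first = pivots[-1][2]
            add_pivot(i, highs[i] if first == +1 else lows[i], first)
            add_pivot(i, lows[i] if first == +1 else highs[i], -first)
        elif is_high:
            add_pivot(i, highs[i], +1)
        elif is_low:
            add_pivot(i, lows[i], -1)
    return pivots
```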
🎲 Use Cases
Can be used by pattern recognition algorithms instead of the standard zigzag. This helps derive patterns that are relative to bands and channels.
Example: John Bollinger explains how to manually scan for double tops/bottoms using Bollinger Bands in this video: www.youtube.com This modified zigzag base can be used to achieve the same by algorithmic means.
🎲 Settings
A few simple configurations let you select the band properties. Notice that there is no zigzag length here; all the calculations depend on the bands.
With the bands displayed, the indicator looks something like this:
Note that pivots do not always represent the highest/lowest prices. They represent the highest/lowest price relative to the bands.
As mentioned many times, the application of zigzag is not buying at a lower price and selling at a higher price. It is mainly used for pattern recognition, either manually or via algorithms. Let's build new harmonic patterns, chart patterns, and trend lines using the new zigzag!
Machine Learning: kNN (New Approach)
Description:
kNN is a very robust and simple method for data classification and prediction. It is very effective if the training data is large. However, its main parameter, K (the number of nearest neighbors), is difficult to determine beforehand. The computational cost is also quite high, because we need to compute the distance from each instance to all training samples. Nevertheless, in algorithmic trading kNN is reported to perform on a par with techniques such as SVM and Random Forest. It is also widely used in the area of data science.
The input data is just a long series of prices over time without any particular features. The value to be predicted is just the next bar's price. The way this problem is solved, both for nearest-neighbor techniques and for some other types of prediction algorithms, is to create training records by taking, for instance, 10 consecutive prices and using the first 9 as predictor values and the 10th as the prediction value. Done this way, given 100 data points in your time series you could create 10 different training records. It is possible to create even more than 10 training records by starting a new record at every data point: take the first 10 data points and create a record, then take the 10 consecutive data points starting at the second data point, the 10 consecutive data points starting at the third data point, and so on.
By default, only 10 initial data points are shown as predictor values, with the 6th as the prediction value.
Here is a step-by-step walkthrough of how to compute the K nearest neighbors (kNN) algorithm for quantitative data (a code sketch follows the steps):
1. Determine parameter K = number of nearest neighbors.
2. Calculate the distance between the instance and all the training samples. As we are dealing with one-dimensional distance, we simply take the absolute difference between the instance and each training value (|x - v|).
3. Rank the distances and determine the nearest neighbors based on the K-th minimum distance.
4. Gather the values of the nearest neighbors.
5. Use the average of the nearest neighbors as the prediction value of the instance.
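As a minimal illustration of these five steps, the sketch below builds sliding-window training records (as described earlier) and predicts the next value. The window size, K, and sample prices are assumptions for the example, not the script's actual defaults.

```python
# Illustrative sketch of the kNN steps above on a plain price series.

def knn_predict(prices, window=9, k=3):
    # Build overlapping training records: `window` predictors + 1 prediction value.
    records = [(prices[i:i + window], prices[i + window])
               for i in range(len(prices) - window)]
    instance = prices[-window:]  # the most recent `window` prices

    # Step 2: one-dimensional distance |x - v| summed over the window.
    def distance(record):
        return sum(abs(a - b) for a, b in zip(record[0], instance))

    # Steps 3-4: rank by distance and gather the K nearest prediction values.
    neighbors = sorted(records, key=distance)[:k]

    # Step 5: average of the neighbors' prediction values.
    return sum(target for _, target in neighbors) / len(neighbors)

prices = [100.0, 101.5, 101.0, 102.2, 103.0, 102.5, 103.8, 104.1,
          103.9, 104.6, 105.2, 104.8, 105.5, 106.1, 105.9]
print(knn_predict(prices))  # forecast for the next bar
```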
The original logic of the algorithm was slightly modified, and as a result at approximately N=17 the resulting curve nicely approximates that of the sma(20). See the description below. Besides the SMA-like moving average, this algorithm also gives you a hint on the direction of the next bar's move.
GA - Value at Risk
GA Value at Risk is a multifunctional tool. Its main purpose is to plot Value at Risk on the chart, but it also offers integrated features related to volatility.
Value at Risk is a measure of the risk of loss for investments, given normal market conditions, over a period.
It measures and quantifies the level of financial risk. In this case, the risk is within a position over a specific time frame.
Defining p as the VaR probability level, the probability of a loss greater than VaR is at most p; the probability of a loss less than VaR is at least 1 - p.
A VaR Breach occurs when a loss exceeds the VaR threshold.
In this case, the VaR calculation uses a volatility estimate over a time interval. It defines the probability confidence according to the Normal Distribution: VaR is a percentile of the Normal Distribution, expressed as a multiplier of the Standard Deviation that defines a Volatility Range.
The Normal Distribution area within ±1 Standard Deviation gives 68% confidence, ±2 Standard Deviations gives 95%, and ±3 Standard Deviations gives 99.7%.
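For illustration, a parametric VaR along these lines could be computed as below. This is a generic sketch, not the script's internal code; the multiplier, returns, and position size are hypothetical.

```python
import statistics

# Minimal parametric VaR sketch based on the description above (hypothetical numbers).
daily_returns = [0.004, -0.012, 0.007, -0.003, 0.010,
                 -0.008, 0.002, -0.015, 0.006, 0.001]

sigma = statistics.stdev(daily_returns)  # volatility estimate over the interval
multiplier = 2                           # 2 standard deviations ~ 95% confidence
position_value = 10_000                  # hypothetical position size

var = multiplier * sigma * position_value
print(f"1-day VaR (~95%): {var:.2f}")    # loss threshold unlikely to be exceeded
```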
Knowing the VaR model, it is possible to determine the amount of a potential loss and, in turn, whether there is enough capital to cover losses. In the same way, higher-than-acceptable risk forces a reduction of exposure in a financial instrument.
One of its practical uses is to estimate the risk of an investment that is already in a portfolio. Indeed, this is the purpose of the Value at Risk calculated in this script.
At a VaR Breach, that investment has reached its worst expected scenario. It may then be time to manage that investment within the balanced portfolio.
The Value at Risk does not tell you when to enter the market.
Moving Averages
GA Value at Risk bases its calculations on a set of Moving Averages. Every feature of the script uses one of these Moving Averages for its algorithm.
Moving Averages from MA0 to MA8 are the core of each feature of the script.
By default, the Moving Averages from MA0 to MA8 use the Fibonacci series to define their lengths. This is because of the power of the Golden Ratio in market behavior.
Instead, the first moving average is an extra resource. Its purpose is to plot a Signal Line on the chart.
The script does not plot every Moving Average on the chart, but it lets you enable the plotting of 7 of them (MA0 to MA5, plus the Signal Line).
It is possible to select the Moving Average formula used in the script. This setting affects every Moving Average, and therefore also the result of every feature of the script.
The selection is between:
Exponential Moving Average.
Simple Moving Average.
Weighted Moving Average.
Simple Moving Averages and Pointers - Full Visibility
Moving Averages and Partial Visibility
The plotting of each Moving Average can be total or partial.
By default, the plotting of Moving Averages and Signal Line is partial.
When the price approaches a Moving Average, a small part of the curve becomes visible. This highlights supports or resistances.
Besides, this tracking remains on the chart, showing the supports and resistances that the price reached during its progression.
The Partial Visibility Algorithm is a great advantage, ruling how curves are plotted. It uses a parameter to set how much of each curve to plot.
Exponential Moving Averages and Pointers - Partial Visibility
Exponential Moving Averages and Pointers - Full Visibility
Moving Averages and Pointers
Clearly, it is not necessary to plot the entire Moving Average curves on the chart. What matters is plotting Pointers to the Moving Averages.
Indeed, the script plots horizontal segments that point to the latest Average Prices.
Every segment has a Label that shows the Average Price, Length, and its related Moving Average (from MA0 to MA8). Besides, it is possible to extend the segment to the right.
These pointers are a very useful automation. They point to the Moving Averages, and in this way they show Dynamic Supports and Resistances as horizontal segments.
They are adaptive. Used together with the Volume Profile, their progression approaches the edges of Volume High Nodes.
This adaptive behavior makes it easy to see when the price reaches Volume High Nodes and slows down.
Moving Average Pointers use the Partial Visibility Algorithm. In this case, the algorithm shows pointers with higher frequency than curves.
Moving Average Pointers have:
Horizontal Segment as a Pointer with Arrow.
Label with details.
Circle at the current Average Price.
Weighted Moving Averages and Pointers - Full Visibility
Volatility Channels
Having Moving Averages from MA0 to MA8, it is possible to plot 9 Volatility Channels.
Each Volatility Channel uses one of the Moving Averages, from MA0 to MA8.
Indeed, each Volatility Channel has the same designation as the Moving Average it uses.
The Standard Deviation defines the Volatility Range, using the length of the Moving Average related to the Volatility Channel.
The Volatility Range is unique for each Volatility Channel. In the same way, each Volatility Channel is unique because of its relation to exactly one Moving Average.
By default, each Volatility Channel uses 2 as the Standard Deviation Multiplier. This gives 95% confidence that the price will stay inside the Volatility Range.
Using the Simple Moving Average, each Volatility Channel becomes a Bollinger Bands envelope.
Volatility Channels work very well even when using Exponential or Weighted Moving Averages. A sketch of this construction follows.
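The construction can be sketched as below. The Fibonacci lengths are assumptions based on the description above (the script's exact MA0-MA8 lengths are not published here); with the Simple Moving Average variant, each channel reproduces a Bollinger-style envelope.

```python
import statistics

# Sketch of one Volatility Channel per Fibonacci length (lengths are assumed).
FIB_LENGTHS = [8, 13, 21, 34, 55, 89, 144, 233, 377]  # MA0 .. MA8 (illustrative)

def volatility_channel(closes, length, mult=2.0):
    window = closes[-length:]
    mid = sum(window) / length                 # Simple MA -> Bollinger-style basis
    dev = statistics.pstdev(window) * mult     # Standard Deviation * multiplier
    return mid - dev, mid, mid + dev           # lower band, basis, upper band

closes = [100 + 0.1 * i for i in range(400)]   # synthetic price series
for name, n in zip((f"MA{i}" for i in range(9)), FIB_LENGTHS):
    lo, mid, hi = volatility_channel(closes, n)
    print(name, round(lo, 2), round(mid, 2), round(hi, 2))
```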
MA0 - Volatility Channel
Volatility Channels - From MA0 to MA8
Value at Risk (VaR)
GA Value at Risk plots VaR according to the volatility. The VaR plotting follows the Trend Momentum or Buying-Selling Waves.
By default, VaR follows the Trend Momentum by 2 times the Standard Deviation of MA0, where MA0 is the first Moving Average and Volatility Channel of the set.
Besides, by default, the calculation of the Value at Risk is adaptive. It does not follow the Volatility Channel Bands; instead, it changes according to the fast reaction of the price within the Volatility Range.
By default, VaR follows the main momentum even if the price is moving in opposition to it. This occurs as long as the Trend Momentum persists.
In the settings box, it is possible to select following the latest Buying Wave or Selling Wave instead.
In this case, VaR changes with each change of Buying Wave or Selling Wave. This means that, in these conditions, VaR follows the main swings. It then tracks the weakening and strengthening of the trend momentum as long as it persists.
The plotting of the Value at Risk can show these features:
Red circle showing the Value at Risk at the current price.
Look-Back Red Line showing the progression of the Value at Risk.
Label with details.
MA0 - Value at Risk - Not Adaptive
MA0 - Value at Risk - Adaptive
It is possible to use a different Moving Average and Volatility Channel from the set. This affects the calculation and the plotting of the Value at Risk. In this way, the algorithm returns the Value at Risk for the short, middle, or long term.
Then, you can get the Value at Risk for that financial instrument calculated for about 1 year or more, as well as for 1 month.
The Value at Risk does not tell you when to enter the market. Besides, it does not show you that the trend is changing.
MA3 - Value at Risk - Adaptive
Value at Profit (VaP)
The Value at Profit has a descriptive purpose. It points to the Volatility Band opposite the Value at Risk.
I chose Value at Profit as a designation for this feature. It does not tell you where to exit the market.
But it shows what the price progression is pointing toward. This happens by following the switching between Volatility Ranges.
The VaP follows the Volatility Band where the price tends to converge.
An outperforming or underperforming price is running faster than the average trend. So when the price runs far enough to converge on the Volatility Band, it is overextended or underextended.
In these conditions, the increased buying or selling pressure affects the price behavior. This slows down the price progression.
The algorithm behind the Value at Profit is adaptive, so the pointer jumps up and down the Volatility Bands of the 9 Volatility Channels. This occurs according to the price progression, following the switching between Volatility Ranges.
So, the VaP points to a Volatility Band as long as the price has a chance to converge on it. Instead, when the price is likely to exceed the Volatility Band, the VaP points to the next one.
The plotting of the Value at Profit is enabled via its Label with details.
Value at Profit - MA0 Volatility Channel Upper Band
Value at Profit - MA6 Volatility Channel Upper Band
Price Extension
When the price runs far away from the average trend price, GA Value at Risk can plot the price extension.
It shows the percentage distance of the price from a Moving Average of the set. This tends to highlight conditions where the price is over- or under-extended (a small sketch follows below).
An overbought or oversold condition precedes the Shortening of the Thrust. It is a cause of the price's hesitation to continue its progression. This also includes Climactic Points and Signs of Dominance.
The Price Extension plotting uses a variation of the Partial Visibility Algorithm. It plots the Price Extension Arrow only when there are specific volatility conditions.
When the Partial Visibility is set to 0, the Price Extension Arrow is always visible on the chart.
The plotting of the Price Extension includes a Label with details.
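The percentage-distance idea behind Price Extension reduces to a one-line formula, sketched here with hypothetical numbers (the script's thresholds and volatility conditions are not reproduced).

```python
# Sketch of the percentage-distance idea behind Price Extension (illustrative).
def price_extension(price, ma_value):
    return (price - ma_value) / ma_value * 100.0  # % above (+) or below (-) the MA

print(price_extension(105.0, 100.0))  # 5.0 -> price is 5% over-extended vs the MA
```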
Over Extension - The Price is Outperforming MA0
Under Extension - The Price is Underperforming MA0
Price Extension Coloring for Bars and Line Chart
GA Value at Risk lets you enable the coloring of vertical charts. Green and red colors mark over- and under-extended prices on bars, candlesticks, and also on the Line Chart.
The Price Extension Algorithm colors Bars and Line Chart by a momentum function.
Indeed, the coloring follows the Relative Strength Index or Bollinger Bands %B.
These two momentum functions are different; they color the chart according to the purpose of their curves.
To color the Line Chart, it is necessary to bring the script's visibility to the front.
Overbought and Oversold Conditions on Line Chart by Bollinger Bands %B
Overbought and Oversold Conditions on Candlesticks Chart by Relative Strength Index
Note: I restrict access to the tool. Use the links in my signature field to gain access to the script. Feel free to send me a PM for any question.
Thank you
Girolamo Aloe
Founder of Profiting Me Finance Analytics
-
Disclaimer
Nobody on Girolamo Aloe's websites and TradingView profile is a Financial Advisor. Nothing therein is intended to be construed as Financial Advice. The content on his websites is for information and educational purposes only.
Trading carries high risk. You should not invest money that you cannot afford to lose. Past performance is not an indication of future results.
Karkadann
Karkadann is an indicator derived from a Naberius trading algorithm. It represents a middle ground between our two other algorithms, Mammon and Malphas.
It detects the current trend ranges in the market and prints a suggested entry accordingly at assumed trend channel tops and bottoms upon encountering stalled-out price action usually indicative of a retracement. As such, Karkadann can be traded on nearly any timeframe.
This algorithm was developed primarily to trade leveraged XBT; however, after exploring larger altcoins and the more traditional markets outside of cryptocurrency, we found that Karkadann does better than the average trader regardless of the pair or ticker being traded at the time. Any core changes to the live trading algorithm will be added to this indicator as they are deployed.
Suggested Methods of Operation:
1. Buy and Sell signals represent a possible trading opportunity. Based on our testing, manual traders should use the 15m - 60m for scalping and 240m - 1D for larger swings.
2. Upon a signal print, place your limit orders spread throughout the current candle's total body range. DO NOT MARKET IN. DO NOT CHASE. If the limit orders don't fill within the following candle, regardless of the timeframe being traded, remove them and re-evaluate.
3. Use standard candles. Heikin Ashi candles are OK but can be deceiving in times of localized price volatility.
4. Trade the trend or wait for extreme price action, counter to the trend, to take up positions.
Machine Learning Adaptive Trend Toolkit [Velowave]
The Machine Learning Adaptive Trend Toolkit is a technical analysis tool that combines adaptive algorithms with comprehensive market feature extraction to provide insights into changing market conditions. Unlike static indicators with fixed parameters, this system continuously analyzes and adapts to the evolving market environment.
Core Technology
At the heart of this system is a dynamic approach to market analysis:
• Feature Engineering Pipeline: Extracts and normalizes volatility, momentum, volume, and trend strength metrics
• Market Regime Classification: Identifies 10 distinct market environments including trending, ranging, breakout, and reversal conditions
• Parameter Optimization: Automatically adjusts sensitivity based on detected market conditions
• Dynamic Wave Technology: Creates adaptive support/resistance levels that respond to changing volatility
⚠️ Signal Interpretation
Important: The indicator's wave crosses should be interpreted as trend change signals rather than direct buy/sell recommendations. These signals represent potential trend changes based on adaptive parameters, but require confirmation from other analysis before making trading decisions.
(Image showing example color customizability)
Custom Candle Behavior
The custom candlesticks in this indicator are designed to enhance trend visualization but will behave differently than default candlesticks:
• They use linear regression smoothing to reduce noise
• Their coloring is based on position relative to the adaptive wave, not merely open/close relationships
• They may show different patterns than traditional candlesticks on the same chart
• Trading strategies developed using traditional candlestick patterns may not apply directly to these custom candles
This modified representation helps visualize trend conditions more clearly but should be understood as an analytical tool rather than a direct replacement for traditional price action analysis.
Practical Applications
• Trend Identification
The adaptive wave system provides clear visualization of trend direction and strength, with dynamic support and resistance levels that adjust to current volatility conditions.
• Volatility-Adjusted Analysis
Parameters automatically optimize during high and low volatility periods, preventing false signals during consolidation while remaining responsive during breakouts.
• Regime-Based Strategy Selection
Knowing the current market regime allows you to apply appropriate trading techniques for specific conditions rather than using a one-size-fits-all approach.
• Visual Price Action Analysis
Enhanced candlestick coloring instantly communicates price position relative to the adaptive trend, helping you process market information more efficiently.
(Image showing only the supertrend wave and dynamic moving average)
Technical Components
• Adaptive Wave Algorithm: Creates dynamic support/resistance bands based on volatility, volume, and detected regime
• Dynamic Moving Average: Period automatically adjusts based on market conditions - shorter in trending markets, longer in ranging conditions (see the sketch after this list)
• Market Regime Engine: Continuously analyzes feature patterns to classify current conditions
• Custom Candlestick Visualization: Provides instant visual feedback on trend position and momentum
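As a generic illustration of the dynamic moving average idea referenced above, the sketch below adapts the period with a Kaufman-style efficiency ratio: shorter in trends, longer in ranges. This ratio is a stand-in assumption; the indicator's actual regime features are proprietary and not reproduced here.

```python
# Generic sketch of a period that adapts to trend strength (illustrative only).

def adaptive_ma_period(closes, lookback=20, min_p=5, max_p=50):
    window = closes[-lookback:]
    net_move = abs(window[-1] - window[0])
    path = sum(abs(a - b) for a, b in zip(window[1:], window[:-1]))
    efficiency = net_move / path if path else 0.0   # ~1 trending, ~0 ranging
    # Strong trend -> shorter period; ranging market -> longer period.
    return round(max_p - efficiency * (max_p - min_p))

closes = [100 + i * 0.5 for i in range(40)]          # perfectly trending series
print(adaptive_ma_period(closes))                     # -> short period (5)
```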
Implementation Details
For full transparency, the core calculations include:
• Volatility normalization through comparative ATR analysis
• Momentum feature extraction using multi-timeframe momentum indicators
• Trend strength quantification through price structure analysis
• Regime detection through feature pattern recognition
• Adaptive parameter adjustment based on detected market conditions
The system uses only historical and current price data for its calculations and analyses. It does not use predictive methodologies that could lead to misleading results. The indicator will show different values on an open bar than it will after the bar closes, which is standard behavior for indicators that use closing prices in their calculations.
Risk Disclaimer:
Trading involves significant risk. This indicator is designed as an analytical tool to enhance decision-making, not as a standalone trading system. Past performance is not indicative of future results.
[𝘚𝘏] - 𝘋𝘐𝘗 𝘏𝘜𝘕𝘛𝘌𝘙
Dip Hunter is a cryptocurrency trading algorithm designed to help users accumulate profits by identifying the best entry and exit points for various cryptocurrencies. Here are its key features:
Versatility: The algo can be used with any cryptocurrency, not just Bitcoin or Ethereum.
Market Analysis: It uses machine learning to analyze market trends and pinpoint the best entry points before a bounce.
Buy and Sell Signals: The algo identifies the absolute best prices to buy and sell, making it easy for users to follow along and make trades.
Ease of Use: It is designed to be easy to set up, even for those with minimal technical knowledge, and can be configured in under 15 minutes.
Accuracy: The algo boasts an accuracy rate of over 90%, which is typically seen in high-end trading algorithms costing thousands of dollars per month.
MarketLumina
MarketLumina: A Comprehensive Technical Analysis Tool
MarketLumina is a technical analysis indicator crafted by a team of traders and developers in Germany. Built for TradingView’s Pine Script, it integrates trend visualization, signal generation, and real-time market insights to provide a multifaceted view of market conditions. This tool is designed to support traders in analyzing trends, spotting potential reversals, and evaluating market dynamics across various timeframes.
The best way to get started with MarketLumina is to take your time exploring its wide range of features. Dive in, experiment, and find the 2-3 tools that feel just right for you. Whether you’re a day trader looking for quick signals, a swing trader tracking trends, or an investor watching the bigger picture, MarketLumina lets you pick and choose what works best. Over time, you’ll craft your own unique trading strategy, perfectly tailored to your goals, preferences, and risk tolerance.
Key Features
Fibonacci Trend-Cloud
Displays market direction through Fibonacci-weighted moving averages. The cloud’s color—green (bullish), red (bearish), or yellow (caution)—reflects prevailing conditions, while its width indicates trend intensity.
Advanced Signal System
Generates signals derived from RSI, momentum, volume, money flow, volatility, price action, divergences, specific cloud interactions, and historical data. Signal categories include strong reversals, potential reversals, short-term tops/bottoms, strong trend, oversold/overbought conditions, exit signals, and money flow strategy triggers.
LuminaPulse – Real-Time Market Insight
A proprietary module that delivers real-time market analysis through a dashboard of six progress bars, each tailored to the symbol and timeframe using a machine learning approach. It screens historical data—key levels, consolidation zones, volatility spikes, and past price reactions—to optimize insights.
Support & Resistance Zones
Highlights critical price levels using volume-weighted historical data and price-action pivot points.
Candlestick-Overlay
Applies color coding to candlesticks—green (bullish), red (bearish), yellow (caution)—to emphasize signal-relevant bars.
Usage Instructions
MarketLumina is intended as a component of a broader analytical framework.
Below are general guidelines for its application:
Multi-Timeframe Analysis
Align signals with trends on higher timeframes for context.
LuminaPulse Interpretation
Evaluate confluence across trend strength, momentum, money flow, and volume to assess market conditions. Additionally, monitor squeeze conditions for potential breakout signals and volatility to gauge market activity.
Trend-Cloud Context
Use the Fibonacci Trend-Cloud’s direction and width as a filter for signal relevance.
Usage Instructions for MarketLumina’s Advanced Signal System
The Advanced Signal System is a core component of MarketLumina, designed to empower traders by generating a variety of signals derived from RSI, momentum, volume, money flow, volatility, divergences, price action, and more. These signals are organized into distinct categories to help you identify key market conditions and uncover potential trading opportunities.
Below is a comprehensive guide to each signal category, including descriptions, interpretations, and practical applications to enhance your trading decisions:
Strong Reversals
Reversal Signals are generated using a complex price action and volatility algorithm, pinpointing significant potential turning points in the market with elevated confidence.
How to Use:
Look for these signals near critical support or resistance levels, especially when supported by the Fibonacci Trend-Cloud or LuminaPulse metrics.
Treat them as powerful reversal cues when they align with overarching market trends or follow prolonged price movements.
Interpretation:
A bullish Reversal signal flags a strong probability of an upward reversal, often in oversold conditions, suggesting a shift to bullish momentum.
A bearish Reversal signal points to a likely downward reversal, typically in overbought scenarios, indicating bearish potential.
Their reliability increases with confluence factors like divergences or a notable shift in money flow.
Potential Reversals
These signals flag a possible trend continuation after a pullback, based on price action, RSI thresholds, and specific trend-cloud interaction, offering early insights with moderate certainty compared to strong reversals.
How to Use:
Use them as preliminary alerts that a pullback may be reversing and resuming its trend, particularly near support or resistance zones.
Validate their strength with additional tools like the Trend-Cloud thickness or LuminaPulse to gauge reliability.
Interpretation:
Bullish potential reversals hint at the onset of an upward move, while bearish ones suggest a downward continuation may be brewing.
Ideal for spotting early opportunities, these signals gain credibility when paired with confirming indicators.
Short-Term Tops/Bottoms
These signals mark temporary price extremes, identifying short-term tops or bottoms within a trend, driven by Multi-RSI algorithms.
How to Use:
In trending markets, leverage these signals to anticipate brief pullbacks or corrections within the dominant direction.
In range-bound markets, use them to pinpoint reversal points within the established range.
Interpretation:
A short-term top indicates a possible temporary high, offering opportunities to lock in profits or brace for a dip.
A short-term bottom suggests a fleeting low, signaling a potential bounce or recovery within the larger trend.
Oversold/Overbought Conditions
This category highlights extreme market states with oversold/overbought conditions, derived from RSI and price action.
How to Use:
In strong trends, these signals affirm the likelihood of potential temporary exhaustion.
In weaker trends, they signal potential exhaustion and could be an early indication of reversals.
Interpretation:
Oversold signals in strong trends could mark a short-term break or slower trend continuation and should not be interpreted as a reversal signal.
Strong Trend
These signals flag possible trend continuation when six key metrics—RSI, Money Flow, Momentum, and more—align to confirm robust momentum.
How to Use:
In strong trends, these signals affirm the likelihood of a continuation.
Interpretation:
Strong trend signals could be interpreted as a confirmation of the bullish movement and a possible continuation.
Money Flow Strategy Triggers
Built on money flow analysis, these signals track capital inflows and outflows on multiple timeframes to reveal shifts in buying or selling pressure, offering a window into market sentiment.
How to Use:
Deploy these triggers to refine entry or exit timing, especially when they sync with other signals and the Trend-Cloud’s direction.
Pair them with LuminaPulse’s Money Flow, Momentum and volume sentiment for a deeper understanding of market participation.
Interpretation:
Positive money flow triggers indicate rising buying pressure, often a precursor to upward price action.
Negative money flow triggers signal increasing selling pressure, potentially foreshadowing a downturn.
Their value shines when diverging from price action, exposing hidden strength or weakness in the market.
Usage Instructions for LuminaPulse
LuminaPulse is a standout feature of MarketLumina, delivering real-time insights into market conditions through a sophisticated, machine-learning-driven approach. It analyzes historical data unique to each symbol and timeframe—examining past key levels, consolidation zones, volatility spikes, and price reactions—to create a dashboard of six progress bars.
These bars represent the strength of critical market factors:
Money Flow
Momentum
Volume
Strength (Trend Strength)
Squeeze
Volatility
Each bar is color-coded—green for bullish conditions, red for bearish—and its fill level reflects the factor’s strength relative to historical patterns. A fully loaded bar suggests a high likelihood of a notable price reaction, based on how the market has responded to similar conditions in the past. What makes LuminaPulse unique is its ability to tailor these insights to the specific symbol and timeframe, going beyond raw metrics to show their historical significance.
Additionally, each bar features a "Ghost-Progress" overlay, marking the highest strength level reached in the current trend. This allows you to see whether the current strength is nearing or retreating from recent peaks, adding depth to your analysis.
How to Use LuminaPulse
LuminaPulse is a confirmation tool, not a standalone signal generator. It shines when paired with other MarketLumina features, like the Fibonacci Trend-Cloud or Advanced Signal System, as part of a broader trading strategy.
Here’s how to apply it effectively:
Seek Confluence
Check for alignment across multiple bars. For example, if Money Flow, Momentum, and Volume are all green and highly filled, it could indicate strong bullish potential.
Spot Divergences
Look for mismatches between price action and the bars. If price rises but Momentum weakens, it might hint at a fading trend.
Monitor Squeeze: A fully loaded Squeeze bar signals consolidation and potential volatility ahead. Use other tools to predict the breakout direction.
Assess Volatility: The Volatility bar sets the context—high levels suggest bigger price swings, while low levels indicate a calmer market.
Interpreting Each Progress Bar
1. Money Flow
Measures the strength of money flowing into or out of the market, compared to historical thresholds, key levels, and past price reactions, using a machine learning approach tailored to the symbol and timeframe. It's not just the raw money flow index—it's the likelihood of a price move based on historically similar money flow movements.
How to Use:
Look for a fully loaded bar alongside a strong Momentum bar near key levels or signals.
Watch for a bar switching colors (e.g., red to green) with a robust Momentum bar for potential trend shifts.
Treat it as the fuel behind price moves, not the absolute flow level.
Interpretation:
A fully loaded green bar suggests strong buying pressure; a red bar indicates selling pressure.
Divergence (e.g., price up, Money Flow down) can signal an impending reversal—confirm with other tools.
2. Momentum
Gauges the strength and direction of price momentum, factoring in historical key levels, volatility, and past reactions, optimized by a machine learning approach, tailored to the symbol and timeframe. It reflects momentum’s strength and potential impact, not just its current state.
How to Use:
Pair a fully loaded bar with a strong Money Flow bar near signals or key levels.
A switching bar (e.g., bearish to bullish) with a solid Money Flow bar may hint at a trend change.
View it as the driving force behind price momentum.
Interpretation:
A fully loaded green bar signals powerful upward momentum; a red bar shows downward force.
Divergence from price action (e.g., price down, Momentum up) can be a reversal clue—verify with confluence.
3. Volume
Shows whether volume is pushing price up or down, based on historical patterns and key levels near the current price, tailored to the symbol and timeframe.
How to Use:
Look for a bar over 50% filled, aligned with Money Flow and Momentum, near signals or key levels.
Combine a strong bar with a fully loaded Squeeze bar for breakout potential.
See it as the muscle behind buying or selling pressure.
Interpretation:
A green bar over 50% suggests volume supports upward moves; a red bar indicates downward pressure.
Alignment with other bars near support/resistance can confirm breakouts or rejections.
4. Strength (Trend Strength)
Focuses on the current trend’s robustness, comparing it to historical price movements, trend direction, and volatility. It helps spot pullbacks or early trend-shift warnings.
How to Use:
Watch for a fully loaded bar opposite your trade, paired with weakening Money Flow or Momentum, as an exit cue.
For reversals, confirm a fully loaded bar with at least two other aligned bars.
Use it to gauge the power of short-term price action.
Interpretation:
A fully loaded bar with supporting bars confirms trend strength.
A dropping bar as price tests key levels may signal a pullback or shift—check support/resistance.
5. Squeeze
Highlights consolidation and building pressure from buyers and sellers, suggesting a big move ahead. Its color reflects the trend but isn’t a reliable directional guide.
How to Use:
A fully loaded bar signals an imminent breakout—use other indicators for direction.
Pair with strong Strength and Volume for timing confirmation.
Treat it as a timing tool, not a directional one.
Interpretation:
A fully loaded bar means a significant move is likely, but not where it’s headed.
Use it to prepare for action, not to predict the outcome—direction comes from confluence.
6. Volatility
Measures current volatility relative to historical levels, using a machine learning approach to analyze past volatility and duration patterns specific to the symbol and timeframe. A calm bar might still appear during big swings if that is normal for the asset, or a calm bar could appear after a big move if the asset typically shows single volatility spikes with consolidation afterwards.
How to Use:
Use a high Volatility bar (fully loaded) to favor short-term trades; a low bar (empty) suggests a quieter market.
Pair with Squeeze to anticipate breakout strength.
Adjust your strategy based on the market’s activity level.
Interpretation:
A fully loaded bar signals high volatility and bigger swings; an empty bar indicates low volatility and smaller moves.
Context is key—high volatility for one symbol might be calm for another, based on its history.
Key Features of LuminaPulse
Tailored Insights: Each bar’s strength is customized to the symbol and timeframe’s historical behavior, making it uniquely relevant.
Ghost-Progress: See the peak strength in the current trend, helping you judge if conditions are peaking or fading.
Individual-Adapting Edge: Algorithms adapt to historical data, ensuring insights reflect past reactions, not just current values.
Important Notes
LuminaPulse is a complex, unique tool designed to enhance your analysis, not dictate trades. Its strength lies in its historical context and real-time adaptability, but it’s most effective when combined with other MarketLumina features and your own strategy.
Illustrative Scenarios
Trend Continuation Example
Picture a market where momentum is steadily building. The Fibonacci Trend-Cloud turns red across both the primary and higher timeframes, reflecting a strong bearish direction. As this trend takes shape, reversal or strategy-based signals begin to line up with the cloud’s downward tilt, hinting at sustained weakness. Short-term bottoms and tops might start forming, offering clues about the trend’s rhythm, while a widening cloud could suggest growing confidence in the move. This setup showcases how the indicator can highlight a trend gathering steam, with multiple features reinforcing the direction.
Reversal Example
Imagine a market that’s been rising but approaches a key support zone. Suddenly, strong reversal signals flash on the chart, catching attention near this critical level. Price action starts to stabilize or reject, while LuminaPulse metrics show a subtle uptick in momentum or a shift in volume sentiment. As the market tests this zone, opposing signals fade, and the potential for a downward turn becomes clearer. This scenario illustrates how the indicator’s signals and metrics can converge to spotlight a possible shift in direction.
Pullback Analysis Example
Consider a strong bullish trend unfolding on the higher timeframe, painting a broad picture of upward movement. Zooming into the lower timeframe, a brief retracement emerges, pulling price back toward a support level. Here, strategy-based or reversal signals might pop up, marking this as a key area to watch. LuminaPulse could reveal a slowdown in downward momentum or a tightening of trend strength, suggesting the retracement might be running out of energy. This example demonstrates how the indicator can help dissect a pullback, revealing opportunities within an ongoing trend.
Range-Bound Market Example
Envision a market stuck in a sideways drift, with the Fibonacci Trend-Cloud narrowing and turning yellow—a sign of consolidation. Reversal signals begin appearing near support and resistance zones, hinting at potential bounces within the range. LuminaPulse metrics might spike, showing bursts of volatility or squeeze conditions building up. As price nears these boundaries, the chance of a breakout looms, with retests of the zones offering further clarity.
These examples show how MarketLumina’s features—like the cloud’s color and width, signal alignments, and LuminaPulse shifts—can work together to illuminate market dynamics. Whether it’s a trend gaining traction, a reversal brewing, a pullback pausing, or a range tightening, the indicator provides visual and analytical cues to explore. By watching how these elements evolve, you can get a feel for the market’s rhythm and sharpen your understanding of what to look for in different situations.
Legal Notices
MarketLumina is a technical analysis tool, not a substitute for professional financial advice.
Trading carries inherent risks; past performance does not guarantee future outcomes.
All content is provided for educational purposes only and does not constitute trading recommendations. Users bear full responsibility for their trading decisions and are urged to prioritize robust risk management.
Volume Predictor [PhenLabs]
📊 Volume Predictor
Version: PineScript™ v6
📌 Description
The Volume Predictor is an advanced technical indicator that leverages machine learning and statistical modeling techniques to forecast future trading volume. This innovative tool analyzes historical volume patterns to predict volume levels for upcoming bars, providing traders with valuable insights into potential market activity. By combining multiple prediction algorithms with pattern recognition techniques, the indicator delivers forward-looking volume projections that can enhance trading strategies and market analysis.
🚀 Points of Innovation:
Machine learning pattern recognition using Lorentzian distance metrics
Multi-algorithm prediction framework with algorithm selection
Ensemble learning approach combining multiple prediction methods
Real-time accuracy metrics with visual performance dashboard
Dynamic volume normalization for consistent scale representation
Forward-looking visualization with configurable prediction horizon
🔧 Core Components
Pattern Recognition Engine : Identifies similar historical volume patterns using Lorentzian distance metrics
Multi-Algorithm Framework : Offers five distinct prediction methods with configurable parameters
Volume Normalization : Converts raw volume to percentage scale for consistent analysis
Accuracy Tracking : Continuously evaluates prediction performance against actual outcomes
Advanced Visualization : Displays actual vs. predicted volume with configurable future bar projections
Interactive Dashboard : Shows real-time performance metrics and prediction accuracy
🔥 Key Features
The indicator provides comprehensive volume analysis through:
Multiple Prediction Methods : Choose from Lorentzian, KNN Pattern, Ensemble, EMA, or Linear Regression algorithms
Pattern Matching : Identifies similar historical volume patterns to project future volume
Adaptive Predictions : Generates volume forecasts for multiple bars into the future
Performance Tracking : Calculates and displays real-time prediction accuracy metrics
Normalized Scale : Presents volume as a percentage of historical maximums for consistent analysis
Customizable Visualization : Configure how predictions and actual volumes are displayed
Interactive Dashboard : View algorithm performance metrics in a customizable information panel
🎨 Visualization
Actual Volume Columns : Color-coded green/red bars showing current normalized volume
Prediction Columns : Semi-transparent blue columns representing predicted volume levels
Future Bar Projections : Forward-looking volume predictions with configurable transparency
Prediction Dots : Optional white dots highlighting future prediction points
Reference Lines : Visual guides showing the normalized volume scale
Performance Dashboard : Customizable panel displaying prediction method and accuracy metrics
📖 Usage Guidelines
History Lookback Period
Default: 20
Range: 5-100
This setting determines how many historical bars are analyzed for pattern matching. A longer period provides more historical data for pattern recognition but may reduce responsiveness to recent changes. A shorter period emphasizes recent market behavior but might miss longer-term patterns.
🧠 Prediction Method
Algorithm
Default: Lorentzian
Options: Lorentzian, KNN Pattern, Ensemble, EMA, Linear Regression
Selects the algorithm used for volume prediction:
Lorentzian: Uses Lorentzian distance metrics for pattern recognition, offering excellent noise resistance
KNN Pattern: Traditional K-Nearest Neighbors approach for historical pattern matching
Ensemble: Combines multiple methods with weighted averaging for robust predictions
EMA: Simple exponential moving average projection for trend-following predictions
Linear Regression: Projects future values based on linear trend analysis
Pattern Length
Default: 5
Range: 3-10
Defines the number of bars in each pattern for machine learning methods. Shorter patterns increase sensitivity to recent changes, while longer patterns may identify more complex structures but require more historical data.
Neighbors Count
Default: 3
Range: 1-5
Sets the K value (number of nearest neighbors) used in KNN and Lorentzian methods. Higher values produce smoother predictions by averaging more historical patterns, while lower values may capture more specific patterns but could be more susceptible to noise.
Prediction Horizon
Default: 5
Range: 1-10
Determines how many future bars to predict. Longer horizons provide more forward-looking information but typically decrease accuracy as the prediction window extends.
📊 Display Settings
Display Mode
Default: Overlay
Options: Overlay, Prediction Only
Controls how volume information is displayed:
Overlay: Shows both actual volume and predictions on the same chart
Prediction Only: Displays only the predictions without actual volume
Show Prediction Dots
Default: false
When enabled, adds white dots to future predictions for improved visibility and clarity.
Future Bar Transparency (%)
Default: 70
Range: 0-90
Controls the transparency of future prediction bars. Higher values make future bars more transparent, while lower values make them more visible.
📱 Dashboard Settings
Show Dashboard
Default: true
Toggles display of the prediction accuracy dashboard. When enabled, shows real-time accuracy metrics.
Dashboard Location
Default: Bottom Right
Options: Top Left, Top Right, Bottom Left, Bottom Right
Determines where the dashboard appears on the chart.
Dashboard Text Size
Default: Normal
Options: Small, Normal, Large
Controls the size of text in the dashboard for various display sizes.
Dashboard Style
Default: Solid
Options: Solid, Transparent
Sets the visual style of the dashboard background.
Understanding Accuracy Metrics
The dashboard provides key performance metrics to evaluate prediction quality:
Average Error
Shows the average difference between predicted and actual values
Positive values indicate the prediction tends to be higher than actual volume
Negative values indicate the prediction tends to be lower than actual volume
Values closer to zero indicate better prediction accuracy
Accuracy Percentage
A measure of how close predictions are to actual outcomes
Higher percentages (>70%) indicate excellent prediction quality
Moderate percentages (50-70%) indicate acceptable predictions
Lower percentages (<50%) suggest weaker prediction reliability
The accuracy metrics are color-coded for quick assessment:
Green: Strong prediction performance
Orange: Moderate prediction performance
Red: Weaker prediction performance
✅ Best Use Cases
Anticipate upcoming volume spikes or drops
Identify potential volume divergences from price action
Plan entries and exits around expected volume changes
Filter trading signals based on predicted volume support
Optimize position sizing by forecasting market participation
Prepare for potential volatility changes signaled by volume predictions
Enhance technical pattern analysis with volume projection context
⚠️ Limitations
Volume predictions become less accurate over longer time horizons
Performance varies based on market conditions and asset characteristics
Works best on liquid assets with consistent volume patterns
Requires sufficient historical data for pattern recognition
Sudden market events can disrupt prediction accuracy
Volume spikes may be muted in predictions due to normalization
💡 What Makes This Unique
Machine Learning Approach : Applies Lorentzian distance metrics for robust pattern matching
Algorithm Selection : Offers multiple prediction methods to suit different market conditions
Real-time Accuracy Tracking : Provides continuous feedback on prediction performance
Forward Projection : Visualizes multiple future bars with configurable display options
Normalized Scale : Presents volume as a percentage of maximum volume for consistent analysis
Interactive Dashboard : Displays key metrics with customizable appearance and placement
🔬 How It Works
The Volume Predictor processes market data through five main steps (a code sketch follows the steps):
1. Volume Normalization:
Converts raw volume to percentage of maximum volume in lookback period
Creates consistent scale representation across different timeframes and assets
Stores historical normalized volumes for pattern analysis
2. Pattern Detection:
Identifies similar volume patterns in historical data
Uses Lorentzian distance metrics for robust similarity measurement
Determines strength of pattern match for prediction weighting
3. Algorithm Processing:
Applies selected prediction algorithm to historical patterns
For KNN/Lorentzian: Finds K nearest neighbors and calculates weighted prediction
For Ensemble: Combines multiple methods with optimized weighting
For EMA/Linear Regression: Projects trends based on statistical models
4. Accuracy Calculation:
Compares previous predictions to actual outcomes
Calculates average error and prediction accuracy
Updates performance metrics in real-time
5. Visualization:
Displays normalized actual volume with color-coding
Shows current and future volume predictions
Presents performance metrics through interactive dashboard
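As a hedged illustration of steps 1-3, the sketch below normalizes volume, matches patterns with a Lorentzian distance (log(1 + |a - b|) per element is a common formulation), and averages the K nearest neighbors' next values. The parameter names and defaults here are assumptions, not the indicator's internals.

```python
import math

# Sketch of steps 1-3: normalize, match patterns, average K nearest next values.

def predict_next_volume(volumes, lookback=20, pattern_len=5, k=3):
    hist = volumes[-lookback:]
    norm = [v / max(hist) * 100.0 for v in hist]      # % of max volume (step 1)

    def lorentzian(p, q):                             # noise-resistant distance (step 2)
        return sum(math.log(1.0 + abs(a - b)) for a, b in zip(p, q))

    target = norm[-pattern_len:]                      # the most recent pattern
    candidates = []
    for i in range(len(norm) - pattern_len):
        pattern = norm[i:i + pattern_len]
        candidates.append((lorentzian(pattern, target), norm[i + pattern_len]))

    nearest = sorted(candidates)[:k]                  # step 3: K nearest patterns
    return sum(nxt for _, nxt in nearest) / len(nearest)
```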
💡 Note:
The Volume Predictor performs optimally on liquid assets with established volume patterns. It’s most effective when used in conjunction with price action analysis and other technical indicators. The multi-algorithm approach allows adaptation to different market conditions by switching prediction methods. Pay special attention to the accuracy metrics when evaluating prediction reliability, as sudden market changes can temporarily reduce prediction quality. The normalized percentage scale makes the indicator consistent across different assets and timeframes, providing a standardized approach to volume analysis.
Accurate Bollinger Bands mcbw_ [True Volatility Distribution]
The Bollinger Bands have become a very important technical tool for discretionary and algorithmic traders alike over the last decades. They were designed to give traders an edge in the markets by attaching probabilistic values to different levels of volatility. However, some of the assumptions that go into their calculations make them unreliable for traders who want a correct understanding of the volatility the bands are meant to represent. Let's go through what the Bollinger Bands are said to show, how their calculations work, the problems in those calculations, and how the indicator I am presenting today fixes them.
--> If you just want to know how the settings work then skip straight to the end or click on the little (i) symbol next to the values in the indicator settings window when its on your chart <--
--------------------------- What Are Bollinger Bands ---------------------------
The Bollinger Bands were formed in the 1980s, a time when many retail traders interacted with their symbols via physically printed charts and personal computer memory was measured in KB (about a factor of a million smaller than today). Bollinger Bands are designed to help a trader or algorithm see the likelihood of price expanding outside of its typical range; the further the lines are from the current price, the less often they will get hit. With a hands-on understanding, many strategies use these levels for designated breakout trades or to assist in defining price ranges.
--------------------------- How Bollinger Bands Work ---------------------------
The calculations that go into Bollinger Bands are rather simple. There is a moving average that centers the indicator, and an equidistant top band and bottom band are drawn at a fixed width away. The moving average is just a typical moving average (or common variant) that tracks the price action, while the distance to the top and bottom bands is a direct function of recent price volatility. The way that the distance to the bands is calculated is inspired by formulas from statistics. The standard deviation is taken from the candles that go into the moving average and then multiplied by a user-defined value to set the bands' position; I will call this value 'the multiple'. When discussing Bollinger Bands, the trading community at large normally treats 'the multiple' as a multiplier of the standard deviation as it applies to a normal distribution (Gaussian probability). On a normal distribution, the number of standard deviations away (which traders directly use as 'the multiple') directly corresponds to how likely/unlikely something is to happen:
1 standard deviation equals 68.3%, meaning that the price should stay inside the 1 standard deviation 68.3% of the time and be outside of it 31.7% of the time;
2 standard deviation equals 95.5%, meaning that the price should stay inside the 2 standard deviation 95.5% of the time and be outside of it 4.5% of the time;
3 standard deviation equals 99.7%, meaning that the price should stay inside the 3 standard deviation 99.7% of the time and be outside of it 0.3% of the time.
Therefore, when traders set 'the multiple' to 2, they interpret this as meaning that price will not reach the bands 95.5% of the time.
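To make this concrete, here is a sketch of the textbook calculation just described, plus a check of how often price actually stays inside the 2-STDEV band; on real data the empirical ratio routinely differs from the 95.5% shortcut. The synthetic series is purely illustrative.

```python
import statistics

# Textbook Bollinger Band calculation as described above (sketch).

def bollinger(closes, length=20, mult=2.0):
    window = closes[-length:]
    mid = sum(window) / length
    dev = statistics.pstdev(window) * mult
    return mid - dev, mid, mid + dev

def inside_band_ratio(closes, length=20, mult=2.0):
    hits = total = 0
    for i in range(length, len(closes)):
        lo, _, hi = bollinger(closes[:i], length, mult)
        hits += lo <= closes[i] <= hi      # did the next close stay inside?
        total += 1
    return hits / total                    # compare against the assumed 95.5%

closes = [100 + ((i * 7919) % 13 - 6) * 0.3 for i in range(300)]  # synthetic
print(f"inside 2-STDEV bands: {inside_band_ratio(closes):.1%}")
```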
---------------- The Problem With The Math of Bollinger Bands ----------------
In and of themselves the Bollinger Bands are a great tool, but they have become loaded with an incorrect sense of statistical meaning, when they should really just be taken at face value without any further interpretation or implication.
In order to explain this it is going to get a bit technical so I will give a little math background and try to simplify things. First let's review some statistics topics (distributions, percentiles, standard deviations) and then with that understanding explore the incorrect logic of how Bollinger Bands have been interpreted/employed.
---------------- Quick Stats Review ----------------
.
(If you are comfortable with statistics feel free to skip ahead to the next section)
.
-------- I: Probability distributions --------
When you have a lot of data it is helpful to see how many times different results appear in your dataset. To visualize this people use "histograms", which just show how many times each element appears in the dataset by stacking each of the same elements on top of each other to form a graph. You may be familiar with the bell curve (also called the "normal distribution", the name we will use here). The normal distribution histogram looks like a big hump around zero and then drops off super quickly the further you get from it. This shape (the bell curve) is very nice because it has a lot of very nifty mathematical properties and seems to show up in nature all the time. Since it pops up in so many places, society has developed many different shortcuts related to it that speed up all kinds of calculations, including the shortcut that 1 standard deviation = 68.3%, 2 standard deviations = 95.5%, and 3 standard deviations = 99.7% (these only apply to the normal distribution). Despite how handy the normal distribution is, all the shortcuts we have for it, and how much it shows up in the natural world, there is nothing that forces your specific dataset to look like it. In fact, your data can actually have any possible shape. As we will explore later, economic and financial datasets *rarely* follow the normal distribution.
-------- II: Percentiles --------
After you have made the histogram of your dataset you have built the "probability distribution" of your own dataset that is specific to all the data you have collected. There is a whole complicated framework for how to accurately calculate percentiles, but we will dramatically simplify it for our use. The 'percentile' in our case is just the number of data points we are away from the "middle" of the dataset (normally just 0). Let's say I took the difference of the daily close of a symbol for the last two weeks; green candles would be positive and red would be negative. In this example my dataset of day-by-day closing price differences is:
week 1:
week 2:
Sorting all of these values into a single dataset, I have:
I can separate the positive and negative returns and explore their distributions separately:
negative return distribution =
positive return distribution =
Taking the 25th percentile of these would just be taking the value that is 25% of the way toward the end of these returns. Likewise, the 100th percentile would just be taking the value at the very end of those:
negative return distribution (50%) = -5
positive return distribution (50%) = +4
negative return distribution (100%) = -10
positive return distribution (100%) = +20
Or, instead of separating the positive and negative returns, we can look at all of the differences in the daily close as pure price movement, without accounting for direction. In this case we would pool all of the data together by ignoring the negative signs of the negative returns:
combined return distribution =
In this case the 50%th and 100%th percentile of the combined return distribution would be:
combined return distribution (50%) = 4
combined return distribution (100%) = 10
Sometimes taking the positive and negative distributions separately is better than pooling them into a combined distribution for some purposes. Other times the combined distribution is better.
Most financial data has very different distributions for negative returns and positive returns. This is encapsulated in sayings like "Price takes the stairs up and the elevator down".
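To make the mechanics concrete, here is a minimal sketch in Python of the separation described above. The dataset is hypothetical and only for illustration, and the percentile rule is the same simplified one we just used (take the value that far through the sorted list):

returns = [3, -5, 4, -1, 20, -10, 2, -4, 8, -2]       # hypothetical daily close differences

negative = sorted(abs(r) for r in returns if r < 0)   # magnitudes: [1, 2, 4, 5, 10]
positive = sorted(r for r in returns if r > 0)        # [2, 3, 4, 8, 20]
combined = sorted(abs(r) for r in returns)            # pooled, signs ignored

def percentile(sorted_data, pct):
    # simplified rule from above: the value pct% of the way through the sorted list
    idx = min(int(pct / 100 * len(sorted_data)), len(sorted_data) - 1)
    return sorted_data[idx]

print(percentile(negative, 50), percentile(negative, 100))   # 4 10
print(percentile(positive, 50), percentile(positive, 100))   # 4 20

Notice how the two sides of this made-up dataset already give different percentile values, which is exactly the asymmetry the "stairs up, elevator down" saying describes.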
-------- III: Standard Deviation --------
The formula for the standard deviation (referred to here by its shorthand 'STDEV') can be intimidating, but going through each of its elements will illuminate what it does. The formula for STDEV is equal to:
square root ( ( sum of (x - mean)^2 ) / N )
Going back to the dataset that you might have, the variables in the formula above are:
'mean' is the average of your entire dataset
'x' is just representative of a single point in your dataset (one point at a time)
'N' is the total number of things in your dataset.
Going back to the STDEV formula above we can see how each part of it works, starting with the '(x - mean)' part. What this does is take every single point of the dataset and measure how far away it is from the mean of the entire dataset. Taking this value to the power of two, '(x - mean) ^ 2', means that points that are very far away from the dataset mean get 'penalized' much more heavily, while points that are very close to the dataset mean are not impacted as much. In practice, this means that if your dataset had a bunch of values that moved within a wide range but always stayed in that range, this value ('(x - mean) ^ 2') would end up being small. On the other hand, if your dataset was full of the exact same number but had a couple of outliers very far away, this value would be much larger, since the squaring in '(x - mean) ^ 2' makes them grow massive. Now including the sum, 'sum of (x - mean)^2', this just adds up all of the squared distances from the dataset mean. Then this is divided by the number of values in the dataset ('N'), and finally the square root of that value is taken.
There is nothing inherently special or definitive about the STDEV formula, it is just a tool with extremely widespread use and adoption. As we saw here, all the STDEV formula is really doing is measuring the intensity of the outliers.
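As a quick check of the walk-through above, here is a direct Python translation of the formula (population standard deviation; the sample values are made up purely for illustration):

import math

def stdev(data):
    # square root( sum of (x - mean)^2 / N ), exactly as written above
    mean = sum(data) / len(data)
    squared_distances = [(x - mean) ** 2 for x in data]  # far outliers grow quadratically
    return math.sqrt(sum(squared_distances) / len(data))

print(stdev([2, 2, 2, 2, 2]))    # 0.0  -- identical values, no deviation
print(stdev([2, 2, 2, 2, 50]))   # 19.2 -- one outlier dominates the result

The second print shows the point made above: a single distant outlier blows up the squared distances and therefore the whole statistic.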
--------------------------- Flaws of Bollinger Bands ---------------------------
The largest problem with Bollinger Bands is the assumption that price has a normal distribution. This assumption is massively incorrect for many reasons, which I will try to encapsulate in two points:
Price returns do not follow a normal distribution; every single symbol on every single timeframe has its own unique distribution that is specific to only itself. Therefore all the tools, shortcuts, and ideas that we use for normal distributions do not apply to price returns, and since they do not apply here they should not be used. A more general approach is needed that allows each specific symbol on each specific timeframe to be treated uniquely.
The distributions of price returns on the positive and negative side are almost never the same. A more general approach is needed that allows positive and negative returns to be calculated separately.
In addition to the issues with the normal distribution assumption, the standard deviation formula (as shown above in the quick stats review) is essentially just a tame measurement of outliers (a more aggressive form of outlier measurement might take the differences to the power of 3 rather than 2). This is a bit of a philosophical question, but does the measurement of outlier intensity as defined by the STDEV formula really measure what we want to know as traders when we're experiencing volatility? Or would adjustments to that formula better reflect what we *experience* as volatility when we are actively trading? This is an open-ended question that I will leave here, but I wanted to pose it because it is a key part of how Bollinger Bands work that we all assume as a given.
Circling back to the normal distribution assumption: the standard deviation formula used in the calculation of the bands only encompasses the deviation of the candles that go into the moving average and has no knowledge of the historical price action. Therefore the level of the bands may not really reflect how the price action behaves over a longer period of time.
------------ Delivering Factually Accurate Data That Traders Need ------------
In light of the problems identified above, this indicator fixes all of these issues and delivers statistically correct information that discretionary and algorithmic traders can use, with truly accurate probabilities. It takes the price action of the last 2,000 candles and builds a huge dataset of distributions that you can directly select your percentiles from. It also allows you to have the positive and negative distributions calculated separately, or, if you would like, you can pool all of them together into a combined distribution. In addition to this, there is a wide selection of moving averages directly available in the indicator to choose from.
Hedge funds, quant shops, algo prop firms, and advanced mechanical groups all employ the true return distributions in their work. Now you have access to the same type of data with this indicator, which does all the heavy lifting for you.
------------------------------ Indicator Settings ------------------------------
.
---- Moving average ----
Select the type of moving average you would like and its length
---- Bands ----
The percentiles that you enter here will be pulled directly from the return distribution of the last 2,000 candles. With the typical Bollinger Bands, traders would select 2 standard deviations and incorrectly think that the levels it highlights are the 95.5% levels. Now, if you want the true 95.5% level, you can just enter 95.5 into the percentile value here. Each of the three available bands takes the true percentile you enter here.
---- Separate Positive & Negative Distributions ----
If this box is checked, the positive and negative distributions are treated independently, completely separate from each other. You will see that the width of the top and bottom bands will be different for each of the percentiles you enter.
If this box is unchecked then all the negative and positive distributions are pooled together. You will notice that the width of the top and bottom bands will be the exact same.
---- Distribution Size ----
This is the number of candles that the price return is calculated over. EG: to collect the price return over the last 33 candles, the difference of price from now to 33 candles ago is calculated for the last 2,000 candles, to build a return distribution of 2000 points of price differences over 33 candles.
NEGATIVE NUMBERS(<0) == exact number of candles to include;
EG: setting this value to -20 will always collect volatility distributions of 20 candles
POSITIVE NUMBERS(>0) == number of candles to include as a multiple of the Moving Average Length value set above;
EG: if the Moving Average Length value is set to 22, setting this value to 2 will use the last 22*2 = 44 candles for the collection of volatility distributions
MORE candles being included will generally make the bands WIDER and their size will change SLOWER over time.
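For illustration, here is a rough Python sketch of how a band could be built from such a return distribution; the mechanics here are my assumption of the approach described, not the indicator's actual code. It collects the N-candle price differences over the last 2,000 candles, splits them by sign, and offsets the moving average by the chosen percentile of each side:

def percentile_bands(closes, ma, n_candles, pct, lookback=2000):
    start = max(n_candles, len(closes) - lookback)     # stay inside available history
    diffs = [closes[i] - closes[i - n_candles] for i in range(start, len(closes))]
    upside   = sorted(d for d in diffs if d > 0)       # positive return distribution
    downside = sorted(-d for d in diffs if d < 0)      # negative side, as magnitudes
    def pick(dist, p):                                 # same simplified percentile rule as before
        return dist[min(int(p / 100 * len(dist)), len(dist) - 1)]
    return ma + pick(upside, pct), ma - pick(downside, pct)

With the "Separate Positive & Negative Distributions" box unchecked, the upside and downside lists would simply be pooled into one list of magnitudes before picking the percentile.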
I wish you focus, dedication, and earnest success on your journey.
Happy trading :)
MidnightQuant Buy/Exit Signals
The MidnightQuant Indicator is a sophisticated trend-following tool designed for traders seeking an edge in market analysis through a multi-symbol, multi-timeframe approach. Built on an enhanced Supertrend algorithm, this indicator goes beyond traditional trend-following methods by integrating advanced features that cater to both novice and experienced traders. Its unique design provides comprehensive market insights, empowering traders to make informed decisions with confidence.
Keep in mind that it was tested mainly with higher timeframes, 4H, 1D, 1W.
Overview:
MidnightQuant is specifically engineered to simplify the complexity of market analysis by monitoring and analyzing multiple currency pairs simultaneously. It combines trend detection, reversal signals, and a user-friendly dashboard to present a holistic view of market conditions. Whether you're trading a single asset or managing a portfolio, MidnightQuant delivers actionable insights in real-time.
Key Features:
Multi-Symbol Trend Analysis:
MidnightQuant's most distinguishing feature is its ability to track and analyze up to ten different currency pairs simultaneously. Unlike traditional indicators that focus on a single asset, this multi-symbol capability provides a broader view of market dynamics, allowing traders to identify correlations and divergences across various pairs. This is particularly useful for traders who want to confirm the strength of a trend across different markets before making a trading decision.
Enhanced Supertrend Algorithm:
At the core of MidnightQuant lies an optimized Supertrend algorithm that has been fine-tuned for both accuracy and responsiveness. The algorithm calculates trend directions by factoring in average true range (ATR) data, which helps in identifying significant price movements while filtering out market noise. This results in more reliable trend detection and fewer false signals, making it a powerful tool for trend-following strategies.
Intuitive Dashboard Display:
The MidnightQuant dashboard is designed to centralize critical information, making it accessible at a glance. It displays four key columns: Potential Reversals, Confirmed Reversals, Bullish Trends, and Bearish Trends. Each column provides a quick summary of the current market state for all tracked symbols, allowing traders to see where potential opportunities lie. This streamlined presentation reduces the need for constant chart monitoring and helps traders focus on the most promising setups.
Visual Signals and Candlestick Integration:
MidnightQuant enhances chart readability by incorporating visual signals directly on the price chart. Buy and sell signals are clearly marked at points where trend reversals are detected, providing immediate entry and exit cues. Additionally, the indicator color-codes candlesticks according to the current trend direction—purple for bullish and light lavender for bearish—enabling traders to instantly gauge market sentiment.
Customizable Alerts:
The indicator includes flexible alert conditions that can be customized according to your trading preferences. Alerts are triggered for trend direction changes, providing timely notifications for potential buy or sell opportunities. This feature is invaluable for traders who need to stay informed of market movements even when they are not actively monitoring their charts.
Trend Reversal Detection:
One of MidnightQuant's core functionalities is its ability to detect and signal trend reversals. The indicator monitors changes in the trend direction with precision, helping traders to identify potential turning points in the market. This feature is particularly useful for swing traders and those who aim to capitalize on shifts in market momentum.
Customizable Settings:
The indicator comes with various settings that allow traders to tailor it to their specific needs. From selecting which symbols to track to adjusting the sensitivity of the Supertrend algorithm, users have full control over how the indicator behaves. This customization ensures that MidnightQuant can be adapted to different trading styles and strategies.
How It Works:
MidnightQuant uses a proprietary calculation based on the Supertrend algorithm, which leverages ATR to dynamically adjust to market volatility. The indicator tracks the midpoint of each trading range and applies a factor that defines the threshold for trend changes. When the closing price crosses this threshold, a new trend is identified, and corresponding signals are generated.
The multi-symbol feature is powered by the request.security function, which allows MidnightQuant to pull in data from multiple symbols and timeframes. This data is then processed through the Supertrend algorithm to determine the trend direction for each symbol, which is subsequently displayed on the dashboard.
The indicator also includes a built-in dashboard that provides a summarized view of market conditions, including potential and confirmed reversals, as well as current trend directions. This dashboard updates in real-time, giving traders a continuously updated snapshot of market sentiment across multiple assets.
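For readers who want to see the mechanics, here is a sketch of the standard public Supertrend logic the description builds on. This is a generic Python illustration, not MidnightQuant's proprietary tuning, and the atr input is assumed to be precomputed:

def supertrend(highs, lows, closes, atr, factor=3.0):
    mid0 = (highs[0] + lows[0]) / 2
    upper = mid0 + factor * atr[0]
    lower = mid0 - factor * atr[0]
    direction, directions = 1, [1]          # 1 = bullish, -1 = bearish
    for i in range(1, len(closes)):
        mid = (highs[i] + lows[i]) / 2      # midpoint of the trading range
        basic_up = mid + factor * atr[i]
        basic_dn = mid - factor * atr[i]
        # Bands only ratchet in the trend's favour until price closes through them.
        if basic_up < upper or closes[i - 1] > upper:
            upper = basic_up
        if basic_dn > lower or closes[i - 1] < lower:
            lower = basic_dn
        if closes[i] > upper:
            direction = 1                   # close crossed the upper threshold: trend flips up
        elif closes[i] < lower:
            direction = -1                  # close crossed the lower threshold: trend flips down
        directions.append(direction)
    return directions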
Use Cases:
Swing Traders: The trend reversal detection and real-time alerts help swing traders identify potential entry and exit points, making it easier to capitalize on market swings.
GG Short & Long Indicator
GG Short & Long Indicator is a powerful signal indicator with AI
How do indicator signals work?
The main purpose of the indicator is to give signals that are most likely to bring profit based on historical data. This ORIGINAL trend algorithm gives SHORT and LONG signals when several conditions coincide: 1) A breakout of the average value of the modernized VWAP (this VWAP takes data only from certain time periods and trading sessions; as a result, its breakout most often coincides with the beginning of a strong trend). 2) The previous condition must be confirmed by volume. I noticed that on some crypto exchanges, volumes differ relative to each other depending on whether a breakout is false or true. I applied this knowledge for additional filtering of signals (this point works only on crypto assets; on other assets the algorithm works without taking it into account, and maybe later I will refine it). 3) One of my original formulas for determining overbought conditions (similar in principle to RSI, but designed to work with the trend algorithm) must not show overbought, so that the entry into the trade does not happen at too unfavorable values. To summarize, the algorithm tries to find a balance when determining a true breakout, during which the price will not go too far (for an acceptable RR).
But the most important thing is that the parameters to customize the algorithm are governed by our original AI algorithm. It can adjust the indicator in two modes: 1) Settings are selected based on the most profitable historical settings. 2) The settings are selected based not only on historical profitability, but also on winrate, frequency of trades, and a few other items that we will not disclose (so the code is closed) - we consider this approach as a priority, because according to our observations, it gives the highest performance compared to manual tuning. In addition, AI simply simplifies the work with the indicator - you do not need to adjust the settings manually for different trading pairs or timeframes, AI will do it all by itself and immediately give the ready result (backtest) on the table.
How to trade?
After the signal is issued, the indicator determines the recommended levels to close the trade (green dots). Stop loss should be placed behind the corresponding gray SL mark. Levels for closing a deal (TP) and the level of stop loss setting (SL) are also determined automatically for the selected pair and TF, based on volatility and selected indicator settings
To make a trade, you can also use the built-in “Support and Resistance Zones” tool, which displays ranges on the chart based on the modernized ATR, from which the price is more likely to rebound (here I also used my own approach, where in addition to the classic ATR formula, I also used volumes from certain crypto exchanges to determine more accurate price rebound zones)
These zones are also adjusted by AI - the algorithm compares several dozens of variations of these zones (with different settings) and chooses the one that best fits the current settings of the signal algorithm. For example, if the indicator is set up for frequent trades - the zones will be updated faster and will be less deep than if the indicator is set up for medium-term trading
If desired, you can customize the indicator manually using the corresponding section of the settings. Each parameter has a tooltip describing how and what it affects.
Statistics panel
The panel can be divided into two parts:
1) Statistics for each individual TP for the selected strategy. It shows the winrate and gross profit if you close a trade entirely at a single target.
2) The total trading result if you trade strictly according to the strategy and close the position in equal parts at the 4 TPs. The total trading result is displayed for the current indicator settings; it also shows the best, worst, and optimal of the possible indicator settings, with the trading results of those settings alongside.
How do you set up the indicator?
The indicator has preset settings for several major pairs and timeframes. These are fixed settings specifically selected for individual pairs and timeframes. You can use these presets, or you can choose one of the adaptive settings, which will AUTOMATICALLY select the best/optimal indicator settings.
I recommend choosing the “Adaptive Optimal” preset, as it uses more data to determine the optimal indicator settings and according to my observations this method works better in comparison to manual indicator settings or the “Adaptive Best” preset
Or you can use the manual settings, as mentioned earlier.
[Pandora] Error Function Treasure Trove - ERF/ERFI/Sigmoids+
PRAISE:
At this time, I have to graciously thank the wonderful minds behind the new "Pine Profiler Mode" (PPM). Directly prior to this release, it allowed me to ascertain script performance even more. While I usually write mostly in highly optimized Pine code, PPM visually identified a few bottlenecks that would otherwise be hard to identify. Anyone who contributed to PPM's creation and testing before release... BRAVO!!! I commend all of those who assisted in its state-of-the-art engineering and inception, well done!
BACKSTORY:
This script is specifically being released in defense of another member, an exceptionally unique PhD. It was brought to my attention that a script-mod-event occurred regarding the publishing of a measly antiquated error function (ERF) calculation within his script. This sadly resulted in the now former member jumping ship after receiving unmannerly responses amidst his curious inquiries as to why his erf() was modded. To forbid rusty and rudimentary formulations because a mod-on-duty is temporally offended by a non-nefarious release of code is, in MY opinion, an injustice to the principles of perpetuating open-source code intended to benefit thousands to millions of community members. While Pine is the heart and soul of TV, the mathematical concepts contributed from the minds of members are the inspirational fuel of curiosity that powers its pertinent reason to exist and evolve.
It is an indisputable fact that most members are not greatly skilled Pine Poets. Many members may be incapable of innovating robust function code in Pine, even if they have one or more PhDs. We ALL come from various disciplines of mathematical comprehension and education. Some mathematicians are not greatly skilled at coding, while some coders are not exceptional at math. So... what am I to do to attempt to resolve this circumstantial challenge??? Those who know me best are aware that I will always side with "the right side of history" in order to accomplish my primary self-defined missions I choose to accept. Serving as an algorithmic advocate, I felt compelled to intercede by compiling numerous error functions into elegant code of very high caliber that any and every TV member may choose to employ, so this ERROR never happens again.
After weeks of contemplation into algorithms I knew little about, I prioritized myself to resolve an unanticipated matter by creating advanced formulas of exquisitely crafted error functions refined to the best of my current abilities. My aversion for unresolved problems motivated me to eviscerate error function insufficiencies with many more rigid formulations beyond what is thought to exist. ERF needed a proper algorithmic exorcism anyways. In my furiosity, I contemplated an array of madMAXimum diplomatic demolition methods, choosing the chain saw massacre technique to slaughter dysfunctionalities I encountered on a battered ERF roadway. This resulted in prolific solutions that should assuredly endure the test of time. Poetically, as you will come to see, I am ripping the lid off of Pandora's box of error functions in this case to correct wrongs into a splendid bundle of rights for members.
INTENTION:
Error function (ERF) enthusiasts... PREPARE FOR GLORY!! The specific purpose of this script is to deprecate classic error functions with the creation of a fierce and formidable army of superior formulations, each having varying attributes of computational complexity with differing absolute error ranges in their results for multiple compute scenarios. This is NOT an indicator... It is intended to allow members to embark on endeavors to advance the profound knowledge base of this growing worldwide community of 60+ million inquisitive minds. For those of you who believe computational mathematics and statistics is near completion at its finest, I am here to inform you that this is ridiculous to ponder. We are nowhere near the statistical excellence that can and will eventually exist. At this time, metaphorically speaking, we are merely scratching microns off of the surface of the skin of a statistical apple Isaac Newton once pondered.
THIS RELEASE:
Following weeks of pondering methodical experiments beyond the ordinary, I am liberating these wild notions of my error function explorations to the entire globe as copyleft code, not just Pine. This Pandora's basket of ERFs is being openly disclosed for the sake of the sanctity of mathematics, empirical science (not the garbage we are told by CONTROLocrats to blindly trust), revolutionary cutting edge engineering, cosmology, physics, information technology, artificial intelligence, and EVERY other mathematical branch of human knowledge being discovered over centuries. I do believe James Glaisher would favor my aims concerning ERF aspirations embracing the "Power of Pine".
The included functions are intended for TV members to use in any way they see fit. This is a gift to ALL members to foster future innovative excellence on this platform. Any attempt to moderate this code without notification of "self-evident clear and just cause" will be considered an irrevocable egregious action. The original foundational PURPOSE of establishing script moderation (I clearly remember) was primarily to maintain active vigilance over a growing community against intentional nefarious actions and/or behaviors in blatant disrespect to other author's works AND also thwart rampant copypasting bandit operations, all while accommodating balanced principles of fairness for an educational community cause via open source publishing that should support future algorithmic inventions well beyond my lifespan.
APPLICATIONS:
The related error functions are used in probability theory, statistics, and numerous engineering and scientific disciplines. Their key characteristics and applications are innumerable in computational realms. Their versatility and significance make them a fundamental tool in arenas of quantitative analysis and scientific research...
Probability Theory - Is widely used in probability theory to calculate probabilities and quantiles of the normal distribution.
Statistics - It's related to the Gaussian integral and plays a crucial role in statistics, especially in hypothesis testing and confidence interval calculations.
Physics - In physics, it arises in the study of diffusion equations, quantum mechanics, and heat conduction problems.
Engineering - Applications exist in engineering disciplines such as signal processing, control theory, and telecommunications.
Error Analysis - It's employed in error analysis and uncertainty quantification.
Numeric Approximations - Due to its lack of a closed-form expression, numerical methods are often employed to approximate erf/erfi().
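As one concrete example of such a numerical method, the classic Abramowitz & Stegun polynomial approximation (formula 7.1.26) reaches an absolute error below roughly 1.5e-7. It is far simpler than the high-precision formulations in this script, but it illustrates the polynomial-approximation approach:

import math

def erf_approx(x):
    # Abramowitz & Stegun 7.1.26: a rational/exponential fit to erf(x),
    # accurate to about 1.5e-7 in absolute error over the real line.
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    t = 1.0 / (1.0 + 0.3275911 * x)
    poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741
           + t * (-1.453152027 + t * 1.061405429))))
    return sign * (1.0 - poly * math.exp(-x * x))

print(erf_approx(1.0))   # ~0.8427, vs math.erf(1.0) = 0.8427007929...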
AI, LLMs, & MACHINE LEARNING:
The error function (ERF) is indispensable to various AI applications, particularly due to its relation to Gaussian distributions and error analysis. It is used in Gaussian processes for regression and classification, probabilistic inference for Bayesian networks, soft margin computation in SVMs, neural networks involving Gaussian activation functions or noise, and clustering algorithms like Gaussian Mixture Models. Improved ERF approximations can enhance precision in these applications, reduce computational complexity, handle outliers and noise better, and improve optimization and convergence, possibly leading to more accurate, efficient, and robust AI systems.
BONUS ALGORITHMS:
While ERFs are versatile, their opposites also exist in the form of inverse error functions (ERFIs). I have also included a modified form of the inverse fisher transform alongside MY sigmoid (sigmyod). I am uncertain what sigmyod() may be used for, but it's a culmination of my examinations deep into "sigmoid domains", something I am fascinated by. Whatever implications it may possess, I am unveiling it along with its cousin functions. For curious minds, this quality of composition seen here is ideally what underlies what I would term "Pandora functionality" that empowers my Pandora indication. I go through hordes of formulations, testing, and inspection to find what appears to be the most beneficial logical/mathematical equation to apply...
SCRIPT OPERATION:
To showcase the characteristics and performance of my ERF/ERFI formulations, I devised a multi-modal script. By using bar_index , I generated a broad sequence of numeric values to input into the first ERF/ERFI parameter. These sequences allow you to inspect the contours of the error function's outputs for both ERF and ERFI. When combined with compute-intensive precision functions (CIPFs), the polynomial function output values can be subtracted from my CIPFs to obtain results of absolute error, displaying the accuracy of the many polynomial estimation functions I tuned in testing for Pine's float environment.
A host of numeric input settings are wildly adjustable to inspect values/curvatures across the range of numeric input sequences. Very large numbers, such as Divisor:100,000,100/Offset:200,000,000 for ERF modes or... Divisor:100,000,100/Offset:100,000,000 for ERFI modes, will display miniscule output values calculated from input values in close proximity to 0.0 for the various estimates, similar to a microscope. ERFI approximations very near in proximity to +/-1.0 will always yield large deviations of absolute error. Dragging/zooming your chart or using the Offset input will aid with visually clipping off those ERFI extremes where float precision functions cannot suffice.
NOTICE:
perf() and perfi() are intended for precision computation (as good as it basically gets) in a float environment. However, they are CPU intensive (especially perfi). I wouldn't recommend these being used in ANY Pine script unless it's an "absolute necessity" to do so to accomplish your goal. I only built them to obtain "absolute error curvatures" of the error functions for the polynomial approximations. These are visible in the accuracy modes in the indicator Settings.
Intellect_city - Halvings Bitcoin Cycle
What is halving?
The halving timer shows when the next Bitcoin halving will occur, as well as the dates of past halvings. This event occurs every 210,000 blocks, which is approximately every 4 years. Halving reduces the emission reward by half. The original Bitcoin reward was 50 BTC per block found.
Why is halving necessary?
Halving allows the emission level to stay algorithmically specified. Anyone can verify that no more than 21 million bitcoins can be issued using this algorithm. Moreover, everyone can see how much was issued earlier, at what speed the emission is happening now, and how many bitcoins remain to be mined in the future. Even a sharp increase or decrease in mining capacity will not significantly affect this process. In this case, during the next difficulty recalculation, which occurs every 2016 blocks, the mining difficulty will be recalculated so that blocks are still found approximately once every ten minutes.
How does halving work in Bitcoin blocks?
The miner who assembles the block adds a so-called coinbase transaction. This transaction has no inputs, only an output crediting the emission coins to the miner's address. If the miner's block wins, the entire network will consider these coins to have been obtained through legitimate means. The maximum reward size is determined by the algorithm; the miner can claim up to the maximum reward for the current period, or less. If he puts the reward higher than allowed, the network will reject such a block and the miner will receive nothing. After each halving, miners have to halve the reward they assign to themselves, otherwise their blocks will be rejected and will not make it to the main branch of the blockchain.
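The consensus rule itself is compact. Here is a Python sketch mirroring Bitcoin Core's subsidy calculation: the 50 BTC base reward, counted in satoshis, is right-shifted once per 210,000-block period, which is exactly an integer halving:

COIN = 100_000_000                      # satoshis per BTC
HALVING_INTERVAL = 210_000              # blocks between halvings

def block_subsidy(height):
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:                  # the shift would zero the reward entirely
        return 0
    return (50 * COIN) >> halvings      # halve by right shift, rounding down

print(block_subsidy(0)       / COIN)    # 50.0
print(block_subsidy(630_000) / COIN)    # 6.25
print(block_subsidy(840_000) / COIN)    # 3.125

The rounding-down of the integer shift is why the table below eventually reaches rewards of a single satoshi and then zero.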
The impact of halving on the price of Bitcoin
It is believed that with constant demand, a halving of supply should double the value of the asset. In practice, the market knows when the halving will occur and prepares for this event in advance. Typically, the Bitcoin rate begins to rise about six months before the halving, and during the halving itself it does not change much. On average for past periods, the upper peak of the rate can be observed more than a year after the halving. It is almost impossible to predict future periods because, in addition to the reduction in emissions, many other factors influence the exchange rate. For example, major hacks or bankruptcies of crypto companies, the situation on the stock market, manipulation of “whales,” or changes in legislative regulation.
---------------------------------------------
Table - Past and future Bitcoin halvings:
---------------------------------------------
No. - Date - Block - Reward
0 - 03-01-2009 - 0 block - 50 BTC
1 - 28-11-2012 - 210000 block - 25 BTC
2 - 09-07-2016 - 420000 block - 12.5 BTC
3 - 11-05-2020 - 630000 block - 6.25 BTC
4 - 20-04-2024 - 840000 block - 3.125 BTC
5 - 24-03-2028 - 1050000 block - 1.5625 BTC
6 - 26-02-2032 - 1260000 block - 0.78125 BTC
7 - 30-01-2036 - 1470000 block - 0.390625 BTC
8 - 03-01-2040 - 1680000 block - 0.1953125 BTC
9 - 07-12-2043 - 1890000 block - 0.09765625 BTC
10 - 10-11-2047 - 2100000 block - 0.04882813 BTC
11 - 14-10-2051 - 2310000 block - 0.02441406 BTC
12 - 17-09-2055 - 2520000 block - 0.01220703 BTC
13 - 21-08-2059 - 2730000 block - 0.00610352 BTC
14 - 25-07-2063 - 2940000 block - 0.00305176 BTC
15 - 28-06-2067 - 3150000 block - 0.00152588 BTC
16 - 01-06-2071 - 3360000 block - 0.00076294 BTC
17 - 05-05-2075 - 3570000 block - 0.00038147 BTC
18 - 08-04-2079 - 3780000 block - 0.00019073 BTC
19 - 12-03-2083 - 3990000 block - 0.00009537 BTC
20 - 13-02-2087 - 4200000 block - 0.00004768 BTC
21 - 17-01-2091 - 4410000 block - 0.00002384 BTC
22 - 21-12-2094 - 4620000 block - 0.00001192 BTC
23 - 24-11-2098 - 4830000 block - 0.00000596 BTC
24 - 29-10-2102 - 5040000 block - 0.00000298 BTC
25 - 02-10-2106 - 5250000 block - 0.00000149 BTC
26 - 05-09-2110 - 5460000 block - 0.00000075 BTC
27 - 09-08-2114 - 5670000 block - 0.00000037 BTC
28 - 13-07-2118 - 5880000 block - 0.00000019 BTC
29 - 16-06-2122 - 6090000 block - 0.00000009 BTC
30 - 20-05-2126 - 6300000 block - 0.00000005 BTC
31 - 23-04-2130 - 6510000 block - 0.00000002 BTC
32 - 27-03-2134 - 6720000 block - 0.00000001 BTC
Trend and Reversal Scanner
Hello Traders!
The TRN Trend and Reversal Scanner highlights trend and reversal signals from up to 20 assets of your choosing in a user-friendly, easy-to-read table. With it, you can efficiently monitor your preferred instruments simultaneously without jumping from one chart to the next. You will never miss a signal again. The indicator automatically finds swing-based up and down trends, bullish and bearish divergences, detects ranges and range breakouts, as well as trend and reversal signals from the built-in trend detection algorithm called TRN Bars. Furthermore, you can conveniently stay updated with real-time alerts, notifying you whenever the scanner finds interesting market situations.
Feature List
Swing-based up and down trend detection
Divergence detection for any given (Custom) Indicator
Price range and breakout detection
Bar trend and reversal detection
Scanner alerts
The value of this indicator is to support traders to easily identify trend-based signals in an automated way and across many different markets at the same time. The trader saves a lot of time scanning the markets for up and down swings, divergences, consolidations and bar pattern-based trends and reversals, since finding and alerting these signals is done automatically for the trader.
For a visualization of the detected signals, you can add the TRN Bars and the Swing Suite indicator to your chart.
How does Trend Scanner work?
On the right side of the chart, you can find a table displaying the symbols monitored by the TRN Trend and Reversal Scanner for signal detection (first column). The table provides information on the status of each symbol. This visual representation allows you to quickly identify evolving signals across different symbols, helping you stay informed and make timely trading decisions.
The scanner operates specifically on the timeframe you are currently viewing, ensuring that the detected signals align precisely with your trading perspective.
In the following, we describe the signals displayed in the different columns of the table.
Column 1 – Symbols
Column 2 – Bar Trend & Signals
Column 3 – Up & Down Swing Trend
Column 4 – Ranges & Range Breakouts
Column 5 – Bullish Divergences
Column 6 – Bearish Divergences
Bar Trend & Signals
In the second column, you can observe the status of TRN Bars, the built-in trend detection algorithm.
UP – Uptrend
DN – Downtrend
REV (Green) – Bullish Reversal Bar
REV (Red) – Bearish Reversal Bar
CON (Green) – Bullish Continuation Bar
CON (Red) – Bearish Continuation Bar
B/O (Green) – Bullish Range Breakout Bar
B/O (Red) – Bearish Range Breakout Bar
TRN Bars is designed to spot bullish and bearish trends and reversals. The trend analysis is based on a new algorithm that weights several different inputs:
classical and advanced bar patterns and their statistical frequency
probability distributions of price expansions after certain bar patterns
bar information such as wick length in %, overlapping of the previous bar in % and many more
historical trend and consolidation analysis
It provides high-probability trend continuation analysis and reversal detections.
Up and Downtrend
The third column (Trend) indicates whether the price of the asset moves within an uptrend (UP) or a downtrend (DN), as detected by our unique swing detection algorithm, on the selected timeframe.
The swing detection algorithm identifies pivot points (swings) with high accuracy. It works in real-time and does not need a look-ahead to find swings.
Ranges & Range Breakouts
The fourth column provides insights into the price behavior of a symbol within the selected timeframe, as analyzed by the range feature of the TRN Bars algorithm.
ACTIVE – Price moves within a price range
UP – Breakout detected
DN – Breakdown detected
UP CONF – Breakout confirmed
DN CONF – Breakdown confirmed
The bar range feature automatically finds consolidations where the price range of several consecutives bars is rather small. The detection of the bar ranges includes among other things the overlapping percentage of these bars.
Divergence Detection for any given (Custom) Indicator
The divergence detector finds bullish and bearish, as well as regular and hidden, divergences with unrivaled precision. The main difference compared to other divergence indicators is that this indicator rigorously finds the extreme peaks of each swing, both in price and in the corresponding indicator. This precision is unmatched, and therefore this is one of the best divergence detectors.
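For readers curious about the general technique (this is a generic Python sketch, not TRN's closed-source detector): a regular bullish divergence can be found by locating swing lows in price and checking whether the indicator made a higher low where price made a lower low.

def pivot_lows(series, strength=2):
    # A bar is a pivot low if it is the lowest of its `strength` neighbours on each side.
    return [i for i in range(strength, len(series) - strength)
            if series[i] == min(series[i - strength:i + strength + 1])]

def regular_bullish_divergences(lows, indicator, strength=2):
    pivots = pivot_lows(lows, strength)
    signals = []
    for a, b in zip(pivots, pivots[1:]):
        # Price: lower low. Indicator: higher low. => regular bullish divergence.
        if lows[b] < lows[a] and indicator[b] > indicator[a]:
            signals.append(b)
    return signals

The bearish case mirrors this with pivot highs, and hidden divergences simply flip which of the two series makes the higher or lower extreme.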
The built-in divergence detector works with any given indicator, even custom ones. In addition, there are 11 built-in indicators. Most noticeable is the cumulative delta indicator, which works astonishingly well as a divergence indicator. Full list:
External Indicator (see next section for the setup)
Awesome Oscillator (AO)
Commodity Channel Index (CCI)
Cumulative Delta Volume (CDV)
Chaikin Money Flow (CMF)
Moving Average Convergence Divergence (MACD)
Money Flow Index (MFI)
Momentum
On Balance Volume (OBV)
Relative Strength Index (RSI)
Stochastic
Williams Percentage Range (W%R)
Another highlight of the divergence detection is that it works with every indicator, even custom ones. To do this, you must add the (custom) indicator to your chart. Afterwards, simply go to the “Divergence Detection” section in the indicator settings and choose "External Indicator". If the custom indicator has one reference value, then choose this value in the “External Indicator (High)” field. If there are high and low values (e.g. candles), then you also must set the “External Indicator Low” field.
The visualization of the divergence detection is represented in the fifth column (Div Bull) and the sixth and last column (Div Bear).
REG – Regular divergence detected
HID – Hidden divergence detected
Scanner Alerts
You can opt to receive alerts for the following scenarios:
Detected up and down swings
Detected bullish and bearish divergences
Detected bar trend changes
Confirmed Reversal Bars
Confirmed Continuation Bars
Confirmed range breakouts
The alert function is activated for all symbols listed in the scanner and corresponds to the timeframe of the chart you are currently viewing. This ensures that you receive alerts specifically tailored to the symbols and timeframe you are interested in.
Risk Disclaimer
The content, tools, scripts, articles, and educational resources offered by TRN Trading are intended solely for informational and educational purposes. Remember, past performance does not ensure future outcomes.
Edge AI Forecast [Edge Terminal]
This indicator feeds the previous 150 closing prices into a simple two-layer neural network, normalizes the network inputs using a sigmoid function, uses a feedforward calculation to send them to the second layer, shows the MSE loss curve, uses both automatic and manual backpropagation (user input) to find the most likely forecast values, and then uses an analog forecasting algorithm to further adjust and optimize the data and display potential prices on the chart.
Here's how it works:
The idea behind this script is to train a simple neural network to predict the future x values based on the sample data. For this, we use 2 types of data, Price and Volume.
The thinking behind this is that price alone can’t be used in this case because it doesn’t provide enough meaningful pattern data for the network but price and volume together can change the game. We’re planning to use more different data sets and expand on this in the future.
To avoid a bad mix of results, we technically have two neural networks, each processing a different data type, one for volume data and one for price data.
The actual prediction is decided by the way price and volume of the closing price relate to each other. Basically, the network passes the price and volume and finds the best relation between the two data set outputs and predicts where the price could be based on the upcoming volume of the latest candle.
The network adjusts the weights and biases using optimization algorithms like gradient descent to minimize the difference between the predicted and actual stock prices, typically measured by a loss function, (in this case, mean squared error) which you can see using the error rate bubble.
This is a good measure to see how well the network is performing and the idea is to adjust the settings inputs such as learning rate, epochs and data source to get the lowest possible error rate. That’s when you’re getting the most accurate prediction results.
For each data set, we use a multi-layer network. In a multi-layer neural network, the outputs of neurons in one layer serve as inputs to neurons in the next layer. Initially, the input layer of the neural network receives the historical data. Each input neuron represents a feature, such as previous stock prices and trading volumes over a specific period.
The hidden layers perform feature extraction and transformation through a series of weighted connections and activation functions. Each neuron in a hidden layer computes a weighted sum of the inputs from the previous layer, applies an activation function to the sum, and passes the result to the next layer using the feedforward (activation) function.
For extraction, we use a normalization function. This function takes a value or data (such as bar price) and divides it up by max scale which is the highest possible value of the bar. The idea is to take a normalized number, which is either below 1 or under 2 for simple use in the neural network layers.
For the activation, after computing the weighted sum, the neuron applies an activation function a(x). To introduce non-linearity into the model to pass it to the next layer. We use sigmoid activation functions in this case. The main reason we use sigmoid function is because the resulting number is between 0 to 1 and is better for models where we have to predict the probability as an output.
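A minimal sketch of the pieces just described: normalization, a sigmoid-activated feedforward pass through two layers, and the MSE loss. The layer shapes and weights here are placeholders for illustration; the script's actual network and data handling are closed source.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))        # output in (0, 1), good for probabilities

def normalize(value, max_scale):
    return value / max_scale                  # scale raw bar data toward the 0..1 range

def feedforward(inputs, w1, b1, w2, b2):
    # Layer 1: each hidden neuron takes a weighted sum of the inputs plus a bias.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(w1, b1)]
    # Layer 2: the output neuron does the same over the hidden activations.
    return sigmoid(sum(w * h for w, h in zip(w2, hidden)) + b2)

def mse(predicted, actual):
    return (predicted - actual) ** 2          # the loss the error-rate bubble reports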
The final output of the network is passed as an input to the analog forecasting function. This is an algorithm commonly used in weather prediction systems. In this case, this is used to make predictions by comparing current values and assuming the patterns might repeat in the future.
There are many different ways to build an analog forecasting function, but in our case we used a similarity measurement model:
X, as the current situation or set of current variables.
Y, as the outcome or variable of interest.
Si as the historical situations or patterns, where i ranges from 1 to n.
Vi as the vector of variables describing historical situation Si.
Oi as the outcome associated with historical situation Si.
First, we define a similarity measure sim(X,Vi) that quantifies the similarity between the current situation X and historical situation Si based on their respective variables Vi.
Then we select the K most similar historical situations (KNN machine learning) based on the similarity measure sim(X,Vi). We denote the set of selected historical situations as {Si1, Si2, ..., Sik}.
Then we examine the outcomes associated with the selected historical situations {Oi1, Oi2,...,Oik}.
Then we use the outcomes of the selected historical situations to forecast the future outcome Y^ using weighted averaging.
Finally, the output value of the analog forecasting is standardized using a standardization function which is the opposite of the normalization function. This function takes a normalized number and turns it back to its original value by multiplying it by the max scale (highest value of the bar). This function is used when the final number is produced by the network output at the end of the analog forecasting to turn the final value back into a price so it can be displayed on the chart with PineScript.
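Putting the notation above into code, a bare-bones analog forecast might look like the following. The similarity measure and weighting here are assumptions chosen for illustration; the script's own sim(X, Vi) is proprietary.

import math

def analog_forecast(current, history_windows, outcomes, k=5):
    # sim(X, Vi): inverse Euclidean distance between the current window X
    # and each historical window Vi.
    def similarity(v):
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(current, v)))
        return 1.0 / (1.0 + dist)
    # Keep the K most similar historical situations and their outcomes Oi.
    scored = sorted(((similarity(v), o) for v, o in zip(history_windows, outcomes)),
                    reverse=True)[:k]
    total = sum(s for s, _ in scored)
    return sum(s * o for s, o in scored) / total   # similarity-weighted average outcome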
Settings:
Data source: Source of the neural network's input data.
Sample Bars: How many historical bars do you want to input into the neural network
Prediction Bars: How many bars you want the script to forecast
Show Training Rate: This shows the neural network's error rate for the optimization phase
Learning Rate: how many times you want the script to change the model in response to the estimated error (automatic)
Epochs: the network cycle or how many times you want to run the data through the network from the first layer to the last one.
Usage:
The sample bars input determines the number of historical bars to be used as a reference for the network. You need to change the Epochs and Learning Rate inputs for each asset and chart timeframe to get the lowest error rate.
On the surface, the highest possible epoch and learning rate should produce the most effective results but that's not always the case.
If the epochs rate is too high, there is a chance we face overfitting. Essentially, you might be over processing good data which can make it useless.
On the other hand, if the learning rate is too high, the network may overshoot the optimal solution and diverge. This is almost like the same issue I mentioned above with a high epoch rate.
Access:
It took over 4 months to develop this script and we're constantly improving it, so a lot of manpower has gone into it. Also, when it comes to neural networks, Pine Script isn't the most optimal language to build a neural network in, so we had to resort to a few proprietary mathematical formulas to ensure this runs smoothly without giving an error for overprocessing, especially when you have multiple neural networks with many layers.
The optimization done to make this script run on Pine Script is basically state of the art and because of this, we would like to keep the code closed source at the moment.
On the other hand we don’t want to publish the code publicly as we want to keep the trading edge this script gives us in a closed loop, for our own small group of members so we have to keep the code closed. We only accept invites from expert traders who understand how this script and algo trading works and the type of edge it provides.
Additionally, at the moment we don’t want to share the code as some of the parts of this network, specifically the way we hand the data from neural network output into the analog method formula are proprietary code and we’d like to keep it that way.
You can contact us for access and if we believe this works for your trading case, we will provide you with access.
RBS | Profitholders
Thanks to the source code author; I have modified this especially for the Indian market.
RBS stands for Range Breakout System. This is the same as "Opening Range Breakout", which is a common trading strategy. The indicator can analyze the market trend in the current session and give "Buy / Sell", "Take Profit" and "Stop Loss" signals. For more information about the analyzing process of the indicator, you can read the "How Does It Work?" section of the description.
Features of RBS indicator :
Buy & Sell Signals
Up To 3 Take Profit Signals
Stop-Loss Signals
Alerts for Buy / Sell, Take-Profit and Stop-Loss
Session Dashboard
Back testing Dashboard
HOW DOES IT WORK ?
This indicator works best on the 15-minute timeframe. You may need to change the chart timeframe depending on the symbol. The idea is that the trend of the current session can be forecasted by analyzing the market for a while after the session starts. However, each market has its own dynamics and the algorithm will need fine-tuning to get the best performance possible. So, we've implemented a "Back testing Dashboard" that shows the past performance of the algorithm on the current ticker with your current settings. Always keep in mind that past performance does not guarantee future results, so this is for educational purposes.
Here are the steps of the algorithm explained briefly :
1. The algorithm follows and analyzes the first 15 minutes (can be adjusted) of the session.
2. Then, algorithm checks for breakouts of the opening range's high or low.
3. If a breakout happens in a bullish or a bearish direction, the algorithm will now check for retests of the breakout. Depending on the sensitivity setting, there must be 0 / 1 / 2 / 3 failed retests for the breakout to be considered as reliable.
4. If the breakout is reliable, the algorithm will give an entry signal.
5. After the position entry, algorithm will now wait for Take-Profit or Stop-Loss zones and signal if any of them occur.
If you wonder how the indicator finds Take-Profit & Stop-Loss zones, you can check the "Settings" section of the description.
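The steps above can be condensed into a simplified Python sketch. The session handling, bar format, and retest rule here are my assumptions for illustration; the indicator's own filtering is more involved.

def opening_range(bars, n_opening):
    # bars: list of dicts with 'high', 'low', 'close' (assumed format)
    opening = bars[:n_opening]                      # e.g. the first 15-minute window
    return (max(b['high'] for b in opening),
            min(b['low'] for b in opening))

def breakout_signal(bars, n_opening, retests_needed=1):
    or_high, or_low = opening_range(bars, n_opening)
    broken, failed_retests = None, 0
    for b in bars[n_opening:]:
        if broken is None:
            if b['close'] > or_high:
                broken = 'long'
            elif b['close'] < or_low:
                broken = 'short'
            if broken is not None and retests_needed == 0:
                return broken                       # "High" sensitivity: no retest required
        else:
            level = or_high if broken == 'long' else or_low
            # A failed retest: price touches the broken level but closes beyond it again.
            touched = b['low'] <= level if broken == 'long' else b['high'] >= level
            held = b['close'] > level if broken == 'long' else b['close'] < level
            if touched and held:
                failed_retests += 1
                if failed_retests >= retests_needed:
                    return broken                   # breakout considered reliable: entry signal
    return None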
UNIQUENESS
While there are indicators that show the opening range of the session, they come up short on features like indicating breakouts, entries, and Take-Profit & Stop-Loss zones. We are also aware that different stock markets have different dynamics, and tuning the algorithm for different markets is really important for better results, so we decided to make the algorithm fully customizable. Besides all that, our indicator contains a detailed back testing dashboard, so you can see the past performance of the algorithm on the current ticker. While past performance does not yield any guarantee of future results, we believe that a back testing dashboard is necessary for tuning the algorithm. Another strength of this indicator is that there are multiple options for the detection of Take-Profit and Stop-Loss zones, which the trader can select to their liking.
⚙️SETTINGS
Keep in mind that the best chart timeframe for this indicator is the 15-minute timeframe on the Indian market.
TP = Take-Profit
SL = Stop-Loss
EMA = Exponential Moving Average
OR = Opening Range
ATR = Average True Range
1. Algorithm
RBS Timeframe -> This setting determines the timeframe that the algorithm will analyze the market after a new session begins before giving any signals. It's important to experiment with this setting and find the best option that suits the current ticker for the best performance. More volatile stocks will often require this setting to be larger, while more stabilized stocks may have this setting shorter.
Sensitivity -> This setting determines how much failed retests are needed to take a position entry. Higher sensitivity means that less retests are needed to consider the breakout as reliable. If you think that the current ticker makes strong movements in a bullish & bearish direction after a breakout, you should set this setting higher. If you think the opposite, meaning that the ticker does not decide the trend right after a breakout, this setting show be lower.
(High = 0 Retests, Medium = 1 Retest, Low = 2 Retests, Lowest = 3 Retests)
Breakout Condition -> The condition for the algorithm to detect breakouts.
Close = Bar needs to close higher than the OR High Line in a bullish breakout, or lower than the OR Low Line in a bearish breakout. EMA = The EMA of the bar must be higher / lower than OR Lines instead of the close price.
TP Method -> The method for the algorithm to use when determining TP zones.
Dynamic = This TP method essentially tries to find the bar at which the price starts leaving the current trend and going in the other direction, and puts a TP zone there. To achieve this, it uses an EMA line; when the close price of a bar crosses the EMA line, that's a TP spot.
ATR = In this TP method, instead of a dynamic approach the TP zones are pre-determined using the ATR of the entry bar. This option is generally for traders who just want to know their TP spots beforehand while trading. Selecting this option will also show TP zones at the ORB Dashboard.
"Dynamic" option generally performs better, while the "ATR" method is safer to use.
EMA Length -> This setting determines the length of the EMA line used in "Dynamic TP method" and "EMA Breakout Condition". This is completely up to the trader's choice, though the default option should generally perform well. You might want to experiment with this setting and find the optimal length for the current ticker.
Stop-Loss -> The algorithm will place the Stop-Loss zone according to this setting.
Safer = The SL zone will be placed closer to the OR High for a bullish entry, and closer to the OR Low for a bearish entry.
Balanced = The SL zone will be placed in the center of OR High & OR Low
Risky = The SL zone will be placed closer to the OR Low for a bullish entry, and closer to the OR High for a bearish entry.
Adaptive SL -> This option only takes effect if the first TP zone is hit.
Enabled = After the 1st TP zone is hit, the SL zone will be moved to the entry price, essentially making the position risk-free.
Disabled = The SL zone will never change.
2. RBS Dashboard
RBS Dashboard shows the information about the current session.
3. RBS Back testing
RBS Back testing Dashboard allows you to see past performance of the algorithm in the current ticker with current settings.
Total amount of days that can be back tested depends on your TV subscription.
Back testing Exit Ratios -> You can select what percentage of your entry will be closed at each TP zone while back testing. For example, 90%, 5%, 5% means that 90% of the position will be closed at the first TP zone, 5% of it at the 2nd TP zone, and 5% of it at the last TP zone.
Support & Resistance AI (K means/median) [ThinkLogicAI]
█ OVERVIEW
K-means is a clustering algorithm commonly used in machine learning to group data points into distinct clusters based on their similarities. While K-means is not typically used directly for identifying support and resistance levels in financial markets, it can serve as a tool in a broader analysis approach.
Support and resistance levels are price levels in financial markets where the price tends to react or reverse. Support is a level where the price tends to stop falling and might start to rise, while resistance is a level where the price tends to stop rising and might start to fall. Traders and analysts often look for these levels as they can provide insights into potential price movements and trading opportunities.
█ BACKGROUND
The K-means algorithm has been around since the late 1950s, making it more than six decades old. The algorithm was introduced by Stuart Lloyd in his 1957 research paper "Least squares quantization in PCM" for telecommunications applications. However, it wasn't widely known or recognized until James MacQueen's 1967 paper "Some Methods for Classification and Analysis of Multivariate Observations," where he formalized the algorithm and referred to it as the "K-means" clustering method.
So, while K-means has been around for a considerable amount of time, it continues to be a widely used and influential algorithm in the fields of machine learning, data analysis, and pattern recognition due to its simplicity and effectiveness in clustering tasks.
█ COMPARE AND CONTRAST SUPPORT AND RESISTANCE METHODS
1) K-means Approach:
Cluster Formation: After applying the K-means algorithm to historical price change data and visualizing the resulting clusters, traders can identify distinct regions on the price chart where clusters are formed. Each cluster represents a group of similar price change patterns.
Cluster Analysis: Analyze the clusters to identify areas where clusters tend to form. These areas might correspond to regions of price behavior that repeat over time and could be indicative of support and resistance levels.
Potential Support and Resistance Levels: Based on the identified areas of cluster formation, traders can consider these regions as potential support and resistance levels. A cluster forming at a specific price level could suggest that this level has been historically significant, causing similar price behavior in the past.
Cluster Standard Deviation: In addition to looking at the means (centroids) of the clusters, traders can also calculate the standard deviation of price changes within each cluster. Standard deviation is a measure of the dispersion or volatility of data points around the mean. A higher standard deviation indicates greater price volatility within a cluster.
Low Standard Deviation: If a cluster has a low standard deviation, it suggests that prices within that cluster are relatively stable and less likely to exhibit sudden and large price movements. Traders might consider placing tighter stop-loss orders for trades within these clusters.
High Standard Deviation: Conversely, if a cluster has a high standard deviation, it indicates greater price volatility within that cluster. Traders might opt for wider stop-loss orders to allow for potential price fluctuations without getting stopped out prematurely.
Cluster Density: Each data point is assigned to a cluster, so a cluster that is more dense will act more like gravity, pulling price toward it and making the level it marks more significant as support or resistance.
2) Traditional Approach:
Trendlines: Draw trendlines connecting significant highs or lows on a price chart to identify potential support and resistance levels.
Chart Patterns: Identify chart patterns like double tops, double bottoms, head and shoulders, and triangles that often indicate potential reversal points.
Moving Averages: Use moving averages to identify levels where the price might find support or resistance based on the average price over a specific period.
Psychological Levels: Identify round numbers or levels that traders often pay attention to, which can act as support and resistance.
Previous Highs and Lows: Identify significant previous price highs and lows that might act as support or resistance.
The key difference lies in the approach and the foundation of these methods. Traditional methods are based on well-established principles of technical analysis and market psychology, while the K-means approach involves clustering price behavior without necessarily incorporating market sentiment or specific price patterns.
It's important to note that while the K-means approach might provide an interesting way to analyze price data, it should be used cautiously and in conjunction with traditional methods. Financial markets are influenced by a wide range of factors beyond price behavior alone, and the effectiveness of any method for identifying support and resistance levels should be thoroughly tested and validated.
█ K MEANS ALGORITHM
The K-means algorithm proceeds as follows (a minimal sketch follows the list):
1. Initialize the cluster centers.
2. Assign each data point to the cluster whose center is nearest (minimum distance).
3. Recompute each cluster center as the mean (or median) of the points assigned to it.
4. Repeat steps 2 and 3 until the cluster centers stop moving.
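To make this concrete, here is a minimal, self-contained Python sketch of the loop above (not the indicator's Pine Script source; function and parameter names are illustrative). It clusters one-dimensional price data, supports the k-medians variant discussed under Settings, and returns the per-cluster standard deviation used for the stop bands:

```python
import numpy as np

def kmeans_1d(prices, k=4, use_median=False, max_iter=100, seed=0):
    """Cluster 1-D price data into k support/resistance levels.
    Returns (centers, stds): cluster centers sorted ascending and the
    standard deviation of the prices assigned to each center."""
    x = np.asarray(prices, dtype=float)
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)          # step 1: initialize
    agg = np.median if use_median else np.mean
    for _ in range(max_iter):
        # step 2: assign each price to its nearest center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # step 3: recompute centers (mean for k-means, median for k-medians)
        new_centers = np.array([
            agg(x[labels == j]) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        # step 4: stop once the centers no longer move
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    stds = np.array([
        x[labels == j].std() if np.any(labels == j) else 0.0
        for j in range(k)
    ])
    order = np.argsort(centers)
    return centers[order], stds[order]
```

Each returned center is a candidate support/resistance level, and center ± one standard deviation gives the bands discussed above.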
█ LIMITATIONS OF K MEANS
There are 3 main limitations of this algorithm:
Sensitivity to Initialization: K-means is sensitive to the initial placement of centroids. Different initializations can lead to different cluster assignments and final results.
Assumption of Equal Sizes and Variances: K-means assumes that clusters have roughly equal sizes and spherical shapes. This may not hold true for all types of data. It can struggle with identifying clusters with uneven densities, sizes, or shapes.
Impact of Outliers: K-means is sensitive to outliers, as a single outlier can significantly affect the position of cluster centroids. Outliers can lead to the creation of spurious clusters or distortion of the true cluster structure.
█ LIMITATIONS IN APPLICATION OF K MEANS IN TRADING
Trading data often exhibits characteristics that can pose challenges when applying indicators and analysis techniques. Here's how the limitations of outliers, varying scales, and unequal variance can impact the use of indicators in trading:
Outliers are data points that significantly deviate from the rest of the dataset. In trading, outliers can represent extreme price movements caused by rare events, news, or market anomalies. Outliers can have a significant impact on trading indicators and analyses:
Indicator Distortion: Outliers can skew the calculations of indicators, leading to misleading signals. For instance, a single extreme price spike could cause indicators like moving averages or RSI (Relative Strength Index) to give false signals.
Risk Management: Outliers can lead to overly aggressive trading decisions if not properly accounted for. Ignoring outliers might result in unexpected losses or missed opportunities to adjust trading strategies.
Different Scales: Trading data often includes multiple indicators with varying units and scales. For example, prices are typically in dollars, volume in units traded, and oscillators have their own scale. Mixing indicators with different scales can complicate analysis:
Normalization: Indicators on different scales need to be normalized or standardized to ensure they contribute equally to the analysis (see the sketch after this list). Failure to do so can lead to one indicator dominating the analysis due to its larger magnitude.
Comparability: Without normalization, it's challenging to directly compare the significance of indicators. Some indicators might have a larger numerical range and could overshadow others.
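As a minimal illustration of such normalization (generic Python, not part of this indicator; the equal weighting in the comment is hypothetical):

```python
import numpy as np

def zscore(series):
    """Standardize a series to zero mean and unit variance so that
    indicators on different scales contribute comparably."""
    s = np.asarray(series, dtype=float)
    return (s - s.mean()) / s.std()

# e.g. combining price (dollars) and volume (units traded) fairly:
# combined = 0.5 * zscore(prices) + 0.5 * zscore(volumes)
```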
Unequal Variance: Unequal variance in trading data refers to the fact that some indicators might exhibit higher volatility than others. This can impact the interpretation of signals and the performance of trading strategies:
Volatility Adjustment: When combining indicators with varying volatility, it's essential to adjust for their relative volatilities. Failure to do so might lead to overemphasizing or underestimating the importance of certain indicators in the trading strategy.
Risk Assessment: Unequal variance can impact risk assessment. Indicators with higher volatility might lead to riskier trading decisions if not properly taken into account.
█ APPLICATION OF THIS INDICATOR
This indicator can be used in 2 ways:
1) Make a directional trade:
If a trader thinks price will go higher or lower and price is within a cluster zone, the trader can take a position and place a stop at the one-standard-deviation band around the cluster. For example, a trader can go long at the green arrow and place a stop at that cluster's one-standard-deviation mark below it, at the red arrow. Using these two levels, we can calculate a risk-to-reward ratio.
Calculating risk to reward: targeting a risk-to-reward ratio of 2:1, the trader could clearly achieve it here, given that the distance to the next resistance area above, in the orange cluster, exceeds twice the distance to the stop. A hypothetical worked example follows.
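A worked example with purely hypothetical numbers (not read off the chart):

```python
# Hypothetical long trade at a cluster level.
entry = 100.0            # long entry at the cluster (green arrow)
stop = 98.0              # one standard deviation below the cluster (red arrow)
target = 105.0           # next cluster / resistance area above (orange cluster)

risk = entry - stop      # 2.0 points at risk
reward = target - entry  # 5.0 points of potential profit
print(f"R:R = {reward / risk:.1f}:1")  # R:R = 2.5:1, exceeding the 2:1 target
```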
2) Take a reversal Trade:
We can use cluster centers (support and resistance levels) to go in the opposite direction that price is currently moving in hopes of price forming a pivot and reversing off this level.
Similar to the directional trade, we can use the standard deviation of the cluster to place a stop just in case we are wrong.
In the example below, shorting at the red arrow and placing a stop one standard deviation above the cluster would give us a profitable trade with minimal risk.
The cluster density table in the upper right informs the trader how dense each cluster is. Higher-density clusters imply a higher likelihood of a pivot forming at these levels, with price being rejected and reversing with a larger move.
█ FEATURES & SETTINGS
General Settings:
Number of clusters: The user can select from 3 to 5 clusters. A good rule of thumb is that if you are trading intraday, less is more (think 3 rather than 5); for daily charts, 4 to 5 clusters work well.
Cluster Method: To get around the outlier limitation of k-means clustering, a median option was added. This gives the user the ability to choose either k-means or k-medians clustering. K-means is the preferred method if the user thinks there are no large outliers; if there appear to be large outliers, or it is safer to assume there are, then k-medians is preferred.
Bars back to train on: This is the number of bars to include in the clustering. It matters because the user should include bars that are recent, but not so far back that they fall outside the range price can realistically reach. For example, the S&P 500 has been in a range for the last two years, so 505 days in this setting is more relevant than looking back five years, because price would have to move a long way to revisit those old levels.
Show SD Bands: Select this to show the 1 standard deviation bands around the support and resistance level or unselect this to just show the support and resistance level by itself.
Features:
Besides the support and resistance levels and standard deviation bands, this indicator provides a table in the upper right-hand corner showing the density of each cluster (support and resistance level), color-coded to match the cluster line on the chart. Higher-density clusters mean price has visited them more often than lower-density clusters, which could mean a higher likelihood of a reversal when price reaches these areas.
█ WORKS CITED
Victor Sim, "Using K-means Clustering to Create Support and Resistance", 2020, towardsdatascience.com
Chris Piech, "K means", stanford.edu
█ ACKNOWLEDGMENTS
@jdehorty - Thanks for the publish template. It made organizing my thoughts and work a lot easier.
Auto Harmonic Pattern - Screener [Trendoscope]
At Trendoscope, we take pride in offering a wide range of indicators on Harmonic Patterns, including both free and premium options. While we have successfully developed various advanced tools, we recognize that creating a Harmonic Pattern screener is an audacious endeavor that few have ventured into.
Creating a harmonic pattern screener presents a formidable challenge. The intricate nature of the algorithm, coupled with the limitations of cloud-based processing and platform memory, makes it exceedingly difficult to implement the screener functionality without encountering runtime errors.
Today marks a historic achievement as we overcome numerous challenges to unveil our groundbreaking harmonic pattern-based screener. This significant leap signifies our commitment to innovation in the field.
Without further delay, let's dive right into the new Auto Harmonic Pattern - Screener algorithm
🎲 Features Overview
🎯 Primary Functionality
We prefer not to categorize this as a traditional indicator, as it goes beyond that scope. Instead, it is a unique amalgamation of a screener and an indicator, designed primarily to achieve two essential functions.
Firstly, it efficiently scans multiple tickers, up to 20, for harmonic pattern formations and presents them on a user-friendly dashboard
Secondly, it provides harmonic pattern drawings on the chart, but only if the current chart ticker is part of the screener and exhibits a harmonic pattern formation.
🎯 Secondary Features
In addition to its primary functionalities, our revolutionary algorithm offers an array of secondary features that cater to traders' diverse needs
Users have the privilege of accessing enhanced settings, providing limitless customization options for the zigzag and pattern detection algorithm
The platform empowers traders to effortlessly customize stop entry target ratios, facilitating automatic calculations and display of suggestions
The freedom to personalize the visualization and display of patterns and dashboard ensures a seamless and intuitive user experience
And finally, the algorithm leaves no stone unturned, keeping traders well-informed through timely alerts on every bar, highlighting tickers exhibiting Harmonic Pattern formations.
🎯 Limitations
Our innovative screener harnesses the power of the recursive zigzag algorithm to deliver efficient and accurate harmonic pattern detections. While the deep search algorithm, present in our other Harmonic Pattern algorithms, offers unparalleled precision, its resource-intensive nature makes it unsuitable for simultaneous scanning of 20 tickers. By focusing on the recursive zigzag approach, we strike the perfect balance between performance and functionality, ensuring seamless scanning across multiple tickers without compromising on accuracy. This strategic decision allows us to deliver a powerful and reliable screener that meets the diverse needs of traders and empowers them with real-time harmonic pattern insights.
🎲 Chart Components
Upon loading the indicator and configuring your tickers, our user-friendly interface offers two key components seamlessly integrated into the chart:
A color-coded screener dashboard : The dashboard presents a clear visualization of tickers with bullish and bearish harmonic patterns. This intuitive display allows you to quickly identify potential trading opportunities based on pattern formations.
Dynamic pattern display : As you interact with the chart, our algorithm dynamically highlights possible harmonic patterns based on the latest zigzag pivots. Please note that patterns may not always be visible on the chart, especially in cases where higher-level zigzags take time to form pivots. However, rest assured that our sophisticated algorithm ensures real-time updates, providing you with accurate and timely harmonic pattern insights.
🎯 Screener Dashboard
In our screener dashboard, you will find a wealth of information at your fingertips:
Bullish patterns : Tickers exhibiting bullish harmonic patterns are prominently highlighted with a refreshing green background
Bearish patterns : Similarly, tickers featuring bearish harmonic patterns stand out with a striking red background
Dual patterns : Tickers displaying both bullish and bearish patterns are cleverly highlighted in a captivating purple background, providing a comprehensive view of the harmonic pattern landscape.
Tickers without current patterns : Tickers lacking any current patterns are elegantly displayed with a silver background. These tickers do not trigger tooltips, streamlining your focus on actionable pattern-related data.
🎲 Settings in Detail
🎯 Tickers
Our platform currently allows users to select up to 20 tickers for the harmonic pattern screener. We understand the importance of flexibility and scalability, and while we are excited to accommodate more tickers in the future, our present focus is to ensure optimal performance within the CPU and memory limitations. Rest assured, we are continuously working on enhancing our capabilities to provide you with an even more comprehensive experience. Stay tuned for updates as we strive to meet your evolving needs.
🎯 Zigzag and Harmonic Pattern
In this section, we present a range of essential settings that play a pivotal role in the calculation of the zigzag and the scanning of patterns. These parameters share similarities with other premium indicators associated with Harmonic patterns. These settings serve as building blocks for our advanced algorithms' suite.
These include:
Zigzag length and depth settings for calculation of the multi level recursive zigzag
Pattern scanning settings to filter patterns based on preferences of category, pattern name, accuracy of calculation, and other considerations.
User preference of pattern trading ratios that are used for calculating entry, stop and target prices.
🎯 Screener Dashboard and Alerts
In this section, we introduce the parameters that define the format and content of alerts and the screener dashboard, offering you maximum flexibility in customizing their display. These settings encompass the following key aspects:
Screener dashboard position, layout and size that influence the display of screener dashboard.
List of parameters that can be shown on dashboard tooltips as well as on alerts.
Format of alert and tooltip data
🎯 Pattern Display
These are the settings related to the pattern display on the chart and to limiting calculation to the last N bars.
Video tutorials on these settings will follow soon.
Recursive Micro Zigzag
🎲 Overview
Zigzag is a basic building block for any pattern recognition algorithm. This indicator is a research-oriented tool that combines the concepts of Micro Zigzag and Recursive Zigzag to facilitate a comprehensive analysis of price patterns. It focuses on deriving zigzags at multiple levels in a more efficient manner in order to support enhanced pattern recognition.
The Recursive Micro Zigzag Indicator utilises the Micro Zigzag as the foundation and applies the Recursive Zigzag technique to derive higher-level zigzags. By integrating these techniques, this indicator enables researchers to analyse price patterns at multiple levels and gain a deeper understanding of market behaviour.
🎲 Concept:
Micro Zigzag Base : The indicator utilises the Micro Zigzag concept to capture detailed price movements within each candle. It allows for the visualisation of the sequential price action within the candle, aiding in pattern recognition at a micro level.
Basic implementation of micro zigzag can be found in this link - Micro-Zigzag
Recursive Zigzag Expansion : Building upon the Micro Zigzag base, the indicator applies the Recursive Zigzag concept to derive higher-level zigzags. Through recursive analysis of the Micro Zigzag's pivots, the indicator uncovers intricate patterns and trends that may not be evident in single-level zigzags.
Earlier implementations of recursive zigzag can be found here:
Recursive Zigzag
Recursive Zigzag - Trendoscope
And the libraries
rZigzag
ZigzagMethods
The major differences in this implementation are
Micro Zigzag Base - Earlier implementation made use of standard zigzag as base whereas this implementation uses Micro Zigzag as base
No cap on pivot depth - Earlier implementations were limited by the depth of the level-0 zigzag. In this implementation, we build the recursive algorithm progressively so that there is no cap on the depth of the level-0 zigzag. However, if we go for higher levels, there is a chance of the program timing out due to Pine limitations.
These algorithms are useful for automatically spotting patterns on the chart, including Harmonic Patterns, Chart Patterns, Elliott Waves and many more. A generic sketch of the recursion step follows.
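Since the exact recursion rules are internal to the script, the following Python sketch shows one plausible "lift one level" step under the usual zigzag conventions (alternating highs and lows, extremes kept); Pivot, lift_level and the two-positions-away neighbor rule are illustrative assumptions, not the indicator's source:

```python
from dataclasses import dataclass

@dataclass
class Pivot:
    index: int      # bar index of the pivot
    price: float
    is_high: bool   # True for pivot high, False for pivot low

def enforce_alternation(pivots):
    """Keep highs and lows alternating; of two same-direction pivots in a
    row, keep only the more extreme one."""
    out = []
    for p in pivots:
        if out and out[-1].is_high == p.is_high:
            if (p.price > out[-1].price) == p.is_high:
                out[-1] = p          # p is the more extreme pivot
        else:
            out.append(p)
    return out

def lift_level(pivots):
    """Derive the next zigzag level: keep a pivot only if it is more
    extreme than its same-direction neighbors (two positions away in an
    alternating sequence), then restore alternation."""
    kept = []
    for i, p in enumerate(pivots):
        neighbors = [pivots[j] for j in (i - 2, i + 2) if 0 <= j < len(pivots)]
        if all((p.price > q.price) == p.is_high for q in neighbors):
            kept.append(p)
    return enforce_alternation(kept)
```

Applying lift_level repeatedly yields progressively higher-level zigzags, which is the sense in which such an algorithm is recursive.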
Adaptive Predictive Stops and Targets
The indicator is an experiment to predict stops and a first target for any liquid security and for any timeframe.
Intro
The indicator is built using a second-degree Predictive Differential Filter
and an Adaptive Filter to generate signals and define targets and stops.
An adaptive filter is a system with a linear filter whose transfer function is controlled by variable parameters, along with a means to adjust those parameters according to an optimisation algorithm. Because of the complexity of these optimisation algorithms, almost all adaptive filters are digital filters. This is what helps us classify our intent as either long-side or short-side.
The indicator uses the adaptive least mean squares (LMS) algorithm for convergence of the filtered signals into a category of intents (either buy or sell).
The other category of filter used in the indicator is the Predictive Differential Filter, which helps us estimate the acceleration of prices and the levels of significance for targets and stops.
The Predictive Differential Filters are capable of predicting the next state of the input based on interaction with a pre-specified number of filters. The prediction helps in minimising the quantisation error and in removing the granular noise caused by PCM systems. A generic sketch of the LMS adaptation step follows.
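Since the script itself is closed, here is a textbook sketch of LMS-style adaptation in Python, purely to illustrate the mechanism; the normalized (NLMS) update, tap count and step size are illustrative assumptions, not the indicator's actual filter:

```python
import numpy as np

def lms_predict(x, n_taps=8, mu=0.5):
    """Adaptive one-step predictor: learns weights w so that w . window
    tracks x[n]. Uses the normalized LMS update for stability on
    arbitrarily scaled price data."""
    x = np.asarray(x, dtype=float)
    w = np.zeros(n_taps)                   # adaptive filter weights
    y = np.zeros(len(x))                   # one-step predictions
    for n in range(n_taps, len(x)):
        window = x[n - n_taps:n][::-1]     # most recent sample first
        y[n] = w @ window                  # filter output
        e = x[n] - y[n]                    # prediction error
        w += mu * e * window / (window @ window + 1e-12)  # NLMS update
    return y
```

The sign of the predicted change relative to the current price is what would let such a filter classify intent as long or short.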
How to Use
The logic is simple: buy at the high of the signal candle and sell at the low of the signal candle.
Book 50% of your position at the first target shown (the green and red lines respectively) and trail the remaining position until you reach the stop or breakeven.
Vice versa for sells: just sell at the low of the signal candle.
What securities and timeframes will it work upon
The system is designed to work over any liquid security over any timeframe,
The Indicator has provisions for Alert
How to request Access
Just send me a private message. Do not use the comment box to request access; use it only for constructive comments.
STD-Stepped Fast Cosine Transform Moving Average [Loxx]
STD-Stepped Fast Cosine Transform Moving Average is an experimental moving average that uses the Fast Cosine Transform to calculate a moving average. This indicator has standard deviation stepping in order to smooth the trend by weeding out low-volatility movements.
What is the Discrete Cosine Transform?
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. The DCT, first proposed by Nasir Ahmed in 1972, is a widely used transformation technique in signal processing and data compression. It is used in most digital media, including digital images (such as JPEG and HEIF, where small high-frequency components can be discarded), digital video (such as MPEG and H.26x), digital audio (such as Dolby Digital, MP3 and AAC), digital television (such as SDTV, HDTV and VOD), digital radio (such as AAC+ and DAB+), and speech coding (such as AAC-LD, Siren and Opus). DCTs are also important to numerous other applications in science and engineering, such as digital signal processing, telecommunication devices, reducing network bandwidth usage, and spectral methods for the numerical solution of partial differential equations.
The use of cosine rather than sine functions is critical for compression, since it turns out (as described below) that fewer cosine functions are needed to approximate a typical signal, whereas for differential equations the cosines express a particular choice of boundary conditions. In particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. The DCTs are generally related to Fourier Series coefficients of a periodically and symmetrically extended sequence whereas DFTs are related to Fourier Series coefficients of only periodically extended sequences. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), whereas in some variants the input and/or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common.
The most common variant of discrete cosine transform is the type-II DCT, which is often called simply "the DCT". This was the original DCT as first proposed by Ahmed. Its inverse, the type-III DCT, is correspondingly often called simply "the inverse DCT" or "the IDCT". Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data. Multidimensional DCTs (MD DCTs) extend the concept of DCT to multidimensional signals, and several algorithms exist to compute them. A variety of fast algorithms have been developed to reduce the computational complexity of implementing DCT. One of these is the integer DCT (IntDCT), an integer approximation of the standard DCT used in several ISO/IEC and ITU-T international standards.
Notable settings
windowper = period for calculation, restricted to powers of 2: "16", "32", "64", "128", "256", "512", "1024", "2048". The reason for this is that the FFT computes the DFT (Discrete Fourier Transform) in a fast way, generally in O(N·log2(N)) instead of O(N^2). To achieve this, the input length has to be a power of 2; many FFT algorithms can handle any input size by zero-padding, but for our purposes here we stick to powers of 2 to keep this fast and neat. Read more about this here: Cooley–Tukey FFT algorithm
smthper = smoothing count. This smoothing happens after the first FCT pass: it zeros out the frequencies above this count in the previously calculated values. The lower this number, the smoother the output; it works opposite to other smoothing periods. A sketch of the idea follows below.
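To illustrate the mechanism, here is a hedged Python sketch using SciPy's DCT rather than a hand-rolled fast cosine transform; `window` and `keep` stand in for windowper and smthper, and the exact zeroing rule is an assumption, not the indicator's source:

```python
import numpy as np
from scipy.fft import dct, idct

def fct_moving_average(prices, window=64, keep=8):
    """Low-pass a power-of-2 window of prices in the cosine domain:
    forward DCT-II, zero every coefficient above `keep` (the smoothing
    count), inverse transform, and return the last reconstructed value.
    Fewer retained coefficients -> smoother output."""
    x = np.asarray(prices[-window:], dtype=float)
    coeffs = dct(x, type=2, norm='ortho')   # forward pass
    coeffs[keep:] = 0.0                     # discard high frequencies
    return idct(coeffs, type=2, norm='ortho')[-1]
```

Sliding this window along the series produces the smoothed line; a plausible reading of the standard-deviation stepping is then to hold the previous average unless the new value moves by more than a volatility threshold.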
Included
Alerts
Signals
Loxx's Expanded Source Types
Additional reading
A Fast Computational Algorithm for the Discrete Cosine Transform by Chen et al.
Practical Fast 1-D DCT Algorithms With 11 Multiplications by Loeffler et al.
Cooley–Tukey FFT algorithm
Weighted Burg AR Spectral Estimate Extrapolation of Price [Loxx]
Weighted Burg AR Spectral Estimate Extrapolation of Price is an indicator that uses an autoregressive spectral estimation method called the Weighted Burg Algorithm. This method is commonly used in speech modeling and speech prediction engines. It also incorporates the Levinson–Durbin algorithm, as was already discussed in the following indicator:
Levinson-Durbin Autocorrelation Extrapolation of Price
What is Levinson recursion or Levinson–Durbin recursion?
In many applications, the duration of an uninterrupted measurement of a time series is limited. However, it is often possible to obtain several separate segments of data. The estimation of an autoregressive model from this type of data is discussed. A straightforward approach is to take the average of models estimated from each segment separately. In this way, the variance of the estimated parameters is reduced. However, averaging does not reduce the bias in the estimate. With the Burg algorithm for segments, both the variance and the bias in the estimated parameters are reduced by fitting a single model to all segments simultaneously. As a result, the model estimated with the Burg algorithm for segments is more accurate than models obtained with averaging. The new weighted Burg algorithm for segments allows combining segments of different amplitudes.
The Burg algorithm estimates the AR parameters by determining reflection coefficients that minimize the sum of forward and backward residuals. The extension of the algorithm to segments is that the reflection coefficients are estimated by minimizing the sum of forward and backward residuals of all segments taken together. This means a single model is fitted to all segments at one time. This concept is also used for prediction-error methods in system identification, where the input to the system is known, as in ARX modeling. A compact sketch of the basic recursion follows.
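For illustration, here is a compact Python sketch of the plain (unweighted, single-segment) Burg recursion with AR extrapolation; the indicator's weighted, windowed, multi-segment version differs, so treat this purely as a demonstration of the reflection-coefficient idea:

```python
import numpy as np

def burg_ar(x, order):
    """Plain Burg's method: estimate AR coefficients a[0..order-1] such
    that x[n] is approximated by -(a[0]*x[n-1] + ... + a[order-1]*x[n-order])."""
    x = np.asarray(x, dtype=float)
    a = np.zeros(order)
    f = x[1:].copy()    # forward prediction errors
    b = x[:-1].copy()   # backward prediction errors
    for m in range(order):
        # reflection coefficient minimizing forward + backward error power
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        # Levinson-Durbin update of the AR coefficients
        prev = a[:m].copy()
        a[m] = k
        a[:m] = prev + k * prev[::-1]
        # update both error sequences, then shorten the overlap by one
        f, b = (f + k * b)[1:], (b + k * f)[:-1]
    return a

def ar_extrapolate(x, a, n_future):
    """Extend the series n_future bars ahead with the fitted AR model."""
    hist = list(x)
    for _ in range(n_future):
        hist.append(-sum(c * hist[-1 - i] for i, c in enumerate(a)))
    return np.array(hist[len(x):])
```

Something like ar_extrapolate(prices, burg_ar(prices, order), fut_bars) mirrors the PastBars/LPOrder/FutBars inputs listed below.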
Data inputs
Source Settings: Loxx's Expanded Source Types. You typically use "open", since the open is already fixed on the current active bar.
LastBar - bar where to start the prediction
PastBars - how many bars back to model
LPOrder - order of linear prediction model; 0 to 1
FutBars - how many bars you want to forward predict
BurgWin - weighing function index, rectangular, hamming, or parabolic
Things to know
Normally, a simple moving average is calculated on the source data. I've expanded this to 38 different averaging methods using Loxx's Moving Averages.
This indicator repaints
Included
Bar color muting
Further reading
Performance of the Weighted Burg Methods of AR Spectral Estimation for Pitch-Synchronous Analysis of Voiced Speech
The Burg algorithm for segments
Techniques for the Enhancement of Linear Predictive Speech Coding in Adverse Conditions
Related Indicators
Auto Fibonacci Retracement - Real-Time (Expo)
█ Fibonacci retracement is a popular technical analysis method to draw support and resistance levels. The levels are calculated between two swing points (high/low) at the key Fibonacci ratios of 23.6%, 38.2%, 50%, 61.8%, and 100%. The percentage represents how much of a prior move the price has retraced.
█ Our Auto Fibonacci Retracement indicator analyzes the market in real-time and draws Fibonacci levels automatically for you on the chart. Real-time fib levels use the current swing points, which gives you a huge advantage when using them in your trading. You can always be sure that the levels are calculated from the correct swing high and low, regardless of the current trend. The algorithm has a trend filter and shifts the swing points if there is a trend change.
The user can set the preferred swing move to scalping, trend trading, or swing trading, so you can use our automatic fib indicator for any style of trading. The auto fib works on any market and timeframe and displays the most important levels for you in real-time.
█ This Auto Fib Retracement indicator for TradingView is powerful since it does the job for you in real-time. Apply it to the chart, set the swing move to fit your trading style, and leave it on the chart. The indicator does the rest for you. The auto Fibonacci indicator calculates and plots the levels for you in any market and timeframe. In addition, it even changes the swing points based on the current trend direction, allowing traders to get the correct Fibonacci levels in every trend.
█ How does the Auto Fib Draw the levels?
The algorithm analyzes the recent price action and examines the current trend; based on the trend direction, two significant swings (high and low) are identified, and Fibonacci levels will then be plotted automatically on the chart. If the algorithm has identified an uptrend, it will calculate the Fibonacci levels from the swing low and up to the swing high. Similarly, if the algorithm has identified a downtrend, it will calculate the Fibonacci levels from the swing high and down to the swing low.
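The level arithmetic itself is simple; here is a minimal Python sketch of the calculation just described (swing detection is assumed to have already produced the two swing points; names are illustrative, not the indicator's source):

```python
# Ratios follow the coefficients listed above.
FIB_RATIOS = [0.0, 0.236, 0.382, 0.5, 0.618, 1.0]

def fib_levels(swing_low: float, swing_high: float, uptrend: bool) -> dict:
    span = swing_high - swing_low
    if uptrend:
        # measured from swing low up to swing high: retracements fall from the high
        return {r: swing_high - span * r for r in FIB_RATIOS}
    # downtrend: retracements rise from the low
    return {r: swing_low + span * r for r in FIB_RATIOS}

print(fib_levels(100.0, 150.0, uptrend=True))
# {0.0: 150.0, 0.236: 138.2, 0.382: 130.9, 0.5: 125.0, 0.618: 119.1, 1.0: 100.0}
```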
█ HOW TO USE
The levels give a quick and easy read of the current Fibonacci picture and help traders anticipate and react when the levels are tested. They are often used to time entries, determine stop-loss placement, and set profit targets. It's also common for traders to use Fibonacci levels to identify resistance and support.
Traders can set alerts when the levels are tested.
-----------------
Disclaimer
Copyright by Zeiierman.
The information contained in my Scripts/Indicators/Ideas/Algos/Systems does not constitute financial advice or a solicitation to buy or sell any securities of any type. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
My Scripts/Indicators/Ideas/Algos/Systems are only for educational purposes!