Neowave chart cash data
Cash data is the data style used in NeoWave analysis. Add the script to your chart, hide the candlesticks, and work with the cash data directly.
Daily cash data is the default. To change it, switch the D setting to W for weekly or M for monthly.
For intraday timeframes, enter the number of minutes instead, for example 240 for a four-hour timeframe.
...
After changing the cash-data timeframe in the settings, follow the NeoWave one-fortieth rule and set the chart timeframe to roughly one fortieth of the data timeframe for analysis; for example, for daily cash data move the chart to 30 minutes.
I hope you find it useful.
Goertzel Browser [Loxx]
As financial markets become increasingly complex and data-driven, traders and analysts must leverage powerful tools to gain insights and make informed decisions. One such tool is the Goertzel Browser, a sophisticated technical analysis indicator that detects cyclical patterns in financial data, helping traders make better predictions and optimize their trading strategies. With its unique combination of mathematical algorithms and advanced charting capabilities, this indicator has the potential to revolutionize the way we approach financial modeling and trading.
█ Brief Overview of the Goertzel Browser
The Goertzel Browser is a sophisticated technical analysis tool that utilizes the Goertzel algorithm to analyze and visualize cyclical components within a financial time series. By identifying these cycles and their characteristics, the indicator aims to provide valuable insights into the market's underlying price movements, which could potentially be used for making informed trading decisions.
The primary purpose of this indicator is to:
1. Detect and analyze the dominant cycles present in the price data.
2. Reconstruct and visualize the composite wave based on the detected cycles.
3. Project the composite wave into the future, providing a potential roadmap for upcoming price movements.
To achieve this, the indicator performs several tasks:
1. Detrending the price data: The indicator preprocesses the price data using various detrending techniques, such as Hodrick-Prescott filters, zero-lag moving averages, and linear regression, to remove the underlying trend and focus on the cyclical components.
2. Applying the Goertzel algorithm: The indicator applies the Goertzel algorithm to the detrended price data, identifying the dominant cycles and their characteristics, such as amplitude, phase, and cycle strength.
3. Constructing the composite wave: The indicator reconstructs the composite wave by combining the detected cycles, either by using a user-defined list of cycles or by selecting the top N cycles based on their amplitude or cycle strength.
4. Visualizing the composite wave: The indicator plots the composite wave, using solid lines for the past and dotted lines for the future projections. The color of the lines indicates whether the wave is increasing or decreasing.
5. Displaying cycle information: The indicator provides a table that displays detailed information about the detected cycles, including their rank, period, Bartels test results, amplitude, and phase.
This indicator is a powerful tool that employs the Goertzel algorithm to analyze and visualize the cyclical components within a financial time series. By providing insights into the underlying price movements and their potential future trajectory, the indicator aims to assist traders in making more informed decisions.
█ What is the Goertzel Algorithm?
The Goertzel algorithm, named after Gerald Goertzel, is a digital signal processing technique that is used to efficiently compute individual terms of the Discrete Fourier Transform (DFT). It was first introduced in 1958, and since then, it has found various applications in the fields of engineering, mathematics, and physics.
The Goertzel algorithm is primarily used to detect specific frequency components within a digital signal, making it particularly useful in applications where only a few frequency components are of interest. The algorithm is computationally efficient, as it requires fewer calculations than the Fast Fourier Transform (FFT) when detecting a small number of frequency components. This efficiency makes the Goertzel algorithm a popular choice in applications such as:
1. Telecommunications: The Goertzel algorithm is used for decoding Dual-Tone Multi-Frequency (DTMF) signals, which are the tones generated when pressing buttons on a telephone keypad. By identifying specific frequency components, the algorithm can accurately determine which button has been pressed.
2. Audio processing: The algorithm can be used to detect specific pitches or harmonics in an audio signal, making it useful in applications like pitch detection and tuning musical instruments.
3. Vibration analysis: In the field of mechanical engineering, the Goertzel algorithm can be applied to analyze vibrations in rotating machinery, helping to identify faulty components or signs of wear.
4. Power system analysis: The algorithm can be used to measure harmonic content in power systems, allowing engineers to assess power quality and detect potential issues.
The Goertzel algorithm is used in these applications because it offers several advantages over other methods, such as the FFT:
1. Computational efficiency: The Goertzel algorithm requires fewer calculations when detecting a small number of frequency components, making it more computationally efficient than the FFT in these cases.
2. Real-time analysis: The algorithm can be implemented in a streaming fashion, allowing for real-time analysis of signals, which is crucial in applications like telecommunications and audio processing.
3. Memory efficiency: The Goertzel algorithm requires less memory than the FFT, as it only computes the frequency components of interest.
4. Precision: The algorithm is less susceptible to numerical errors compared to the FFT, ensuring more accurate results in applications where precision is essential.
The Goertzel algorithm is an efficient digital signal processing technique that is primarily used to detect specific frequency components within a signal. Its computational efficiency, real-time capabilities, and precision make it an attractive choice for various applications, including telecommunications, audio processing, vibration analysis, and power system analysis. The algorithm has been widely adopted since its introduction in 1958 and continues to be an essential tool in the fields of engineering, mathematics, and physics.
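As a rough illustration of that recurrence, here is a minimal Pine Script sketch that measures the amplitude of one assumed cycle period over the most recent closes; the inputs, the sample size, and the 2/N scaling are illustrative and this is not the Goertzel Browser's own implementation.
//@version=5
indicator("Goertzel single-cycle sketch", overlay=false, max_bars_back=500)
per   = input.int(20, "Cycle period (bars)", minval=2)
nBars = input.int(100, "Sample size (bars)", minval=10)
w  = 2.0 * math.pi / per
cw = 2.0 * math.cos(w)
var float amp = na
if barstate.islast
    s1 = 0.0
    s2 = 0.0
    for i = 0 to nBars - 1
        x  = close[nBars - 1 - i]   // walk the window from oldest to newest
        s0 = x + cw * s1 - s2       // Goertzel recurrence
        s2 := s1
        s1 := s0
    re  = s1 - s2 * math.cos(w)     // real part of the DFT bin
    im  = s2 * math.sin(w)          // imaginary part of the DFT bin
    amp := 2.0 / nBars * math.sqrt(re * re + im * im)
plot(amp, "Cycle amplitude at the chosen period", color=color.teal)
Running this recurrence once per candidate period is the basic building block behind the per-cycle amplitude and phase estimates discussed later in this description.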
█ Goertzel Algorithm in Quantitative Finance: In-Depth Analysis and Applications
The Goertzel algorithm, initially designed for signal processing in telecommunications, has gained significant traction in the financial industry due to its efficient frequency detection capabilities. In quantitative finance, the Goertzel algorithm has been utilized for uncovering hidden market cycles, developing data-driven trading strategies, and optimizing risk management. This section delves deeper into the applications of the Goertzel algorithm in finance, particularly within the context of quantitative trading and analysis.
Unveiling Hidden Market Cycles:
Market cycles are prevalent in financial markets and arise from various factors, such as economic conditions, investor psychology, and market participant behavior. The Goertzel algorithm's ability to detect and isolate specific frequencies in price data helps traders and analysts identify hidden market cycles that may otherwise go unnoticed. By examining the amplitude, phase, and periodicity of each cycle, traders can better understand the underlying market structure and dynamics, enabling them to develop more informed and effective trading strategies.
Developing Quantitative Trading Strategies:
The Goertzel algorithm's versatility allows traders to incorporate its insights into a wide range of trading strategies. By identifying the dominant market cycles in a financial instrument's price data, traders can create data-driven strategies that capitalize on the cyclical nature of markets.
For instance, a trader may develop a mean-reversion strategy that takes advantage of the identified cycles. By establishing positions when the price deviates from the predicted cycle, the trader can profit from the subsequent reversion to the cycle's mean. Similarly, a momentum-based strategy could be designed to exploit the persistence of a dominant cycle by entering positions that align with the cycle's direction.
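For illustration only, here is a minimal sketch of such a mean-reversion idea. The dominant cycle period and amplitude are assumed inputs rather than values produced by the Goertzel Browser, and the entry and exit rules are placeholders.
//@version=5
strategy("Cycle mean-reversion sketch", overlay=true)
per     = input.int(40, "Assumed dominant cycle period", minval=2)
meanLen = input.int(200, "Mean length")
ampMult = input.float(2.0, "Assumed cycle amplitude (ATR units)")
mean  = ta.sma(close, meanLen)
atr   = ta.atr(14)
phase = 2.0 * math.pi * (bar_index % per) / per
cycle = mean + ampMult * atr * math.sin(phase)   // crude assumed cycle path
// fade deviations from the assumed path and exit on reversion
if close < cycle - atr
    strategy.entry("long", strategy.long)
if strategy.position_size > 0 and close > cycle
    strategy.close("long")
plot(cycle, "Assumed cycle path", color=color.orange)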
Enhancing Risk Management:
The Goertzel algorithm plays a vital role in risk management for quantitative strategies. By analyzing the cyclical components of a financial instrument's price data, traders can gain insights into the potential risks associated with their trading strategies.
By monitoring the amplitude and phase of dominant cycles, a trader can detect changes in market dynamics that may pose risks to their positions. For example, a sudden increase in amplitude may indicate heightened volatility, prompting the trader to adjust position sizing or employ hedging techniques to protect their portfolio. Additionally, changes in phase alignment could signal a potential shift in market sentiment, necessitating adjustments to the trading strategy.
Expanding Quantitative Toolkits:
Traders can augment the Goertzel algorithm's insights by combining it with other quantitative techniques, creating a more comprehensive and sophisticated analysis framework. For example, machine learning algorithms, such as neural networks or support vector machines, could be trained on features extracted from the Goertzel algorithm to predict future price movements more accurately.
Furthermore, the Goertzel algorithm can be integrated with other technical analysis tools, such as moving averages or oscillators, to enhance their effectiveness. By applying these tools to the identified cycles, traders can generate more robust and reliable trading signals.
The Goertzel algorithm offers invaluable benefits to quantitative finance practitioners by uncovering hidden market cycles, aiding in the development of data-driven trading strategies, and improving risk management. By leveraging the insights provided by the Goertzel algorithm and integrating it with other quantitative techniques, traders can gain a deeper understanding of market dynamics and devise more effective trading strategies.
█ Indicator Inputs
src: This is the source data for the analysis, typically the closing price of the financial instrument.
detrendornot: This input determines the method used for detrending the source data. Detrending is the process of removing the underlying trend from the data to focus on the cyclical components.
The available options are:
hpsmthdt: Detrend using Hodrick-Prescott filter centered moving average.
zlagsmthdt: Detrend using zero-lag moving average centered moving average.
logZlagRegression: Detrend using logarithmic zero-lag linear regression.
hpsmth: Detrend using Hodrick-Prescott filter.
zlagsmth: Detrend using zero-lag moving average.
DT_HPper1 and DT_HPper2: These inputs define the period range for the Hodrick-Prescott filter centered moving average when detrendornot is set to hpsmthdt.
DT_ZLper1 and DT_ZLper2: These inputs define the period range for the zero-lag moving average centered moving average when detrendornot is set to zlagsmthdt.
DT_RegZLsmoothPer: This input defines the period for the zero-lag moving average used in logarithmic zero-lag linear regression when detrendornot is set to logZlagRegression.
HPsmoothPer: This input defines the period for the Hodrick-Prescott filter when detrendornot is set to hpsmth.
ZLMAsmoothPer: This input defines the period for the zero-lag moving average when detrendornot is set to zlagsmth.
MaxPer: This input sets the maximum period for the Goertzel algorithm to search for cycles.
squaredAmp: This boolean input determines whether the amplitude should be squared in the Goertzel algorithm.
useAddition: This boolean input determines whether the Goertzel algorithm should use addition for combining the cycles.
useCosine: This boolean input determines whether the Goertzel algorithm should use cosine waves instead of sine waves.
UseCycleStrength: This boolean input determines whether the Goertzel algorithm should compute the cycle strength, which is a normalized measure of the cycle's amplitude.
WindowSizePast and WindowSizeFuture: These inputs define the window size for past and future projections of the composite wave.
FilterBartels: This boolean input determines whether the Bartels test should be applied to filter out non-significant cycles.
BartNoCycles: This input sets the number of cycles to be used in the Bartels test.
BartSmoothPer: This input sets the period for the moving average used in the Bartels test.
BartSigLimit: This input sets the significance limit for the Bartels test, below which cycles are considered insignificant.
SortBartels: This boolean input determines whether the cycles should be sorted by their Bartels test results.
UseCycleList: This boolean input determines whether a user-defined list of cycles should be used for constructing the composite wave. If set to false, the top N cycles will be used.
Cycle1, Cycle2, Cycle3, Cycle4, and Cycle5: These inputs define the user-defined list of cycles when 'UseCycleList' is set to true. If using a user-defined list, each of these inputs represents the period of a specific cycle to include in the composite wave.
StartAtCycle: This input determines the starting index for selecting the top N cycles when UseCycleList is set to false. This allows you to skip a certain number of cycles from the top before selecting the desired number of cycles.
UseTopCycles: This input sets the number of top cycles to use for constructing the composite wave when UseCycleList is set to false. The cycles are ranked based on their amplitudes or cycle strengths, depending on the UseCycleStrength input.
SubtractNoise: This boolean input determines whether to subtract the noise (remaining cycles) from the composite wave. If set to true, the composite wave will only include the top N cycles specified by UseTopCycles.
█ Exploring Auxiliary Functions
The following functions demonstrate advanced techniques for analyzing financial markets, including zero-lag moving averages, Bartels probability, detrending, and Hodrick-Prescott filtering. This section examines each function in detail, explaining their purpose, methodology, and applications in finance. We will examine how each function contributes to the overall performance and effectiveness of the indicator and how they work together to create a powerful analytical tool.
Zero-Lag Moving Average:
The zero-lag moving average function is designed to minimize the lag typically associated with moving averages. This is achieved through a two-step weighted linear regression process that emphasizes more recent data points. The function calculates a linearly weighted moving average (LWMA) on the input data and then applies another LWMA on the result. By doing this, the function creates a moving average that closely follows the price action, reducing the lag and improving the responsiveness of the indicator.
The zero-lag moving average function is used in the indicator to provide a responsive, low-lag smoothing of the input data. This function helps reduce the noise and fluctuations in the data, making it easier to identify and analyze underlying trends and patterns. By minimizing the lag associated with traditional moving averages, this function allows the indicator to react more quickly to changes in market conditions, providing timely signals and improving the overall effectiveness of the indicator.
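For intuition, a minimal sketch of the two-pass weighted-average idea described above; the length input is illustrative, not the indicator's actual setting.
//@version=5
indicator("Zero-lag style smoothing sketch", overlay=true)
len = input.int(20, "Smoothing length")
lwma1 = ta.wma(close, len)          // first linearly weighted pass
lwma2 = ta.wma(lwma1, len)          // second weighted pass on the result
plot(lwma2, "Double LWMA", color=color.aqua, linewidth=2)
plot(ta.sma(close, len), "Simple MA for comparison", color=color.gray)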
Bartels Probability:
The Bartels probability function calculates the probability of a given cycle being significant in a time series. It uses a mathematical test called the Bartels test to assess the significance of cycles detected in the data. The function calculates coefficients for each detected cycle and computes an average amplitude and an expected amplitude. By comparing these values, the Bartels probability is derived, indicating the likelihood of a cycle's significance. This information can help in identifying and analyzing dominant cycles in financial markets.
The Bartels probability function is incorporated into the indicator to assess the significance of detected cycles in the input data. By calculating the Bartels probability for each cycle, the indicator can prioritize the most significant cycles and focus on the market dynamics that are most relevant to the current trading environment. This function enhances the indicator's ability to identify dominant market cycles, improving its predictive power and aiding in the development of effective trading strategies.
Detrend Logarithmic Zero-Lag Regression:
The detrend logarithmic zero-lag regression function is used for detrending data while minimizing lag. It combines a zero-lag moving average with a linear regression detrending method. The function first calculates the zero-lag moving average of the logarithm of input data and then applies a linear regression to remove the trend. By detrending the data, the function isolates the cyclical components, making it easier to analyze and interpret the underlying market dynamics.
The detrend logarithmic zero-lag regression function is used in the indicator to isolate the cyclical components of the input data. By detrending the data, the function enables the indicator to focus on the cyclical movements in the market, making it easier to analyze and interpret market dynamics. This function is essential for identifying cyclical patterns and understanding the interactions between different market cycles, which can inform trading decisions and enhance overall market understanding.
Bartels Cycle Significance Test:
The Bartels cycle significance test is a function that combines the Bartels probability function and the detrend logarithmic zero-lag regression function to assess the significance of detected cycles. The function calculates the Bartels probability for each cycle and stores the results in an array. By analyzing the probability values, traders and analysts can identify the most significant cycles in the data, which can be used to develop trading strategies and improve market understanding.
The Bartels cycle significance test function is integrated into the indicator to provide a comprehensive analysis of the significance of detected cycles. By combining the Bartels probability function and the detrend logarithmic zero-lag regression function, this test evaluates the significance of each cycle and stores the results in an array. The indicator can then use this information to prioritize the most significant cycles and focus on the most relevant market dynamics. This function enhances the indicator's ability to identify and analyze dominant market cycles, providing valuable insights for trading and market analysis.
Hodrick-Prescott Filter:
The Hodrick-Prescott filter is a popular technique used to separate the trend and cyclical components of a time series. The function applies a smoothing parameter to the input data and calculates a smoothed series using a two-sided filter. This smoothed series represents the trend component, which can be subtracted from the original data to obtain the cyclical component. The Hodrick-Prescott filter is commonly used in economics and finance to analyze economic data and financial market trends.
The Hodrick-Prescott filter is incorporated into the indicator to separate the trend and cyclical components of the input data. By applying the filter to the data, the indicator can isolate the trend component, which can be used to analyze long-term market trends and inform trading decisions. Additionally, the cyclical component can be used to identify shorter-term market dynamics and provide insights into potential trading opportunities. The inclusion of the Hodrick-Prescott filter adds another layer of analysis to the indicator, making it more versatile and comprehensive.
Detrending Options: Detrend Centered Moving Average:
The detrend centered moving average function provides different detrending methods, including the Hodrick-Prescott filter and the zero-lag moving average, based on the selected detrending method. The function calculates two sets of smoothed values using the chosen method and subtracts one set from the other to obtain a detrended series. By offering multiple detrending options, this function allows traders and analysts to select the most appropriate method for their specific needs and preferences.
The detrend centered moving average function is integrated into the indicator to provide users with multiple detrending options, including the Hodrick-Prescott filter and the zero-lag moving average. By offering multiple detrending methods, the indicator allows users to customize the analysis to their specific needs and preferences, enhancing the indicator's overall utility and adaptability. This function ensures that the indicator can cater to a wide range of trading styles and objectives, making it a valuable tool for a diverse group of market participants.
The auxiliary functions discussed in this section demonstrate the power and versatility of mathematical techniques in analyzing financial markets. By understanding and implementing these functions, traders and analysts can gain valuable insights into market dynamics, improve their trading strategies, and make more informed decisions. Together, the zero-lag moving average, Bartels probability, detrending methods, and Hodrick-Prescott filter form a comprehensive toolkit for analyzing and interpreting financial data, and their integration into a single indicator creates a powerful and versatile analytical tool.
█ In-Depth Analysis of the Goertzel Browser Code
The Goertzel Browser code is an implementation of the Goertzel Algorithm, an efficient technique to perform spectral analysis on a signal. The code is designed to detect and analyze dominant cycles within a given financial market data set. This section will provide an extremely detailed explanation of the code, its structure, functions, and intended purpose.
Function signature and input parameters:
The Goertzel Browser function accepts numerous input parameters for customization, including source data (src), the current bar (forBar), sample size (samplesize), period (per), squared amplitude flag (squaredAmp), addition flag (useAddition), cosine flag (useCosine), cycle strength flag (UseCycleStrength), past and future window sizes (WindowSizePast, WindowSizeFuture), Bartels filter flag (FilterBartels), Bartels-related parameters (BartNoCycles, BartSmoothPer, BartSigLimit), sorting flag (SortBartels), and output buffers (goeWorkPast, goeWorkFuture, cyclebuffer, amplitudebuffer, phasebuffer, cycleBartelsBuffer).
Initializing variables and arrays:
The code initializes several float arrays (goeWork1, goeWork2, goeWork3, goeWork4) with the same length as twice the period (2 * per). These arrays store intermediate results during the execution of the algorithm.
Preprocessing input data:
The input data (src) undergoes preprocessing to remove linear trends. This step enhances the algorithm's ability to focus on cyclical components in the data. The linear trend is calculated by finding the slope between the first and last values of the input data within the sample.
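A minimal sketch of this first-to-last linear detrend, assuming illustrative input names; this is not the script's actual code.
//@version=5
indicator("Linear detrend sketch", overlay=false, max_bars_back=500)
samplesize = input.int(100, "Sample size (bars)", minval=2)
offset     = input.int(50,  "Bar inside the window to inspect (0 = newest)", minval=0)
first   = close[samplesize - 1]                        // oldest value in the window
slope   = (close - first) / (samplesize - 1)           // first-to-last slope per bar
trendAt = first + slope * (samplesize - 1 - offset)    // trend-line value at the chosen bar
plot(close[offset] - trendAt, "Detrended value", color=color.maroon)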
Iterative calculation of Goertzel coefficients:
The core of the Goertzel Browser algorithm lies in the iterative calculation of Goertzel coefficients for each frequency bin. These coefficients represent the spectral content of the input data at different frequencies. The code iterates through the range of frequencies, calculating the Goertzel coefficients using a nested loop structure.
Cycle strength computation:
The code calculates the cycle strength based on the Goertzel coefficients. This is an optional step, controlled by the UseCycleStrength flag. The cycle strength provides information on the relative influence of each cycle on the data per bar, considering both amplitude and cycle length. The algorithm computes the cycle strength either by squaring the amplitude (controlled by squaredAmp flag) or using the actual amplitude values.
Phase calculation:
The Goertzel Browser code computes the phase of each cycle, which represents the position of the cycle within the input data. The phase is calculated using the arctangent function (math.atan) based on the ratio of the imaginary and real components of the Goertzel coefficients.
Peak detection and cycle extraction:
The algorithm performs peak detection on the computed amplitudes or cycle strengths to identify dominant cycles. It stores the detected cycles in the cyclebuffer array, along with their corresponding amplitudes and phases in the amplitudebuffer and phasebuffer arrays, respectively.
Sorting cycles by amplitude or cycle strength:
The code sorts the detected cycles based on their amplitude or cycle strength in descending order. This allows the algorithm to prioritize cycles with the most significant impact on the input data.
Bartels cycle significance test:
If the FilterBartels flag is set, the code performs a Bartels cycle significance test on the detected cycles. This test determines the statistical significance of each cycle and filters out the insignificant cycles. The significant cycles are stored in the cycleBartelsBuffer array. If the SortBartels flag is set, the code sorts the significant cycles based on their Bartels significance values.
Waveform calculation:
The Goertzel Browser code calculates the waveform of the significant cycles for both past and future time windows. The past and future windows are defined by the WindowSizePast and WindowSizeFuture parameters, respectively. The algorithm uses either cosine or sine functions (controlled by the useCosine flag) to calculate the waveforms for each cycle. The useAddition flag determines whether the waveforms should be added or subtracted.
Storing waveforms in matrices:
The calculated waveforms for each cycle are stored in two matrices - goeWorkPast and goeWorkFuture. These matrices hold the waveforms for the past and future time windows, respectively. Each row in the matrices represents a time window position, and each column corresponds to a cycle.
Returning the number of cycles:
The Goertzel Browser function returns the total number of detected cycles (number_of_cycles) after processing the input data. This information can be used to further analyze the results or to visualize the detected cycles.
The Goertzel Browser code is a comprehensive implementation of the Goertzel Algorithm, specifically designed for detecting and analyzing dominant cycles within financial market data. The code offers a high level of customization, allowing users to fine-tune the algorithm based on their specific needs. The Goertzel Browser's combination of preprocessing, iterative calculations, cycle extraction, sorting, significance testing, and waveform calculation makes it a powerful tool for understanding cyclical components in financial data.
█ Generating and Visualizing Composite Waveform
The indicator calculates and visualizes the composite waveform for both past and future time windows based on the detected cycles. Here's a detailed explanation of this process:
Updating WindowSizePast and WindowSizeFuture:
The WindowSizePast and WindowSizeFuture are updated to ensure they are at least twice the MaxPer (maximum period).
Initializing matrices and arrays:
Two matrices, goeWorkPast and goeWorkFuture, are initialized to store the Goertzel results for past and future time windows. Multiple arrays are also initialized to store cycle, amplitude, phase, and Bartels information.
Preparing the source data (srcVal) array:
The source data is copied into an array, srcVal, and detrended using one of the selected methods (hpsmthdt, zlagsmthdt, logZlagRegression, hpsmth, or zlagsmth).
Goertzel function call:
The Goertzel function is called to analyze the detrended source data and extract cycle information. The output, number_of_cycles, contains the number of detected cycles.
Initializing arrays for past and future waveforms:
Three arrays, epgoertzel, goertzel, and goertzelFuture, are initialized to store the endpoint Goertzel, non-endpoint Goertzel, and future Goertzel projections, respectively.
Calculating composite waveform for past bars (goertzel array):
The past composite waveform is calculated by summing the selected cycles (either from the user-defined cycle list or the top cycles) and optionally subtracting the noise component.
Calculating composite waveform for future bars (goertzelFuture array):
The future composite waveform is calculated in a similar way as the past composite waveform.
Drawing past composite waveform (pvlines):
The past composite waveform is drawn on the chart using solid lines. The color of the lines is determined by the direction of the waveform (green for upward, red for downward).
Drawing future composite waveform (fvlines):
The future composite waveform is drawn on the chart using dotted lines. The color of the lines is determined by the direction of the waveform (fuchsia for upward, yellow for downward).
Displaying cycle information in a table (table3):
A table is created to display the cycle information, including the rank, period, Bartels value, amplitude (or cycle strength), and phase of each detected cycle.
Filling the table with cycle information:
The indicator iterates through the detected cycles and retrieves the relevant information (period, amplitude, phase, and Bartels value) from the corresponding arrays. It then fills the table with this information, displaying the values up to six decimal places.
To summarize, this indicator generates a composite waveform based on the detected cycles in the financial data. It calculates the composite waveforms for both past and future time windows and visualizes them on the chart using colored lines. Additionally, it displays detailed cycle information in a table, including the rank, period, Bartels value, amplitude (or cycle strength), and phase of each detected cycle.
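As a rough illustration of the composite-wave summation and direction-based colouring described in this section, here is a minimal sketch; the periods, amplitudes, and phases are assumed example values, not cycles the Goertzel Browser actually detects.
//@version=5
indicator("Composite wave sketch", overlay=false)
var float[] periods = array.from(20.0, 35.0, 60.0)
var float[] amps    = array.from(1.0, 0.6, 0.4)
var float[] phases  = array.from(0.0, 1.2, 2.5)
float composite = 0.0
for i = 0 to array.size(periods) - 1
    composite += array.get(amps, i) * math.cos(2.0 * math.pi * bar_index / array.get(periods, i) + array.get(phases, i))
// green when the wave is rising, red when it is falling, echoing the past-wave colouring
plot(composite, "Composite wave", color=composite > composite[1] ? color.green : color.red, linewidth=2)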
█ Enhancing the Goertzel Algorithm-Based Script for Financial Modeling and Trading
The Goertzel algorithm-based script for detecting dominant cycles in financial data is a powerful tool for financial modeling and trading. It provides valuable insights into the past behavior of these cycles and potential future impact. However, as with any algorithm, there is always room for improvement. This section discusses potential enhancements to the existing script to make it even more robust and versatile for financial modeling, general trading, advanced trading, and high-frequency finance trading.
Enhancements for Financial Modeling
Data preprocessing: One way to improve the script's performance for financial modeling is to introduce more advanced data preprocessing techniques. This could include removing outliers, handling missing data, and normalizing the data to ensure consistent and accurate results.
Additional detrending and smoothing methods: Incorporating more sophisticated detrending and smoothing techniques, such as wavelet transform or empirical mode decomposition, can help improve the script's ability to accurately identify cycles and trends in the data.
Machine learning integration: Integrating machine learning techniques, such as artificial neural networks or support vector machines, can help enhance the script's predictive capabilities, leading to more accurate financial models.
Enhancements for General and Advanced Trading
Customizable indicator integration: Allowing users to integrate their own technical indicators can help improve the script's effectiveness for both general and advanced trading. By enabling the combination of the dominant cycle information with other technical analysis tools, traders can develop more comprehensive trading strategies.
Risk management and position sizing: Incorporating risk management and position sizing functionality into the script can help traders better manage their trades and control potential losses. This can be achieved by calculating the optimal position size based on the user's risk tolerance and account size.
Multi-timeframe analysis: Enhancing the script to perform multi-timeframe analysis can provide traders with a more holistic view of market trends and cycles. By identifying dominant cycles on different timeframes, traders can gain insights into the potential confluence of cycles and make better-informed trading decisions.
Enhancements for High-Frequency Finance Trading
Algorithm optimization: To ensure the script's suitability for high-frequency finance trading, optimizing the algorithm for faster execution is crucial. This can be achieved by employing efficient data structures and refining the calculation methods to minimize computational complexity.
Real-time data streaming: Integrating real-time data streaming capabilities into the script can help high-frequency traders react to market changes more quickly. By continuously updating the cycle information based on real-time market data, traders can adapt their strategies accordingly and capitalize on short-term market fluctuations.
Order execution and trade management: To fully leverage the script's capabilities for high-frequency trading, implementing functionality for automated order execution and trade management is essential. This can include features such as stop-loss and take-profit orders, trailing stops, and automated trade exit strategies.
While the existing Goertzel algorithm-based script is a valuable tool for detecting dominant cycles in financial data, there are several potential enhancements that can make it even more powerful for financial modeling, general trading, advanced trading, and high-frequency finance trading. By incorporating these improvements, the script can become a more versatile and effective tool for traders and financial analysts alike.
█ Understanding the Limitations of the Goertzel Algorithm
While the Goertzel algorithm-based script for detecting dominant cycles in financial data provides valuable insights, it is important to be aware of its limitations and drawbacks. Some of the key drawbacks of this indicator are:
Lagging nature:
As with many other technical indicators, the Goertzel algorithm-based script can suffer from lagging effects, meaning that it may not immediately react to real-time market changes. This lag can lead to late entries and exits, potentially resulting in reduced profitability or increased losses.
Parameter sensitivity:
The performance of the script can be sensitive to the chosen parameters, such as the detrending methods, smoothing techniques, and cycle detection settings. Improper parameter selection may lead to inaccurate cycle detection or increased false signals, which can negatively impact trading performance.
Complexity:
The Goertzel algorithm itself is relatively complex, making it difficult for novice traders or those unfamiliar with the concept of cycle analysis to fully understand and effectively utilize the script. This complexity can also make it challenging to optimize the script for specific trading styles or market conditions.
Overfitting risk:
As with any data-driven approach, there is a risk of overfitting when using the Goertzel algorithm-based script. Overfitting occurs when a model becomes too specific to the historical data it was trained on, leading to poor performance on new, unseen data. This can result in misleading signals and reduced trading performance.
No guarantee of future performance: While the script can provide insights into past cycles and potential future trends, it is important to remember that past performance does not guarantee future results. Market conditions can change, and relying solely on the script's predictions without considering other factors may lead to poor trading decisions.
Limited applicability: The Goertzel algorithm-based script may not be suitable for all markets, trading styles, or timeframes. Its effectiveness in detecting cycles may be limited in certain market conditions, such as during periods of extreme volatility or low liquidity.
While the Goertzel algorithm-based script offers valuable insights into dominant cycles in financial data, it is essential to consider its drawbacks and limitations when incorporating it into a trading strategy. Traders should always use the script in conjunction with other technical and fundamental analysis tools, as well as proper risk management, to make well-informed trading decisions.
█ Interpreting Results
The Goertzel Browser indicator can be interpreted by analyzing the plotted lines and the table presented alongside them. The indicator plots two lines: past and future composite waves. The past composite wave represents the composite wave of the past price data, and the future composite wave represents the projected composite wave for the next period.
The past composite wave line displays a solid line, with green indicating a bullish trend and red indicating a bearish trend. On the other hand, the future composite wave line is a dotted line with fuchsia indicating a bullish trend and yellow indicating a bearish trend.
The table presented alongside the indicator shows the top cycles with their corresponding rank, period, Bartels, amplitude or cycle strength, and phase. The amplitude is a measure of the strength of the cycle, while the phase is the position of the cycle within the data series.
Interpreting the Goertzel Browser indicator involves identifying the trend of the past and future composite wave lines and matching them with the corresponding bullish or bearish color. Additionally, traders can identify the top cycles with the highest amplitude or cycle strength and utilize them in conjunction with other technical indicators and fundamental analysis for trading decisions.
This indicator is considered a repainting indicator because the value of the indicator is calculated based on the past price data. As new price data becomes available, the indicator's value is recalculated, potentially causing the indicator's past values to change. This can create a false impression of the indicator's performance, as it may appear to have provided a profitable trading signal in the past when, in fact, that signal did not exist at the time.
The Goertzel indicator is also non-endpointed, meaning that it is not calculated up to the current bar or candle. Instead, it uses a fixed amount of historical data to calculate its values, which can make it difficult to use for real-time trading decisions. For example, if the indicator uses 100 bars of historical data to make its calculations, it cannot provide a signal until the current bar has closed and become part of the historical data. This can result in missed trading opportunities or delayed signals.
█ Conclusion
The Goertzel Browser indicator is a powerful tool for identifying and analyzing cyclical patterns in financial markets. Its ability to detect multiple cycles of varying frequencies and strengths make it a valuable addition to any trader's technical analysis toolkit. However, it is important to keep in mind that the Goertzel Browser indicator should be used in conjunction with other technical analysis tools and fundamental analysis to achieve the best results. With continued refinement and development, the Goertzel Browser indicator has the potential to become a highly effective tool for financial modeling, general trading, advanced trading, and high-frequency finance trading. Its accuracy and versatility make it a promising candidate for further research and development.
█ Footnotes
What is the Bartels Test for Cycle Significance?
The Bartels Cycle Significance Test is a statistical method that determines whether the peaks and troughs of a time series are statistically significant. The test is named after its inventor, the geophysicist Julius Bartels, who developed it in the first half of the 20th century.
The Bartels test is designed to analyze the cyclical components of a time series, which can help traders and analysts identify trends and cycles in financial markets. The test calculates a Bartels statistic, which measures the degree of non-randomness or autocorrelation in the time series.
The Bartels statistic is calculated by first splitting the time series into two halves and calculating the range of the peaks and troughs in each half. The test then compares these ranges using a t-test, which measures the significance of the difference between the two ranges.
If the Bartels statistic is greater than a critical value, it indicates that the peaks and troughs in the time series are non-random and that there is a significant cyclical component to the data. Conversely, if the Bartels statistic is less than the critical value, it suggests that the peaks and troughs are random and that there is no significant cyclical component.
The Bartels Cycle Significance Test is particularly useful in financial analysis because it can help traders and analysts identify significant cycles in asset prices, which can in turn inform investment decisions. However, it is important to note that the test is not perfect and can produce false signals in certain situations, particularly in noisy or volatile markets. Therefore, it is always recommended to use the test in conjunction with other technical and fundamental indicators to confirm trends and cycles.
Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was developed by economists Robert Hodrick and Edward Prescott in the early 1980s and later published in the 1990s. It is a simple, single-parameter filter that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by minimizing the following objective function:
Minimize: Σ [y(t) − τ(t)]² + λ · Σ [(τ(t+1) − τ(t)) − (τ(t) − τ(t−1))]²
Where:
y(t) is the observed data and τ(t) is the trend component, so the first term measures the deviation of the data from the trend.
The second term measures the smoothness of the trend by penalizing changes in its growth rate.
λ is a smoothing parameter that determines the degree of smoothness of the trend.
The smoothing parameter λ is typically set to a value between 100 and 1600, depending on the frequency of the data; 1,600 is the conventional choice for quarterly data and 100 for annual data. Higher values of λ lead to a smoother trend, while lower values lead to a more volatile trend.
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
aproxLibrary "aprox"
It's a library of the aproximations of a price or Series float it uses Fourier transform and
Euler's Theoreum for Homogenus White noice operations. Calling functions without source value it automatically take close as the default source value.
Copy this indicator to see how each approximations interact between each other.
import Celje_2300/aprox/1 as aprox
//@version=5
indicator("Close Price with Aproximations", shorttitle="Close and Aproximations", overlay=false)
// Sample input data (replace this with your own data)
inputData = close
// Plot Close Price
plot(inputData, color=color.blue, title="Close Price")
dtf32_result = aprox.DTF32()
plot(dtf32_result, color=color.green, title="DTF32 Approximation")
fft_result = aprox.FFT()
plot(fft_result, color=color.red, title="FFT Approximation")
wavelet_result = aprox.Wavelet()
plot(wavelet_result, color=color.orange, title="Wavelet Approximation")
wavelet_std_result = aprox.Wavelet_std()
plot(wavelet_std_result, color=color.yellow, title="Wavelet_std Approximation")
DFT3(xval, _dir)
Parameters:
xval (float)
_dir (int)
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - DFT3", shorttitle="DFT3 Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply DFT3
result = aprox.DFT3(inputData, 2)
// Plot the result
plot(result, color=color.blue, title="DFT3 Result")
DFT2(xval, _dir)
Parameters:
xval (float)
_dir (int)
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - DFT2", shorttitle="DFT2 Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply DFT2
result = aprox.DFT2(inputData, inputData, 1)
// Plot the result
plot(result, color=color.green, title="DFT2 Result")
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - DFT2", shorttitle="DFT2 Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply DFT2
result = aprox.DFT2(inputData, 1)
// Plot the result
plot(result, color=color.green, title="DFT2 Result")
FFT(xval)
FFT: Fast Fourier Transform
Parameters:
xval (float)
Returns: Approximated source value
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - FFT", shorttitle="FFT Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply FFT
result = aprox.FFT(inputData)
// Plot the result
plot(result, color=color.red, title="FFT Result")
DTF32(xval)
DTF32: Combined Discrete Fourier Transforms
Parameters:
xval (float)
Returns: Approximated source value
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - DTF32", shorttitle="DTF32 Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply DTF32
result = aprox.DTF32(inputData)
// Plot the result
plot(result, color=color.purple, title="DTF32 Result")
whitenoise(indic_, _devided, minEmaLength, maxEmaLength, src)
whitenoise: Ehlers' Universal Oscillator with White Noise, without an extra approximated src
Parameters:
indic_ (float)
_devided (int)
minEmaLength (int)
maxEmaLength (int)
src (float)
Returns: Smoothed indicator value
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - whitenoise", shorttitle="whitenoise Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply whitenoise
result = aprox.whitenoise(aprox.FFT(inputData))
// Plot the result
plot(result, color=color.orange, title="whitenoise Result")
whitenoise(indic_, dft1, _devided, minEmaLength, maxEmaLength, src)
whitenoise: Ehlers' Universal Oscillator with White Noise and DFT1
Parameters:
indic_ (float)
dft1 (float)
_devided (int)
minEmaLength (int)
maxEmaLength (int)
src (float)
Returns: Smoothed indicator value
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - whitenoise with DFT1", shorttitle="whitenoise-DFT1 Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply whitenoise with DFT1
result = aprox.whitenoise(inputData, aprox.DFT1(inputData))
// Plot the result
plot(result, color=color.yellow, title="whitenoise-DFT1 Result")
smooth(dft1, indic__, _devided, minEmaLength, maxEmaLength, src)
smooth: Smoothing the source value with the help of an indicator series and an approximated source value
Parameters:
dft1 (float)
indic__ (float)
_devided (int)
minEmaLength (int)
maxEmaLength (int)
src (float)
Returns: Smoothed source series
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - smooth", shorttitle="smooth Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply smooth
result = aprox.smooth(inputData, aprox.FFT(inputData))
// Plot the result
plot(result, color=color.gray, title="smooth Result")
smooth(indic__, _devided, minEmaLength, maxEmaLength, src)
smooth: Smoothing the source value with the help of an indicator series
Parameters:
indic__ (float)
_devided (int)
minEmaLength (int)
maxEmaLength (int)
src (float)
Returns: Smoothed source series
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - smooth without DFT1", shorttitle="smooth-NoDFT1 Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply smooth without DFT1
result = aprox.smooth(aprox.FFT(inputData))
// Plot the result
plot(result, color=color.teal, title="smooth-NoDFT1 Result")
vzo_ema(src, len)
vzo_ema: Volume Zone Oscillator with EMA smoothing
Parameters:
src (float)
len (simple int)
Returns: VZO value
vzo_sma(src, len)
vzo_sma: Volume Zone Oscillator with SMA smoothing
Parameters:
src (float)
len (int)
Returns: VZO value
vzo_wma(src, len)
vzo_wma: Volume Zone Oscillator with WMA smoothing
Parameters:
src (float)
len (int)
Returns: VZO value
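A hypothetical usage sketch for vzo_ema, following the pattern of the examples above; the length of 14 is an assumed value, and vzo_sma and vzo_wma are called the same way.
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - vzo_ema", shorttitle="vzo_ema Example", overlay=false)
// Apply vzo_ema with an assumed length
result = aprox.vzo_ema(close, 14)
// Plot the result
plot(result, color=color.teal, title="vzo_ema Result")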
alma2(series, windowsize, offset, sigma)
alma2: Arnaud Legoux Moving Average 2 accepts sigma as series float
Parameters:
series (float)
windowsize (int)
offset (float)
sigma (float)
Returns: ALMA value
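A hypothetical usage sketch for alma2, following the pattern of the examples above; the volatility-scaled sigma series is an assumed example, chosen only to show that sigma may itself be a series.
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - alma2", shorttitle="alma2 Example", overlay=true)
inputData = close
// Assumed example: sigma widens when relative volatility rises
sigmaSeries = 6.0 * ta.stdev(close, 20) / ta.sma(close, 20) + 1.0
// Apply alma2
result = aprox.alma2(inputData, 20, 0.85, sigmaSeries)
// Plot the result
plot(result, color=color.fuchsia, title="alma2 Result")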
Wavelet(src, len, offset, sigma)
Wavelet: Wavelet Transform
Parameters:
src (float)
len (int)
offset (simple float)
sigma (simple float)
Returns: Wavelet-transformed series
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - Wavelet", shorttitle="Wavelet Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply Wavelet
result = aprox.Wavelet(inputData)
// Plot the result
plot(result, color=color.blue, title="Wavelet Result")
Wavelet_std(src, len, offset, mag)
Wavelet_std: Wavelet Transform with Standard Deviation
Parameters:
src (float)
len (int)
offset (float)
mag (int)
Returns: Wavelet-transformed series
//@version=5
import Celje_2300/aprox/1 as aprox
indicator("Example - Wavelet_std", shorttitle="Wavelet_std Example", overlay=true)
// Sample input data (replace this with your own data)
inputData = close
// Apply Wavelet_std
result = aprox.Wavelet_std(inputData)
// Plot the result
plot(result, color=color.green, title="Wavelet_std Result")
Awesome Oscillator (AO) with Signals [AIBitcoinTrend]
👽 Multi-Scale Awesome Oscillator (AO) with Signals (AIBitcoinTrend)
The Multi-Scale Awesome Oscillator transforms the traditional Awesome Oscillator (AO) by integrating multi-scale wavelet filtering, enhancing its ability to detect momentum shifts while maintaining responsiveness across different market conditions.
Unlike conventional AO calculations, this advanced version refines trend structures using high-frequency, medium-frequency, and low-frequency wavelet components, providing traders with superior clarity and adaptability.
Additionally, it features real-time divergence detection and an ATR-based dynamic trailing stop, making it a powerful tool for momentum analysis, reversals, and breakout strategies.
👽 What Makes the Multi-Scale AO – Wavelet-Enhanced Momentum Unique?
Unlike traditional AO indicators, this enhanced version leverages wavelet-based decomposition and volatility-adjusted normalization, ensuring improved signal consistency across various timeframes and assets.
✅ Wavelet Smoothing – Multi-Scale Extraction – Captures short-term fluctuations while preserving broader trend structures.
✅ Frequency-Based Detail Weights – Separates high, medium, and low-frequency components to reduce noise and improve trend clarity.
✅ Real-Time Divergence Detection – Identifies bullish and bearish divergences for early trend reversals.
✅ Crossovers & ATR-Based Trailing Stops – Implements intelligent trade management with adaptive stop-loss levels.
👽 The Math Behind the Indicator
👾 Wavelet-Based AO Smoothing
The indicator applies multi-scale wavelet decomposition to extract high-frequency, medium-frequency, and low-frequency trend components, ensuring an optimal balance between reactivity and smoothness.
sma1 = ta.sma(signal, waveletPeriod1)
sma2 = ta.sma(signal, waveletPeriod2)
sma3 = ta.sma(signal, waveletPeriod3)
detail1 = signal - sma1 // High-frequency detail
detail2 = sma1 - sma2 // Intermediate detail
detail3 = sma2 - sma3 // Low-frequency detail
advancedAO = weightDetail1 * detail1 + weightDetail2 * detail2 + weightDetail3 * detail3
Why It Works:
Short-Term Smoothing: Captures rapid fluctuations while minimizing noise.
Medium-Term Smoothing: Balances short-term and long-term trends.
Long-Term Smoothing: Enhances trend stability and reduces false signals.
👾 Z-Score Normalization
To ensure consistency across different markets, the Awesome Oscillator is normalized using a Z-score transformation, making overbought and oversold levels stable across all assets.
normFactor = ta.stdev(advancedAO, normPeriod)
normalizedAO = advancedAO / nz(normFactor, 1)
Why It Works:
Standardizes AO values for comparison across assets.
Enhances signal reliability, preventing misleading spikes.
👽 How Traders Can Use This Indicator
👾 Divergence Trading Strategy
Bullish Divergence
Price makes a lower low, while AO forms a higher low.
A buy signal is confirmed when AO starts rising.
Bearish Divergence
Price makes a higher high, while AO forms a lower high.
A sell signal is confirmed when AO starts declining.
👾 Buy & Sell Signals with Trailing Stop
Bullish Setup:
✅AO crosses above the bullish trigger level → Buy Signal.
✅Trailing stop placed at Low - (ATR × Multiplier).
✅Exit if price crosses below the stop.
Bearish Setup:
✅AO crosses below the bearish trigger level → Sell Signal.
✅Trailing stop placed at High + (ATR × Multiplier).
✅Exit if price crosses above the stop.
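For illustration, a minimal sketch of the ATR-based trailing stop logic described above; the entry trigger is a placeholder crossover, not the indicator's actual AO signal.
//@version=5
indicator("ATR trailing stop sketch", overlay=true)
atrLen  = input.int(14, "ATR length")
atrMult = input.float(2.0, "ATR multiplier")
atrVal  = ta.atr(atrLen)
longEntry = ta.crossover(ta.rsi(close, 14), 50)   // placeholder for the AO trigger
var float trail  = na
var bool  inLong = false
if longEntry and not inLong
    inLong := true
    trail  := low - atrVal * atrMult              // initial stop: Low - (ATR x Multiplier)
else if inLong
    trail := math.max(trail, low - atrVal * atrMult)   // ratchet the stop upward only
    if close < trail
        inLong := false                           // exit when price crosses below the stop
plot(inLong ? trail : na, "Trailing stop", color=color.red, style=plot.style_linebr)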
👽 Why It’s Useful for Traders
Wavelet-Enhanced Filtering – Retains essential trend details while eliminating excessive noise.
Multi-Scale Momentum Analysis – Separates different trend frequencies for enhanced clarity.
Real-Time Divergence Alerts – Identifies early reversal signals for better entries and exits.
ATR-Based Risk Management – Ensures stops dynamically adapt to market conditions.
Works Across Markets & Timeframes – Suitable for stocks, forex, crypto, and futures trading.
👽 Indicator Settings
AO Short Period – Defines the short-term moving average for AO calculation.
AO Long Period – Defines the long-term moving average for AO smoothing.
Wavelet Smoothing – Adjusts multi-scale decomposition for different market conditions.
Divergence Detection – Enables or disables real-time divergence analysis.
Normalization Period – Sets the lookback period for standard deviation-based AO normalization.
Cross Signals Sensitivity – Controls crossover signal strength for buy/sell signals.
ATR Trailing Stop Multiplier – Adjusts the sensitivity of the trailing stop.
Disclaimer: This indicator is designed for educational purposes and does not constitute financial advice. Please consult a qualified financial advisor before making investment decisions.
Solar Movement Gradient-AYNET
Summary of the Solar Movement Gradient Indicator
This Pine Script creates a dynamic, colorful indicator inspired by solar movements. It uses a sinusoidal wave to plot oscillations over time with a rainbow-like gradient that changes based on the wave's position.
Key Features
Sinusoidal Wave:
A wave oscillates smoothly based on the bar index (time) or optionally influenced by price movements.
The wave’s amplitude, baseline, and wavelength can be customized.
Dynamic Colors:
A spectrum of seven colors (red, orange, yellow, green, blue, purple, pink) is used.
The color changes smoothly along with the wave, emulating a solar gradient.
Background Gradient:
An optional gradient fills the background with colors matching the wave, adding a visually pleasing effect.
Customizable Inputs
Gradient Speed:
Adjusts how fast the wave and colors change over time.
Amplitude & Wavelength:
Controls the height and smoothness of the wave.
Price Influence:
Allows the wave to react dynamically to price movements.
Background Gradient:
Toggles a colorful gradient in the chart’s background.
Use Case
This indicator is designed for visual appeal rather than trading signals. It enhances the chart with a dynamic and colorful representation, making it perfect for aesthetic customization.
Let me know if you need further refinements! 🌈✨
math
Library "math"
It is a library of discrete approximations of a price or series float. It uses the Discrete Fourier transform, the original and modified Discrete Laplace transforms, and Euler's theorem for homogeneous white noise operations. When a function is called without a source value, it automatically takes close as the default source.
Here is a picture of Laplace and Fourier approximated close prices from this library:
Copy this indicator and try it yourself:
//@version=5
import AutomatedTradingAlgorithms/math/1 as math
indicator("Close Price with Approximations", shorttitle="Close and Approximations", overlay=false)
// Sample input data (replace this with your own data)
inputData = close
// Plot Close Price
plot(inputData, color=color.blue, title="Close Price")
ltf32_result = math.LTF32(a=0.01)
plot(ltf32_result, color=color.green, title="LTF32 Approximation")
fft_result = math.FFT()
plot(fft_result, color=color.red, title="Fourier Approximation")
wavelet_result = math.Wavelet()
plot(wavelet_result, color=color.orange, title="Wavelet Approximation")
wavelet_std_result = math.Wavelet_std()
plot(wavelet_std_result, color=color.yellow, title="Wavelet_std Approximation")
DFT3(xval, _dir)
Discrete Fourier Transform with last 3 points
Parameters:
xval (float) : Source series
_dir (int) : Direction parameter
Returns: Approximated source value
DFT2(xval, _dir)
Discrete Fourier Transform with last 2 points
Parameters:
xval (float) : Source series
_dir (int) : Direction parameter
Returns: Approximated source value
FFT(xval)
Fast Fourier Transform applied once. It approximates using the last 3 points.
Parameters:
xval (float) : Source series
Returns: Approximated source value
DFT32(xval)
Combined Discrete Fourier Transforms of DFT3 and DFT2. It approximates the last point by first
approximating the last 3 points and then using the last 2 points of the previous result.
Parameters:
xval (float) : Source series
Returns: Approximated source value
DTF32(xval)
Combined Discrete Fourier Transforms of DFT3 and DFT2. It approximates the last point by first
approximating the last 3 points and then using the last 2 points of the previous result.
Parameters:
xval (float) : Source series
Returns: Approximated source value
LFT3(xval, _dir, a)
Discrete Laplace Transform with last 3 points
Parameters:
xval (float) : Source series
_dir (int) : Direction parameter
a (float) : Laplace coefficient
Returns: Approximated source value
LFT2(xval, _dir, a)
Discrete Laplace Transform with last 2 points
Parameters:
xval (float) : Source series
_dir (int) : Direction parameter
a (float) : Laplace coefficient
Returns: Approximated source value
LFT(xval, a)
Fast Laplace Transform applied once. It approximates using the last 3 points.
Parameters:
xval (float) : Source series
a (float) : Laplace coefficient
Returns: Approximated source value
LFT32(xval, a)
Combined Discrete Laplace Transforms of LFT3 and LFT2. It approximates the last point by first
approximating the last 3 points and then using the last 2 points of the previous result.
Parameters:
xval (float) : Source series
a (float) : Laplace coefficient
Returns: Approximated source value
LTF32(xval, a)
Combined Discrete Laplace Transforms of LFT3 and LFT2. It approximates the last point by first
approximating the last 3 points and then using the last 2 points of the previous result.
Parameters:
xval (float) : Source series
a (float) : Laplace coefficient
Returns: Approximated source value
whitenoise(indic_, _devided, minEmaLength, maxEmaLength, src)
Ehlers' Universal Oscillator with White Noise, without an extra approximated src.
It uses a dynamic EMA to approximate the indicator, thus reducing noise.
Parameters:
indic_ (float) : Input series for the indicator values to be smoothed
_devided (int) : Divisor for oscillator calculations
minEmaLength (int) : Minimum EMA length
maxEmaLength (int) : Maximum EMA length
src (float) : Source series
Returns: Smoothed indicator value
whitenoise(indic_, dft1, _devided, minEmaLength, maxEmaLength, src)
Ehlers' Universal Oscillator with White Noise and DFT1.
It uses src and the approximated src (dft1) to clearly define white noise.
It uses a dynamic EMA to approximate the indicator, thus reducing noise.
Parameters:
indic_ (float) : Input series for the indicator values to be smoothed
dft1 (float) : Approximated src value for the white noise calculation
_devided (int) : Divisor for oscillator calculations
minEmaLength (int) : Minimum EMA length
maxEmaLength (int) : Maximum EMA length
src (float) : Source series
Returns: Smoothed indicator value
smooth(dft1, indic__, _devided, minEmaLength, maxEmaLength, src)
Smooths the source value with the help of an indicator series and an approximated source value.
It uses src and the approximated src (dft1) to clearly define white noise.
It uses a dynamic EMA to approximate src, thus reducing noise.
Parameters:
dft1 (float) : Value to be smoothed.
indic__ (float) : Optional input for indicator to help smooth dft1 (default is FFT)
_devided (int) : Divisor for smoothing calculations
minEmaLength (int) : Minimum EMA length
maxEmaLength (int) : Maximum EMA length
src (float) : Source series
Returns: Smoothed source (src) series
smooth(indic__, _devided, minEmaLength, maxEmaLength, src)
Smooths the source value with the help of an indicator series.
It uses a dynamic EMA to approximate src, thus reducing noise.
Parameters:
indic__ (float) : Optional input for indicator to help smooth dft1 (default is FFT)
_devided (int) : Divisor for smoothing calculations
minEmaLength (int) : Minimum EMA length
maxEmaLength (int) : Maximum EMA length
src (float) : Source series
Returns: Smoothed src series
vzo_ema(src, len)
Volume Zone Oscillator with EMA smoothing
Parameters:
src (float) : Source series
len (simple int) : Length parameter for EMA
Returns: VZO value
vzo_sma(src, len)
Volume Zone Oscillator with SMA smoothing
Parameters:
src (float) : Source series
len (int) : Length parameter for SMA
Returns: VZO value
vzo_wma(src, len)
Volume Zone Oscillator with WMA smoothing
Parameters:
src (float) : Source series
len (int) : Length parameter for WMA
Returns: VZO value
alma2(series, windowsize, offset, sigma)
Arnaud Legoux Moving Average 2 accepts sigma as series float
Parameters:
series (float) : Input series
windowsize (int) : Size of the moving average window
offset (float) : Offset parameter
sigma (float) : Sigma parameter
Returns: ALMA value
Wavelet(src, len, offset, sigma)
Approximates src using a discrete wavelet transform.
Parameters:
src (float) : Source series
len (int) : Length parameter for ALMA
offset (simple float)
sigma (simple float)
Returns: Wavelet-transformed series
Wavelet_std(src, len, offset, mag)
Approximates src using a discrete wavelet transform with the standard deviation as a magnitude.
Parameters:
src (float) : Source series
len (int) : Length parameter for ALMA
offset (float) : Offset parameter for ALMA
mag (int) : Magnitude parameter for standard deviation
Returns: Wavelet-transformed series
LaplaceTransform(xval, N, a)
Original Laplace Transform over a set of N close prices
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
Returns: Approximated source value
NLaplaceTransform(xval, N, a, repeat)
Repeated applications of the Original Laplace Transform over a set of N close prices, each repetition using an N-k set of close prices
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
repeat (int) : number of repetitions
Returns: Approximated source value
LaplaceTransformsum(xval, N, a, b)
Sum of 2 exponent coefficients of the Laplace Transform over a set of N close prices
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
b (float) : second Laplace coefficient
Returns: Approximated source value
NLaplaceTransformdiff(xval, N, a, b, repeat)
Difference of 2 exponent coefficients of the Laplace Transform over a set of N close prices
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
b (float) : second Laplace coefficient
repeat (int) : number of repetitions
Returns: Approximated source value
N_divLaplaceTransformdiff(xval, N, a, b, repeat)
N repetitions of the difference of 2 exponent coefficients of the Laplace Transform over a set of N close prices, with dynamic rotation
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
b (float) : second Laplace coefficient
repeat (int) : number of repetitions
Returns: Approximated source value
LaplaceTransformdiff(xval, N, a, b)
Difference of 2 exponent coefficients of the Laplace Transform over a set of N close prices
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
b (float) : second Laplace coefficient
Returns: Approximated source value
NLaplaceTransformdiffFrom2(xval, N, a, b, repeat)
N repetitions of the difference of 2 exponent coefficients of the Laplace Transform over a set of N close prices; the second element has an exponent factor higher by 1
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
b (float) : second Laplace coefficient
repeat (int) : number of repetitions
Returns: Approximated source value
N_divLaplaceTransformdiffFrom2(xval, N, a, b, repeat)
N repetitions of the difference of 2 exponent coefficients of the Laplace Transform over a set of N close prices; the second element has an exponent factor higher by 1, with dynamic rotation
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
b (float) : second Laplace coefficient
repeat (int) : number of repetitions
Returns: Approximated source value
LaplaceTransformdiffFrom2(xval, N, a, b)
Difference of 2 exponent coefficients of the Laplace Transform over a set of N close prices; the second element has an exponent factor higher by 1
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
b (float) : second Laplace coefficient
Returns: Approximated source value
TASC 2025.06 Cybernetic Oscillator
█ OVERVIEW
This script implements the Cybernetic Oscillator introduced by John F. Ehlers in his article "The Cybernetic Oscillator For More Flexibility, Making A Better Oscillator" from the June 2025 edition of the TASC Traders' Tips . It cascades two-pole highpass and lowpass filters, then scales the result by its root mean square (RMS) to create a flexible normalized oscillator that responds to a customizable frequency range for different trading styles.
█ CONCEPTS
Oscillators are indicators widely used by technical traders. These indicators swing above and below a center value, emphasizing cyclic movements within a frequency range. In his article, Ehlers explains that all oscillators share a common characteristic: their calculations involve computing differences . The reliance on differences is what causes these indicators to oscillate about a central point.
The difference between two data points in a series acts as a highpass filter — it allows high frequencies (short wavelengths) to pass through while significantly attenuating low frequencies (long wavelengths). Ehlers demonstrates that a simple difference calculation attenuates lower-frequency cycles at a rate of 6 dB per octave. However, the difference also significantly amplifies cycles near the shortest observable wavelength, making the result appear noisier than the original series. To mitigate the effects of noise in a differenced series, oscillators typically smooth the series with a lowpass filter, such as a moving average.
Ehlers highlights an underlying issue with smoothing differenced data to create oscillators. He postulates that market data statistically follows a pink spectrum , where the amplitudes of cyclic components in the data are approximately directly proportional to the underlying periods. Specifically, he suggests that cyclic amplitude increases by 6 dB per octave of wavelength.
Because some conventional oscillators, such as RSI, use differencing calculations that attenuate cycles by only 6 dB per octave, and market cycles increase in amplitude by 6 dB per octave, such calculations do not have a tangible net effect on larger wavelengths in the analyzed data. The influence of larger wavelengths can be especially problematic when using these oscillators for mean reversion or swing signals. For instance, an expected reversion to the mean might be erroneous because oscillator's mean might significantly deviate from its center over time.
To address the issues with conventional oscillator responses, Ehlers created a new indicator dubbed the Cybernetic Oscillator. It uses a simple combination of highpass and lowpass filters to emphasize a specific range of frequencies in the market data, then normalizes the result based on RMS. The process is as follows:
Apply a two-pole highpass filter to the data. This filter's critical period defines the longest wavelength in the oscillator's passband.
Apply a two-pole SuperSmoother (lowpass filter) to the highpass-filtered data. This filter's critical period defines the shortest wavelength in the passband.
Scale the resulting waveform by its RMS. If the filtered waveform follows a normal distribution, the scaled result represents amplitude in standard deviations.
The oscillator's two-pole filters attenuate cycles outside the desired frequency range by 12 dB per octave. This rate outweighs the apparent rate of amplitude increase for successively longer market cycles (6 dB per octave). Therefore, the Cybernetic Oscillator provides a more robust isolation of cyclic content than conventional oscillators. Best of all, traders can set the periods of the highpass and lowpass filters separately, enabling fine-tuning of the frequency range for different trading styles.
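A compact Pine v5 sketch of this three-step pipeline is shown below. It uses Ehlers' standard two-pole highpass and SuperSmoother forms; the exact coefficients and details of the published TASC code may differ, so treat it as an approximation.
//@version=5
indicator("Cybernetic Oscillator Sketch")
hpPeriod = input.int(30, "Highpass period")
lpPeriod = input.int(20, "Lowpass period")
rmsLen   = input.int(100, "RMS length")
// Step 1: two-pole highpass filter (removes wavelengths longer than hpPeriod)
a1 = (math.cos(0.707 * 2 * math.pi / hpPeriod) + math.sin(0.707 * 2 * math.pi / hpPeriod) - 1) / math.cos(0.707 * 2 * math.pi / hpPeriod)
var float hp = 0.0
hp := math.pow(1 - a1 / 2, 2) * (close - 2 * nz(close[1]) + nz(close[2])) + 2 * (1 - a1) * nz(hp[1]) - math.pow(1 - a1, 2) * nz(hp[2])
// Step 2: two-pole SuperSmoother (removes wavelengths shorter than lpPeriod)
a2 = math.exp(-1.414 * math.pi / lpPeriod)
b2 = 2 * a2 * math.cos(1.414 * math.pi / lpPeriod)
c2 = b2
c3 = -a2 * a2
c1 = 1 - c2 - c3
var float filt = 0.0
filt := c1 * (hp + nz(hp[1])) / 2 + c2 * nz(filt[1]) + c3 * nz(filt[2])
// Step 3: scale by root mean square over rmsLen bars
rms = math.sqrt(math.sum(filt * filt, rmsLen) / rmsLen)
osc = rms != 0 ? filt / rms : 0.0
plot(osc, "Cybernetic Oscillator", color=color.teal)
hline(0)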
█ USAGE
The "Highpass period" input in the "Settings/Inputs" tab specifies the longest wavelength in the oscillator's passband, and the "Lowpass period" input defines the shortest wavelength. The oscillator becomes more responsive to rapid movements with a smaller lowpass period. Conversely, it becomes more sensitive to trends with a larger highpass period. Ehlers recommends setting the smallest period to a value above 8 to avoid aliasing. The highpass period must not be smaller than the lowpass period. Otherwise, it causes a runtime error.
The "RMS length" input determines the number of bars in the RMS calculation that the indicator uses to normalize the filtered result.
This indicator also features two distinct display styles, which users can toggle with the "Display style" input. With the "Trend" style enabled, the indicator plots the oscillator with one of two colors based on whether its value is above or below zero. With the "Threshold" style enabled, it plots the oscillator as a gray line and highlights overbought and oversold areas based on the user-specified threshold.
Below, we show two instances of the script with different settings on an equities chart. The first uses the "Threshold" style with default settings to pass cycles between 20 and 30 bars for mean reversion signals. The second uses a larger highpass period of 250 bars and the "Trend" style to visualize trends based on cycles spanning less than one year:
Infinity Market Grid -Aynet
Concept
Imagine viewing the market as a dynamic grid where price, time, and momentum intersect to reveal infinite possibilities. This indicator leverages:
Grid-Based Market Flow: Visualizes price action as a grid with zones for:
Accumulation
Distribution
Breakout Expansion
Volatility Compression
Predictive Dynamic Layers:
Forecasts future price zones using historical volatility and momentum.
Tracks event probabilities like breakout, fakeout, and trend reversals.
Data Science Visuals:
Uses heatmap-style layers, moving waveforms, and price trajectory paths.
Interactive Alerts:
Real-time alerts for high-probability market events.
Marks critical zones for "buy," "sell," or "wait."
Key Features
Market Layers Grid:
Creates dynamic "boxes" around price using fractals and ATR-based volatility.
These boxes show potential future price zones and probabilities.
Volatility and Momentum Waves:
Overlay volatility oscillators and momentum bands for directional context.
Dynamic Heatmap Zones:
Colors the chart dynamically based on breakout probabilities and risk.
Price Path Prediction:
Tracks price trajectory as a moving "wave" across the grid.
How It Works
Grid Box Structure:
Upper and lower price levels are based on ATR (volatility) and plotted dynamically.
Dashed green/red lines show the grid for potential price expansion zones.
Heatmap Zones:
Colors the background based on probabilities:
Green: High breakout probability.
Blue: High consolidation probability.
Price Path Prediction:
Forecasts future price movements using momentum.
Plots these as a dynamic "wave" on the chart.
Momentum and Volatility Waves:
Shows the relationship between momentum and volatility as oscillating waves.
Helps identify when momentum exceeds volatility (potential breakouts).
Buy/Sell Signals:
Triggers when price approaches grid edges with strong momentum.
Provides alerts and visual markers.
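A reduced Pine v5 sketch of the ATR-based grid levels and the momentum-based path projection is given below. The probability heatmap, fractal logic and signal rules of the full script are omitted, and the projection formula is an assumption for illustration only.
//@version=5
indicator("Grid Box Sketch", overlay=true)
boxLen   = input.int(20, "Box length")
atrMult  = input.float(1.5, "ATR multiplier")
atrVal   = ta.atr(14)
upper    = ta.highest(close, boxLen) + atrVal * atrMult   // upper expansion zone of the grid box
lower    = ta.lowest(close, boxLen) - atrVal * atrMult    // lower expansion zone of the grid box
mom      = ta.mom(close, boxLen)
forecast = close + mom / boxLen * 5                        // naive momentum projection, 5 bars ahead
plot(upper, "Grid upper", color=color.green)
plot(lower, "Grid lower", color=color.red)
plot(forecast, "Projected path", color=color.aqua)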
Why Is It Revolutionary?
Grid and Wave Synergy:
Combines structural price zones (grid boxes) with real-time momentum and volatility waves.
Predictive Analytics:
Uses momentum-based forecasting to visualize what’s next, not just what’s happening.
Dynamic Heatmap:
Creates a living map of breakout/consolidation zones in real-time.
Scalable for Any Market:
Works seamlessly with forex, crypto, and stocks by adjusting the ATR multiplier and box length.
This indicator is not just a tool but a framework for understanding market dynamics at a deeper level. Let me know if you'd like to take it even further — for example, adding machine learning-inspired probability models or multi-timeframe analysis! 🚀
EWO Breaking Bands & XTL
Elliott Wave Principle, developed by Ralph Nelson Elliott, proposes that the seemingly chaotic behaviour of the different financial markets isn't actually chaotic. In fact the markets move in predictable, repetitive cycles or waves that can be measured and forecast using Fibonacci numbers. These waves are a result of influence on investors from outside sources, primarily the current psychology of the masses at that given time. Elliott Wave predicts that the price of a traded currency pair will evolve in waves: five impulsive waves and three corrective waves. Impulsive waves give the main direction of the market expansion and the corrective waves move in the opposite direction (corrective wave occurrences and combination corrective wave occurrences are much more frequent than impulsive waves).
The Elliott Wave Oscillator (EWO) helps identify where you are in the 5/3 Elliott Wave count; in particular, the highest/lowest values of the oscillator may indicate a potential bullish/bearish Wave 3. Mathematically expressed, EWO is the difference between a 5-period and a 35-period moving average. In this study, instead of the 35-period, the Fibonacci number 34 is used for the slow moving average, and the formula becomes ewo = sma(HL2, 5) - sma(HL2, 34)
The Elliott Wave Oscillator enables traders to track Elliott Wave counts and divergences. It allows traders to observe when an existing wave ends and when a new one begins. Included with the EWO are the breakout bands that help identify strong impulses.
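A minimal Pine v5 sketch of the oscillator follows. The breakout-band construction here (a fixed fraction of the rolling EWO extremes) is an illustrative assumption, not Tom Joseph's original band formula.
//@version=5
indicator("EWO Sketch")
ewo = ta.sma(hl2, 5) - ta.sma(hl2, 34)               // Elliott Wave Oscillator with Fibonacci 34 slow length
bandLen  = input.int(200, "Band lookback")
bandFrac = input.float(0.8, "Band fraction")
upperBand = ta.highest(ewo, bandLen) * bandFrac      // assumption: strong-impulse line at a fraction of the recent extreme
lowerBand = ta.lowest(ewo, bandLen) * bandFrac
plot(ewo, "EWO", style=plot.style_columns, color=ewo >= 0 ? color.green : color.red)
plot(upperBand, "Upper breakout band", color=color.gray)
plot(lowerBand, "Lower breakout band", color=color.gray)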
The Expert Trend Locator ( XTL ) was developed by Tom Joseph (in his book Applying Technical Analysis) to identify major trends, similar to Elliott Wave 3 type swings.
Blue bars are bullish and indicate a potential upwards impulse.
Red bars are bearish and indicate a potential downwards impulse.
White bars indicate no trend is detected at the moment.
Added "TSI Arrows". The arrows is intended to help the viewer identify potential turning points. The presence of arrows indicates that the TSI indicator is either "curling" up under the signal line, or "curling" down over the signal line. This can help to anticipate reversals, or moves in favor of trend direction.
Right Sided Ricker Moving Average And The Gaussian Derivatives
In general, Gaussian-related indicators are built by using the Gaussian function in one way or another. For example, a Gaussian filter is built by using a truncated Gaussian function as the filter kernel (kernel refers to the set of weights) and has many great properties; note that I say truncated because the Gaussian function is not supposed to be finite. The Gaussian function is represented by a symmetrical bell-shaped curve, but it is parametric, and the user can adjust the position of the peak as well as the width of the curve. An indicator using this parametric approach is the Arnaud Legoux moving average (ALMA), which has a length parameter controlling the filter length, a peak parameter controlling the position of the peak of the Gaussian function, and a width parameter. These parameters can increase/decrease the lag and smoothness of the moving average output.
But what about the derivatives of the Gaussian function? We don't talk much about them, and that's a pity, because they are extremely interesting and have many great properties as well. Therefore, in this post I'll present a low-lag moving average based on a modification of the 2nd-order derivative of the Gaussian function. I believe this post will be informative, and I hope you will enjoy reading it; if you are not a math person you can skip the introduction on Gaussian derivatives and their properties used as filter kernels.
Gaussian Derivatives And The Ricker Wavelet
The notion of a derivative is continuous, so we will stick with the term discrete derivative instead, which simply refers to the rate of change of the function. We have a change function in Pine Script, and we will use it to show an approximation of the Gaussian function's derivatives.
Earlier I used the term 2nd-order derivative; here the derivative order refers to the order of differentiation, that is, the number of times we apply the change function. For example, the 0th-order derivative means no differentiation, the 1st-order derivative means we apply differentiation once, that is change(f), and the 2nd order means we apply differentiation twice, that is change(change(f)). Derivatives based on multiple differentiations are called "higher derivatives". It will be easier to show a graphic:
Here we can see a normal Gaussian function in blue, its scaled 1st-order derivative in orange, and its scaled 2nd-order derivative in green. Note that I say scaled because I used multiplication so that each curve remains visible; otherwise they would have been harder to observe. The number of times a Gaussian function derivative crosses 0 is based on the order of differentiation, that is, 2nd order = the function crossing 0 two times.
Now we can explain what the Ricker wavelet is: the Ricker wavelet is just the normalized 2nd-order derivative of a Gaussian function with inverted sign, and unlike the Gaussian function the only thing you can change is the width parameter. The formula of the Ricker wavelet is shown on en.wikipedia.org, where sigma is the width parameter.
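For reference, the standard normalized form given there, with sigma (σ) as the width parameter, is:
\psi(t) = \frac{2}{\sqrt{3\sigma}\,\pi^{1/4}} \left(1 - \frac{t^2}{\sigma^2}\right) e^{-t^2/(2\sigma^2)}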
The Ricker wavelet has this look :
Because it is shaped like a sombrero, the Ricker wavelet is also called the "Mexican hat wavelet". Now, what would happen if we used a Ricker wavelet as a filter kernel? The answer is that we would end up with a bandpass filter; in fact, the derivatives of the Gaussian function all give the kernel of a bandpass filter, with higher-order derivatives making the frequency response of the filter approximate a symmetrical Gaussian function. If I recall correctly, a filter using the first-order derivative of a Gaussian function gives a frequency response that is left-skewed; this skewness is removed when using higher-order derivatives.
The Indicator
I didn't want to make a bandpass filter, as lately I am more interested in low-lag filters, so how can we use the Ricker wavelet to make a low-lag low-pass filter? The answer is to take the right side of the Ricker wavelet; since the values of the wavelet are negative near the border, we know that the filter passband is non-monotonic, that is, the filter will have low lag because frequencies in the passband are amplified.
Taking the right side of the Ricker wavelet only means that t has to be greater than 0 and linearly increasing, which is easy. However, the width parameter can be tricky to use (this was already the case with ALMA), so how can we work with it? First, it can be seen that the value of width needs to be adjusted based on the filter length.
In red width = 14, in green width = 5. We can see that higher values of width give really low weights; when the number of negative weights is too large, the filter can have a negative group delay, thus becoming predictive. This simply means that the overshoots/undershoots will be wild and that a great fit will be impossible.
Here are two moving averages using the previously described kernels; they don't fit the price well at all! In order to fix this we can simply define width as a function of the filter length. Therefore the parameter "Percentage Width" was introduced, which simply sets the width of the Ricker wavelet as p percent of the filter length. Lower values of percent width reduce the lag of the moving average, but let's see precisely how this parameter influences the filter output:
Here the filter length is equal to 100 and the percent width is equal to 60; the fit is quite good. Lower values of percent width will increase overshoots; in fact, the filter becomes predictive once the percent width is equal to or lower than 50.
Here the percent width is equal to 50. Higher values of percent width reduce the overshoots, and a value of 100 returns a filter with no overshoots that is suited to act as a lagging moving average.
Above, percent width is set to 100. In order to make use of the predictive side of the filter, it would be useful to introduce a forecast option; however, this requires finding the best forecast horizon based on length and width, which is no easy task.
Finally, let's estimate a least squares moving average with the proposed moving average; a percent width set to 63 returns a relatively good estimate of the LSMA.
LSMA in green and the proposed moving average in red, with percent width = 63 and both lengths = 100.
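For readers who want to experiment, here is a minimal Pine v5 sketch of a right-sided Ricker-kernel moving average built along these lines; the normalization and input handling of the published script may differ.
//@version=5
indicator("Right-Sided Ricker MA Sketch", overlay=true)
length = input.int(100, "Length")
pw     = input.float(60, "Percentage Width")
ricker_ma(src, len, pctWidth) =>
    sigma = pctWidth / 100 * len
    num = 0.0
    den = 0.0
    for i = 0 to len - 1
        w = (1 - math.pow(i / sigma, 2)) * math.exp(-(i * i) / (2 * sigma * sigma))   // right side of the Ricker wavelet
        num += src[i] * w
        den += w
    num / den
plot(ricker_ma(close, length, pw), "Ricker MA", color=color.red)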
Conclusion
A new low-lag moving average using a right-sided Ricker wavelet as filter kernel has been introduced, and we have also seen some properties of Gaussian derivatives. Lately I have published more moving averages where the user can adjust certain properties of the filter kernel, such as curve width; if you like those moving averages you can check the Parametric Corrective Linear Moving Averages indicator published last month:
I don't exclude working with pure forms of Gaussian derivatives in the future, as I haven't published many oscillators lately.
Thanks for reading!
Aetherium Institutional Market Resonance Engine
Aetherium Institutional Market Resonance Engine (AIMRE)
A Three-Pillar Framework for Decoding Institutional Activity
🎓 THEORETICAL FOUNDATION
The Aetherium Institutional Market Resonance Engine (AIMRE) is a multi-faceted analysis system designed to move beyond conventional indicators and decode the market's underlying structure as dictated by institutional capital flow. Its philosophy is built on a singular premise: significant market moves are preceded by a convergence of context , location , and timing . Aetherium quantifies these three dimensions through a revolutionary three-pillar architecture.
This system is not a simple combination of indicators; it is an integrated engine where each pillar's analysis feeds into a central logic core. A signal is only generated when all three pillars achieve a state of resonance, indicating a high-probability alignment between market organization, key liquidity levels, and cyclical momentum.
⚡ THE THREE-PILLAR ARCHITECTURE
1. 🌌 PILLAR I: THE COHERENCE ENGINE (THE 'CONTEXT')
Purpose: To measure the degree of organization within the market. This pillar answers the question: " Is the market acting with a unified purpose, or is it chaotic and random? "
Conceptual Framework: Institutional campaigns (accumulation or distribution) create a non-random, organized market environment. Retail-driven or directionless markets are characterized by "noise" and chaos. The Coherence Engine acts as a filter to ensure we only engage when institutional players are actively steering the market.
Formulaic Concept:
Coherence = f(Dominance, Synchronization)
Dominance Factor: Calculates the absolute difference between smoothed buying pressure (volume-weighted bullish candles) and smoothed selling pressure (volume-weighted bearish candles), normalized by total pressure. A high value signifies a clear winner between buyers and sellers.
Synchronization Factor: Measures the correlation between the streams of buying and selling pressure over the analysis window. A high positive correlation indicates synchronized, directional activity, while a negative correlation suggests choppy, conflicting action.
The final Coherence score (0-100) represents the percentage of market organization. A high score is a prerequisite for any signal, filtering out unpredictable market conditions.
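The Pine v5 sketch below illustrates one plausible way to compute such a dominance/synchronization blend; the pressure definitions and the final fusion formula are illustrative assumptions, not the script's actual internals.
//@version=5
indicator("Coherence Sketch")
win = input.int(21, "Coherence Analysis Window")
buyPress  = ta.ema(close > open ? volume : 0, win)      // smoothed volume-weighted bullish pressure
sellPress = ta.ema(close < open ? volume : 0, win)      // smoothed volume-weighted bearish pressure
total     = buyPress + sellPress
dominance = total > 0 ? math.abs(buyPress - sellPress) / total : 0.0
sync      = ta.correlation(buyPress, sellPress, win)    // synchronization of the two pressure streams
coherence = 100 * dominance * (0.5 + 0.5 * nz(sync))    // assumed fusion into a 0-100 score
plot(coherence, "Coherence %", color=color.aqua)
hline(70, "Activation level")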
2. 💎 PILLAR II: HARMONIC LIQUIDITY MATRIX (THE 'LOCATION')
Purpose: To identify and map high-impact institutional footprints. This pillar answers the question: " Where have institutions previously committed significant capital? "
Conceptual Framework: Large institutional orders leave indelible marks on the market in the form of anomalous volume spikes at specific price levels. These are not random occurrences but are areas of intense historical interest. The Harmonic Liquidity Matrix finds these footprints and consolidates them into actionable support and resistance zones called "Harmonic Nodes."
Algorithmic Process:
Footprint Identification: The engine scans the historical lookback period for candles where volume > average_volume * Institutional_Volume_Filter. This identifies statistically significant volume events.
Node Creation: A raw node is created at the mean price of the identified candle.
Dynamic Clustering: The engine uses an ATR-based proximity algorithm. If a new footprint is identified within Node_Clustering_Distance (ATR) of an existing Harmonic Node, it is merged. The node's price is volume-weighted, and its magnitude is increased. This prevents chart clutter and consolidates nearby institutional orders into a single, more significant level.
Node Decay: Nodes that are older than the Institutional_Liquidity_Scanback period are automatically removed from the chart, ensuring the analysis remains relevant to recent market dynamics.
3. 🌊 PILLAR III: CYCLICAL RESONANCE MATRIX (THE 'TIMING')
Purpose: To identify the market's dominant rhythm and its current phase. This pillar answers the question: " Is the market's immediate energy flowing up or down? "
Conceptual Framework: Markets move in waves and cycles of varying lengths. Trading in harmony with the current cyclical phase dramatically increases the probability of success. Aetherium employs a simplified wavelet analysis concept to decompose price action into short, medium, and long-term cycles.
Algorithmic Process:
Cycle Decomposition: The engine calculates three oscillators based on the difference between pairs of Exponential Moving Averages (e.g., EMA8-EMA13 for short cycle, EMA21-EMA34 for medium cycle).
Energy Measurement: The 'energy' of each cycle is determined by its recent volatility (standard deviation). The cycle with the highest energy is designated as the "Dominant Cycle."
Phase Analysis: The engine determines if the dominant cycles are in a bullish phase (rising from a trough) or a bearish phase (falling from a peak).
Cycle Sync: The highest conviction timing signals occur when multiple cycles (e.g., short and medium) are synchronized in the same direction, indicating broad-based momentum.
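A minimal Pine v5 sketch of this decomposition and energy comparison is shown below; the EMA pairs follow the examples given above, while the phase test is a simplified assumption.
//@version=5
indicator("Cycle Resonance Sketch")
resLen = input.int(50, "Cycle Resonance Analysis")
shortCycle = ta.ema(close, 8) - ta.ema(close, 13)     // short-cycle oscillator
midCycle   = ta.ema(close, 21) - ta.ema(close, 34)    // medium-cycle oscillator
shortEnergy = ta.stdev(shortCycle, resLen)            // cycle "energy" = recent volatility
midEnergy   = ta.stdev(midCycle, resLen)
dominant    = shortEnergy > midEnergy ? shortCycle : midCycle
bullishPhase = dominant > dominant[1]                 // rising from a trough = bullish phase
bgcolor(bullishPhase ? color.new(color.green, 90) : color.new(color.red, 90))
plot(shortCycle, "Short cycle", color=color.aqua)
plot(midCycle, "Mid cycle", color=color.purple)
plot(dominant, "Dominant cycle", color=color.yellow, linewidth=2)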
🔧 COMPREHENSIVE INPUT SYSTEM
Pillar I: Market Coherence Engine
Coherence Analysis Window (10-50, Default: 21): The lookback period for the Coherence Engine.
Lower Values (10-15): Highly responsive to rapid shifts in market control. Ideal for scalping but can be sensitive to noise.
Balanced (20-30): Excellent for day trading, capturing the ebb and flow of institutional sessions.
Higher Values (35-50): Smoother, more stable reading. Best for swing trading and identifying long-term institutional campaigns.
Coherence Activation Level (50-90%, Default: 70%): The minimum market organization required to enable signal generation.
Strict (80-90%): Only allows signals in extremely clear, powerful trends. Fewer, but potentially higher quality signals.
Standard (65-75%): A robust filter that effectively removes choppy conditions while capturing most valid institutional moves.
Lenient (50-60%): Allows signals in less-organized markets. Can be useful in ranging markets but may increase false signals.
Pillar II: Harmonic Liquidity Matrix
Institutional Liquidity Scanback (100-400, Default: 200): How far back the engine looks for institutional footprints.
Short (100-150): Focuses on recent institutional activity, providing highly relevant, immediate levels.
Long (300-400): Identifies major, long-term structural levels. These nodes are often extremely powerful but may be less frequent.
Institutional Volume Filter (1.3-3.0, Default: 1.8): The multiplier for detecting a volume spike.
High (2.5-3.0): Only registers climactic, undeniable institutional volume. Fewer, but more significant nodes.
Low (1.3-1.7): More sensitive, identifying smaller but still relevant institutional interest.
Node Clustering Distance (0.2-0.8 ATR, Default: 0.4): The ATR-based distance for merging nearby nodes.
High (0.6-0.8): Creates wider, more consolidated zones of liquidity.
Low (0.2-0.3): Creates more numerous, precise, and distinct levels.
Pillar III: Cyclical Resonance Matrix
Cycle Resonance Analysis (30-100, Default: 50): The lookback for determining cycle energy and dominance.
Short (30-40): Tunes the engine to faster, shorter-term market rhythms. Best for scalping.
Long (70-100): Aligns the timing component with the larger primary trend. Best for swing trading.
Institutional Signal Architecture
Signal Quality Mode (Professional, Elite, Supreme): Controls the strictness of the three-pillar confluence.
Professional: Loosest setting. May generate signals if two of the three pillars are in strong alignment. Increases signal frequency.
Elite: Balanced setting. Requires a clear, unambiguous resonance of all three pillars. The recommended default.
Supreme: Most stringent. Requires perfect alignment of all three pillars, with each pillar exhibiting exceptionally strong readings (e.g., coherence > 85%). The highest conviction signals.
Signal Spacing Control (5-25, Default: 10): The minimum bars between signals to prevent clutter and redundant alerts.
🎨 ADVANCED VISUAL SYSTEM
The visual architecture of Aetherium is designed not merely for aesthetics, but to provide an intuitive, at-a-glance understanding of the complex data being processed.
Harmonic Liquidity Nodes: The core visual element. Displayed as multi-layered, semi-transparent horizontal boxes.
Magnitude Visualization: The height and opacity of a node's "glow" are proportional to its volume magnitude. More significant nodes appear brighter and larger, instantly drawing the eye to key levels.
Color Coding: Standard nodes are blue/purple, while exceptionally high-magnitude nodes are highlighted in an accent color to denote critical importance.
🌌 Quantum Resonance Field: A dynamic background gradient that visualizes the overall market environment.
Color: Shifts from cool blues/purples (low coherence) to energetic greens/cyans (high coherence and organization), providing instant context.
Intensity: The brightness and opacity of the field are influenced by total market energy (a composite of coherence, momentum, and volume), making powerful market states visually apparent.
💎 Crystalline Lattice Matrix: A geometric web of lines projected from a central moving average.
Mathematical Basis: Levels are projected using multiples of the Golden Ratio (Phi ≈ 1.618) and the ATR. This visualizes the natural harmonic and fractal structure of the market. It is not arbitrary but is based on mathematical principles of market geometry.
🧠 Synaptic Flow Network: A dynamic particle system visualizing the engine's "thought process."
Node Density & Activation: The number of particles and their brightness/color are tied directly to the Market Coherence score. In high-coherence states, the network becomes a dense, bright, and organized web. In chaotic states, it becomes sparse and dim.
⚡ Institutional Energy Waves: Flowing sine waves that visualize market volatility and rhythm.
Amplitude & Speed: The height and speed of the waves are directly influenced by the ATR and volume, providing a feel for market energy.
📊 INSTITUTIONAL CONTROL MATRIX (DASHBOARD)
The dashboard is the central command console, providing a real-time, quantitative summary of each pillar's status.
Header: Displays the script title and version.
Coherence Engine Section:
State: Displays a qualitative assessment of market organization: ◉ PHASE LOCK (High Coherence), ◎ ORGANIZING (Moderate Coherence), or ○ CHAOTIC (Low Coherence). Color-coded for immediate recognition.
Power: Shows the precise Coherence percentage and a directional arrow (↗ or ↘) indicating if organization is increasing or decreasing.
Liquidity Matrix Section:
Nodes: Displays the total number of active Harmonic Liquidity Nodes currently being tracked.
Target: Shows the price level of the nearest significant Harmonic Node to the current price, representing the most immediate institutional level of interest.
Cycle Matrix Section:
Cycle: Identifies the currently dominant market cycle (e.g., "MID ") based on cycle energy.
Sync: Indicates the alignment of the cyclical forces: ▲ BULLISH , ▼ BEARISH , or ◆ DIVERGENT . This is the core timing confirmation.
Signal Status Section:
A unified status bar that provides the final verdict of the engine. It will display "QUANTUM SCAN" during neutral periods, or announce the tier and direction of an active signal (e.g., "◉ TIER 1 BUY ◉" ), highlighted with the appropriate color.
🎯 SIGNAL GENERATION LOGIC
Aetherium's signal logic is built on the principle of strict, non-negotiable confluence.
Condition 1: Context (Coherence Filter): The Market Coherence must be above the Coherence Activation Level. No signals can be generated in a chaotic market.
Condition 2: Location (Liquidity Node Interaction): Price must be actively interacting with a significant Harmonic Liquidity Node.
For a Buy Signal: Price must be rejecting the Node from below (testing it as support).
For a Sell Signal: Price must be rejecting the Node from above (testing it as resistance).
Condition 3: Timing (Cycle Alignment): The Cyclical Resonance Matrix must confirm that the dominant cycles are synchronized with the intended trade direction.
Signal Tiering: The Signal Quality Mode input determines how strictly these three conditions must be met. 'Supreme' mode, for example, might require not only that the conditions are met, but that the Market Coherence is exceptionally high and the interaction with the Node is accompanied by a significant volume spike.
Signal Spacing: A final filter ensures that signals are spaced by a minimum number of bars, preventing over-alerting in a single move.
🚀 ADVANCED TRADING STRATEGIES
The Primary Confluence Strategy: The intended use of the system. Wait for a Tier 1 (Elite/Supreme) or Tier 2 (Professional/Elite) signal to appear on the chart. This represents the alignment of all three pillars. Enter after the signal bar closes, with a stop-loss placed logically on the other side of the Harmonic Node that triggered the signal.
The Coherence Context Strategy: Use the Coherence Engine as a standalone market filter. When Coherence is high (>70%), favor trend-following strategies. When Coherence is low (<50%), avoid new directional trades or favor range-bound strategies. A sharp drop in Coherence during a trend can be an early warning of a trend's exhaustion.
Node-to-Node Trading: In a high-coherence environment, use the Harmonic Liquidity Nodes as both entry points and profit targets. For example, after a BUY signal is generated at one Node, the next Node above it becomes a logical first profit target.
⚖️ RESPONSIBLE USAGE AND LIMITATIONS
Decision Support, Not a Crystal Ball: Aetherium is an advanced decision-support tool. It is designed to identify high-probability conditions based on a model of institutional behavior. It does not predict the future.
Risk Management is Paramount: No indicator can replace a sound risk management plan. Always use appropriate position sizing and stop-losses. The signals provided are probabilistic, not certainties.
Past Performance Disclaimer: The market models used in this script are based on historical data. While robust, there is no guarantee that these patterns will persist in the future. Market conditions can and do change.
Not a "Set and Forget" System: The indicator performs best when its user understands the concepts behind the three pillars. Use the dashboard and visual cues to build a comprehensive view of the market before acting on a signal.
Backtesting is Essential: Before applying this tool to live trading, it is crucial to backtest and forward-test it on your preferred instruments and timeframes to understand its unique behavior and characteristics.
🔮 CONCLUSION
The Aetherium Institutional Market Resonance Engine represents a paradigm shift from single-variable analysis to a holistic, multi-pillar framework. By quantifying the abstract concepts of market context, location, and timing into a unified, logical system, it provides traders with an unprecedented lens into the mechanics of institutional market operations.
It is not merely an indicator, but a complete analytical engine designed to foster a deeper understanding of market dynamics. By focusing on the core principles of institutional order flow, Aetherium empowers traders to filter out market noise, identify key structural levels, and time their entries in harmony with the market's underlying rhythm.
"In all chaos there is a cosmos, in all disorder a secret order." - Carl Jung
— Dskyz, Trade with insight. Trade with confluence. Trade with Aetherium.
Ichimoku Kinkō Hyō
The Ichimoku Kinko Hyo is a trading system developed by the late Goichi Hosoda (pen name "Ichimokusanjin") when he was the general manager of the business conditions department of Miyako Shinbun, the predecessor of the Tokyo Shimbun. Currently, it is a registered trademark of Economic Fluctuation Research Institute Co., Ltd., which is run by Hosoda's bereaved family as a private research institute.
The Ichimoku Kinko Hyo is composed of time theory, price range theory (target price theory) and wave movement theory. Ichimoku means "at one glance". The equilibrium table is famous for its spans, but the first element of the equilibrium table is the time relationship.
In the theory of time, a change date is expected after a number of periods drawn from the basic numerical values such as 9, 17 and 26, from the equal numerical values taken from the lengths of past wave motions, and from the habitual numerical values that appear for each instrument. The underlying idea is that the market moves in the direction in which the equilibrium between buying and selling breaks down. Another feature is that time is emphasized in order to estimate when changes will occur.
In the price range theory, there are the E, V, N and NT calculated values, as well as multiple values such as 4E to 8E, as target values. In addition, in order to determine the momentum and direction of the market, other price ranges and yin and yang numbers are considered.
If the calculated value is realized on the change date calculated by each numerical value, the market price is likely to reverse.
転換線 (Tenkansen) (Conversion Line) = (highest price in the past 9 periods + lowest price) ÷ 2
基準線 (Kijunsen) (Base Line) = (highest price in the past 26 periods + lowest price) ÷ 2
It represents Support/Resistance for 16 bars. It is a 50% Fibonacci Retracement. The Kijun sen is known as the "container" of the trend. It is perfect to use as an initial stop and/or trailing stop.
先行スパン1 (Senkou span 1) (Leading Span 1) = {(conversion line value + base line value) ÷ 2}, plotted 25 periods ahead (that is, 26 periods ahead including the current day)
先行スパン2 (Senkou span 2) (Leading Span 2) = {(highest price in the past 52 periods + lowest price) ÷ 2}, plotted 25 periods ahead (that is, 26 periods ahead including the current day)
遅行スパン (Chikou span) (Lagging Span) = the current candle's closing price, plotted 25 periods back (that is, 26 periods back including the current day)
It is the only Ichimoku indicator that uses the closing price. It is used for momentum of the trend.
The area surrounded by the two leading span lines is called the cloud. This is the foundation of the system. It determines the sentiment (Bull/Bear) for the instrument. If price is above the cloud, the instrument is bullish. If price is below the cloud, the instrument is bearish.
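These component definitions translate directly into Pine v5; the generic sketch below uses the 25-bar plotting offset described above and is not the full original script.
//@version=5
indicator("Ichimoku Sketch", overlay=true)
donchian(len) => math.avg(ta.highest(len), ta.lowest(len))
tenkan  = donchian(9)                 // 転換線: (9-period highest + lowest) / 2
kijun   = donchian(26)                // 基準線: (26-period highest + lowest) / 2
senkou1 = math.avg(tenkan, kijun)     // 先行スパン1, plotted 25 bars ahead
senkou2 = donchian(52)                // 先行スパン2, plotted 25 bars ahead
plot(tenkan, "Tenkan-sen", color=color.blue)
plot(kijun, "Kijun-sen", color=color.red)
p1 = plot(senkou1, "Senkou Span 1", offset=25, color=color.green)
p2 = plot(senkou2, "Senkou Span 2", offset=25, color=color.maroon)
plot(close, "Chikou Span", offset=-25, color=color.purple)
fill(p1, p2, color=color.new(color.green, 85))        // the cloud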
-
The wave theory of the Ichimoku Kinko Hyo has the following waves.
These are described for a rising market; for a falling market, the opposite applies.
I wave: a single rising move.
V wave: a rise followed by a fall.
N wave: a rise, a fall, and another rise.
P wave: the highs fall and the lows rise as time passes (a contracting range), until price eventually leaves it in one direction or the other.
Y wave: the highs rise and the lows fall as time passes (an expanding range), until price eventually leaves it in one direction or the other.
S wave: a market that pulls back, finds support at the level of the previous high, and rises again.
There are the above 6 types, but the basis of the Ichimoku Kinko Hyo is the 3-wave N wave.
In Elliott wave theory and similar theories there are basically 5 waves, but 5 waves can be read as a series of N waves: 2 N waves for 5 waves, 3 for 7 waves, 4 for 9 waves, and so on.
However long the sequence continues, it is still based on the N wave. In addition, since price eventually leaves the P wave and the Y wave, they too can be seen as N waves from a larger perspective.
-
There are the basic E, V, N and NT calculated values and several other calculation methods in the Ichimoku Kinko Hyo. They are the only part of this otherwise hard-to-pin-down system that gives a concrete value, but since they focus only on the price difference and do not consider supply and demand, relying on the calculated values alone is discouraged.
(The calculation methods for the following five calculated values assume a rising market that rises from low price A to high price B and falls from high price B to low price C. Therefore, low price C is higher than low price A.)
E calculated value: the amount of the rise from low A to high B is added to high B. = B + (B - A)
V calculated value: the amount of the decline from high B to low C is added to high B. = B + (B - C)
N calculated value: the amount of the rise from low A to high B is added to low C. = C + (B - A)
NT calculated value: the amount of the rise from low A to low C is added to low C. = C + (C - A)
4E calculated value (four-layer double / quadruple value): three times the amount of the rise from low A to high B is added to high B. = B + 3 × (B - A) (a short worked example follows this list)
Calculated value of the P wave: the upper prices are adjusted down and the lower prices are adjusted up over time, so that both price ranges are the same.
Calculated value of the Y wave: the upper prices are adjusted up and the lower prices are adjusted down over time, so that both price ranges are the same.
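As a quick numeric illustration of the E, V, N, NT and 4E formulas above, take hypothetical prices A = 100, B = 120 and C = 110: E = 120 + (120 - 100) = 140; V = 120 + (120 - 110) = 130; N = 110 + (120 - 100) = 130; NT = 110 + (110 - 100) = 120; 4E = 120 + 3 × (120 - 100) = 180.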
[blackcat] L2 Ehlers Even Better Sinwave
Level: 2
Background
John F. Ehlers introduced the Even Better Sinewave Indicator in Chapter 12 of his 2013 book "Cycle Analytics for Traders".
Function
The original Sinewave Indicator was created by seeking the dominant cycle phase angle that had the best correlation between the price data and a theoretical dominant cycle sine wave. The Even Better Sinewave Indicator skips all the cycle measurements completely and relies on a strong normalization of the waveform at the output of a modified roofing filter. The modified roofing filter uses a single-pole high-pass filter to deliberately retain the longer-period trend components. The single-pole high-pass filter basically levels the amplitude of all the cycle components that would otherwise be larger with longer wavelengths due to Spectral Dilation. Therefore, when the waveform is normalized to the power in the waveform over a short period of time, the longer wavelength contributions tend to be an indication to stay in a trade when the market is in a trend.
The Even Better Sinewave Indicator works extraordinarily well when the market is in a trend mode. This means that the spectacular failures of most swing wave indicators are mitigated when the expected price turning point does not occur.
Although Dr. Ehlers admitted he had not studied it extensively, it appears that the Even Better Sinewave Indicator works well on futures intraday data. It takes a position in the correct direction and tends to stay with the good trades without excessive whipsawing.
Key Signal
Wave --> Even Better sinwave Indicator fast line
Trigger --> Even Better sinwave Indicator slow line
Pros and Cons
100% translation of John F. Ehlers' original definition; even the variable names are the same. This helps readers who would like to use Pine to follow his book. If you have read his works, you will be quite familiar with my code style.
Remarks
The 55th script for Blackcat1402 John F. Ehlers Week publication.
Readme
In real life, I am a prolific inventor. I have successfully applied for more than 60 international and regional patents in the past 12 years. But in the past two years or so, I have tried to transfer my creativity to the development of trading strategies. Tradingview is the ideal platform for me. I am selecting and contributing some of the hundreds of scripts to publish in Tradingview community. Welcome everyone to interact with me to discuss these interesting pine scripts.
The scripts posted are categorized into 5 levels according to my efforts or manhours put into these works.
Level 1 : interesting script snippets or distinctive improvement from classic indicators or strategy. Level 1 scripts can usually appear in more complex indicators as a function module or element.
Level 2 : composite indicator/strategy. By selecting or combining several independent or dependent functions or sub indicators in proper way, the composite script exhibits a resonance phenomenon which can filter out noise or fake trading signal to enhance trading confidence level.
Level 3 : comprehensive indicator/strategy. They are simple trading systems based on my strategies. They commonly contain several or all of: entry signal, close signal, stop loss, take profit, re-entry, risk management, and position sizing techniques. Even some interesting fundamental and mass psychological aspects are incorporated.
Level 4 : script snippets or functions that do not disclose source code. Interesting element that can reveal market laws and work as raw material for indicators and strategies. If you find Level 1~2 scripts are helpful, Level 4 is a private version that took me far more efforts to develop.
Level 5 : indicator/strategy that does not disclose source code. A private version of a Level 3 script with my accumulated script processing skills or a large number of custom functions. I have built a private function library over the past two years; Level 5 scripts use many of those functions to implement private trading strategies.
Ord Volume [LucF]
Tim Ord came up with the Ord Volume concept. The idea is similar to Weis Wave, except that where Weis Wave keeps a cumulative tab of each wave's successive volume columns, Ord Volume tracks the wave's average volume.
Features
You can choose to distinguish the area’s colors when the average is rising/falling (default).
You can show an EMA of the wave averages, which is different than an EMA on raw volume.
You can show (default) the last wave’s ending average over the current wave, to help in comparing relative levels.
You can change the length of the trend that needs to be broken for a new wave to start, as well as the price used in trend detection.
Use Cases
As with Weis Wave, what I look at first are three characteristics of the waves: their length, height and slope. I then compare those to the corresponding price movements, looking for discrepancies. For example, consecutive bearish waves of equal strength associated with lesser and lesser price movements are often a good indication of an impending reversal.
Because Ord Volume uses average rather than cumulative volume, I find it is often easier to distinguish what is going on during waves, especially exhaustion at the end of waves.
Tim Ord has a method for entries and exits where he uses Ord Volume in conjunction with tests of support and resistance levels. Here are two articles published in 2004 where Ord explains his technique:
pr.b5z.net
n.b5z.net
Note
Being dependent on volume information as it is currently available in Pine, which does not include a practical way to retrieve delta volume information, the indicator suffers the same lack of precision as most other Pine-built volume indicators. For those not aware of the issue, the problem is that there is no way to distinguish the buying and selling volume (delta volume) in a bar, other than by looping through inside intervals using the security() function, which for me makes performance unsustainable in day to day use, while only providing an approximation of delta volume.
Aethix Cipher Divergences
Aethix Cipher Divergences v6
Core Hook: Custom indicator inspired by VuManChu B, Grok-enhanced for crypto intel—blends WaveTrend (WT) oscillator with multi-divergences for buy/sell circles (green/teal buys #00FFFF, red sells) and dots (divs, gold overbought alerts).
Key Features:
WaveTrend Waves: Dual waves (teal WT1, darker teal WT2) with VWAP (purple for neon vibe), overbought/oversold lines, crosses for signals.
Divergences: Regular/hidden for WT, RSI, Stoch—red bearish, green bullish dots; extra range for deeper insights.
RSI + MFI Area: Colored area (green positive, red negative) for sentiment/volume flow.
Stochastic RSI: K/D lines with fill for overbought/oversold trends.
Schaff Trend Cycle: Purple line for cycle smoothing.
Sommi Patterns: Flags (pink bearish, blue bullish) and diamonds for HTF patterns, purple higher VWAP.
MACD Colors on WT: Dynamic WT shading based on MACD for enhanced reads.
Information-Geometric Market Dynamics
The Information Field: A Geometric Approach to Market Dynamics
By: DskyzInvestments
Foreword: Beyond the Shadows on the Wall
If you have traded for any length of time, you know " the feeling ." It is the frustration of a perfect setup that fails, the whipsaw that stops you out just before the real move, the nagging sense that the chart is telling you only half the story. For decades, technical analysis has relied on interpreting the shadows—the patterns left behind by price. We draw lines on these shadows, apply indicators to them, and hope they reveal the future.
But what if we could stop looking at the shadows and, instead, analyze the object casting them?
This script introduces a new paradigm for market analysis: Information-Geometric Market Dynamics (IGMD) . The core premise of IGMD is that the price chart is merely a one-dimensional projection of a much richer, higher-dimensional reality—an " information field " generated by the collective actions and beliefs of all market participants.
This is not just another collection of indicators. It is a unified framework for measuring the geometry of the market's information field—its memory, its complexity, its uncertainty, its causal flows—and making high-probability decisions based on that deeper reality. By fusing advanced mathematical and informational concepts, IGMD provides a multi-faceted lens through which to view market behavior, moving beyond simple price action into the very structure of market information itself.
Prepare to move beyond the flatland of the price chart. Welcome to the information field.
The IGMD Framework: A Multi-Kernel Approach
What is a Kernel? The Heart of Transformation
In mathematics and data science, a kernel is a powerful and elegant concept. At its core, a kernel is a function that takes complex, often inscrutable data and transforms it into a more useful format. Think of it as a specialized lens or a mathematical "probe." You cannot directly measure abstract concepts like "market memory" or "trend quality" by looking at a price number. First, you must process the raw price data through a specific mathematical machine—a kernel—that is designed to output a measurement of that specific property. Kernels operate by performing a sort of "similarity test," projecting data into a higher-dimensional space where hidden patterns and relationships become visible and measurable.
Why do creators use them? We use kernels to extract features —meaningful pieces of information—that are not explicitly present in the raw data. They are the essential tools for moving beyond surface-level analysis into the very DNA of market behavior. A simple moving average can tell you the average price; a suite of well-chosen kernels can tell you about the character of the price action itself.
The Alchemist's Challenge: The Art of Fusion
Using a single kernel is a challenge. Using five distinct, computationally demanding mathematical engines in unison is an immense undertaking. The true difficulty—and artistry—lies not just in using one kernel, but in fusing the outputs of many . Each kernel provides a different perspective, and they can often give conflicting signals. One kernel might detect a strong trend, while another signals rising chaos and uncertainty. The IGMD script's greatest strength is its ability to act as this alchemist, synthesizing these disparate viewpoints through a weighted fusion process to produce a single, coherent picture of the market's state. It required countless hours of testing and calibration to balance the influence of these five distinct analytical engines so they work in harmony rather than cacophony.
The Five Kernels of Market Dynamics
The IGMD script is built upon a foundation of five distinct kernels, each chosen to probe a unique and critical dimension of the market's information field.
1. The Wavelet Kernel (The "Microscope")
What it is: The Wavelet Kernel is a signal processing function designed to decompose a signal into different frequency scales. Unlike a Fourier Transform that analyzes the entire signal at once, the wavelet slides across the data, providing information about both what frequencies are present and when they occurred.
The Kernels I Use:
Haar Kernel: The simplest wavelet, a square-wave shape defined by just two coefficients, [1, 1] for the smoothing (approximation) filter and [1, -1] for the detail filter (up to normalization). It excels at detecting sharp, sudden changes.
Daubechies 2 (db2) Kernel: A more complex and smoother wavelet shape that provides a better balance for analyzing the nuanced ebb and flow of typical market trends.
How it Works in the Script: This kernel is applied iteratively. It first separates the finest "noise" (detail d1) from the first level of trend (approximation a1). It then takes the trend a1 and repeats the process, extracting the next level of cycle (d2) and trend (a2), and so on. This hierarchical decomposition allows us to separate short-term noise from the long-term market "thesis."
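For readers who want to see the mechanics, here is a minimal sketch of a single Haar split in Pine v5. It is not the script's multi-scale db2 engine; the pair count and the use of close are assumptions for illustration, and a full implementation would repeat the split on a1 to obtain d2/a2 and so on.
```
//@version=5
indicator("One-level Haar split (illustrative sketch)")
pairs = input.int(16, "Pairs of bars", minval=2)

// a1 holds pairwise averages (the first "trend" approximation),
// d1 holds pairwise half-differences (the finest "noise" detail)
a1 = array.new_float()
d1 = array.new_float()
for i = 0 to pairs - 1
    older = close[2 * i + 1]
    newer = close[2 * i]
    array.unshift(a1, (older + newer) / 2)
    array.unshift(d1, (newer - older) / 2)

// plot the most recent approximation and detail; a full engine would now
// repeat the same split on a1 to extract d2/a2, d3/a3, ...
plot(array.get(a1, array.size(a1) - 1), "a1 (trend)", color.teal)
plot(array.get(d1, array.size(d1) - 1), "d1 (noise)", color.orange)
```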
2. The Hurst Exponent Kernel (The "Memory Gauge")
What it is: The Hurst Exponent is derived from a statistical analysis kernel that measures the "long-term memory" or persistence of a time series. It is the definitive measure of whether a series is trending (H > 0.5), mean-reverting (H < 0.5), or random (H = 0.5).
How it Works in the Script: The script employs a method based on Rescaled Range (R/S) analysis. It calculates the average range of price movements over increasingly larger time lags (m1, m2, m4, m8...). The slope of the line plotting log(range) vs. log(lag) is the Hurst Exponent. Applying this complex statistical analysis not to the raw price, but to the clean, wavelet-decomposed trend lines, is a key innovation of IGMD.
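The script's R/S procedure on the wavelet-decomposed trends is more involved than can be shown in a few lines, but the scaling-law idea behind it can be illustrated with a toy estimator: if the average absolute price change grows like lag^H, then comparing two lags recovers H. The window, the two lags (1 and 8) and the use of raw close below are assumptions for illustration only, not the script's method.
```
//@version=5
indicator("Scaling-exponent sketch (toy Hurst estimate)")
n = input.int(64, "Window", minval=16)

// average absolute price change at a given lag, over the last n bars
avgAbsDiff(lag) =>
    s = 0.0
    for i = 0 to n - 1
        s += math.abs(close[i] - close[i + lag])
    s / n

// if E|x(t) - x(t - lag)| grows like lag^H, the ratio of two lags recovers H
m1 = avgAbsDiff(1)
m8 = avgAbsDiff(8)
h = m1 > 0 ? math.log(m8 / m1) / math.log(8) : na
plot(h, "H estimate")
hline(0.5, "Random walk")
```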
3. The Fractal Dimension Kernel (The "Complexity Compass")
What it is: This kernel measures the geometric complexity or "jaggedness" of a price path, based on the principles of fractal geometry. A straight line has a dimension of 1; a chaotic, space-filling line approaches a dimension of 2.
How it Works in the Script: We use a version based on Ehlers' Fractal Dimension Index (FDI). It calculates the rate of price change over a full lookback period (N3) and compares it to the sum of the rates of change over the two halves of that period (N1 + N2). The formula d = (log(N1 + N2) - log(N3)) / log(2) quantifies how much "longer" and more convoluted the price path was than a simple straight line. This kernel is our primary filter for tradeable (low complexity) vs. untradeable (high complexity) conditions.
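Because the paragraph above spells out the formula, a compact Pine v5 sketch is straightforward; the lookback of 32 bars and the use of close are placeholder assumptions, not the script's settings.
```
//@version=5
indicator("Fractal Dimension Index sketch")
n = input.int(32, "Lookback (even number of bars)", minval=8)
half = int(n / 2)

// average range per bar over each half of the window and over the full window
n1 = (ta.highest(close, half) - ta.lowest(close, half)) / half
n2 = (ta.highest(close[half], half) - ta.lowest(close[half], half)) / half
n3 = (ta.highest(close, n) - ta.lowest(close, n)) / n

// d = (log(N1 + N2) - log(N3)) / log(2); ~1 = smooth trend, ~2 = jagged chop
fdi = n1 + n2 > 0 and n3 > 0 ? (math.log(n1 + n2) - math.log(n3)) / math.log(2) : na
plot(fdi, "FDI")
hline(1.5)
```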
4. The Shannon Entropy Kernel (The "Uncertainty Meter")
What it is: This kernel comes from Information Theory and provides the purest mathematical measure of information, surprise, or uncertainty within a system. It is not a measure of volatility; a market moving predictably up by 10 points every bar has high volatility but zero entropy .
How it Works in the Script: The script normalizes price returns by the ATR, categorizes them into a discrete number of "bins" over a lookback window, and forms a probability distribution. The Shannon Entropy H = -Σ(p_i * log(p_i)) is calculated from this distribution. A low H means returns are predictable. A high H means returns are chaotic. This kernel is our ultimate gauge of market conviction.
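Here is a hedged sketch of the same recipe (normalize by ATR, bin the returns, sum -p*log(p)); the ATR length, window and bin count are placeholder values, and the real script's binning details may differ.
```
//@version=5
indicator("Shannon entropy of ATR-normalized returns (sketch)")
win  = input.int(64, "Window", minval=10)
bins = input.int(10, "Bins", minval=2)

atrVal = ta.atr(14)
r = atrVal > 0 ? (close - close[1]) / atrVal : 0.0   // ATR-normalized return

lo = ta.lowest(r, win)
hi = ta.highest(r, win)

// histogram of the last `win` normalized returns
counts = array.new_int(bins, 0)
for i = 0 to win - 1
    val = nz(r[i], lo)
    b = hi > lo ? int(math.min(bins - 1, math.floor((val - lo) / (hi - lo) * bins))) : 0
    array.set(counts, b, array.get(counts, b) + 1)

// H = -sum(p * log(p)), scaled by log(bins) so 0 = predictable, 1 = chaotic
h = 0.0
for i = 0 to bins - 1
    p = array.get(counts, i) * 1.0 / win
    h += p > 0 ? -p * math.log(p) : 0.0
plot(h / math.log(bins), "Normalized entropy")
```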
5. The Transfer Entropy Kernel (The "Causality Probe")
What it is: This is by far the most advanced and computationally intensive kernel in the script. Transfer Entropy is a non-parametric measure of directed information flow between two time series. It moves beyond correlation to ask: "Does knowing the past of Volume genuinely reduce our uncertainty about the future of Price?"
How it Works in the Script: To make this work, the script discretizes both price returns and the chosen "driver" (e.g., OBV) into three states: "up," "down," or "neutral." It then builds complex conditional probability tables to measure the flow of information in both directions. The Net Transfer Entropy (TE Driver→Price minus TE Price→Driver) gives us a direct measure of causality . A positive score means the driver is leading price, confirming the validity of the move. This is a profound leap beyond traditional indicator analysis.
Chapter 3: Fusion & Interpretation - The Field Score & Dashboard
Each kernel is a specialist providing a piece of the puzzle. The Field Score is where they are fused into a single, comprehensive reading. It's a weighted sum of the normalized scores from all five kernels, producing a single number from -1 (maximum bearish information field) to +1 (maximum bullish information field). This is the ultimate "at-a-glance" metric for the market's net state, and it is interpreted through the dashboard.
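Conceptually the fusion is a weighted average clamped to [-1, +1]. The sketch below uses made-up, already-normalized kernel scores and hypothetical weight inputs purely to show the arithmetic, not the script's actual normalization or defaults.
```
//@version=5
indicator("Field-score fusion sketch")
// hypothetical, already-normalized kernel scores in [-1, +1]
waveScore  = 0.6
hurstScore = 0.4
fdiScore   = -0.2
entScore   = 0.1
teScore    = 0.3
// hypothetical weights (the script exposes these as the Field Fusion inputs)
wWave  = input.float(1.0, "w_wave")
wHurst = input.float(1.0, "w_hurst")
wFdi   = input.float(1.0, "w_fdi")
wEnt   = input.float(1.0, "w_entropy")
wTe    = input.float(1.0, "w_te")

wSum = wWave + wHurst + wFdi + wEnt + wTe
raw  = waveScore * wWave + hurstScore * wHurst + fdiScore * wFdi + entScore * wEnt + teScore * wTe
fieldScore = wSum != 0 ? math.max(-1.0, math.min(1.0, raw / wSum)) : 0.0
plot(fieldScore, "Field Score")
```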
The Dashboard: Your Mission Control
Field Score & Regime: The master metric and its plain-English interpretation ("Uptrend Field", "Downtrend Field", "Transitional").
Kernel Readouts (Wave Align, H(w), FDI, etc.): The live scores of each individual kernel. This allows you to see why the Field Score is what it is. A high Field Score with all components in agreement (all green or red) is a state of High Coherence and represents a high-quality setup.
Market Context: Standard metrics like RSI and Volume for additional confluence.
Signals: The raw and adjusted confluence counts and the final, calculated probability scores for potential long and short entries.
Pattern: Shows the dominant candlestick pattern detected within the currently forming APEX range box and its calculated confidence percentage.
Chapter 4: Mastering the Controls - The Inputs Menu
Every parameter is a lever to fine-tune the IGMD engine.
📊 Wavelet Transform: Kernel ( Haar for sharp moves, db2 for smooth trends) and Scales (depth of analysis) let you tune the script's core microscope to your asset's personality.
📈 Hurst Exponent: The Window determines if you're assessing short-term or long-term market memory.
🔍 Fractal Dimension & ⚡ Entropy Volatility: Adjust the lookback windows to make these kernels more or less sensitive to recent price action. Always keep "Normalize by ATR" enabled for Entropy for consistent results.
🔄 Transfer Entropy: Driver lets you choose what causal force to measure (e.g., OBV, Volume, or even an external symbol like VIX). The throttle setting is a crucial performance tool, allowing you to balance precision with script speed.
⚡ Field Fusion • Weights: This is where you can customize the model's "brain." Increase the weights for the kernels that best align with your trading philosophy (e.g., w_hurst for trend followers, w_fdi for chop avoiders).
📊 Signal Engine: Mode offers presets from Conservative to Aggressive . Min Confluence sets your evidence threshold. Dynamic Confluence is a powerful feature that automatically adapts this threshold to the market regime.
🎨 Visuals & 📏 Support/Resistance: These inputs give you full control over the chart's appearance, allowing you to toggle every visual element for a setup that is as clean or as data-rich as you desire.
Chapter 5: Reading the Battlefield - On-Chart Visuals
Pattern Boxes (The Large Rectangles): These are not simple range boxes. They appear when the Field Score crosses a significance threshold, signaling a potential ignition point.
Color: The color reflects the dominant candlestick pattern that has occurred within that box's duration (e.g., green for Bull Engulf).
Label: Displays the dominant pattern, its duration in bars, and a calculated Confidence % based on field strength and pattern clarity.
Bar Pattern Boxes (The Small Boxes): If enabled, these highlight individual, significant candlestick patterns ( BE for Bull Engulf, H for Hammer) on a bar-by-bar basis.
Signal Markers (▲ and ▼): These appear only when the Signal Engine's criteria are all met. The number is the calculated Probability Score .
RR Rails (Dashed Lines): When a signal appears, these lines automatically plot the Entry, Stop Loss (based on ATR), and two Take Profit targets (based on Risk/Reward ratios). They dynamically break and disappear as price touches each level.
Support & Resistance Lines: Plots of the highest high ( Resistance ) and lowest low ( Support ) over a lookback, providing key structural levels.
Chapter 6: Development Philosophy & A Final Word
IGMD was built to answer one single question: "What is the market really doing?" It represents a triumph of complexity, blending concepts from signal processing, chaos theory, and information theory into a cohesive framework. It is offered for educational and analytical purposes and does not constitute financial advice. Its goal is to elevate your analysis from interpreting flat shadows to measuring the rich, geometric reality of the market's information field.
As the great mathematician Benoit Mandelbrot , father of fractal geometry, noted:
"Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line."
Neither does the market. IGMD is a tool designed to navigate that beautiful, complex, and fractal reality.
— Dskyz, Trade with insight. Trade with anticipation.
[Excalibur] Ehlers AutoCorrelation Periodogram ModifiedKeep your coins folks, I don't need them, don't want them. If you wish to be generous, I do hope that charitable people worldwide with surplus food stocks may consider stocking local food banks before stuffing monetary bank vaults, for the crusade of remedying the needs of less-than-fortunate children, parents, elderly, homeless veterans, and everyone else who deserves nutritional sustenance for the soul.
DEDICATION:
This script is dedicated to the memory of Nikolai Dmitriyevich Kondratiev (Никола́й Дми́триевич Кондра́тьев) as tribute for being a pioneering economist and statistician, paving the way for modern econometrics by advocating rigorous and empirical methodologies. Among his most substantial contributions to the study of business cycle theory is a revolutionary hypothesis recognizing the existence of dynamic cycle-like phenomena inherent to economies that are characterized by distinct phases of expansion, stagnation, recession and recovery, what we now know as "Kondratiev Waves" (K-waves). Kondratiev was one of the first economists to recognize the vital significance of applying quantitative analysis to empirical data to evaluate economic dynamics by means of statistical methods. His understanding was that conceptual models alone were insufficient to adequately interpret real-world economic conditions, and that sophisticated analysis was necessary to better comprehend the nature of trending/cycling economic behaviors. Additionally, he recognized that prosperous economic cycles were predominantly driven by a combination of technological innovations and infrastructure investments that resulted in profound implications for economic growth and development.
I will mention this... nation's economies MUST be supported and defended to continuously evolve incrementally in order to flourish in perpetuity OR suffer through eras with lasting ramifications of societal stagnation and implosion.
Analogous to the realm of economics, aperiodic cycles/frequencies, both enduring and ephemeral, do exist in all facets of life, every second of every day. To name a few that any blind man can naturally see are: heartbeat (cardiac cycles), respiration rates, circadian rhythms of sleep, powerful magnetic solar cycles, seasonal cycles, lunar cycles, weather patterns, vegetative growth cycles, and ocean waves. Do not pretend for one second that these basic aforementioned examples do not affect business cycle fluctuations in minuscule and monumental ways hour to hour, day to day, season to season, year to year, and decade to decade in every nation on the planet. Kondratiev's original seminal theories in macroeconomics from nearly a century ago have proven remarkably prescient with many of his antiquated elementary observations/notions/hypotheses in macroeconomics being scholastically studied and topically researched further. Therefore, I am compelled to honor and recognize his statistical insight and foresight.
If only.. Kondratiev could hold a pocket sized computer in the cup of both hands bearing the TradingView logo and platform services, I truly believe he would be amazed in marvelous delight with a GARGANTUAN smile on his face.
INTRODUCTION:
Firstly, this is NOT technically speaking an indicator like most others. I would describe it as an advanced cycle period detector to obtain market data spectral estimates with low latency and moderate frequency resolution. Developers can take advantage of this detector by creating scripts that utilize a "Dominant Cycle Source" input to adaptively govern algorithms. Be forewarned, I would only recommend this for advanced developers, not novice code dabbling. Although, there is some Pine wizardry introduced here for novice Pine enthusiasts to witness and learn from. AI described the code in one super-crunched sentence as, "a rare feat of exceptionally formatted code masterfully balancing visual clarity, precision, and complexity to provide immense educational value for both programming newcomers and expert Pine coders alike."
Understand all of the above aforementioned? Buckle up and proceed for a lengthy read of verbose complexity...
This is my enhanced and heavily modified version of autocorrelation periodogram (ACP) for Pine Script v5.0. It was originally devised by the mathemagician John Ehlers for detecting dominant cycles (frequencies) in an asset's price action. I have been sitting on code similar to this for a long time, but I decided to unleash the advanced code with my fashion. Originally Ehlers released this with multiple versions, one in a 2016 TASC article and the other in his last published 2013 book "Cycle Analytics for Traders", chapter 8. He wasn't joking about "concepts of advanced technical trading" and ACP is nowhere near to his most intimidating and ingenious calculations in code. I will say the book goes into many finer details about the original periodogram, so if you wish to delve into even more elaborate info regarding Ehlers' original ACP form AND how you may adapt algorithms, you'll have to obtain one. Note to reader, comparing Ehlers' original code to my chimeric code embracing the "Power of Pine", you will notice they have little resemblance.
What you see is a new species of autocorrelation periodogram combining Ehlers' innovation with my fascinations of what ACP could be in a Pine package. One other intention of this script's code is to pay homage to Ehlers' lifelong works. Like Kondratiev, Ehlers is also a hardcore cycle enthusiast. I intend to carry on the fire Ehlers envisioned and I believe that is literally displayed here as a pleasant "fiery" example endowed with Pine. With that said, I tried to make the code as computationally efficient as possible, without going into dozens of more crazy lines of code to speed things up even more. There's also a few creative modifications I made by making alterations to the originating formulas that I felt were improvements, one of them being lag reduction. By recently questioning every single thing I thought I knew about ACP, combined with the accumulation of my current knowledge base, this is the innovative revision I came up with. I could have improved it more but decided not to mind thrash too many TV members, maybe later...
I am now confident Pine should have adequate overhead left over to attach various indicators to the dominant cycle via input.source(). TV, I apologize in advance if in the future a server cluster combusts into a raging inferno... Coders, be fully prepared to build entire algorithms from pure raw code, because not all of the built-in Pine functions fully support dynamic periods (e.g. length=ANYTHING). Many of them do, as this was requested and granted a while ago, but some functions are just inherently finicky due to implementation combinations and MUST be emulated via raw code. I would imagine some comprehensive library or numerous authored scripts have portions of raw code for Pine built-ins some where on TV if you look diligently enough.
Notice: Unfortunately, I will not provide any integration support into member's projects at all. I have my own projects that require way too much of my day already. While I was refactoring my life (forgoing many other "important" endeavors) in the early half of 2023, I primarily focused on this code over and over in my surplus time. During that same time I was working on other innovations that are far above and beyond what this code is. I hope you understand.
The best way programmatically may be to incorporate this code into your private Pine project directly, after brutal testing of course, but that may be too challenging for many in early development. Being able to see the periodogram is also beneficial, so input sourcing may be the "better" avenue to tether portions of the dominant cycle to algorithms. Unique indication being able to utilize the dominantCycle may be advantageous when tethering this script to those algorithms. The easiest way is to manually set your indicators to what ACP recognizes as the dominant cycle, but that's actually not considered dynamic real time adaption of an indicator. Different indicators may need a proportion of the dominantCycle, say half it's value, while others may need the full value of it. That's up to you to figure that out in practice. Sourcing one or more custom indicators dynamically to one detector's dominantCycle may require code like this: `int sourceDC = int(math.max(6, math.min(49, input.source(close, "Dominant Cycle Source"))))`. Keep in mind, some algos can use a float, while algos with a for loop require an integer.
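To make the tethering idea concrete, here is a minimal sketch of a consumer indicator that reads a sourced dominant cycle and drives an adaptive moving average with it. Because the built-in ta.ema() wants a fixed length, the EMA is emulated in raw code, exactly the kind of workaround described above; the 6–49 clamp follows the snippet in the previous paragraph, and everything else is an assumption for illustration.
```
//@version=5
indicator("Adaptive EMA tethered to a dominant-cycle source (sketch)", overlay=true)
// point this input at the detector's dominantCycle plot via the source picker
dcSrc = input.source(close, "Dominant Cycle Source")
len = int(math.max(6, math.min(49, dcSrc)))

// ta.ema() needs a fixed length, so the EMA is emulated with a dynamic alpha
alpha = 2.0 / (len + 1)
var float ama = na
ama := na(ama) ? close : alpha * close + (1 - alpha) * ama
plot(ama, "Cycle-adaptive EMA", color.yellow)
```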
I have witnessed a few attempts by talented TV members for a Pine based autocorrelation periodogram, but not in this caliber. Trust me, coding ACP is no ordinary task to accomplish in Pine and modifying it blessed with applicable improvements is even more challenging. For over 4 years, I have been slowly improving this code here and there randomly. It is beautiful just like a real flame, but... this one can still burn you! My mind was fried to charcoal black a few times wrestling with it in the distant past. My very first attempt at translating ACP was a month long endeavor because PSv3 simply didn't have arrays back then. Anyways, this is ACP with a newer engine, I hope you enjoy it. Any TV subscriber can utilize this code as they please. If you are capable of sufficiently using it properly, please use it wisely with intended good will. That is all I beg of you.
Lastly, you now see how I have rasterized my Pine with Ehlers' swami-like tech. Yep, this whole time I have been using hline() since PSv3, not plot(). Evidently, plot() still has a deficiency limited to only 32 plots when it comes to creating intense eye candy indicators, the last I checked. The use of hline() is the optimal choice for rasterizing Ehlers styled heatmaps. This does only contain two color schemes of the many I have formerly created, but that's all that is essentially needed for this gizmo. Anything else is generally for a spectacle or seeing how brutal Pine can be color treated. The real hurdle is being able to manipulate colors dynamically with Merlin like capabilities from multiple algo results. That's the true challenging part of these heatmap contraptions to obtain multi-colored "predator vision" level indication. You now have basic hline() food for thought empowerment to wield as you can imaginatively dream in Pine projects.
PERIODOGRAM UTILITY IN REAL WORLD SCENARIOS:
This code is a testament to the abilities that have yet to be fully realized with indication advancements. Periodograms, spectrograms, and heatmaps are powerful tools with real-world applications in various fields such as financial markets, electrical engineering, astronomy, seismology, and neuro/medical applications. For instance, among these diverse fields, it may help traders and investors identify market cycles/periodicities in financial markets, support engineers in optimizing electrical or acoustic systems, aid astronomers in understanding celestial object attributes, assist seismologists with predicting earthquake risks, help medical researchers with neurological disorder identification, and detection of asymptomatic cardiovascular clotting in the vaxxed via full body thermography. In each of these fields, technologies akin to periodograms may very well provide us with a better sliver of analysis beyond what was ever formerly invented. Periodograms can identify dominant cycles and frequency components in data, which may provide valuable insights and possibly support better-informed decisions. By utilizing periodograms within aspects of market analytics, individuals and organizations can potentially refrain from making blinded decisions and leverage data-driven insights instead.
PERIODOGRAM INTERPRETATION:
The periodogram renders the power spectrum of a signal, with the y-axis representing the periodicity (frequencies/wavelengths) and the x-axis representing time. The y-axis is divided into periods, with each elevation representing a period. In this periodogram, the y-axis ranges from 6 at the very bottom to 49 at the top, with intermediate values in between, all indicating the power of the corresponding frequency component by color. The higher the position occurs on the y-axis, the longer the period or lower the frequency. The x-axis of the periodogram represents time and is divided into equal intervals, with each vertical column on the axis corresponding to the time interval when the signal was measured. The most recent values/colors are on the right side.
The intensity of the colors on the periodogram indicate the power level of the corresponding frequency or period. The fire color scheme is distinctly like the heat intensity from any casual flame witnessed in a small fire from a lighter, match, or camp fire. The most intense power would be indicated by the brightest of yellow, while the lowest power would be indicated by the darkest shade of red or just black. By analyzing the pattern of colors across different periods, one may gain insights into the dominant frequency components of the signal and visually identify recurring cycles/patterns of periodicity.
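One simple way to reproduce that black-to-red-to-yellow intensity mapping in Pine v5 is color.from_gradient(); the fireColor() helper below is a hypothetical name and RSI/100 merely stands in for a normalized spectral power value, so this is only a palette illustration, not the script's rasterizing code.
```
//@version=5
indicator("Fire palette sketch", overlay=true)
// map a normalized power value in [0, 1] onto black -> red -> yellow
fireColor(p) =>
    lowHalf  = color.from_gradient(p, 0.0, 0.5, color.black, color.red)
    highHalf = color.from_gradient(p, 0.5, 1.0, color.red, color.yellow)
    p < 0.5 ? lowHalf : highHalf

// any 0..1 series stands in here for a normalized spectral power
power = ta.rsi(close, 14) / 100
bgcolor(fireColor(power))
```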
SETTINGS CONFIGURATIONS BRIEFLY EXPLAINED:
Source Options: These settings allow you to choose the data source for the analysis. Using the `Source` selection, you may tether to additional data streams (e.g. close, hlcc4, hl2), which also may include samples from any other indicator. For example, this could be my "Chirped Sine Wave Generator" script found in my member profile. By using the `SineWave` selection, you may analyze a theoretical sinusoidal wave with a user-defined period, something already incorporated into the code. The `SineWave` will be displayed over top of the periodogram.
Roofing Filter Options: These inputs control the range of the passband for ACP to analyze. Ehlers had two versions of his highpass filters for his releases, so I included an option for you to see the obvious difference when performing a comparison of both. You may choose between 1st and 2nd order high-pass filters.
Spectral Controls: These settings control the core functionality of the spectral analysis results. You can adjust the autocorrelation lag, adjust the level of smoothing for Fourier coefficients, and control the contrast/behavior of the heatmap displaying the power spectra. I provided two color schemes by checking or unchecking a checkbox.
Dominant Cycle Options: These settings allow you to customize the various types of dominant cycle values. You can choose between floating-point and integer values, and select the rounding method used to derive the final dominantCycle values. Also, you may control the level of smoothing applied to the dominant cycle values.
DOMINANT CYCLE VALUE SELECTIONS:
External to the acs() function, the code takes a dominant cycle value returned from acs() and changes its numeric form based on a specified type and form chosen within the indicator settings. The dominant cycle value can be represented as an integer or a decimal number, depending on the attached algorithm's requirements. For example, FIR filters will require an integer while many IIR filters can use a float. The float forms can be either rounded, smoothed, or floored. If the resulting value is desired to be an integer, it can be rounded up/down or just be in an integer form, depending on how your algorithm may utilize it.
AUTOCORRELATION SPECTRUM FUNCTION BASICALLY EXPLAINED:
In the beginning of the acs() code, the population of caches for precalculated angular frequency factors and smoothing coefficients occur. By precalculating these factors/coefs only once and then storing them in an array, the indicator can save time and computational resources when performing subsequent calculations that require them later.
In the following code block, the "Calculate AutoCorrelations" is calculated for each period within the passband width. The calculation involves numerous summations of values extracted from the roofing filter. Finally, a correlation values array is populated with the resulting values, which are normalized correlation coefficients.
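For orientation, a generic normalized (Pearson) autocorrelation at a single lag looks like the sketch below; the crude detrending stand-in (price minus an SMA) and the window/lag defaults are assumptions, not the script's roofing filter or its actual summation scheme.
```
//@version=5
indicator("Normalized autocorrelation at one lag (sketch)")
lag = input.int(10, "Lag", minval=1)
len = input.int(48, "Averaging window", minval=8)

// crude stand-in for the roofing filter: price minus a moving average
filt = close - ta.sma(close, 20)

// Pearson correlation between filt and filt shifted back by `lag`
autoCorr(src, lg, n) =>
    sx = 0.0
    sy = 0.0
    sxx = 0.0
    syy = 0.0
    sxy = 0.0
    for i = 0 to n - 1
        x = src[i]
        y = src[i + lg]
        sx += x
        sy += y
        sxx += x * x
        syy += y * y
        sxy += x * y
    den = math.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))
    den > 0 ? (n * sxy - sx * sy) / den : 0.0

plot(autoCorr(filt, lag, len), "Autocorrelation")
hline(0)
```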
Moving on to the next block of code, labeled "Decompose Fourier Components", Fourier decomposition is performed on the autocorrelation coefficients. It iterates this time through the applicable period range of 6 to 49, calculating the real and imaginary parts of the Fourier components. Frequencies 6 to 49 are the primary focus of interest for this periodogram. Using the precalculated angular frequency factors, the resulting real and imaginary parts are then utilized to calculate the spectral Fourier components, which are stored in an array for later use.
The next section of code smooths the noise ridden Fourier components between the periods of 6 and 49 with a selected filter. This species also employs numerous SuperSmoothers to condition noisy Fourier components. One of the big differences is Ehlers' versions used basic EMAs in this section of code. I decided to add SuperSmoothers.
The final sections of the acs() code determines the peak power component for normalization and then computes the dominant cycle period from the smoothed Fourier components. It first identifies a single spectral component with the highest power value and then assigns it as the peak power. Next, it normalizes the spectral components using the peak power value as a denominator. It then calculates the average dominant cycle period from the normalized spectral components using Ehlers' "Center of Gravity" calculation. Finally, the function returns the dominant cycle period along with the normalized spectral components for later external use to plot the periodogram.
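Here is a hedged sketch of that final step: a power-weighted average of the periods, counting only components above a strength threshold (0.5 in Ehlers' published code, possibly different in this modified version). The toy spectrum exists only so the snippet runs on its own.
```
//@version=5
indicator("Center-of-gravity dominant cycle (sketch)")
// toy spectrum: in the real script these come from the smoothed, normalized
// Fourier components for periods 6..49
periods = array.new_float()
power   = array.new_float()
for p = 6 to 49
    array.push(periods, p)
    array.push(power, p == 20 ? 1.0 : 0.3)   // pretend period 20 dominates

// Ehlers-style "Center of Gravity": power-weighted average period, counting
// only components at or above a strength threshold
domCycle(ps, pw) =>
    num = 0.0
    den = 0.0
    for i = 0 to array.size(pw) - 1
        w = array.get(pw, i)
        if w >= 0.5
            num += array.get(ps, i) * w
            den += w
    den > 0 ? num / den : na
plot(domCycle(periods, power), "Dominant cycle")
```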
POST SCRIPT:
Concluding, I have to acknowledge a newly found analyst for assistance that I couldn't receive from anywhere else. For one, Claude doesn't know much about Pine, is unfortunately color blind, and can't even see the Pine reference, but it was able to intuitively shred my code with laser precise realizations. Not only that, formulating and reformulating my description needed crucial finesse applied to it, and I couldn't have provided what you have read here without that artificial insight. Finding the right order of words to convey the complexity of ACP and the elaborate accompanying content was a daunting task. No code in my life has ever absorbed so much time and hard fricking work, than what you witness here, an ACP gem cut pristinely. I'm unveiling my version of ACP for an empowering cause, in the hopes a future global army of code wielders will tether it to highly functional computational contraptions they might possess. Here is ACP fully blessed poetically with the "Power of Pine" in sublime code. ENJOY!
Stochastic Zone Strength Trend [wbburgin](This script was originally invite-only, but I'd vastly prefer contributing to the TradingView community more than anything else, so I am making it public :) I'd much rather share my ideas with you all.)
The Stochastic Zone Strength Trend indicator is a very powerful momentum and trend indicator that 1) identifies trend direction and strength, 2) determines pullbacks and reversals (including oversold and overbought conditions), 3) identifies divergences, and 4) can filter out ranges. I have some examples below on how to use it to its full effectiveness. It is composed of two components: Stochastic Zone Strength and Stochastic Trend Strength.
Stochastic Zone Strength
At its most basic level, the stochastic Zone Strength plots the momentum of the price action of the instrument, and identifies bearish and bullish changes with a high degree of accuracy. Think of the stochastic Zone Strength as a much more robust equivalent of the RSI. Momentum-change thresholds are demonstrated by the "20" and "80" levels on the indicator (see below image).
Stochastic Trend Strength
The stochastic Trend Strength component of the script uses resistance in each candlestick to calculate the trend strength of the instrument. I'll go more into detail about the settings after my description of how to use the indicator, but there are two forms of the stochastic Trend Strength:
Anchored at 50 (directional stochastic Trend Strength):
The directional stochastic Trend Strength can be used similarly to the MACD difference or other histogram-like indicators : a rising plot indicates an upward trend, while a falling plot indicates a downward trend.
Anchored at 0 (nondirectional stochastic Trend Strength):
The nondirectional stochastic Trend Strength can be used similarly to the ADX or other non-directional indicators : a rising plot indicates increasing trend strength, and look at the stochastic Zone Strength component and your instrument to determine if this indicates increasing bullish strength or increasing bearish strength (see photo below):
(In the above photo, a bearish divergence indicated that the high Trend Strength predicted a strong downwards move, which was confirmed shortly after. Later, a bullish move upward by the Zone Strength while the Trend Strength was elevated predicted a strong upwards move, which was also confirmed. Note the period where the Trend Strength never reached above 80, which indicated a ranging period (and thus unprofitable to enter or exit)).
How to Use the Indicator
The above image is a good example on how to use the indicator to determine divergences and possible pivot points (lines and circles, respectively). I recommend using both the stochastic Zone Strength and the stochastic Trend Strength at the same time, as it can give you a robust picture of where momentum is in relation to the price action and its trajectory. Every color is changeable in the settings.
Settings
The Amplitude of the indicator is essentially the high-low lookback for both components.
The Wavelength of the indicator is how stretched-out you want the indicator to be: how many amplitudes do you want the indicator to process in one given bar.
A useful analogy that I use (and that I derived the names from) is from traditional physics. In wave motion, the Amplitude is the up-down sensitivity of the wave, and the Wavelength is the side-side stretch of the wave.
The Smoothing Factor of the settings is simply how smoothed you want the stochastic to be. It's not that important in most circumstances.
Trend Anchor was covered above (see my description of Trend Strength). The "Trend Transform MA Length" is the EMA length of the Trend Strength that you use to transform it into the directional oscillator. Think of the EMA being transformed onto the 50 line and then the Trend Strength being dragged relative to that.
Trend Transform MA Length is the EMA length you want to use for transforming the nondirectional Trend Strength (anchored at 0) into the directional Trend Strength (anchored at 50). I suggest this be the same as the wavelength.
Trend Plot Type can transform the Nondirectional Trend Strength into a line plot so that it doesn't murk up the background.
Finally, the colors are changeable on the bottom.
Explanation of Zone Strength
If you're knowledgeable in Pine Script, I encourage you to look at the code to try to understand the concept, as it's a little complicated. The theory behind my Zone Strength concept is that the wicks in every bar can be used to create an index of bullish and bearish resistance, as a wick signifies that the price crossed above a threshold before returning to its origin. This distance metric is unique because most indicators/formulas for calculating relative strength use a displacement metric (such as close - open) instead of measuring how far the price actually moved (up and down) within a candlestick. This is what the Zone Strength concept represents: the hesitation within the bar that is not captured by typical momentum indicators.
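To make the distinction concrete, the sketch below contrasts displacement with the minimum distance price must have traveled to print a bar's open, high, low, and close. It is emphatically not the Zone Strength formula, just one rough, assumed way to quantify the "travel versus displacement" idea described above.
```
//@version=5
indicator("Displacement vs. intrabar travel (illustrative only)")
displacement = close - open                 // what most momentum formulas measure
body  = math.abs(close - open)
wicks = (high - low) - body                 // the "hesitation" the wicks represent
travel = body + 2 * wicks                   // minimum path: each wick is traversed out and back
plot(displacement, "Displacement", color.teal)
plot(travel, "Travel proxy", color.orange)
```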
In the script's code I have step by step explanations of how the formula is calculated and why it is calculated as such. I encourage you to play around with the amplitude and wavelength inputs as they can make the zone strength look very different and perform differently depending on your interests.
Enjoy!
Walker
Weis V5 zigzag jayySomehow, I deleted version 5 of the zigzag script. Same name. I have added some older notes describing how the Weis Wave works.
I have also changed the date restriction that stopped the script from working after Dec 31, 2022.
What you see here is the Weis zigzag wave plotted directly on the price chart. This script is the companion to the Weis cumulative wave volume script.
What is a Weis wave? David Weis has been recognized as a Wyckoff method analyst; he has written two books, one of which, Trades About to Happen, describes the evolution of the now-popular Weis wave. The method employed by Weis is to identify waves of price action and to compare the strength of the waves based on characteristics of wave strength. Chief among the characteristics of strength is the cumulative volume of the wave. There are other markers that Weis uses as well, for example the actual price difference between the start and end of the Weis wave. Weis also uses time, particularly when using a Renko chart.
David Weis did a futures io video which is a popular source of information about his method. (Search David Weis and futures.io.) I strongly suggest you also read “Trades About to Happen” by David Weis.
This will get you up and running more quickly when studying charts. However, you should choose the Traditional method to be true to David Weis' technique as described in his book "Trades About to Happen" and in the Futures IO webcast featuring David Weis.
The Weis pip zigzag wave shows how far, in terms of bar close price, a Weis wave has traveled through the duration of a Weis wave. The Weis zigzag wave is used in combination with the Weis cumulative volume wave. The two waves should be set to the same "wave size".
To use this script, you must set the wave size. Using the traditional Weis method, simply enter the desired wave size in the box "How should wave size be calculated"; in this example I am using a traditional wave size of .25. Each wave for each security and each timeframe requires its own wave size. Although not the traditional method devised by David Weis, a more automatic way to set wave size would be to use Average True Range (ATR). Using ATR is not the true Weis method but it does give you similar waves and, importantly, without the hassle described above. Once the Weis wave size is set, the zigzag wave will be shown with volume. Because Weis used the closing price to define waves (i.e. a line chart), bar highs and bar lows are not captured by the Weis wave. The default script setting is now cumulative volume waves using an ATR of 7 and a multiplication factor of .5.
To display volume in a way that does not crowd out neighbouring volumes, Weis displayed volume as a maximum of 3 digits (usually). Consider two Weis Wave volumes: 176,895,570 and 2,654,763,889. To display wave volume as three digits it is necessary to take a number such as 176,895,570 and truncate it. 176,895,570 can be represented as 177 X 10 to the power of 6. The number displayed must also be relative to other numbers in the field. If the highest volume on the page is 2,654,763,889, then with only three numbers available to display the result, the value shown must be 265 (265 X 10 to the power of 7). Since 176,895,570 is an order of magnitude smaller than 2,654,763,889, 176,895,570 must therefore be shown as 18 instead of 177. In this way, the relative magnitudes of the two volumes can be understood. All numbers in the field of view must be truncated by the same order of magnitude to make the relative volumes understandable. The script attempts to calculate the order of magnitude value automatically. If you see a red number in the field of view it means the script has failed to do the calculation automatically and you should use the manual method – use the dialogue box “Calculate truncated wave value automatically or manually”. Scroll down from the automatic method and select manual. Once "manual" is selected, the values displayed become the power values or multipliers for each wave.
Using the manual method you will select a “Multiplier” in the next dialogue box. Scan the field of view (the visible chart) and select the largest value; that is the multiplier of interest. If you select a number lower than the maximum value, you will see at least one red “up”. If you select one that is too high, you will see at least one red “down”. Scroll in the direction recommended or the values on the screen will be totally incorrect. With volume truncated to the highest order values, the eye can quickly get a feel for relative volumes. It also reduces the crowding and overlapping of values on the screen. You can opt to show the full volume to help get a sense of the magnitude of the true volumes.
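For those curious how such a truncation can be computed, here is a small Pine v5 sketch of the idea (not the script's actual code): every volume in the field of view is divided by the same power of ten, chosen so the largest value prints with at most three digits.
```
//@version=5
indicator("Relative wave-volume truncation (sketch)")
// keep at most three digits: divide every volume by the same power of ten,
// chosen from the largest volume currently in the field of view
digits(x) =>
    x > 0 ? math.floor(math.log10(x)) + 1 : 1
truncate(value, fieldMax) =>
    drop = math.max(0, digits(fieldMax) - 3)
    math.round(value / math.pow(10, drop))
// example from the description: 2,654,763,889 prints as 265 and 176,895,570 as 18
plot(truncate(176895570, 2654763889), "Truncated volume")
```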
How does the script determine if a Weis wave is continuing to grow or not?
The script evaluates the closing price of each new bar relative to the "Weis wave size". Suppose the current bar closes at a new low close, within the current down wave, at $30.00. If the Weis wave size is $0.10 then the algorithm will remember the $30.00 close and compare it to the close of the next bar. If that bar does not close equal to or lower than $30.00, or close equal to or higher than $30.10, then the wave is still a down wave with a current low of $30.00. This is true even if the bar low is less than $30.00 or the bar high is greater than $30.10 – only the bar's closing price matters. If a bar's closing price climbs back up to a close of $30.11 then, because the closing price has moved more than $0.10 (the Weis wave size), that is a wave reversal with a new up-trending wave. In the above example, if there was currently a downward trending wave and the bar closes were as follows: $30.00, $30.09, $30.01, $30.05, $30.10, the wave direction would continue to stay downward trending until the close of $30.10 was achieved. As such, $30.00 would be the low and the following closes $30.09, $30.01, $30.05 would be allocated to the new upward-trending wave. If, however, there was a series of bar closes like this: $30.00, $30.09, $30.01, $30.05, $29.99, then since none of the closes was equal to or above the 10-cent reversal target of $30.10 but instead a new Weis wave low was achieved ($29.99), the closes of $30.09, $30.01, $30.05 would all be attributed to the continued down-trending wave with a current low of $29.99, even though the closing price for the interim bars was above $30.00. Now that the Weis wave low is $29.99, in order to reverse this continued downtrend price will need to close at or above $30.09 on subsequent bars, assuming no new low bar close is achieved. With large wave sizes, wave direction can be in limbo for many bars before a close either renews wave direction or reverses it and confirms wave direction as either a reversal or a continuation. On the zig-zag, a wave line and its volume will not be "printed" until a wave reversal is confirmed.
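The close-based reversal rule described above reduces to a small state machine. The sketch below is a simplified illustration (it tracks only the wave direction and its extreme close, not volume, and it does not re-assign the interim closes the way the full script does); the default wave size of $0.10 matches the example.
```
//@version=5
indicator("Weis close-reversal sketch (illustrative)", overlay=true)
waveSize = input.float(0.10, "Wave size (Traditional)", step=0.01)

var int dir = 1              // +1 = up wave, -1 = down wave
var float extreme = close    // most extreme close of the current wave

if dir == -1
    if close <= extreme
        extreme := close                     // down wave extends on a new low close
    else if close >= extreme + waveSize
        dir := 1                             // close moved a full wave size up: reversal
        extreme := close
else
    if close >= extreme
        extreme := close                     // up wave extends on a new high close
    else if close <= extreme - waveSize
        dir := -1                            // close moved a full wave size down: reversal
        extreme := close

plotchar(dir != dir[1], "Wave reversal", "•", location.belowbar)
```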
The wave attribution is similar when using other methods to define wave size. If ATR is used for wave size instead of a traditional wave constant size such as $0.10 or $2 or 2000 pips or ... then the wave size is calculated based on current ATR instead of the Weis wave constant (Traditional selected value).
I have the option to display pseudo-Ord volume. In truth, Ord used more traditional zig-zag pivots of bar highs and lows. Waves using closes as pivots can have some significant differences. This difference can be lessened by using smaller time frames and larger wave sizes.
There are other options, such as displaying the delta price or pip size of a Weis wave, the number of bars in a wave, and a few others.
Weis pip zigzag jayyWhat you see here is the Weis pip zigzag wave plotted directly on the price chart. This script is the companion to the Weis pip wave ( ) which is plotted in the lower panel of the displayed chart and can be used as an alternate way of plotting the same results. The Weis pip zigzag wave shows how far in terms of price a Weis wave has traveled through the duration of a Weis wave. The Weis pip zigzag wave is used in combination with the Weis cumulative volume wave. The two waves must be set to the same "wave size".
To use this script you must set the wave size. Using the traditional Weis method simply enter the desired wave size in the box "Select Weis Wave Size" In this example, it is set to 5. Each wave for each security and each timeframe requires its own wave size. Although not the traditional method a more automatic way to set wave size would be to use ATR. This is not the true Weis method but it does give you similar waves and, importantly, without the hassle described above. Once the Weis wave size is set then the pip wave will be shown.
I have put a pip zigzag of a 5 point Weis wave on the bar chart - that is a different script. I have added it to allow your eye to see what a Weis wave looks like. You will notice that the wave is not drawn in straight lines connecting wave tops to bottoms; this is a function of the limitations of Pinescript version 1. This script would need to be in version 4 to allow straight lines. There are too many calculations within this script to allow conversion to Pinescript version 4 or even version 3. I am in the process of rewriting this script to reduce the number of calculations and streamline the algorithm.
The numbers plotted on the chart are calculated to be relative numbers. The script is limited to showing only three numbers vertically. Only the highest three values of a number are shown. For example, if the highest recent pip value is 12,345 only the first 3 numerals would be displayed, i.e. 123. But suppose there is a recent value of 691. It would not be helpful to display 691 if the other wave size is shown as 123. To give the appropriate relative value, the script will show a value of 7 instead of 691. This informs you of the relative magnitude of the values. This is done automatically within the script. There is likely no need to manually override the automatically calculated value. I will create a video that demonstrates the manual override method.
What is a Weis wave? David Weis has been recognized as a Wyckoff method analyst; he has written two books, one of which, Trades About to Happen, describes the evolution of the now popular Weis wave. The method employed by Weis is to identify waves of price action and to compare the strength of the waves based on characteristics of wave strength. Chief among the characteristics of strength is the cumulative volume of the wave. There are other markers that Weis uses as well, for example the actual price difference between the start and end of the Weis wave. Weis also uses time, particularly when using a Renko chart. Weis specifically uses candle or bar closes to define all wave action, i.e. a line chart.
David Weis did a futures io video which is a popular source of information about his method.
This is the identical script with the identical settings but without the offending links. If you want to see the pip Weis method in practice then search Weis pip wave. If you want to see the Weis chart as a PDF then message me and I will give you a link to the Weis PDF. Why would you want to see the Weis chart for May 27, 2020? Merely to confirm the veracity of my algorithm. You could compare my Weis chart here () from the same period to the David Weis chart from May 27. Both waves are for the ES!1 4 hour chart and both for a wave size of 5.
EZThis script is designed to provide a clear, visual confirmation of trend direction, momentum shifts, and institutional bias by combining multiple EMA layers and smoothed Heiken Ashi waves.
Features:
• EMA Trend Band (8, 13, 21 EMA): Highlights short-term trend strength and clean stacking conditions.
• 35 EMA Momentum Line: Captures medium-term momentum shifts for better trade entries.
• 200 SMA Institutional Bias Line: Filters trades aligned with higher timeframe bias.
• Triple-Smoothed Heiken Ashi Waves: Changes background & candle colors to reflect momentum waves, filtering out noise and false signals.
• Liquidity Sweep Zones & Inverse FVGs (Optional): Helps identify smart money footprints and potential reversal zones.
Use Case:
• Best suited for trend-following traders, scalpers, and swing traders who rely on multi-timeframe confluence.
• Works effectively on Forex, Futures, Indices, and Crypto charts.
• Designed to filter out fakeouts and highlight high-probability trade zones.
Disclaimer:
This script is for educational purposes only. It does not guarantee profits and should be used in combination with proper risk management and trading experience.
ABCD Harmonic Pattern [TradingFinder] ABCD Pattern indicator🔵 Introduction
The ABCD harmonic pattern is a tool for identifying potential reversal zones (PRZ) by using Fibonacci ratios to pinpoint critical price reversal points on price charts.
This pattern consists of four key points, labeled A, B, C, and D. In this structure, the AB and CD waves move in the same direction, while the BC wave acts as a corrective wave in the opposite direction.
The ABCD pattern follows specific Fibonacci ratios that enhance its accuracy in identifying PRZ. Typically, point C lies within the 0.382 to 0.886 Fibonacci retracement of the AB wave, indicating the correction extent of the BC wave.
Subsequently, the CD wave, as the final wave in this pattern, reaches point D with a Fibonacci extension between 1.13 and 2.618 of the BC wave. Point D, which marks the PRZ, is where a potential price reversal is likely to occur.
The ABCD pattern appears in both bullish and bearish forms. In the bullish ABCD pattern, prices tend to increase at point D, which defines the PRZ; in the bearish ABCD pattern, prices typically decrease upon reaching the PRZ at point D.
These characteristics make the ABCD pattern a popular tool for identifying PRZ and price reversal points in financial markets, including forex, cryptocurrencies, and stocks.
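As a rough illustration of these ratio rules, the hypothetical isABCD() helper below checks the 0.382–0.886 retracement and 1.13–2.618 extension on four pivot prices; direction checks and pivot detection (which the indicator handles with its ZigZag engine) are omitted, and the sample prices are made up.
```
//@version=5
indicator("ABCD ratio check (sketch)", overlay=true)
// hypothetical helper: given the four pivot prices, test the Fibonacci rules
// described above (BC retraces 0.382-0.886 of AB, CD extends 1.13-2.618 of BC)
isABCD(a, b, c, d) =>
    ab = math.abs(b - a)
    bc = math.abs(c - b)
    cd = math.abs(d - c)
    retr = ab > 0 ? bc / ab : na
    ext  = bc > 0 ? cd / bc : na
    retr >= 0.382 and retr <= 0.886 and ext >= 1.13 and ext <= 2.618

// toy usage with made-up pivot prices (a real script would feed ZigZag pivots)
plotchar(isABCD(100.0, 90.0, 96.0, 82.0), "Valid ABCD", "✓", location.top)
```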
Bullish Pattern :
Bearish Pattern :
🔵 How to Use
🟣 Bullish ABCD Pattern
The bullish ABCD pattern is another harmonic structure used to identify a potential reversal zone (PRZ) where the price is likely to rise after a downward movement. This pattern includes four main points A, B, C, and D. In the bullish ABCD, the AB and CD waves move downward, and the BC wave acts as a corrective, upward wave. This setup creates a PRZ at point D, where the price may reverse and move upward.
To identify a bullish ABCD pattern, begin with the downward AB wave. The BC wave retraces upward between 0.382 and 0.886 of the AB wave, indicating the extent of the correction.
After the BC retracement, the CD wave forms and extends from point C down to point D, with an extension of around 1.13 to 2.618 of the BC wave. Point D, as the PRZ, represents the area where the price may reverse upwards, making it a strategic level for potential buy positions.
When the price reaches point D in the bullish ABCD pattern, traders look for upward reversal signals. This can include bullish candlestick formations, such as hammer or morning star patterns, near the PRZ to confirm the trend reversal. Entering a long position after confirmation near point D provides a calculated entry point.
Additionally, placing a stop loss slightly below point D helps protect against potential loss if the reversal does not occur. The ABCD pattern, with its precise Fibonacci structure and PRZ identification, gives traders a disciplined approach to spotting bullish reversals in markets, particularly in forex, cryptocurrency, and stock trading.
Bullish Pattern in COINBASE:BTCUSD :
🟣 Bearish ABCD Pattern
The bearish ABCD pattern is a harmonic structure that indicates a potential reversal zone (PRZ) where price may shift downward after an initial upward movement. This pattern consists of four main points A, B, C, and D. In a bearish ABCD, the AB and CD waves move upward, while the BC wave acts as a corrective wave in the opposite, downward direction. This reversal zone (PRZ) can be identified with specific Fibonacci ratios.
To identify a bearish ABCD pattern, start by observing the AB wave, which forms as an upward price movement. The BC wave, which follows, typically retraces between 0.382 to 0.886 of the AB wave. This retracement indicates how far the correction goes and sets the foundation for the next wave.
Finally, the CD wave extends from point C to reach point D with a Fibonacci extension of approximately 1.13 to 2.618 of the BC wave. Point D represents the PRZ where the potential reversal may occur, making it a critical area for traders to consider short positions.
Once point D in the bearish ABCD pattern is reached, traders can anticipate a downward price movement. At this potential reversal zone (PRZ), traders often wait for additional bearish signals or candlestick patterns, such as engulfing or evening star formations, to confirm the price reversal.
This confirmation around the PRZ enhances the accuracy of the entry point for a bearish position. Setting a stop loss slightly above point D can help manage risk if the price doesn’t reverse as anticipated. The ABCD pattern, with its reliance on Fibonacci ratios and clearly defined points, offers a strategic approach for traders looking to capitalize on potential bearish reversals in financial markets, including forex, stocks, and cryptocurrencies.
Bearish Pattern in OANDA:XAUUSD :
🔵 Setting
🟣 Logical Setting
ZigZag Pivot Period : You can adjust the period so that the harmonic patterns are adjusted according to the pivot period you want. This factor is the most important parameter in pattern recognition.
Show Valid Format : If this parameter is set to "On", only patterns that have the exact format, with no noise in them, will be displayed. If it is set to "Off", patterns that may be noisy and do not exactly correspond to the original pattern are also displayed.
Show Formation Last Pivot Confirm : If turned on, patterns are only displayed once their last pivot is confirmed. If this feature is off, you will see patterns as soon as they are formed. The advantage of having this option on is that fewer failed patterns are drawn; the cost is that each pattern is seen later, with a sharp reduction in reward to risk.
Period of Formation Last Pivot : Using this parameter you can set the pivot period on which the last pivot confirmation is based.
🟣 General Setting
Show : Enter "On" to display the template and "Off" to not display the template.
Color : Enter the desired color to draw the pattern in this parameter.
LineWidth : You can enter 1 or a higher integer to adjust the thickness of the drawing lines; the larger the number, the thicker the lines.
LabelSize : You can adjust the size of the labels by using the "size.auto", "size.tiny", "size.small", "size.normal", "size.large" or "size.huge" entries.
🟣 Alert Setting
Alert : On / Off
Message Frequency : This string parameter defines the announcement frequency. Choices include: "All" (activates the alert every time the function is called), "Once Per Bar" (activates the alert only on the first call within the bar), and "Once Per Bar Close" (the alert is activated only by a call at the last script execution of the real-time bar upon closing). The default setting is "Once per Bar".
Show Alert Time by Time Zone : The date, hour, and minute you receive in alert messages can be based on any time zone you choose. For example, if you want New York time, you should enter "UTC-4". This input is set to the time zone "UTC" by default.
🔵 Conclusion
The ABCD harmonic pattern offers a structured approach in technical analysis, helping traders accurately identify potential reversal zones (PRZ) where price movements may shift direction. By leveraging the relationships between points A, B, C, and D, alongside specific Fibonacci ratios, traders can better anticipate points of market reversal and make more informed decisions.
Both the bearish and bullish ABCD patterns enable traders to pinpoint ideal entry points that align with anticipated market shifts. In a bearish ABCD, point D within the PRZ often signals a downward trend reversal, while in a bullish ABCD, this same point typically suggests an upward reversal. The adaptability of the ABCD pattern across different markets, such as forex, stocks, and cryptocurrencies, further highlights its utility and reliability.
Integrating the ABCD pattern into a trading strategy provides a methodical and calculated approach to entry and exit decisions. With accurate application of Fibonacci ratios and confirmation of the PRZ, traders can enhance their trading precision, reduce risks, and boost overall performance. The ABCD harmonic pattern remains a valuable resource for traders aiming to leverage structured patterns for consistent results in their technical analysis.
Hurst Spectral Analysis SwamiChartHaving a hard time deciding which wavelength to use for a Hurst analysis? Try a handful at once! SwamiCharts by John Ehlers offers a comprehensive way to visualize an indicator used over a range of lookback periods. The Spectral Analysis SwamiChart shows the bullish or bearish state of a spectrum of bandpasses over a user-defined range of wavelengths. The trader simply selects a bandwidth, a base wavelength, and a step/multiple to see the Spectral Analysis SwamiChart. A vertical column of green or red tends to indicate a very bullish or bearish moment in time, meaning that all bandpasses in the analyzed spectrum are in a bullish or bearish orientation simultaneously.
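For readers who want a feel for what one "cell" of such a spectrum looks like, below is a generic Ehlers-style two-pole bandpass sketch (an assumption-laden illustration, not necessarily the exact code credited below): the SwamiChart effectively stacks many of these across a range of wavelengths and colors each one by whether its cycle component is rising or falling.
```
//@version=5
indicator("Bandpass spectrum cell (illustrative sketch)")
per = input.int(20, "Wavelength (bars)", minval=6)
bw  = input.float(0.3, "Bandwidth fraction", minval=0.05, maxval=0.9, step=0.05)

// generic Ehlers-style two-pole bandpass centered on `per`
beta  = math.cos(2 * math.pi / per)
gamma = 1 / math.cos(2 * math.pi * bw / per)
alpha = gamma - math.sqrt(gamma * gamma - 1)
var float bp = 0.0
bp := 0.5 * (1 - alpha) * (close - nz(close[2], close)) + beta * (1 + alpha) * nz(bp[1]) - alpha * nz(bp[2])

// one cell of the SwamiChart: green when the cycle component is rising, red when falling
plot(bp, "Bandpass", bp > nz(bp[1]) ? color.green : color.red)
```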
🏆 Shoutout to DavidF at Sigma-L for all the helpful information, conversations together, & indicator feedback.
🏅Shoutout to @HPotter for the bandpass code, and shoutout to @TerryPascoe for sharing it with me