DafeUltimateLib
DAFE Ultimate Library: The Universal AI Dashboard & Analysis System
This is the operating system for your next generation of trading tools. Welcome to the future of on-chart intelligence.
█ PHILOSOPHY: BEYOND THE INDICATOR, INTO THE CONSCIOUSNESS
For decades, technical analysis has been a monologue. We load indicators onto our charts, and they give us static, one-dimensional answers: a line, a number, a crossover. They provide data, but they offer no wisdom, no context, no actionable intelligence. They are tools without a mind.
The DAFE Ultimate Library was created to fundamentally shatter this paradigm. It was not designed to be another indicator, but to be the very brain that powers all of your future indicators. This is a professional-grade, open-source library that allows any Pine Script developer to integrate a sophisticated, AI-powered analytical and visualization engine into their own scripts with just a few lines of code.
This library transforms your indicator from a simple data plotter into an intelligent trading assistant. It takes in raw metrics—RSI, MACD, Volume, Volatility—and synthesizes them into a rich, multi-dimensional analysis, complete with a primary bias, confidence score, market state assessment, and a set of dynamic, actionable recommendations. It doesn't just give you the "what"; it gives you the "so what?"
█ WHAT IS THIS LIBRARY? A REVOLUTION IN PINE SCRIPT
This is a foundational shift in what's possible within the TradingView ecosystem.
A Universal AI Brain: At its core is a powerful analysis engine. You feed it any number of metrics from your own custom script—each with its own type (bounded, zero-centric, trend, etc.) and weight—and the AI synthesizes them into a single, cohesive analysis. It's like having a quantitative analyst living inside your indicator.
The ASCII Art Visualization Core: This is the soul of the library. We have pushed the boundaries of what's possible with Pine Script's table and label objects to create a stunning, fully animated, and customizable ASCII art interface. This is not a gimmick; it is a high-information-density display that brings your data to life in a way that is both beautiful and intuitively understandable. Choose from multiple "genders" (Male, Female, Droid) and themes to create an AI assistant that fits your personal aesthetic.
Open & Extensible Framework: This is a library, not a closed black box. It is designed to be the foundation for a new generation of "smart" indicators. I provide a simple, powerful API (Application Programming Interface) that allows any developer to plug their own unique metrics into the DAFE AI brain and instantly gain access to its analytical and visualization power.
Human-Readable Intelligence: The output is not just numbers. The AI communicates in natural language. It provides you with its "Thoughts" ("Bullish momentum across 3 metrics," "Structural weakness developing") and a set of "Recommended Actions" ("ACCUMULATE on pullbacks," "TIGHTEN stops") that adapt in real-time to the changing market conditions.
█ HOW IT WORKS: THE ARCHITECTURE OF AN AI
The library operates on a simple but powerful three-stage pipeline.
Stage 1: Metric Ingestion (The Senses)
As a developer, you first define the "senses" of your AI. Using the library's simple create_metric functions, you tell the AI what to look at. This is a highly flexible system that can handle any type of data your indicator produces. You define the metric's name, its current value, its "mode" of operation, and its relative importance (weight). The available modes allow the AI to correctly interpret any data source:
metric_bounded: For oscillators like RSI or Stochastics that move between set levels (e.g., 0-100).
metric_zero: For indicators like MACD or a Momentum oscillator that fluctuate around a central zero line.
metric_trend: For moving averages or trend lines, analyzing their position relative to price.
metric_volume / metric_volatility: Specialized metrics for analyzing volume and volatility events against high/low thresholds.
Stage 2: The Analysis Engine (The Brain)
On every bar, the library takes the updated metric values and feeds them into its core analytical model. This is where the magic happens.
Normalization: Each metric is processed according to its "mode" and converted into a standardized signal score from -100 (extremely bearish) to +100 (extremely bullish). This allows the AI to compare apples and oranges—an RSI of 80 can now be directly compared to a MACD histogram of 0.5.
Synthesis: The AI calculates a composite score by taking a weighted average of all the individual metric signals. This gives a single, unified view of the market's state based on all available evidence.
State Assessment: It analyzes the distribution of signals (how many are bullish vs. bearish), the number of "extreme" readings (e.g., overbought, high volume), and the overall composite score to determine the current Market State (e.g., "STRONG TREND," "MIXED SIGNALS," "EXTREME CONDITIONS").
Confidence Calculation: The magnitude of the final composite score is translated into a Confidence percentage, representing the strength of the AI's conviction in its current bias.
Natural Language Generation: Based on the final analysis, the engine selects the most appropriate "Thoughts" and "Recommended Actions" from its pre-programmed library of strategic heuristics, providing you with context and a potential game plan.
Stage 3: The Rendering Engine (The Face)
The final analysis is passed to the visualization core, which renders the complete AI Terminal on your chart. This is a masterwork of Pine Script's drawing capabilities.
The Face: The stunning ASCII art face is dynamically generated on every bar. Its Mood (Confident, Focused, Cautious, etc.) is directly determined by the AI's confidence level. Its eyes will even animate with a subtle, customizable Blink cycle, bringing the character to life and creating an unparalleled user experience.
The Dashboard: The surrounding terminal is built, displaying the primary bias, market state, confidence, and the detailed thoughts, active metrics, and recommended actions in a clean, retro-futuristic interface.
Theming: The entire display is colored according to your chosen theme, from the cool greens of "Matrix" to the vibrant pinks of "Neon," allowing for deep personalization.
█ A GUIDE FOR DEVELOPERS: INTEGRATING THE DAFE AI
We have made it incredibly simple to bring your indicators to life with the DAFE AI. This is the true purpose of the library—to empower you.
Import the Library: Add the following line to the top of your script:
import DskyzInvestments/DafeUltimateLib/1 as dafe
Define Your Metrics: In the barstate.isfirst block of your script, create an array and populate it with the metrics your indicator uses. For example:
var array<dafe.Metric> my_metrics = array.new<dafe.Metric>() // "Metric" type name assumed; use the library's exported metric type
if barstate.isfirst
array.push(my_metrics, dafe.metric_bounded("RSI", 50.0, 70.0, 30.0, 1.5))
array.push(my_metrics, dafe.metric_zero("MACD Hist", 0.0, 0.5, 1.0))
Update Your Metrics: On every bar, update the values of your metrics.
dafe.update_metric(array.get(my_metrics, 0), ta.rsi(close, 14))
dafe.update_metric(array.get(my_metrics, 1), macd_histogram_value)
Configure & Render: Create a configuration object from user inputs and call the main render function.
dafe.DafeConfig my_config = dafe.quick_config("Droid", "Cyber")
dafe.render(my_metrics, my_config)
That's it. With these few steps, you have integrated a complete AI dashboard and analysis engine directly into your own script, saving you hundreds of hours of development time and providing your users with a revolutionary interface.
█ DEVELOPMENT PHILOSOPHY
The DAFE Ultimate Library was born from a desire to push the boundaries of Pine Script and to empower the entire TradingView developer community. We believe that the future of technical analysis is not just in creating more complex algorithms, but in building more intelligent and intuitive ways to interact with the data those algorithms provide. This library is our contribution to that future. It is an open-source tool designed to elevate the work of every developer who uses it, fostering a new era of "smart" indicators on the platform.
This library is designed to help you and your users make the best trades by providing a layer of objective, synthesized intelligence that filters out noise, quantifies confidence, and promotes a disciplined, analytical approach to the market.
█ A NOTE TO USERS & DISCLAIMER
THIS IS A LIBRARY: This script does nothing on its own. It is a powerful engine that must be integrated by other indicator developers. It is a tool for builders.
THE AI IS A GUIDE, NOT A GURU: The analysis provided is based on the mathematical synthesis of the metrics it is fed. It is a powerful decision-support tool, but it is not a crystal ball. All trading involves substantial risk.
GARBAGE IN, GARBAGE OUT: The quality of the AI's analysis is directly dependent on the quality and logic of the metrics it is given by the host indicator.
"The goal of a successful trader is to make the best trades. Money is secondary."
— Alexander Elder
Taking you to school. - Dskyz, Trade with DAFE.
DafePatternLib: The Neural Pattern Recognition & Reinforcement Learning Engine
This is not a pattern library. This is an artificial trading brain. It doesn't just find patterns; it learns, adapts, and evolves based on their performance in the live market.
█ CHAPTER 1: THE PHILOSOPHY - BEYOND STATIC RULES, INTO DYNAMIC LEARNING
For decades, chart pattern analysis has been trapped in a static, rigid paradigm. An indicator is coded to find a "Bullish Engulfing" or a "Head and Shoulders," and it will signal that pattern with the same blind confidence every single time, regardless of whether that pattern has been consistently failing for the past month. It has no memory, no intelligence, no ability to adapt. It is a dumb machine executing a fixed command.
The DafePatternLib was created to shatter this paradigm. It is built on a powerful, revolutionary philosophy borrowed from the world of artificial intelligence: Reinforcement Learning . This library is not just a collection of pattern detection functions; it is a complete, self-optimizing neural framework. It doesn't just find patterns; it tracks their outcomes. It remembers what works and what doesn't. Over time, it learns to amplify the signals of high-probability patterns and silence the noise from those that are failing in the current market regime.
This is not a black box. It is an open-source, observable learning system. It is a "Neural Edition" because, like a biological brain, it strengthens and weakens its own "synaptic" connections based on positive and negative feedback, evolving into a tool that is uniquely adapted to the specific personality of the asset you are trading.
█ CHAPTER 2: THE CORE INNOVATIONS - WHAT MAKES THIS A "NEURAL" LIBRARY?
This library introduces several concepts previously unseen in the TradingView ecosystem, creating a truly next-generation analytical tool.
Reinforcement Learning Engine: The brain of the system. Every time a high-confidence pattern is detected, it is logged into an "Active Memory." The library then tracks the outcome of that pattern against its projected stop and target. If the pattern is successful, the "synaptic weight" for that entire category of patterns is strengthened. If it fails, the weight is weakened. This is a continuous feedback loop of performance-driven adaptation.
Synaptic Plasticity (Learning Rate): You have direct control over the brain's "plasticity"—its ability to learn. A high plasticity allows it to adapt very quickly to changing market conditions, while a lower plasticity creates a more stable, long-term learning model.
Dynamic Volatility Scaling (DVS): Markets are not static; they breathe. DVS is a proprietary function that calculates a real-time volatility scalar by comparing the current ATR to its historical average. This scalar is then used to automatically adjust the lookback periods and sensitivity of all relevant pattern detection engines. In high-volatility environments, the engines look for larger, more significant patterns. In low-volatility, they tighten their focus to find smaller, more subtle setups.
Neural Confidence Score: The output of this library is not a simple "true/false" signal. Every detected pattern comes with two confidence scores:
Raw Confidence: The original, static confidence level based on the pattern's textbook definition.
Net Confidence: The AI-adjusted score. This is the Raw Confidence × Learned Bias. A pattern that has been performing well will see its confidence amplified (e.g., 70% raw → 95% net). A pattern that has been failing will see its confidence diminished (e.g., 70% raw → 45% net).
Intelligent Filtering: The learning system is not just for scoring. If the learned bias for a particular pattern category (e.g., "Candle") drops below a certain threshold (e.g., 0.8), the library will automatically begin to filter out those signals, treating them as unreliable noise until their performance improves.
█ CHAPTER 3: THE ANATOMY OF THE AI - HOW IT THINKS
The library's intelligence is built on a clear, observable architecture.
The NeuralWeights (The Brain)
This is the central data structure that holds the system's "memory." It is a simple object that stores a single floating-point number—a weight or "bias"—for each of the five major categories of pattern analysis. It is initialized with neutral weights of 1.0 for all categories.
w_candle: For candlestick patterns.
w_harmonic: For harmonic patterns.
w_structure: For market structure patterns (e.g., BOS/CHoCH).
w_geometry: For classic geometric patterns (e.g., flags, wedges).
w_vsa: For Volume Spread Analysis patterns.
The update_brain() Function (The Learning Process)
This is the core of the reinforcement learning loop. When a pattern from the ActiveMemory is resolved (as a win or a loss), this function is called. If the pattern was a "win," it applies a small, positive adjustment (the plasticity value) to the corresponding weight in the brain. If it was a "loss," it applies a negative adjustment. The weights are constrained between 0.5 (maximum distrust) and 2.0 (maximum trust), preventing runaway feedback loops.
The manage_memory() Function (The Short-Term Memory)
This function is the AI's hippocampus. It maintains an array of ActiveMemory objects, tracking up to 50 recent, high-confidence signals. On every bar, it checks each active pattern to see if its target or stop has been hit. If a pattern resolves, it triggers the update_brain() function with the outcome and removes the pattern from memory. If a pattern does not resolve within a set number of bars (e.g., 50), it is considered "expired" and is treated as a minor loss, teaching the AI to distrust patterns that lead to nowhere.
The scan_neural_universe() Function (The Master Controller)
This is the main exported function that you will call from your indicator. It is the AI's "consciousness." On every bar, it performs a sequence of high-level actions:
It calculates the current Dynamic Volatility Scalar (DVS).
It runs all of its built-in pattern detection engines (VSA, Geometry, Candles, etc.), feeding them the DVS to ensure they are adapted to the current market volatility.
It identifies the single "best" active pattern for the current bar based on its raw confidence score.
It passes this "best" pattern to the manage_memory() function to be tracked and to trigger learning from any previously resolved patterns.
It retrieves the current learned bias for the "best" pattern's category from the brain.
It calculates the final net_confidence by multiplying the raw confidence by the learned bias.
It performs a final check, intelligently filtering out the signal if its learned bias is too low.
It returns the final, neurally-enhanced PatternResult object to your indicator.
█ CHAPTER 4: A GUIDE FOR DEVELOPERS - INTEGRATING THE BRAIN
I have designed the DafePatternLib to be both incredibly powerful and remarkably simple to integrate into your own scripts.
Import the Library: Add the following line to the top of your script:
import DskyzInvestments/DafePatternLib/1 as pattern
Call the Scanner: On every bar, simply call the main scanning function. The library handles everything else internally—the DVS calculation, the multi-pattern scanning, the memory management, and the reinforcement learning.
pattern.PatternResult signal = pattern.scan_neural_universe()
Use the Result: The signal object now contains all the intelligence you need. Check if a pattern is active, and if so, use its properties to draw your signals and alerts. You can choose to display the raw_confidence vs. the net_confidence to give your users a direct view of the AI's learning process.
if signal.is_active
label.new(bar_index, signal.entry, "AI Conf: " + str.tostring(signal.net_confidence, "#") + "%")
With just these few lines, you have integrated a self-learning, self-optimizing, multi-pattern recognition engine into your indicator.
// ═══════════════════════════════════════════════════════════
// INSTRUCTIONS FOR DEVELOPERS:
// ───────────────────────────────────────────────────────────
// 1. Import the library at the top of your indicator script:
//        import DskyzInvestments/DafePatternLib/1 as pattern
// 2. Copy the entire "INPUTS TEMPLATE" section below and paste it into your indicator's code.
//    This will create the complete user settings panel for controlling the AI.
// 3. Copy the "USAGE EXAMPLE" section and adapt it to your script's logic.
//    This shows how to initialize the brain, call the scanner, and use the results.
// ═══════════════════════════════════════════════════════════
// INPUT GROUPS
// ═══════════════════════════════════════════════════════════
string G_AI_ENGINE = "══════════ 🧠 NEURAL ENGINE ══════════"
string G_AI_PATTERNS = "══════════ 🔬 PATTERN SELECTION ══════════"
string G_AI_VISUALS = "══════════ 🎨 VISUALS & SIGNALS ══════════"
string G_AI_DASH = "══════════ 📋 BRAIN STATE DASHBOARD ══════════"
string G_AI_ALERTS = "══════════ 🔔 ALERTS ══════════"
// ═══════════════════════════════════════════════════════════
// NEURAL ENGINE CONTROLS
// ═══════════════════════════════════════════════════════════
bool i_enable_ai = input.bool(true, "✨ Enable Neural Pattern Engine", group = G_AI_ENGINE,
tooltip="Master switch to enable or disable the entire pattern recognition and learning system.")
float i_plasticity = input.float(0.03, "Synaptic Plasticity (Learning Rate)", minval=0.01, maxval=0.1, step=0.01, group = G_AI_ENGINE,
tooltip="Controls how quickly the AI adapts to pattern performance. " +
"• Low (0.01-0.02): Slow, stable learning. Good for long-term adaptation. " +
"• Medium (0.03-0.05): Balanced adaptation (Recommended). " +
"• High (0.06-0.10): Fast, aggressive learning. Adapts quickly to new market regimes but can be more volatile.")
float i_filter_threshold = input.float(0.8, "Neural Filter Threshold", minval=0.5, maxval=1.0, step=0.05, group = G_AI_ENGINE,
tooltip="The AI will automatically hide (filter) signals from any pattern category whose learned 'Bias' falls below this value. Set to 0.5 to disable filtering.")
// ═══════════════════════════════════════════════════════════
// PATTERN SELECTION
// ═══════════════════════════════════════════════════════════
bool i_scan_candles = input.bool(true, "🕯️ Candlestick Patterns", group = G_AI_PATTERNS, inline="row1")
bool i_scan_vsa = input.bool(true, "📦 Volume Spread Analysis", group = G_AI_PATTERNS, inline="row1")
bool i_scan_geometry = input.bool(true, "📐 Geometric Patterns", group = G_AI_PATTERNS, inline="row2")
bool i_scan_structure = input.bool(true, "📈 Market Structure (SMC)", group = G_AI_PATTERNS, inline="row2")
bool i_scan_harmonic = input.bool(false, "🦋 Harmonic Setups (Experimental)", group = G_AI_PATTERNS, inline="row3",
tooltip="Harmonic detection is simplified and experimental. Enable for additional confluence but use with caution.")
// ═══════════════════════════════════════════════════════════
// VISUALS & SIGNALS
// ═══════════════════════════════════════════════════════════
bool i_show_signals = input.bool(true, "Show Pattern Signals on Chart", group = G_AI_VISUALS)
color i_bull_color = input.color(#00FF88, "Bullish Signal Color", group = G_AI_VISUALS, inline="colors")
color i_bear_color = input.color(#FF0055, "Bearish Signal Color", group = G_AI_VISUALS, inline="colors")
string i_signal_size = input.string("Small", "Signal Size", options=["Tiny", "Small", "Normal", "Large"], group = G_AI_VISUALS)
// ═══════════════════════════════════════════════════════════
// BRAIN STATE DASHBOARD
// ═══════════════════════════════════════════════════════════
bool i_show_dashboard = input.bool(true, "Show Brain State Dashboard", group = G_AI_DASH)
string i_dash_position = input.string("Bottom Right", "Position", options=["Top Left", "Top Right", "Bottom Left", "Bottom Right"], group = G_AI_DASH)
string i_dash_size = input.string("Small", "Size", options=["Tiny", "Small", "Normal"], group = G_AI_DASH)
// ══════════════════════════════════════════════════════════
// ALERTS
// ═══════════════════════════════════════════════════════════
bool i_enable_alerts = input.bool(true, "Enable All Pattern Alerts", group = G_AI_ALERTS)
int i_alert_min_confidence = input.int(75, "Min Neural Confidence to Alert (%)", minval=50, maxval=100, group = G_AI_ALERTS)
█ DEVELOPMENT PHILOSOPHY
The DafePatternLib was born from a vision to bring the principles of modern AI to the world of technical analysis on TradingView. We believe that an indicator should not be a static, lifeless tool. It should be a dynamic, intelligent partner that learns and adapts alongside the trader. This library is an open-source framework designed to empower developers to build the next generation of smart indicators, moving beyond fixed rules and into the realm of adaptive, performance-driven intelligence.
This library is designed to be a tool for that discipline. By providing an objective, data-driven, and self-correcting analysis of patterns, it helps to remove the emotional guesswork and second-guessing that plagues so many traders, allowing you to act with the cold, calculated confidence of a machine.
█ A NOTE TO USERS & DISCLAIMER
THIS IS A LIBRARY FOR DEVELOPERS: This script does nothing on its own. It is a powerful engine that must be imported and used by other indicator developers in their own scripts.
THE AI LEARNS, IT DOES NOT PREDICT: The reinforcement learning is based on the recent historical performance of patterns. It is a powerful statistical edge, but it is not a crystal ball. Past performance does not guarantee future results.
ALL TRADING INVOLVES RISK: The patterns and confidence scores are for informational and educational purposes only. Always use proper risk management.
Please be aware that this is a library script and has no visual output on its own. The charts, signals, and dashboards shown in the images were created with a separate demonstration indicator that utilizes this library's powerful pattern recognition and learning engine.
"The key to trading success is emotional discipline. If intelligence were the key, there would be a lot more people making money trading."
— Victor Sperandeo, Market Wizard
Taking you to school. - Dskyz, Create with DAFE
WickPressureKernel
Wick Pressure Kernel (WPK): The Physics-ML Fusion Engine
This is not a candlestick pattern library. This is a physics and machine learning engine that decodes the institutional battle hidden inside every wick. It is the definitive toolkit for analyzing microstructure and quantifying order flow pressure.
█ CHAPTER 1: THE PHILOSOPHY - BEYOND THE WICK, INTO THE PHYSICS
A candlestick wick is not just a line on a chart. It is a fossil record of a battle. It is the remnant of a fierce conflict between aggressive market orders and passive limit orders. A long upper wick is not just "bearish"; it is the evidence of a failed auction, a footprint left by a wall of institutional supply that absorbed and rejected aggressive buying. Traditional candle analysis gives these events simple, unchanging names. It is a dead language.
The Wick Pressure Kernel (WPK) was created to translate this dead language into the living, dynamic language of physics and machine learning. It deconstructs the candle into its core components and subjects them to a rigorous, multi-layered analysis. It calculates the Kinetic Force of the candle, estimates the institutional Delta hidden within it, tracks the Siege Decay of the levels it tests, and uses a Bayesian Reinforcement Learning system to predict the probable outcome.
This library does not just identify patterns; it quantifies the underlying forces that create them. It is designed for the trader who is no longer content with subjective interpretation and demands a quantitative, data-driven edge in reading the story of every single bar.
█ CHAPTER 2: THE PHYSICS-ML FUSION - THE SIX LAYERS OF ANALYSIS
The WPK's unparalleled intelligence comes from its six-stage analytical pipeline. Each layer builds upon the last, transforming a simple candle into a rich, multi-dimensional data object.
LAYER 1 - DELTA PHYSICS: At its core, the WPK uses a proprietary model to estimate the institutional order flow (Delta) within a single candle. It analyzes the relationship between the wicks, the body, and the total range to approximate the net buying or selling pressure that occurred during the bar's formation.
LAYER 2 - SIEGE ANALYSIS: This is a revolutionary concept for measuring structural integrity. Every time a price level is tested by a wick, its "Siege Decay" score is updated. A fresh, untested level has a score of 1.0. A level that has been tested multiple times without breaking has a decayed score (e.g., 0.5), indicating it is weakening and likely to fail (a minimal sketch of this decay follows the list below).
LAYER 3 - MAGNETISM ENGINE: Not all wicks are created equal. This engine calculates the probability that a wick will be "filled" (mean reversion). It understands that long wicks in a weak, choppy trend are likely to be filled, while wicks in a strong, trending market are more likely to represent valid rejection.
LAYER 4 - REGIME ML: The WPK is context-aware. It uses a suite of advanced statistical tools— Shannon Entropy (disorder), Detrended Fluctuation Analysis (trend vs. mean-reversion), and the Hurst Exponent (persistence)—to classify the market's current "personality" into one of six regimes (e.g., "Bull Trend," "Bear Range," "Choppy").
LAYER 5 - THOMPSON SAMPLING (REINFORCEMENT LEARNING): This is the AI brain. The library uses a Bayesian Multi-Armed Bandit algorithm called Thompson Sampling. It maintains a set of "Learning Agents," each specializing in a different type of wick pattern (e.g., Rejection, Absorption). Based on the real-time performance of these patterns, the AI continuously updates the win probability for each agent, learning which strategies are most effective in the current market.
LAYER 6 - CONTEXTUAL ROUTING: The final layer of intelligence. The WPK analyzes the wick, determines its pattern type, and then routes it to the specialist Learning Agent for a probability assessment, all while considering the current market regime.
█ CHAPTER 3: A DEEP DIVE INTO THE WPK's CORE ENGINES
THE analyze_wick() FUNCTION (THE MASTER ANALYZER)
This is the primary function of the library. You feed it a bar, and it returns a complete WickAnalysis object, a rich data structure containing over 15 distinct metrics that tell the full story of that candle:
Anomaly Score: A Z-Score that tells you how statistically rare the wick's size is compared to recent history. A score > 2.0 is a significant outlier.
Kinetic Force: A physics-based metric that combines the bar's range and its relative volume to quantify the "impact energy" of the candle (this and the Anomaly Score are sketched after this list).
Estimated Delta & Delta Ratio: The raw institutional flow estimate and its ratio compared to the recent average.
Siege Decay & Test Count: The current structural integrity of the level the wick tested, and how many times it has been hit.
Magnet Score: The calculated probability (0-100%) that this specific wick will be filled.
Pattern & Context: The classified pattern name (e.g., "Dragonfly/Hammer," "Liquidity Trap") and its tactical context (e.g., "reversal," "trend_continuation").
Win Probability: The final, ML-predicted success rate for this pattern, as determined by the Thompson Sampling engine.
THE scan_clusters() FUNCTION (LIQUIDITY ZONE DETECTION)
Wicks rarely happen in isolation. This powerful function scans the recent price history and identifies areas where multiple high-pressure wicks have occurred at similar price levels. It groups these events into dynamic "Pressure Clusters," which function as high-probability supply and demand zones. Each cluster is tracked with its own set of metrics, including its total strength, age, and siege decay.
THE detect_regime() FUNCTION (THE CONTEXT ENGINE)
This function is the AI's "eyes." It uses advanced statistical methods to understand the market's personality. Is it trending cleanly? Is it mean-reverting? Is it completely random and chaotic? The output of this function is fed into the rest of the WPK, allowing the analysis to adapt intelligently to the current environment.
█ CHAPTER 4: A GUIDE FOR DEVELOPERS - INTEGRATING THE KERNEL
The Wick Pressure Kernel is designed as a powerful, low-level engine for developers to build sophisticated trading tools.
Import the Library:
import YourUsername/WickPressureKernel/1 as wpk
Initialize the AI: The learning agents must be stored in a var variable to retain their memory.
var agents = wpk.create_learning_agents() // var keeps the agents' learned state across bars
Analyze the Market: On every bar, run the core analysis functions.
wpk.RegimeState regime = wpk.detect_regime(100)
wpk.WickAnalysis wick_data = wpk.analyze_wick(0, regime, agents)
float final_probability = wpk.get_probability(wick_data, regime)
Apply Your Logic: Use the rich data from the wick_data object and the final_probability to build your custom signal logic and filters.
bool my_buy_signal = wick_data.pattern == "Dragonfly/Hammer" and final_probability > 65
Provide Feedback (The Learning Step): After a trade based on a pattern is complete, you must tell the AI the outcome. This is how it learns.
int agent_id = wpk.get_agent_by_pattern(wick_data)
agents := wpk.update_learning_agent(agents, agent_id, 1.0) // +1.0 for a win, -1.0 for a loss
By following this loop, you are not just running an indicator; you are actively training a specialized AI to master the specific asset you are trading.
█ DEVELOPMENT PHILOSOPHY
The Wick Pressure Kernel was born from the conviction that the future of technical analysis lies in the fusion of quantitative physics and machine learning. It is an attempt to move beyond subjective pattern naming and into the realm of objective, measurable forces. This library is for the serious developer and the quantitative trader who is not satisfied with simple signals, but who seeks to understand the deep, complex, and often violent auction process that creates every candle on the chart.
The WPK is designed to be a tool for that patience, providing the deep contextual understanding needed to wait for moments of true, statistically-backed institutional pressure.
█ A NOTE TO USERS & DISCLAIMER
THIS IS A LIBRARY FOR DEVELOPERS: This script does nothing on its own. It is a powerful engine that must be imported and used by other indicator developers to build their own tools.
THE AI IS A PROBABILISTIC GUIDE: The reinforcement learning system provides a powerful statistical edge, but it does not predict the future with certainty. All trading involves substantial risk.
LEARNING REQUIRES DATA: The Thompson Sampling engine becomes more intelligent over time as it processes more outcomes. Its initial predictions will be less reliable than its predictions after hundreds of trades.
Please be aware that this is a library script and has no visual output on its own. The charts, signals, and dashboards shown in the images were created with a separate demonstration indicator that utilizes this library's powerful pattern recognition and learning engine.
"The stock market is a device for transferring money from the impatient to the patient."
— Warren Buffett
Taking you to school. - Dskyz, Create with DAFE
StolenKernelFunctions
Library "StolenKernelFunctions"
This library provides non-repainting kernel functions for Nadaraya-Watson estimator implementations. This allows for easy substitution and comparison of different kernel functions in indicators. Furthermore, kernels can easily be combined with other kernels to create new, more customized kernels.
rationalQuadratic(_src, _lookback, _relativeWeight, startAtBar)
Rational Quadratic Kernel - An infinite sum of Gaussian Kernels of different length scales.
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_relativeWeight (simple float) : Relative weighting of time frames. Smaller values result in a more stretched-out curve and larger values will result in a more wiggly curve. As this value approaches zero, the longer time frames will exert more influence on the estimation. As this value approaches infinity, the behavior of the Rational Quadratic Kernel will become identical to the Gaussian kernel.
startAtBar (simple int)
Returns: yhat The estimated values according to the Rational Quadratic Kernel.
gaussian(_src, _lookback, startAtBar)
Gaussian Kernel - A weighted average of the source series. The weights are determined by the Radial Basis Function (RBF).
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
startAtBar (simple int)
Returns: yhat The estimated values according to the Gaussian Kernel.
periodic(_src, _lookback, _period, startAtBar)
Periodic Kernel - The periodic kernel (derived by David Mackay) allows one to model functions which repeat themselves exactly.
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_period (simple int) : The distance between repetitions of the function.
startAtBar (simple int)
Returns: yhat The estimated values according to the Periodic Kernel.
locallyPeriodic(_src, _lookback, _period, startAtBar)
Locally Periodic Kernel - The locally periodic kernel is a periodic function that slowly varies with time. It is the product of the Periodic Kernel and the Gaussian Kernel.
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_period (simple int) : The distance between repetitions of the function.
startAtBar (simple int)
Returns: yhat The estimated values according to the Locally Periodic Kernel.
ATLAS_CORE
Shared utility library for the ATLAS Trading Intelligence Suite. Provides brand colors, math utilities, candle analysis, grading system, visual helpers, and more.
Better Contrast (NTSC Optimized)
This lightweight utility library automatically selects the optimal text color (black or white) for any given background color, ensuring maximum readability for labels, tables, and UI elements.
Unlike standard libraries that use the HSP model or simple averaging, this library utilizes the NTSC Perceived Brightness formula.
🟢 Why NTSC?
The human eye is significantly more sensitive to green light than red or blue. Standard formulas often miscalculate brightness for high‑energy colors like yellow (red + green) or cyan, resulting in white text on bright yellow backgrounds — which is hard to read.
The NTSC formula weights colors based on human perception:
Brightness = (Red * 0.299) + (Green * 0.587) + (Blue * 0.114)
By heavily weighting the green channel (58.7%), this method correctly identifies yellow and cyan as “bright” backgrounds, forcing the text to black for superior contrast.
🛠 Usage
Import the library:
import Robertsanktov/Better_Contrast/1 as contrast
Use the method directly on any color variable:
textColor = myBackgroundColor.contrast()
Parameters
- threshold : (optional) brightness cutoff (0.0–1.0 or 0–255). Default is 0.55.
Higher values force more white text; lower values force more black text.
ma_library
Title: Library: Advanced Moving Average Collection
Description:
This library provides a comprehensive set of Moving Average algorithms, ranging from standard filters (SMA, EMA) to adaptive trendlines (KAMA, FRAMA) and experimental smoothers (ALMA, JMA).
It has been fully optimized for Pine Script v6, ensuring efficient execution and strict robustness against na (missing) values. Unlike standard implementations that propagate na values, these functions dynamically recalculate weights to maintain continuity in disjointed datasets.
🧩 Library Features
Robustness: Non-recursive filters ignore na values within the lookback window. Recursive filters maintain state to prevent calculation breaks.
Optimization: Logic updated to v6 standards, utilizing efficient loops and var persistence.
Standardization: All functions utilize a consistent f_ prefix and standardized parameters for easy integration.
Scope: Contains over 35 different smoothing algorithms.
📊 Input Requirements
Source (src): The data series to smooth (usually close, hl2, etc.).
Length (length): The lookback period (must be a simple int).
Specifics: Some adaptive MAs (like f_evwma) require volume data, while others (like f_alma) require offset/sigma settings.
🛠️ Integration Example
You can import the library and call functions directly, or use the built-in f_selector to create dynamic inputs for your users.
//@version=6
indicator("MA Library Demo", overlay=true)
// Import the library
import YourUsername/ma_/1 as ma
// --- Example 1: Direct Function Call ---
// calculating Jurik Moving Average (JMA)
float jma_val = ma.f_jma(close, 14)
plot(jma_val, "JMA", color=color.yellow, linewidth=2)
// --- Example 2: User Selector ---
// Allowing the user to choose the MA type via settings
string selected_type = input.string("ALMA", "MA Type", options=["SMA", "EMA", "WMA", "HMA", "KAMA", "JMA", "ALMA"]) // illustrative subset; extend with any algorithm listed below
int length = input.int(20, "Length")
// Using the generic selector function
float dynamic_ma = ma.f_selector(close, length, selected_type)
plot(dynamic_ma, "Dynamic MA", color=color.aqua)
📋 Included Algorithms
The following methods are available (prefixed with f_):
Standard: SMA, EMA, WMA, VWMA, RMA
Adaptive: KAMA (Kaufman), FRAMA (Fractal), VIDYA (Chande/VARMA), VAMA (Vol. Adjusted)
Low Lag: ZLEMA (Zero Lag), HMA (Hull), JMA (Jurik), DEMA, TEMA
Statistical/Math: LSMA (Least Squares), GMMA (Geometric Mean), FLSMA (Fisher Least Squares)
Advanced/Exotic:
ALMA (Arnaud Legoux)
EIT (Ehlers Instantaneous Trend)
ESD (Ehlers Simple Decycler)
AHMA (Ahrens)
BMF (Blackman Filter)
CMA (Corrective)
DSWF (Damped Sine Wave)
EVWMA (Elastic Vol. Weighted)
HCF (Hybrid Convolution)
LMA (Leo)
MD (McGinley Dynamic)
MF (Modular Filter)
MM (Moving Median)
QMA (Quick)
RPMA (Repulsion)
RSRMA (Right Sided Ricker)
SMMA (Smoothed)
SSMA (Shapeshifting)
SWMA (Sine Weighted)
TMA (Triangular)
TSF (True Strength Force)
VBMA (Variable Band)
NodialTreesLows2: ML Random Forest / Pivot Lows (Part 2 of 2)
Title: `Library: ML Random Forest / Pivot Lows (Part 2 of 2)`
Description:
This library contains the second half (Trees 6-11) of the Random Forest Classifier designed to validate Pivot Lows (Long setups).
It is a direct extension of NodialTreesL1 and cannot be used alone. Due to Pine Script's compilation limits on complexity and file size, the 12-tree ensemble model has been split into two separate libraries.
### 🧩 Library Contents
This module exports the following methods representing the specific decision paths of the trained AI model:
- `tree_6(array<float> f)`
- `tree_7(array<float> f)`
- `tree_8(array<float> f)`
- `tree_9(array<float> f)`
- `tree_10(array<float> f)`
- `tree_11(array<float> f)`
### ⚠️ Implementation Guide
To use this library, you must combine it with Part 1.
Please refer to the NodialTreesLows1 library description for:
1. The full Integration Code Example (how to average the votes).
2. The exact Input Feature List (the 27 required metrics).
3. Detailed explanation of the Machine Learning logic.
How to finish the integration:
Import this library alongside Part 1 and add the results of `tree_6` through `tree_11` to your voting sum, as shown in the Part 1 documentation.
NodialTreesLows1: ML Random Forest / Pivot Lows (Part 1 of 2)
Title: `Library: ML Random Forest / Pivot Lows (Part 1 of 2)`
Description:
This library contains the first half (Trees 0-5) of a Random Forest Classifier designed to validate Pivot Lows (Long setups).
Due to Pine Script size constraints, the model is split into two libraries. You must use this library in conjunction with NodialTreesL2 to run the full ensemble.
### 🧩 System Architecture
- Model: Random Forest (12 Trees total).
- This Library: Contains `tree_0` to `tree_5`.
- Logic: Each tree analyzes a feature array and outputs a probability score.
- Target: Validating Swing Lows / Support Bounces.
### 📊 Input Requirements
The methods expect an `array<float>` of size 27 containing market features (Price Action, Momentum, Volatility, Volume, Structure). The exact order of features is critical for the model's accuracy.
### 🛠️ Integration Example
Since this is a modular library, you need to import both parts and average their results to get the final prediction.
### 📋 Feature Mapping (Array Indexing)
To get accurate predictions, the input array must contain exactly 27 floats in this specific order:
0. Timeframe (in seconds)
1. RSI (Raw Value)
2. MACD Histogram
3. Relative Volume
4. EMA Distance (%)
5. EMA Slope
6. ATR Ratio
7. ADX
8. Buying/Selling Pressure
9. Wick Ratio
10-16. Divergences & Pattern Flags (Boolean 0.0/1.0)
17-22. Proprietary Momentum Metrics ("Onion" Structure)
23-26. Derived Volatility/Volume Features
*Note: For the advanced proprietary metrics (Indices 17-26), users must implement their own calculations or use compatible indicators.*
//@version=6
indicator("My ML Long Strategy", overlay=true)
// Import BOTH libraries
import YourUsername/NodialTreesL1/1 as rf_part1
import YourUsername/NodialTreesL2/1 as rf_part2
// ... (Calculate your 27 features and fill the array) ...
// var features = array.from(timeframe, rsi, macd, ...)
// Calculate Ensemble Probability (Average of 12 Trees)
float vote_sum = 0.0
// Trees from Part 1
vote_sum += rf_part1.tree_0(features)
vote_sum += rf_part1.tree_1(features)
vote_sum += rf_part1.tree_2(features)
vote_sum += rf_part1.tree_3(features)
vote_sum += rf_part1.tree_4(features)
vote_sum += rf_part1.tree_5(features)
// Trees from Part 2 (Trees 6-11)
vote_sum += rf_part2.tree_6(features)
vote_sum += rf_part2.tree_7(features)
vote_sum += rf_part2.tree_8(features)
vote_sum += rf_part2.tree_9(features)
vote_sum += rf_part2.tree_10(features)
vote_sum += rf_part2.tree_11(features)
// Final Probability (0.0 to 1.0)
float final_prob = vote_sum / 12.0
if final_prob > 0.60
label.new(bar_index, low, "Valid Low", color=color.green)
NodialTreesHighs2: ML Random Forest / Pivot Highs (Part 2 of 2)
Title: `Library: ML Random Forest / Pivot Highs (Part 2 of 2)`
Description:
This library contains the second half (Trees 6-11) of the Random Forest Classifier designed to validate Pivot Highs (Short setups).
It is a direct extension of NodialTreesH1 and cannot be used alone. Due to Pine Script's compilation limits on complexity and file size, the 12-tree ensemble model has been split into two separate libraries.
### 🧩 Library Contents
This module exports the following methods representing the specific decision paths of the trained AI model:
- `tree_6(array<float> f)`
- `tree_7(array<float> f)`
- `tree_8(array<float> f)`
- `tree_9(array<float> f)`
- `tree_10(array<float> f)`
- `tree_11(array<float> f)`
### ⚠️ Implementation Guide
To use this library, you must combine it with Part 1.
Please refer to the NodialTreesH1 library description for:
1. The full Integration Code Example (how to average the votes).
2. The exact Input Feature List (the 27 required metrics).
3. Detailed explanation of the Machine Learning logic.
How to finish the integration:
Import this library alongside Part 1 and add the results of `tree_6` through `tree_11` to your voting sum, as shown in the Part 1 documentation.
NodialTreesHighs1: ML Random Forest / Pivot Highs (Part 1 of 2)
Title: `Library: ML Random Forest / Pivot Highs (Part 1 of 2)`
Description:
This library contains the first half (Trees 0-5) of a Random Forest Classifier designed to validate Pivot Highs (Short setups).
Due to Pine Script size constraints, the model is split into two libraries. You must use this library in conjunction with NodialTreesH2 to run the full ensemble.
### 🧩 System Architecture
- Model: Random Forest (12 Trees total).
- This Library: Contains `tree_0` to `tree_5`.
- Logic: Each tree analyzes a feature array and outputs a probability score.
- Target: Validating Swing Highs / Resistance Rejections.
### 📊 Input Requirements
The methods expect an `array<float>` of size 27 containing market features (Price Action, Momentum, Volatility, Volume, Structure). The exact order of features is critical for the model's accuracy.
### 🛠️ Integration Example
Since this is a modular library, you need to import both parts and average their results to get the final prediction.
### 📋 Feature Mapping (Array Indexing)
To get accurate predictions, the input array must contain exactly 27 floats in this specific order:
0. Timeframe (in seconds)
1. RSI (Raw Value)
2. MACD Histogram
3. Relative Volume
4. EMA Distance (%)
5. EMA Slope
6. ATR Ratio
7. ADX
8. Buying/Selling Pressure
9. Wick Ratio
10-16. Divergences & Pattern Flags (Boolean 0.0/1.0)
17-22. Proprietary Momentum Metrics ("Onion" Structure)
23-26. Derived Volatility/Volume Features
*Note: For the advanced proprietary metrics (Indices 17-26), users must implement their own calculations or use compatible indicators.*
//@version=6
indicator("My ML Short Strategy", overlay=true)
// Import BOTH libraries
import YourUsername/NodialTreesH1/1 as rf_part1
import YourUsername/NodialTreesH2/1 as rf_part2
// ... (Calculate your 27 features and fill the array) ...
// var features = array.from(timeframe, rsi, macd, ...)
// Calculate Ensemble Probability (Average of 12 Trees)
float vote_sum = 0.0
// Trees from Part 1
vote_sum += rf_part1.tree_0(features)
vote_sum += rf_part1.tree_1(features)
vote_sum += rf_part1.tree_2(features)
vote_sum += rf_part1.tree_3(features)
vote_sum += rf_part1.tree_4(features)
vote_sum += rf_part1.tree_5(features)
// Trees from Part 2 (Trees 6-11)
vote_sum += rf_part2.tree_6(features)
vote_sum += rf_part2.tree_7(features)
vote_sum += rf_part2.tree_8(features)
vote_sum += rf_part2.tree_9(features)
vote_sum += rf_part2.tree_10(features)
vote_sum += rf_part2.tree_11(features)
// Final Probability (0.0 to 1.0)
float final_prob = vote_sum / 12.0
if final_prob > 0.60
label.new(bar_index, high, "Valid Short", color=color.red)
NormalizedVolume_HHHL
Normalized volume + HH/HL/LH/LL structure logic for confirming moves and spotting traps. Library only—intended for use with indicators. Does not plot or draw anything by itself.
PineStats
█ OVERVIEW
PineStats is a comprehensive statistical analysis library for Pine Script v6, providing 104 functions across 6 modules. Built for quantitative traders, researchers, and indicator developers who need professional-grade statistics without reinventing the wheel.
Use it for building mean-reversion strategies, analyzing return distributions, measuring correlations, or testing for market regimes.
█ MODULES
CORE STATISTICS (20 functions)
• Central tendency: mean, median, WMA, EMA
• Dispersion: variance, stdev, MAD, range
• Standardization: z-score, robust z-score, normalize, percentile
• Distribution shape: skewness, kurtosis
PROBABILITY DISTRIBUTIONS (17 functions)
• Normal: PDF, CDF, inverse CDF (quantile function)
• Power-law: Hill estimator, MLE alpha, survival function
• Exponential: PDF, CDF, rate estimation
• Normality testing: Jarque-Bera test
ENTROPY (9 functions)
• Shannon entropy (information theory)
• Tsallis entropy (non-extensive, fat-tail sensitive)
• Permutation entropy (ordinal patterns)
• Approximate entropy (regularity measure)
• Entropy-based regime detection
PROBABILITY (21 functions)
• Win rates and expected value
• First passage time estimation
• TP/SL probability analysis
• Conditional probability and Bayes updates
• Streak and drawdown probabilities
REGRESSION (19 functions)
• Linear regression: slope, intercept, forecast
• Goodness of fit: R², adjusted R², standard error
• Statistical tests: t-statistic, p-value, significance
• Trend analysis: strength, angle, acceleration
• Quadratic regression
CORRELATION (18 functions)
• Pearson, Spearman, Kendall correlation
• Covariance, beta, alpha (Jensen's)
• Rolling correlation analysis
• Autocorrelation and cross-correlation
• Information ratio, tracking error
█ QUICK START
import HenriqueCentieiro/PineStats/1 as stats
// Z-score for mean reversion
z = stats.zscore(close, 20)
// Test if returns are normally distributed
returns = (close - close[1]) / close[1]
isGaussian = stats.is_normal(returns, 100, 0.05)
// Regression channel
[basis, upper, lower] = stats.linreg_channel(close, 50, 2.0) // tuple element names illustrative
// Correlation with benchmark
spyReturns = request.security("SPY", timeframe.period, close / close[1] - 1)
beta = stats.beta(returns, spyReturns, 60)
█ USE CASES
✓ Mean Reversion — z-scores, percentiles, Bollinger-style analysis
✓ Regime Detection — entropy measures, correlation regimes
✓ Risk Analysis — drawdown probability, VaR via quantiles
✓ Strategy Evaluation — expected value, win rates, R:R analysis
✓ Distribution Analysis — normality tests, fat-tail detection
✓ Multi-Asset — beta, alpha, correlation, relative strength
█ NOTES
• All functions return `na` on invalid inputs
• Designed for Pine Script v6
• Fully documented in the library header
• Part of the Pine ecosystem: PineStats, PineQuant, PineCriticality, PineWavelet
█ REFERENCES
• Abramowitz & Stegun — Normal CDF approximation
• Acklam's algorithm — Inverse normal CDF
• Hill estimator — Power-law tail estimation
• Tsallis statistics — Non-extensive entropy
Full documentation in the library header.
mean(src, length)
Calculates the arithmetic mean (simple moving average) over a lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Arithmetic mean of the last `length` values, or `na` if inputs invalid
wma_custom(src, length)
Calculates weighted moving average with linearly decreasing weights
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Weighted moving average, or `na` if inputs invalid
ema_custom(src, length)
Calculates exponential moving average
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Exponential moving average, or `na` if inputs invalid
median(src, length)
Calculates the median value over a lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Median value, or `na` if inputs invalid
variance(src, length)
Calculates population variance over a lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Population variance, or `na` if inputs invalid
stdev(src, length)
Calculates population standard deviation over a lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Population standard deviation, or `na` if inputs invalid
mad(src, length)
Calculates Median Absolute Deviation (MAD) - robust dispersion measure
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: MAD value, or `na` if inputs invalid
data_range(src, length)
Calculates the range (highest - lowest) over a lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Range value, or `na` if inputs invalid
zscore(src, length)
Calculates z-score (number of standard deviations from mean)
Parameters:
src (float) : Source series
length (simple int) : Lookback period for mean and stdev calculation (must be >= 2)
Returns: Z-score, or `na` if inputs invalid or stdev is zero
zscore_robust(src, length)
Calculates robust z-score using median and MAD (resistant to outliers)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 2)
Returns: Robust z-score, or `na` if inputs invalid or MAD is zero
normalize(src, length)
Normalizes value to the range [0, 1] using min-max scaling
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Normalized value in [0, 1], or `na` if inputs invalid or range is zero
percentile(src, length)
Calculates percentile rank of current value within lookback window
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Percentile rank (0 to 100), or `na` if inputs invalid
winsorize(src, length, lower_pct, upper_pct)
Winsorizes values by clamping to percentile bounds (reduces outlier impact)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
lower_pct (simple float) : Lower percentile bound (0-100, e.g., 5 for 5th percentile)
upper_pct (simple float) : Upper percentile bound (0-100, e.g., 95 for 95th percentile)
Returns: Winsorized value clamped to bounds
skewness(src, length)
Calculates sample skewness (measure of distribution asymmetry)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 3)
Returns: Skewness value (negative = left tail, positive = right tail), or `na` if invalid
kurtosis(src, length)
Calculates excess kurtosis (measure of distribution tail heaviness)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 4)
Returns: Excess kurtosis (>0 = heavy tails, <0 = light tails), or `na` if invalid
count_valid(src, length)
Counts non-na values in lookback window (useful for data quality checks)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Count of valid (non-na) values
sum(src, length)
Calculates sum over lookback period
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 1)
Returns: Sum of values, or `na` if inputs invalid
cumsum(src)
Calculates cumulative sum (running total from first bar)
Parameters:
src (float) : Source series
Returns: Cumulative sum
change(src, length)
Returns the change (difference) from n bars ago
Parameters:
src (float) : Source series
length (simple int) : Number of bars to look back (must be >= 1)
Returns: Current value minus value from `length` bars ago
roc(src, length)
Calculates Rate of Change (percentage change from n bars ago)
Parameters:
src (float) : Source series
length (simple int) : Number of bars to look back (must be >= 1)
Returns: Percentage change as decimal (0.05 = 5%), or `na` if invalid
normal_pdf_standard(x)
Calculates the standard normal probability density function (PDF)
Parameters:
x (float) : The value to evaluate
Returns: PDF value at x for standard normal N(0,1)
normal_pdf(x, mu, sigma)
Calculates the normal probability density function (PDF)
Parameters:
x (float) : The value to evaluate
mu (float) : Mean of the distribution (default: 0)
sigma (float) : Standard deviation (default: 1, must be > 0)
Returns: PDF value at x for normal N(mu, sigma²)
normal_cdf_standard(x)
Calculates the standard normal cumulative distribution function (CDF)
Parameters:
x (float) : The value to evaluate
Returns: Probability P(X <= x) for standard normal N(0,1)
@description Uses Abramowitz & Stegun approximation (formula 7.1.26), accurate to ~1.5e-7
normal_cdf(x, mu, sigma)
Calculates the normal cumulative distribution function (CDF)
Parameters:
x (float) : The value to evaluate
mu (float) : Mean of the distribution (default: 0)
sigma (float) : Standard deviation (default: 1, must be > 0)
Returns: Probability P(X <= x) for normal N(mu, sigma²)
normal_inv_standard(p)
Calculates the inverse standard normal CDF (quantile function)
Parameters:
p (float) : Probability value (must be in (0, 1))
Returns: x such that P(X <= x) = p for standard normal N(0,1)
@description Uses Acklam's algorithm, accurate to ~1.15e-9
normal_inv(p, mu, sigma)
Calculates the inverse normal CDF (quantile function)
Parameters:
p (float) : Probability value (must be in (0, 1))
mu (float) : Mean of the distribution
sigma (float) : Standard deviation (must be > 0)
Returns: x such that P(X <= x) = p for normal N(mu, sigma²)
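For example, here is a minimal usage sketch of the normal-distribution helpers. The `stat` alias and import path are placeholders, not this library's published path:
```pine
//@version=6
indicator("Normal CDF sketch")
import user/StatsLib/1 as stat // hypothetical path: substitute this library's actual import
// z-score of price against its 50-bar mean, then its Gaussian tail probability
float z = (close - ta.sma(close, 50)) / ta.stdev(close, 50)
float pBelow = stat.normal_cdf_standard(z)  // P(X <= z) under N(0,1)
float z95 = stat.normal_inv_standard(0.95)  // ~1.645, the 95% quantile, for reference
plot(pBelow, "P(X <= z)")
```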
power_law_alpha(src, length, tail_pct)
Estimates power-law exponent (alpha) using Hill estimator
Parameters:
src (float) : Source series (typically absolute returns or drawdowns)
length (simple int) : Lookback period (must be >= 10 for reliable estimates)
tail_pct (simple float) : Percentage of data to use for tail estimation (default: 0.1 = top 10%)
Returns: Estimated alpha (tail index), typically 2-4 for financial data
@description Alpha < 2 indicates infinite variance (very heavy tails)
@description Alpha < 3 indicates infinite kurtosis
@description Alpha > 4 suggests near-Gaussian behavior
power_law_alpha_mle(src, length, x_min)
Estimates power-law alpha using maximum likelihood (Clauset method)
Parameters:
src (float) : Source series (positive values expected)
length (simple int) : Lookback period (must be >= 20)
x_min (float) : Minimum threshold for power-law behavior
Returns: Estimated alpha using MLE
power_law_pdf(x, alpha, x_min)
Calculates power-law probability density (Pareto Type I)
Parameters:
x (float) : Value to evaluate (must be >= x_min)
alpha (float) : Power-law exponent (must be > 1)
x_min (float) : Minimum value / scale parameter (must be > 0)
Returns: PDF value
power_law_survival(x, alpha, x_min)
Calculates power-law survival function P(X > x)
Parameters:
x (float) : Value to evaluate (must be >= x_min)
alpha (float) : Power-law exponent (must be > 1)
x_min (float) : Minimum value / scale parameter (must be > 0)
Returns: Probability of exceeding x
power_law_ks(src, length, alpha, x_min)
Tests if data follows power-law using simplified Kolmogorov-Smirnov
Parameters:
src (float) : Source series
length (simple int) : Lookback period
alpha (float) : Estimated alpha from power_law_alpha()
x_min (float) : Threshold value
Returns: KS statistic (lower = better fit, typically < 0.1 for good fit)
is_power_law(src, length, tail_pct, ks_threshold)
Simple test if distribution appears to follow power-law
Parameters:
src (float) : Source series
length (simple int) : Lookback period
tail_pct (simple float) : Tail percentage for alpha estimation
ks_threshold (simple float) : Maximum KS statistic for acceptance (default: 0.1)
Returns: true if KS test suggests power-law fit
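A sketch of the tail-index workflow under the same hypothetical `stat` alias; the 0.01 `x_min` threshold is illustrative:
```pine
//@version=6
indicator("Tail index sketch")
import user/StatsLib/1 as stat // hypothetical path
float absRet = math.abs(math.log(close / close[1]))
float alpha = stat.power_law_alpha(absRet, 500, 0.1)    // Hill estimator on the top 10% of the tail
float ks = stat.power_law_ks(absRet, 500, alpha, 0.01)  // fit quality above an illustrative 1% move
bool heavyTails = stat.is_power_law(absRet, 500, 0.1, 0.1)
plot(alpha, "Estimated alpha")
```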
exp_pdf(x, lambda)
Calculates exponential probability density function
Parameters:
x (float) : Value to evaluate (must be >= 0)
lambda (float) : Rate parameter (must be > 0)
Returns: PDF value
exp_cdf(x, lambda)
Calculates exponential cumulative distribution function
Parameters:
x (float) : Value to evaluate (must be >= 0)
lambda (float) : Rate parameter (must be > 0)
Returns: Probability P(X <= x)
exp_lambda(src, length)
Estimates exponential rate parameter (lambda) using MLE
Parameters:
src (float) : Source series (positive values)
length (simple int) : Lookback period
Returns: Estimated lambda (1/mean)
jarque_bera(src, length)
Calculates Jarque-Bera test statistic for normality
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 10)
Returns: JB statistic (higher = more deviation from normality)
@description Under normality, JB ~ chi-squared(2). JB > 6 suggests non-normality at 5% level
is_normal(src, length, significance)
Tests if distribution is approximately normal
Parameters:
src (float) : Source series
length (simple int) : Lookback period
significance (simple float) : Significance level (default: 0.05)
Returns: true if Jarque-Bera test does not reject normality
shannon_entropy(src, length, n_bins)
Calculates Shannon entropy from a probability distribution
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 10)
n_bins (simple int) : Number of histogram bins for discretization (default: 10)
Returns: Shannon entropy in bits (log base 2)
@description Higher entropy = more randomness/uncertainty, lower = more predictability
shannon_entropy_norm(src, length, n_bins)
Calculates normalized Shannon entropy
Parameters:
src (float) : Source series
length (simple int) : Lookback period
n_bins (simple int) : Number of histogram bins
Returns: Normalized entropy where 0 = perfectly predictable, 1 = maximum randomness
tsallis_entropy(src, length, q, n_bins)
Calculates Tsallis entropy with q-parameter
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 10)
q (float) : Entropic index (q=1 recovers Shannon entropy)
n_bins (simple int) : Number of histogram bins
Returns: Tsallis entropy value
@description q < 1: emphasizes rare events (fat tails)
@description q = 1: equivalent to Shannon entropy
@description q > 1: emphasizes common events
optimal_q(src, length)
Estimates optimal q parameter from kurtosis
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Estimated q value that best captures the distribution's tail behavior
@description Uses relationship: q ≈ (5 + kurtosis) / (3 + kurtosis) for kurtosis > 0
tsallis_q_gaussian(x, q, beta)
Calculates Tsallis q-Gaussian probability density
Parameters:
x (float) : Value to evaluate
q (float) : Tsallis q parameter (must be < 3)
beta (float) : Width parameter (inverse temperature, must be > 0)
Returns: q-Gaussian PDF value
@description q=1 recovers standard Gaussian
permutation_entropy(src, length, order)
Calculates permutation entropy (ordinal pattern complexity)
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 20)
order (simple int) : Embedding dimension / pattern length (2-5, default: 3)
Returns: Normalized permutation entropy
@description Measures complexity of temporal ordering patterns
@description 0 = perfectly predictable sequence, 1 = random
approx_entropy(src, length, m, r)
Calculates Approximate Entropy (ApEn) - regularity measure
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 50)
m (simple int) : Embedding dimension (default: 2)
r (simple float) : Tolerance as fraction of stdev (default: 0.2)
Returns: Approximate entropy value (higher = more irregular/complex)
@description Lower ApEn indicates more self-similarity and predictability
entropy_regime(src, length, q, n_bins)
Detects market regime based on entropy level
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback period
q (float) : Tsallis q parameter (use optimal_q() or default 1.5)
n_bins (simple int) : Number of histogram bins
Returns: Regime indicator: -1 = trending (low entropy), 0 = transition, 1 = ranging (high entropy)
entropy_risk(src, length)
Calculates entropy-based risk indicator
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback period
Returns: Risk score where 1 = maximum divergence from Gaussian
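A sketch combining the entropy regime and risk calls (hypothetical `stat` alias; lookback and bin choices are illustrative):
```pine
//@version=6
indicator("Entropy regime sketch", overlay=true)
import user/StatsLib/1 as stat // hypothetical path
float ret = math.log(close / close[1])
float q = stat.optimal_q(ret, 200)                    // tune q to the observed tail behavior
float regime = stat.entropy_regime(ret, 200, q, 10)   // -1 trending, 0 transition, 1 ranging
float risk = stat.entropy_risk(ret, 200)
bgcolor(regime == -1 ? color.new(color.green, 85) : regime == 1 ? color.new(color.orange, 85) : na)
```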
hit_rate(src, length)
Calculates hit rate (probability of positive outcome) over lookback
Parameters:
src (float) : Source series (positive values count as hits)
length (simple int) : Lookback period
Returns: Hit rate as decimal
hit_rate_cond(condition, length)
Calculates hit rate for custom condition over lookback
Parameters:
condition (bool) : Boolean series (true = hit)
length (simple int) : Lookback period
Returns: Hit rate as decimal
expected_value(src, length)
Calculates expected value of a series
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Expected value (mean)
expected_value_trade(win_prob, take_profit, stop_loss)
Calculates expected value for a trade with TP and SL levels
Parameters:
win_prob (float) : Probability of hitting TP (0-1)
take_profit (float) : Take profit in price units or %
stop_loss (float) : Stop loss in price units or % (positive value)
Returns: Expected value per trade
@description EV = (win_prob * TP) - ((1 - win_prob) * SL)
breakeven_winrate(take_profit, stop_loss)
Calculates breakeven win rate for given TP/SL ratio
Parameters:
take_profit (float) : Take profit distance
stop_loss (float) : Stop loss distance
Returns: Required win rate for breakeven (EV = 0)
reward_risk_ratio(take_profit, stop_loss)
Calculates the reward-to-risk ratio
Parameters:
take_profit (float) : Take profit distance
stop_loss (float) : Stop loss distance
Returns: R:R ratio
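These three reduce to simple arithmetic, so a worked example needs no library calls; the numbers below just instantiate the documented formulas:
```pine
//@version=6
indicator("Trade EV sketch")
float tp = 2.0       // take profit distance (2R)
float sl = 1.0       // stop loss distance (1R)
float winProb = 0.45
float ev = winProb * tp - (1 - winProb) * sl // (0.45 * 2) - (0.55 * 1) = 0.35 per trade
float breakeven = sl / (tp + sl)             // EV = 0 at a 1/3 win rate
float rr = tp / sl                           // reward-to-risk = 2.0
plot(ev, "Expected value")
```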
fpt_probability(src, length, target, max_bars)
Estimates probability of price reaching target within N bars
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback for volatility estimation
target (float) : Target move (in same units as src, e.g., % return)
max_bars (simple int) : Maximum bars to consider
Returns: Probability of reaching target within max_bars
@description Based on random walk with drift approximation
fpt_mean(src, length, target)
Estimates mean first passage time to target level
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback for volatility estimation
target (float) : Target move
Returns: Expected number of bars to reach target (can be infinite)
fpt_historical(src, length, target)
Counts historical bars to reach target from each point
Parameters:
src (float) : Source series (typically price or returns)
length (simple int) : Lookback period
target (float) : Target move from each starting point
Returns: Array of first passage times (na if target not reached within lookback)
tp_probability(src, length, tp_distance, sl_distance)
Estimates probability of hitting TP before SL
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback for estimation
tp_distance (float) : Take profit distance (positive)
sl_distance (float) : Stop loss distance (positive)
Returns: Probability of TP being hit first
trade_probability(src, length, tp_pct, sl_pct)
Calculates complete trade probability and EV analysis
Parameters:
src (float) : Source series (typically returns)
length (simple int) : Lookback period
tp_pct (float) : Take profit percentage
sl_pct (float) : Stop loss percentage
Returns: Tuple:
cond_prob(condition_a, condition_b, length)
Calculates conditional probability P(B|A) from historical data
Parameters:
condition_a (bool) : Condition A (the given condition)
condition_b (bool) : Condition B (the outcome)
length (simple int) : Lookback period
Returns: P(B|A) = P(A and B) / P(A)
bayes_update(prior, likelihood, false_positive)
Updates probability using Bayes' theorem
Parameters:
prior (float) : Prior probability P(H)
likelihood (float) : P(E|H) - probability of evidence given hypothesis
false_positive (float) : P(E|~H) - probability of evidence given hypothesis is false
Returns: Posterior probability P(H|E)
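The parameters map onto the standard Bayes identity, so the update can be checked by hand (values illustrative):
```pine
//@version=6
indicator("Bayes update sketch")
float prior = 0.30      // P(H): base rate of an uptrend
float likelihood = 0.70 // P(E|H): signal fires given an uptrend
float falsePos = 0.20   // P(E|~H): signal fires anyway
// Posterior = P(H)P(E|H) / (P(H)P(E|H) + P(~H)P(E|~H)) = 0.21 / 0.35 = 0.60
float posterior = prior * likelihood / (prior * likelihood + (1 - prior) * falsePos)
plot(posterior, "P(H|E)")
```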
streak_prob(win_rate, streak_length)
Calculates probability of N consecutive wins given win rate
Parameters:
win_rate (float) : Single-trade win probability
streak_length (simple int) : Number of consecutive wins
Returns: Probability of streak
losing_streak_prob(win_rate, streak_length)
Calculates probability of experiencing N consecutive losses
Parameters:
win_rate (float) : Single-trade win probability
streak_length (simple int) : Number of consecutive losses
Returns: Probability of losing streak
drawdown_prob(src, length, dd_threshold)
Estimates probability of drawdown exceeding threshold
Parameters:
src (float) : Source series (returns)
length (simple int) : Lookback period
dd_threshold (float) : Drawdown threshold (as positive decimal, e.g., 0.10 = 10%)
Returns: Historical probability of exceeding drawdown threshold
prob_to_odds(prob)
Calculates odds from probability
Parameters:
prob (float) : Probability (0-1)
Returns: Odds (prob / (1 - prob))
odds_to_prob(odds)
Calculates probability from odds
Parameters:
odds (float) : Odds ratio
Returns: Probability (0-1)
implied_prob(decimal_odds)
Calculates implied probability from decimal odds (betting)
Parameters:
decimal_odds (float) : Decimal odds (e.g., 2.5 means $2.50 return per $1 bet)
Returns: Implied probability
logit(prob)
Calculates log-odds (logit) from probability
Parameters:
prob (float) : Probability (must be in (0, 1))
Returns: Log-odds
inv_logit(log_odds)
Calculates probability from log-odds (inverse logit / sigmoid)
Parameters:
log_odds (float) : Log-odds value
Returns: Probability (0-1)
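The conversions are closed-form and round-trip cleanly; this sketch uses only built-in math:
```pine
//@version=6
indicator("Odds and logit sketch")
float p = 0.75
float odds = p / (1 - p)                  // 3.0: three wins per loss
float logOdds = math.log(odds)            // logit(0.75) ~ 1.0986
float back = 1 / (1 + math.exp(-logOdds)) // sigmoid recovers 0.75
plot(back, "Round-tripped probability")
```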
linreg_slope(src, length)
Calculates linear regression slope
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 2)
Returns: Slope coefficient (change per bar)
linreg_intercept(src, length)
Calculates linear regression intercept
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 2)
Returns: Intercept (predicted value at oldest bar in window)
linreg_value(src, length)
Calculates predicted value at current bar using linear regression
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Predicted value at current bar (end of regression line)
linreg_forecast(src, length, offset)
Forecasts value N bars ahead using linear regression
Parameters:
src (float) : Source series
length (simple int) : Lookback period for regression
offset (simple int) : Bars ahead to forecast (positive = future)
Returns: Forecasted value
linreg_channel(src, length, mult)
Calculates linear regression channel with bands
Parameters:
src (float) : Source series
length (simple int) : Lookback period
mult (simple float) : Standard deviation multiplier for bands
Returns: Tuple [midline, upper band, lower band]
r_squared(src, length)
Calculates R-squared (coefficient of determination)
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: R² value where 1 = perfect linear fit
adj_r_squared(src, length)
Calculates adjusted R-squared (accounts for sample size)
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Adjusted R² value
std_error(src, length)
Calculates standard error of estimate (residual standard deviation)
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Standard error
residual(src, length)
Calculates residual at current bar
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Residual (actual - predicted)
residuals(src, length)
Returns array of all residuals in lookback window
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Array of residuals
t_statistic(src, length)
Calculates t-statistic for slope coefficient
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: T-statistic (slope / standard error of slope)
slope_pvalue(src, length)
Approximates p-value for slope t-test (two-tailed)
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Approximate p-value
is_significant(src, length, alpha)
Tests if regression slope is statistically significant
Parameters:
src (float) : Source series
length (simple int) : Lookback period
alpha (simple float) : Significance level (default: 0.05)
Returns: true if slope is significant at alpha level
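A sketch combining the slope diagnostics above (hypothetical `stat` alias; signatures as documented):
```pine
//@version=6
indicator("Slope significance sketch")
import user/StatsLib/1 as stat // hypothetical path
int len = 50
float slope = stat.linreg_slope(close, len)
float pval = stat.slope_pvalue(close, len)
bool sig = stat.is_significant(close, len, 0.05)
plot(slope, "Slope", color = sig ? color.green : color.gray)
```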
trend_strength(src, length)
Calculates normalized trend strength based on R² and slope
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Trend strength where sign indicates direction
trend_angle(src, length)
Calculates trend angle in degrees
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Angle in degrees (positive = uptrend, negative = downtrend)
linreg_acceleration(src, length)
Calculates trend acceleration (second derivative)
Parameters:
src (float) : Source series
length (simple int) : Lookback period for each regression
Returns: Acceleration (change in slope)
linreg_deviation(src, length)
Calculates deviation from regression line in standard error units
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Deviation in standard error units (like z-score)
quadreg_coefficients(src, length)
Fits quadratic regression and returns coefficients
Parameters:
src (float) : Source series
length (simple int) : Lookback period (must be >= 4)
Returns: Tuple [a, b, c] for y = a*x² + b*x + c
quadreg_value(src, length)
Calculates quadratic regression value at current bar
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: Predicted value from quadratic fit
correlation(x, y, length)
Calculates Pearson correlation coefficient between two series
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period (must be >= 3)
Returns: Correlation coefficient
covariance(x, y, length)
Calculates sample covariance between two series
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period (must be >= 2)
Returns: Covariance value
beta(asset, benchmark, length)
Calculates beta coefficient (slope of the regression of asset returns on benchmark returns)
Parameters:
asset (float) : Asset returns series
benchmark (float) : Benchmark returns series
length (simple int) : Lookback period
Returns: Beta coefficient
@description Beta = Cov(asset, benchmark) / Var(benchmark)
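Since Cov(a, b) / Var(b) = Corr(a, b) * Stdev(a) / Stdev(b), the beta value can be cross-checked with built-ins alone. A sketch (the "SPY" benchmark symbol is illustrative):
```pine
//@version=6
indicator("Beta cross-check sketch")
float assetRet = math.log(close / close[1])
float benchRet = request.security("SPY", timeframe.period, math.log(close / close[1]))
int len = 60
// Cov(a,b) / Var(b) rewritten through correlation and standard deviations
float beta = ta.correlation(assetRet, benchRet, len) * ta.stdev(assetRet, len) / ta.stdev(benchRet, len)
plot(beta, "Rolling beta")
```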
alpha(asset, benchmark, length, risk_free)
Calculates alpha (Jensen's alpha / intercept)
Parameters:
asset (float) : Asset returns series
benchmark (float) : Benchmark returns series
length (simple int) : Lookback period
risk_free (float) : Risk-free rate (default: 0)
Returns: Alpha value (excess return not explained by beta)
spearman(x, y, length)
Calculates Spearman rank correlation coefficient
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period (must be >= 3)
Returns: Spearman correlation
@description More robust to outliers than Pearson correlation
kendall_tau(x, y, length)
Calculates Kendall's tau rank correlation (simplified)
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period (must be >= 3)
Returns: Kendall's tau
correlation_change(x, y, length, change_period)
Calculates change in correlation over time
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period for correlation
change_period (simple int) : Period over which to measure change
Returns: Change in correlation
correlation_regime(x, y, length, ma_length)
Detects correlation regime based on level and stability
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period for correlation
ma_length (simple int) : Moving average length for smoothing
Returns: Regime: -1 = negative, 0 = uncorrelated, 1 = positive
correlation_stability(x, y, length, stability_length)
Calculates correlation stability (inverse of volatility)
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback for correlation
stability_length (simple int) : Lookback for stability calculation
Returns: Stability score where 1 = perfectly stable
relative_strength(asset, benchmark, length)
Calculates relative strength of asset vs benchmark
Parameters:
asset (float) : Asset price series
benchmark (float) : Benchmark price series
length (simple int) : Smoothing period
Returns: Relative strength ratio (normalized)
tracking_error(asset, benchmark, length)
Calculates tracking error (standard deviation of excess returns)
Parameters:
asset (float) : Asset returns
benchmark (float) : Benchmark returns
length (simple int) : Lookback period
Returns: Tracking error (annualize by multiplying by sqrt(252) for daily data)
information_ratio(asset, benchmark, length)
Calculates information ratio (risk-adjusted excess return)
Parameters:
asset (float) : Asset returns
benchmark (float) : Benchmark returns
length (simple int) : Lookback period
Returns: Information ratio
capture_ratio(asset, benchmark, length, up_capture)
Calculates up/down capture ratio
Parameters:
asset (float) : Asset returns
benchmark (float) : Benchmark returns
length (simple int) : Lookback period
up_capture (simple bool) : If true, calculate up capture; if false, down capture
Returns: Capture ratio
autocorrelation(src, length, lag)
Calculates autocorrelation at specified lag
Parameters:
src (float) : Source series
length (simple int) : Lookback period
lag (simple int) : Lag for autocorrelation (default: 1)
Returns: Autocorrelation at specified lag
partial_autocorr(src, length)
Calculates partial autocorrelation at lag 1
Parameters:
src (float) : Source series
length (simple int) : Lookback period
Returns: PACF at lag 1 (equals ACF at lag 1)
autocorr_test(src, length, max_lag)
Tests for significant autocorrelation (Ljung-Box inspired)
Parameters:
src (float) : Source series
length (simple int) : Lookback period
max_lag (simple int) : Maximum lag to test
Returns: Sum of squared autocorrelations (higher = more autocorrelation)
cross_correlation(x, y, length, lag)
Calculates cross-correlation at specified lag
Parameters:
x (float) : First series
y (float) : Second series (lagged)
length (simple int) : Lookback period
lag (simple int) : Lag to apply to y (positive = y leads x)
Returns: Cross-correlation at specified lag
cross_correlation_peak(x, y, length, max_lag)
Finds lag with maximum cross-correlation
Parameters:
x (float) : First series
y (float) : Second series
length (simple int) : Lookback period
max_lag (simple int) : Maximum lag to search (both directions)
Returns: Tuple [best lag, correlation at that lag]
# MovingAverages Library - PineScript v6
A comprehensive PineScript v6 library containing **50+ Moving Average calculations** for TradingView.
---
## 📦 Installation
```pinescript
import TheTradingSpiderMan/moving_averages/1 as MA
```
---
## 📊 All Available Moving Averages (50+)
### Basic Moving Averages
| Function | Selector Key | Description |
| -------- | ------------ | ------------------------------------------ |
| `sma()` | `SMA` | Simple Moving Average - arithmetic mean |
| `ema()` | `EMA` | Exponential Moving Average |
| `wma()` | `WMA` | Weighted Moving Average |
| `vwma()` | `VWMA` | Volume Weighted Moving Average |
| `rma()` | `RMA` | Relative/Smoothed Moving Average |
| `smma()` | `SMMA` | Smoothed Moving Average (alias for RMA) |
| `swma()` | - | Symmetrically Weighted MA (4-period fixed) |
### Hull Family
| Function | Selector Key | Description |
| -------- | ------------ | ------------------------------- |
| `hma()` | `HMA` | Hull Moving Average |
| `ehma()` | `EHMA` | Exponential Hull Moving Average |
### Double/Triple Smoothed
| Function | Selector Key | Description |
| -------------- | ------------ | --------------------------------- |
| `dema()` | `DEMA` | Double Exponential Moving Average |
| `tema()` | `TEMA` | Triple Exponential Moving Average |
| `tma()` | `TMA` | Triangular Moving Average |
| `t3()` | `T3` | Tillson T3 Moving Average |
| `twma()` | `TWMA` | Triple Weighted Moving Average |
| `swwma()` | `SWWMA` | Smoothed Weighted Moving Average |
| `trixSmooth()` | `TRIXSMOOTH` | Triple EMA Smoothed |
### Zero/Low Lag
| Function | Selector Key | Description |
| --------- | ------------ | ----------------------------------- |
| `zlema()` | `ZLEMA` | Zero Lag Exponential MA |
| `lsma()` | `LSMA` | Least Squares Moving Average |
| `epma()` | `EPMA` | Endpoint Moving Average |
| `ilrs()` | `ILRS` | Integral of Linear Regression Slope |
### Adaptive Moving Averages
| Function | Selector Key | Description |
| ---------- | ------------ | ------------------------------- |
| `kama()` | `KAMA` | Kaufman Adaptive Moving Average |
| `frama()` | `FRAMA` | Fractal Adaptive Moving Average |
| `vidya()` | `VIDYA` | Variable Index Dynamic Average |
| `vma()` | `VMA` | Variable Moving Average |
| `vama()` | `VAMA` | Volume Adjusted Moving Average |
| `rvma()` | `RVMA` | Rolling VMA |
| `apexMA()` | `APEXMA` | Apex Moving Average |
### Ehlers Filters
| Function | Selector Key | Description |
| ----------------- | --------------- | --------------------------------- |
| `superSmoother()` | `SUPERSMOOTHER` | Ehlers Super Smoother |
| `butterworth2()` | `BUTTERWORTH2` | 2-Pole Butterworth Filter |
| `butterworth3()` | `BUTTERWORTH3` | 3-Pole Butterworth Filter |
| `instantTrend()` | `INSTANTTREND` | Ehlers Instantaneous Trendline |
| `edsma()` | `EDSMA` | Deviation Scaled Moving Average |
| `mama()` | `MAMA` | Mesa Adaptive Moving Average |
| `fama()` | `FAMAVAL` | Following Adaptive Moving Average |
### Laguerre Family
| Function | Selector Key | Description |
| -------------------- | ------------------ | ------------------------ |
| `laguerreFilter()` | `LAGUERRE` | Laguerre Filter |
| `adaptiveLaguerre()` | `ADAPTIVELAGUERRE` | Adaptive Laguerre Filter |
### Special Weighted
| Function | Selector Key | Description |
| ---------- | ------------ | -------------------------------- |
| `alma()` | `ALMA` | Arnaud Legoux Moving Average |
| `sinwma()` | `SINWMA` | Sine Weighted Moving Average |
| `gwma()` | `GWMA` | Gaussian Weighted Moving Average |
| `nma()` | `NMA` | Natural Moving Average |
### Jurik/McGinley/Coral
| Function | Selector Key | Description |
| ------------ | ------------ | --------------------- |
| `jma()` | `JMA` | Jurik Moving Average |
| `mcginley()` | `MCGINLEY` | McGinley Dynamic |
| `coral()` | `CORAL` | Coral Trend Indicator |
### Mean Types
| Function | Selector Key | Description |
| -------------- | ------------ | ------------------------- |
| `medianMA()` | `MEDIANMA` | Median Moving Average |
| `gma()` | `GMA` | Geometric Moving Average |
| `harmonicMA()` | `HARMONICMA` | Harmonic Moving Average |
| `trimmedMA()` | `TRIMMEDMA` | Trimmed Moving Average |
| `cma()` | `CMA` | Cumulative Moving Average |
### Volume-Based
| Function | Selector Key | Description |
| --------- | ------------ | -------------------------- |
| `evwma()` | `EVWMA` | Elastic Volume Weighted MA |
### Other Specialized
| Function | Selector Key | Description |
| ----------------- | --------------- | --------------------------- |
| `hwma()` | `HWMA` | Holt-Winters Moving Average |
| `gdema()` | `GDEMA` | Generalized DEMA |
| `rema()` | `REMA` | Regularized EMA |
| `modularFilter()` | `MODULARFILTER` | Modular Filter |
| `rmt()` | `RMT` | Recursive Moving Trendline |
| `qrma()` | `QRMA` | Quadratic Regression MA |
| `wilderSmooth()` | `WILDERSMOOTH` | Welles Wilder Smoothing |
| `leoMA()` | `LEOMA` | Leo Moving Average |
| `ahrensMA()` | `AHRENSMA` | Ahrens Moving Average |
| `runningMA()` | `RUNNINGMA` | Running Moving Average |
| `ppoMA()` | `PPOMA` | PPO-based Moving Average |
| `fisherMA()` | `FISHERMA` | Fisher Transform MA |
---
## 🎯 Helper Functions
| Function | Description |
| ---------------- | ------------------------------------------------------------- |
| `wcp()` | Weighted Close Price: (H+L+2\*C)/4 |
| `typicalPrice()` | Typical Price: (H+L+C)/3 |
| `medianPrice()` | Median Price: (H+L)/2 |
| `selector()` | **Master selector** - choose any MA by string name |
| `getAllTypes()` | Returns all supported MA type names as comma-separated string |
---
## 🔧 Usage Examples
### Basic Usage
```pinescript
//@version=6
indicator("MA Example")
import TheTradingSpiderMan/moving_averages/1 as MA
// Simple calls
plot(MA.sma(close, 20), "SMA 20", color.blue)
plot(MA.ema(close, 20), "EMA 20", color.red)
plot(MA.hma(close, 20), "HMA 20", color.green)
```
### Using the Selector Function (50+ MA Types)
```pinescript
//@version=6
indicator("MA Selector")
import TheTradingSpiderMan/moving_averages/1 as MA
// Full list of all supported types:
// SMA,EMA,WMA,VWMA,RMA,SMMA,HMA,EHMA,DEMA,TEMA,TMA,T3,TWMA,SWWMA,TRIXSMOOTH,
// ZLEMA,LSMA,EPMA,ILRS,KAMA,FRAMA,VIDYA,VMA,VAMA,RVMA,APEXMA,SUPERSMOOTHER,
// BUTTERWORTH2,BUTTERWORTH3,INSTANTTREND,EDSMA,LAGUERRE,ADAPTIVELAGUERRE,
// ALMA,SINWMA,GWMA,NMA,JMA,MCGINLEY,CORAL,MEDIANMA,GMA,HARMONICMA,TRIMMEDMA,
// EVWMA,HWMA,GDEMA,REMA,MODULARFILTER,RMT,QRMA,WILDERSMOOTH,LEOMA,AHRENSMA,
// RUNNINGMA,PPOMA,MAMA,FAMAVAL,FISHERMA,CMA
maType = input.string("EMA", "MA Type", options=["SMA", "EMA", "WMA", "VWMA", "HMA", "ZLEMA", "KAMA", "ALMA", "JMA", "T3"]) // extend with any selector keys from the list above
length = input.int(20, "Length")
plot(MA.selector(close, length, maType), "Selected MA", color.orange)
```
### Advanced Moving Averages
```pinescript
//@version=6
indicator("Advanced MAs")
import TheTradingSpiderMan/moving_averages/1 as MA
// ALMA with custom offset and sigma
plot(MA.alma(close, 20, 0.85, 6), "ALMA", color.purple)
// KAMA with custom fast/slow periods
plot(MA.kama(close, 10, 2, 30), "KAMA", color.teal)
// T3 with custom volume factor
plot(MA.t3(close, 20, 0.7), "T3", color.yellow)
// Laguerre Filter with custom gamma
plot(MA.laguerreFilter(close, 0.8), "Laguerre", color.lime)
```
---
## 📈 MA Selection Guide
| Use Case | Recommended MAs |
| ---------------------- | ------------------------------------------- |
| **Trend Following** | EMA, DEMA, TEMA, HMA, CORAL |
| **Low Lag Required** | ZLEMA, HMA, EHMA, JMA, LSMA |
| **Volatile Markets** | KAMA, VIDYA, FRAMA, VMA, ADAPTIVELAGUERRE |
| **Smooth Signals** | T3, LAGUERRE, SUPERSMOOTHER, BUTTERWORTH2/3 |
| **Support/Resistance** | SMA, WMA, TMA, MEDIANMA |
| **Scalping** | MCGINLEY, ZLEMA, HMA, INSTANTTREND |
| **Noise Reduction** | MAMA, EDSMA, GWMA, TRIMMEDMA |
| **Volume-Based** | VWMA, EVWMA, VAMA |
---
## ⚙️ Parameters Reference
### Common Parameters
- `src` - Source series (close, open, hl2, hlc3, etc.)
- `len` - Period length (integer)
### Special Parameters
- `alma()`: `offset` (0-1), `sigma` (curve shape)
- `kama()`: `fastLen`, `slowLen`
- `t3()`: `vFactor` (volume factor)
- `jma()`: `phase` (-100 to 100)
- `laguerreFilter()`: `gamma` (0-1 damping)
- `rema()`: `lambda` (regularization)
- `modularFilter()`: `beta` (sensitivity)
- `gdema()`: `mult` (multiplier, 2 = standard DEMA)
- `trimmedMA()`: `trimPct` (0-0.5, percentage to trim)
- `mama()/fama()`: `fastLimit`, `slowLimit`
- `adaptiveLaguerre()`: Uses `len` for adaptation period
---
## 📝 Notes
- All 50+ functions are exported for use in any PineScript v6 indicator/strategy
- The `selector()` function supports **all MA types** via string key
- Use `getAllTypes()` to get a comma-separated list of all supported MA names
- Some MAs (CMA, INSTANTTREND, LAGUERRE, MAMA) don't use the `len` parameter
- Use `nz()` wrapper if handling potential NA values in your calculations
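
A minimal sketch tying these notes together (illustrative; `selector()` and `getAllTypes()` behave as documented above):
```pinescript
//@version=6
indicator("Selector notes sketch", overlay=true)
import TheTradingSpiderMan/moving_averages/1 as MA
// Print the full selector key list once for reference
if barstate.isfirst
    log.info(MA.getAllTypes())
// Wrap in nz() so early-bar na values fall back to close
float ma = nz(MA.selector(close, 20, "JMA"), close)
plot(ma, "Selected MA")
```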
---
**Author:** thetradingspiderman
**Version:** 1.0
**PineScript Version:** 6
**Total MA Types:** 50+
SpatialIndex
You can start using this now by inserting this at the top of your indicator, strategy, or library:
import ArunaReborn/SpatialIndex/1 as SI
Overview
SpatialIndex is a high-performance Pine Script library that implements price-bucketed spatial indexing for efficient proximity queries on large datasets. Instead of scanning through hundreds or thousands of items linearly (O(n)), this library provides O(k) bucket lookup where k is typically just a handful of buckets, dramatically improving performance for price-based filtering operations.
This library works with any data type through index-based references, making it universally applicable for support/resistance levels, pivot points, order zones, pattern detection points, Fair Value Gaps, and any other price-based data that needs frequent proximity queries.
Why This Library Exists
The Problem
When building advanced technical indicators that track large numbers of price levels (support/resistance zones, pivot points, order blocks, etc.), you often need to answer questions like:
- *"Which levels are within 5% of the current price?"*
- *"What zones overlap with this price range?"*
- *"Are there any significant levels near my entry point?"*
The naive approach is to loop through every single item and check its price. For 500 levels across multiple timeframes, this means 500 comparisons every bar. On instruments with thousands of historical bars, this quickly becomes a performance bottleneck that can cause scripts to time out or lag.
The Solution
SpatialIndex solves this by organizing items into price buckets, like filing cabinets organized by price range. When you query for items near $50,000, the library only looks in the relevant buckets (e.g., the $49,000-$51,000 range), ignoring all other price regions entirely.
Performance Example:
- Linear scan: Check 500 items = 500 comparisons per query
- Spatial index: Check 3-5 buckets with ~10 items each = 30-50 comparisons per query
- Result: 10-16x faster queries
Key Features
Core Capabilities
- ✅ Generic Design : Works with any data type via index references
- ✅ Multiple Index Strategies : Fixed bucket size or ATR-based dynamic sizing
- ✅ Range Support : Index items that span price ranges (zones, gaps, channels)
- ✅ Efficient Queries : O(k) bucket lookup instead of O(n) linear scan
- ✅ Multiple Query Types : Proximity percentage, fixed range, exact price with tolerance
- ✅ Dynamic Updates : Add, remove, update items in O(1) time
- ✅ Batch Operations : Efficient bulk removal and reindexing
- ✅ Query Caching : Optional caching for repeated queries within same bar
- ✅ Statistics & Debugging : Built-in stats and diagnostic functions
Advanced Features
- ATR-Based Bucketing : Automatically adjusts bucket sizes based on volatility
- Multi-Bucket Spanning : Items that span ranges are indexed in all overlapping buckets
- Reindexing Support : Handles array removals with automatic index shifting
- Cache Management : Configurable query caching with automatic invalidation
- Empty Bucket Cleanup : Automatically removes empty buckets to minimize memory
How It Works
The Bucketing Concept
Think of price space as divided into discrete buckets, like a histogram:
```
Price Range:  $98-$100   $100-$102   $102-$104   $104-$106   $106-$108
Bucket Key:      49          50          51          52          53
Items:         [0, 3]       [7]         []         [2, 5]       [1]     <- item indices (illustrative)
```
When you query for items near $103:
1. Calculate which buckets overlap the $101.50-$104.50 range (keys 50, 51, 52)
2. Return only the items stored in those buckets
3. Never check items in buckets 49 or 53
Bucket Size Selection
Fixed Size Mode:
```pine
var SI.SpatialBucket index = SI.newSpatialBucket(2.0) // $2 per bucket
```
- Good for: Instruments with stable price ranges
- Example: For stocks trading at $100, 2.0 = 2% increments
ATR-Based Mode:
```pine
float atr = ta.atr(14)
var SI.SpatialBucket index = SI.newSpatialBucketATR(1.0, atr) // 1x ATR per bucket
SI.updateATR(index, atr) // Update each bar
```
- Good for: Instruments with varying volatility
- Adapts automatically to market conditions
- 1.0 multiplier = one bucket spans one ATR unit
Optimal Bucket Size:
The library includes a helper function to calculate optimal size:
```pine
float optimalSize = SI.calculateOptimalBucketSize(close, 5.0) // For 5% proximity queries
```
This ensures queries span approximately 3 buckets for optimal performance.
Index-Based Architecture
The library doesn't store your actual data—it only stores indices that point to your external arrays:
```pine
// Your data
var array<float> levels = array.new<float>()
var array<string> types = array.new<string>()
var array<int> ages = array.new<int>()
// Your index
var SI.SpatialBucket index = SI.newSpatialBucket(2.0)
// Add a level
array.push(levels, 50000.0)
array.push(types, "support")
array.push(ages, 0)
SI.add(index, array.size(levels) - 1, 50000.0) // Store index 0
// Query near current price
SI.QueryResult result = SI.queryProximity(index, close, 5.0)
for idx in result.indices
    float level = array.get(levels, idx)
    string type = array.get(types, idx)
    // Work with your actual data
```
This design means:
- ✅ Works with any data structure you define
- ✅ No data duplication
- ✅ Minimal memory footprint
- ✅ Full control over your data
---
Usage Guide
Basic Setup
```pine
// Import library
import username/SpatialIndex/1 as SI
// Create index
var SI.SpatialBucket index = SI.newSpatialBucket(2.0)
// Your data arrays
var array<float> supportLevels = array.new<float>()
var array<int> touchCounts = array.new<int>()
```
Adding Items
Single Price Point:
```pine
// Add a support level at $50,000
array.push(supportLevels, 50000.0)
array.push(touchCounts, 1)
int levelIdx = array.size(supportLevels) - 1
SI.add(index, levelIdx, 50000.0)
```
Price Range (Zones/Gaps):
```pine
// Add a resistance zone from $51,000 to $52,000
array.push(zoneBottoms, 51000.0)
array.push(zoneTops, 52000.0)
int zoneIdx = array.size(zoneBottoms) - 1
SI.addRange(index, zoneIdx, 51000.0, 52000.0) // Indexed in all overlapping buckets
```
Querying Items
Proximity Query (Percentage):
```pine
// Find all levels within 5% of current price
SI.QueryResult result = SI.queryProximity(index, close, 5.0)
if array.size(result.indices) > 0
    for idx in result.indices
        float level = array.get(supportLevels, idx)
        // Process nearby level
```
Fixed Range Query:
```pine
// Find all items between $49,000 and $51,000
SI.QueryResult result = SI.queryRange(index, 49000.0, 51000.0)
```
Exact Price with Tolerance:
```pine
// Find items at exactly $50,000 +/- $100
SI.QueryResult result = SI.queryAt(index, 50000.0, 100.0)
```
Removing Items
Safe Removal Pattern:
```pine
SI.QueryResult result = SI.queryProximity(index, close, 5.0)
if array.size(result.indices) > 0
    // IMPORTANT: Sort descending to safely remove from arrays
    array<int> sorted = SI.sortIndicesDescending(result)
    for idx in sorted
        // Remove from index
        SI.remove(index, idx)
        // Remove from your data arrays
        array.remove(supportLevels, idx)
        array.remove(touchCounts, idx)
        // Reindex to maintain consistency
        SI.reindexAfterRemoval(index, idx)
```
Batch Removal (More Efficient):
```pine
// Collect indices to remove
array<int> toRemove = array.new<int>()
if array.size(supportLevels) > 0
    for i = 0 to array.size(supportLevels) - 1
        if array.get(touchCounts, i) > 10 // Remove old levels
            array.push(toRemove, i)
// Remove in descending order from data arrays
array<int> sorted = array.copy(toRemove)
array.sort(sorted, order.descending)
for idx in sorted
    SI.remove(index, idx)
    array.remove(supportLevels, idx)
    array.remove(touchCounts, idx)
// Batch reindex (much faster than individual reindexing)
SI.reindexAfterBatchRemoval(index, toRemove)
```
Updating Items
```pine
// Update a level's price (e.g., after refinement)
float newPrice = 50100.0
SI.update(index, levelIdx, newPrice)
array.set(supportLevels, levelIdx, newPrice)
// Update a zone's range
SI.updateRange(index, zoneIdx, 51000.0, 52500.0)
array.set(zoneBottoms, zoneIdx, 51000.0)
array.set(zoneTops, zoneIdx, 52500.0)
```
Query Caching
For repeated queries within the same bar:
```pine
// Create cache (persistent)
var SI.CachedQuery cache = SI.newCachedQuery()
// Cached query (returns cached result if parameters match)
SI.QueryResult result = SI.queryProximityCached(
     index,
     cache,
     close,
     5.0, // proximity%
     1)   // cache duration in bars
// Invalidate cache when index changes significantly
if bigChangeDetected
    SI.invalidateCache(cache)
```
---
Practical Examples
Example 1: Support/Resistance Finder
```pine
//@version=6
indicator("S/R with Spatial Index", overlay=true)
import username/SpatialIndex/1 as SI
// Data storage
var array<float> levels = array.new<float>()
var array<string> types = array.new<string>() // "support" or "resistance"
var array<int> touches = array.new<int>()
var array<int> ages = array.new<int>()
// Spatial index
var SI.SpatialBucket index = SI.newSpatialBucket(close * 0.02) // 2% buckets
// Detect pivots
bool isPivotHigh = not na(ta.pivothigh(high, 5, 5))
bool isPivotLow = not na(ta.pivotlow(low, 5, 5))
// Add new levels
if isPivotHigh
    array.push(levels, high[5])
    array.push(types, "resistance")
    array.push(touches, 1)
    array.push(ages, 0)
    SI.add(index, array.size(levels) - 1, high[5])
if isPivotLow
    array.push(levels, low[5])
    array.push(types, "support")
    array.push(touches, 1)
    array.push(ages, 0)
    SI.add(index, array.size(levels) - 1, low[5])
// Find nearby levels (fast!)
SI.QueryResult nearby = SI.queryProximity(index, close, 3.0) // Within 3%
// Process nearby levels
for idx in nearby.indices
    float level = array.get(levels, idx)
    string type = array.get(types, idx)
    // Check for touch (price crossed the level this bar)
    if type == "support" and low <= level and low[1] > level
        array.set(touches, idx, array.get(touches, idx) + 1)
    else if type == "resistance" and high >= level and high[1] < level
        array.set(touches, idx, array.get(touches, idx) + 1)
// Age and cleanup old levels (descending loop keeps removals index-safe)
if array.size(ages) > 0
    for i = array.size(ages) - 1 to 0
        array.set(ages, i, array.get(ages, i) + 1)
        // Remove levels older than 500 bars or with 5+ touches
        if array.get(ages, i) > 500 or array.get(touches, i) >= 5
            SI.remove(index, i)
            array.remove(levels, i)
            array.remove(types, i)
            array.remove(touches, i)
            array.remove(ages, i)
            SI.reindexAfterRemoval(index, i)
// Visualization
for idx in nearby.indices
    line.new(bar_index, array.get(levels, idx), bar_index + 10, array.get(levels, idx),
         color = array.get(types, idx) == "support" ? color.green : color.red)
```
Example 2: Multi-Timeframe Zone Detector
```pine
//@version=6
indicator("MTF Zones", overlay=true)
import username/SpatialIndex/1 as SI
// Store zones from multiple timeframes
var array<float> zoneBottoms = array.new<float>()
var array<float> zoneTops = array.new<float>()
var array<string> zoneTimeframes = array.new<string>()
// ATR-based spatial index for adaptive bucketing
float atr14 = ta.atr(14)
var SI.SpatialBucket index = SI.newSpatialBucketATR(1.0, atr14)
SI.updateATR(index, atr14) // Update bucket size with volatility
// Request higher timeframe data
[htf_high, htf_low] = request.security(syminfo.tickerid, "240", [high, low])
// Detect HTF zones
if not na(htf_high) and not na(htf_low)
    float zoneTop = htf_high
    float zoneBottom = htf_low * 0.995 // 0.5% zone thickness
    // Check if zone already exists nearby
    SI.QueryResult existing = SI.queryRange(index, zoneBottom, zoneTop)
    if array.size(existing.indices) == 0 // No overlapping zones
        // Add new zone
        array.push(zoneBottoms, zoneBottom)
        array.push(zoneTops, zoneTop)
        array.push(zoneTimeframes, "4H")
        int idx = array.size(zoneBottoms) - 1
        SI.addRange(index, idx, zoneBottom, zoneTop)
// Query zones near current price
SI.QueryResult nearbyZones = SI.queryProximity(index, close, 2.0) // Within 2%
// Highlight nearby zones
for idx in nearbyZones.indices
    box.new(bar_index - 50, array.get(zoneTops, idx),
         bar_index, array.get(zoneBottoms, idx),
         bgcolor = color.new(color.blue, 90))
```
Example 3: Performance Comparison
```pine
//@version=6
indicator("Spatial Index Performance Test")
import username/SpatialIndex/1 as SI
// Generate 500 random levels
var array<float> levels = array.new<float>()
var SI.SpatialBucket index = SI.newSpatialBucket(close * 0.02)
if bar_index == 0
    for i = 0 to 499
        float randomLevel = close * (0.9 + math.random() * 0.2) // +/- 10%
        array.push(levels, randomLevel)
        SI.add(index, i, randomLevel)
// Method 1: Linear scan (naive approach)
int linearCount = 0
float proximityPct = 5.0
float lowBand = close * (1 - proximityPct/100)
float highBand = close * (1 + proximityPct/100)
for i = 0 to array.size(levels) - 1
    float level = array.get(levels, i)
    if level >= lowBand and level <= highBand
        linearCount += 1
// Method 2: Spatial index query
SI.QueryResult result = SI.queryProximity(index, close, proximityPct)
int spatialCount = array.size(result.indices)
// Compare performance
plot(result.queryCount, "Items Examined (Spatial)", color=color.green)
plot(linearCount, "Items Examined (Linear)", color=color.red)
plot(spatialCount, "Results Found", color=color.blue)
// Spatial index typically examines 10-50 items vs 500 for linear scan!
```
API Reference Summary
Initialization
- `newSpatialBucket(bucketSize)` - Fixed bucket size
- `newSpatialBucketATR(atrMultiplier, atrValue)` - ATR-based buckets
- `updateATR(sb, newATR)` - Update ATR for dynamic sizing
Adding Items
- `add(sb, itemIndex, price)` - Add item at single price point
- `addRange(sb, itemIndex, priceBottom, priceTop)` - Add item spanning range
Querying
- `queryProximity(sb, refPrice, proximityPercent)` - Query by percentage
- `queryRange(sb, priceBottom, priceTop)` - Query fixed range
- `queryAt(sb, price, tolerance)` - Query exact price with tolerance
- `queryProximityCached(sb, cache, refPrice, pct, duration)` - Cached query
Removing & Updating
- `remove(sb, itemIndex)` - Remove item
- `update(sb, itemIndex, newPrice)` - Update item price
- `updateRange(sb, itemIndex, newBottom, newTop)` - Update item range
- `reindexAfterRemoval(sb, removedIndex)` - Reindex after single removal
- `reindexAfterBatchRemoval(sb, removedIndices)` - Batch reindex
- `clear(sb)` - Remove all items
Utilities
- `size(sb)` - Get item count
- `isEmpty(sb)` - Check if empty
- `contains(sb, itemIndex)` - Check if item exists
- `getStats(sb)` - Get debug statistics string
- `calculateOptimalBucketSize(price, pct)` - Calculate optimal bucket size
- `sortIndicesDescending(result)` - Sort for safe removal
- `sortIndicesAscending(result)` - Sort ascending
Performance Characteristics
Time Complexity
- Add : O(1) for single point, O(m) for range spanning m buckets
- Remove : O(1) lookup + O(b) bucket cleanup where b = buckets item spans
- Query : O(k) where k = buckets in range (typically 3-5) vs O(n) linear scan
- Update : O(1) removal + O(1) addition = O(1) total
Space Complexity
- Memory per item : ~8 bytes for index reference + map overhead
- Bucket overhead : Proportional to price range coverage
- Typical usage : For 500 items with 50 active buckets ≈ 4-8KB total
Scalability
- ✅ 100 items : ~5-10x faster than linear scan
- ✅ 500 items : ~10-15x faster
- ✅ 1000+ items : ~15-20x faster
- ⚠️ Performance degrades if bucket size is too small (too many buckets)
- ⚠️ Performance degrades if bucket size is too large (too many items per bucket)
Best Practices
Bucket Size Selection
1. Start with 2-5% of asset price for percentage-based queries
2. Use ATR-based mode for volatile assets or multi-symbol scripts
3. Test bucket size using `calculateOptimalBucketSize()` function
4. Monitor with `getStats()` to ensure reasonable bucket count
Memory Management
1. Clear old items regularly to prevent unbounded growth
2. Use age tracking to remove stale data
3. Set maximum item limits based on your needs
4. Batch removals are more efficient than individual removals
Query Optimization
1. Use caching for repeated queries within same bar
2. Invalidate cache when index changes significantly
3. Sort results descending before removal iteration
4. Batch operations when possible (reindexing, removal)
Data Consistency
1. Always reindex after removal to maintain index alignment
2. Remove from arrays in descending order to avoid index shifting issues
3. Use batch reindex for multiple simultaneous removals
4. Keep external arrays and index in sync at all times
Limitations & Caveats
Known Limitations
- Not suitable for exact price matching : Use tolerance with `queryAt()`
- Bucket size affects performance : Too small = many buckets, too large = many items per bucket
- Memory usage : Scales with price range coverage and item count
- Reindexing overhead : Removing items mid-array requires index shifting
When NOT to Use
- ❌ Datasets with < 50 items (linear scan is simpler)
- ❌ Items that change price every bar (constant reindexing overhead)
- ❌ When you need ALL items every time (no benefit over arrays)
- ❌ Exact price level matching without tolerance (use maps instead)
When TO Use
- ✅ Large datasets (100+ items) with occasional queries
- ✅ Proximity-based filtering (% of price, ATR-based ranges)
- ✅ Multi-timeframe level tracking
- ✅ Zone/range overlap detection
- ✅ Price-based spatial filtering
---
Technical Details
Bucketing Algorithm
Items are assigned to buckets using integer division:
```
bucketKey = floor((price - basePrice) / bucketSize)
```
For ATR-based mode:
```
effectiveBucketSize = atrValue × atrMultiplier
bucketKey = floor((price - basePrice) / effectiveBucketSize)
```
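In Pine terms the key computation is a one-liner. A sketch with `basePrice` fixed at 0 (illustrative, not the library's internal code):
```pine
// Mirrors the formula above; basePrice assumed 0 for brevity
f_bucketKey(float price, float bucketSize) =>
    int(math.floor(price / bucketSize))
// With $2 buckets: f_bucketKey(99.0, 2.0) == 49, f_bucketKey(103.0, 2.0) == 51
```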
Range Indexing
Items spanning price ranges are indexed in all overlapping buckets to ensure accurate range queries. The midpoint bucket is designated as the "primary bucket" for removal operations.
Index Consistency
The library maintains two maps:
1. `buckets`: Maps bucket keys → IntArray wrappers containing item indices
2. `itemToBucket`: Maps item indices → primary bucket key (for O(1) removal)
This dual-mapping ensures both fast queries and fast removal while maintaining consistency.
Implementation Note: Pine Script doesn't allow nested collections (map containing arrays directly), so the library uses an `IntArray` wrapper type to hold arrays within the map structure. This is an internal implementation detail that doesn't affect usage.
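A structural sketch of that wrapper pattern (type and field names are illustrative, not the library's actual internals):
```pine
// Pine maps can't hold arrays directly, so wrap the array in a user-defined type
type IntArray
    array<int> items
var map<int, IntArray> buckets = map.new<int, IntArray>()  // bucket key -> item indices
var map<int, int> itemToBucket = map.new<int, int>()       // item index -> primary bucket key
```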
---
Version History
Version 1.0
- Initial release
Credits & License
License : Mozilla Public License 2.0 (TradingView default)
Library Type : Open-source educational resource
This library is designed as a public domain utility for the Pine Script community. As per TradingView library rules, this code can be freely reused by other authors. If you use this library in your scripts, please provide appropriate credit as required by House Rules.
Summary
SpatialIndex is a specialized library that solves a specific problem: fast proximity queries on large price-based datasets . If you're building indicators that track hundreds of levels, zones, or price points and need to frequently filter by proximity to current price, this library can provide 10-20x performance improvements over naive linear scanning.
The index-based architecture makes it universally applicable to any data type, and the ATR-based bucketing ensures it adapts to market conditions automatically. Combined with query caching and batch operations, it provides a complete solution for spatial data management in Pine Script.
Use this library when speed matters and your dataset is large.
MLMatrixLib
Overview
MLMatrixLib is a comprehensive Pine Script v6 library implementing machine learning algorithms using native matrix operations. This library provides traders and developers with a toolkit of statistical and ML methods for building quantitative trading systems, performing data analysis, and creating adaptive indicators.
How It Works
The library leverages Pine Script's native matrix type to perform efficient linear algebra operations. Each algorithm is implemented from first principles, using matrix decomposition, iterative optimization, and statistical estimation techniques. All functions are designed for numerical stability with careful handling of edge cases.
---
Library Contents (34 Sections)
Section 1: Utility Functions & Matrix Operations
Core building blocks including:
• identity(n) - Creates n×n identity matrix
• diagonal(values) - Creates diagonal matrix from array
• ones(rows, cols) / zeros(rows, cols) - Matrix constructors
• frobeniusNorm(m) / l1Norm(m) - Matrix norm calculations
• hadamard(m1, m2) - Element-wise multiplication
• columnMeans(m) / rowMeans(m) - Statistical aggregations
• standardize(m) - Z-score normalization (zero mean, unit variance)
• minMaxNormalize(m) - Scale values to range
• fitStandardScaler(m) / fitMinMaxScaler(m) - Reusable scaler parameters
• addBiasColumn(m) - Prepend column of ones for regression
• arrayMedian(arr) / arrayPercentile(arr, p) - Array statistics
Section 2: Activation Functions
Numerically stable implementations:
• sigmoid(x) / sigmoidMatrix(m) - Logistic function with overflow protection
• tanhActivation(x) / tanhMatrix(m) - Hyperbolic tangent
• relu(x) / reluMatrix(m) - Rectified Linear Unit
• leakyRelu(x, alpha) - Leaky ReLU with configurable slope
• elu(x, alpha) - Exponential Linear Unit
• Derivatives for backpropagation: sigmoidDerivative, tanhDerivative, reluDerivative
Section 3: Linear Regression (OLS)
Ordinary Least Squares implementation using the normal equation (X'X)⁻¹X'y (see the sketch after this list):
• fitLinearRegression(X, y) - Fits model, returns coefficients, R², standard error
• fitSimpleLinearRegression(x, y) - Single-variable regression
• predictLinear(model, X) - Generate predictions
• predictionInterval(model, X, confidence) - Confidence intervals using t-distribution
• Model type stores: coefficients, R-squared, residuals, standard error
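A sketch of the core normal-equation step using Pine's native matrix operations (illustrative, not the library's exact code; assumes X already carries a bias column):
```
// beta = (X'X)^-1 X'y via native matrix ops
solve_ols(matrix<float> X, matrix<float> y) =>
    matrix<float> Xt = matrix.transpose(X)
    matrix<float> XtX = matrix.mult(Xt, X) // X'X
    matrix<float> Xty = matrix.mult(Xt, y) // X'y
    matrix.mult(matrix.inv(XtX), Xty)      // coefficient column vector
```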
Section 4: Weighted Linear Regression
Generalized least squares with observation weights:
• fitWeightedLinearRegression(X, y, weights) - Solves (X'WX)⁻¹X'Wy
• Useful for downweighting outliers or emphasizing recent data
Section 5: Polynomial Regression
Fits polynomials of arbitrary degree:
• fitPolynomialRegression(x, y, degree) - Constructs Vandermonde matrix
• predictPolynomial(model, x) - Evaluate polynomial at points
Section 6: Ridge Regression (L2 Regularization)
Adds penalty term λ||β||² to prevent overfitting:
• fitRidgeRegression(X, y, lambda) - Solves (X'X + λI)⁻¹X'y
• Lambda parameter controls regularization strength
Section 7: LASSO Regression (L1 Regularization)
Coordinate descent algorithm for sparse solutions:
• fitLassoRegression(X, y, lambda, maxIter, tolerance) - Iterative soft-thresholding
• Produces sparse coefficients by driving some to exactly zero
• softThreshold(x, lambda) - Core shrinkage operator
Section 8: Elastic Net (L1 + L2 Regularization)
Combines LASSO and Ridge penalties:
• fitElasticNet(X, y, lambda, alpha, maxIter, tolerance)
• Alpha balances L1 vs L2: alpha=1 is LASSO, alpha=0 is Ridge
Section 9: Huber Robust Regression
Iteratively Reweighted Least Squares (IRLS) for outlier resistance:
• fitHuberRegression(X, y, delta, maxIter, tolerance)
• Delta parameter defines transition between L1 and L2 loss
• Downweights observations with large residuals
Section 10: Quantile Regression
Estimates conditional quantiles using linear programming approximation:
• fitQuantileRegression(X, y, tau, maxIter, tolerance)
• Tau specifies quantile (0.5 = median, 0.25 = lower quartile, etc.)
Section 11: Logistic Regression (Binary Classification)
Gradient descent optimization of cross-entropy loss:
• fitLogisticRegression(X, y, learningRate, maxIter, tolerance)
• predictProbability(model, X) - Returns probabilities
• predictClass(model, X, threshold) - Returns binary predictions
Section 12: Linear SVM (Support Vector Machine)
Sub-gradient descent with hinge loss:
• fitLinearSVM(X, y, C, learningRate, maxIter)
• C parameter controls regularization (higher = harder margin)
• predictSVM(model, X) - Returns class predictions
Section 13: Recursive Least Squares (RLS)
Online learning with exponential forgetting:
• createRLSState(nFeatures, lambda, delta) - Initialize state
• updateRLS(state, x, y) - Update with new observation
• Lambda is forgetting factor (0.95-0.99 typical)
• Useful for adaptive indicators that update incrementally
Section 14: Covariance and Correlation
Matrix statistics:
• covarianceMatrix(m) - Sample covariance
• correlationMatrix(m) - Pearson correlations
• pearsonCorrelation(x, y) - Single correlation coefficient
• spearmanCorrelation(x, y) - Rank-based correlation
Section 15: Principal Component Analysis (PCA)
Dimensionality reduction via eigendecomposition:
• fitPCA(X, nComponents) - Power iteration method
• transformPCA(X, model) - Project data onto principal components
• Returns components, explained variance, and mean
Section 16: K-Means Clustering
Lloyd's algorithm with k-means++ initialization:
• fitKMeans(X, k, maxIter, tolerance) - Cluster data points
• predictCluster(model, X) - Assign new points to clusters
• withinClusterVariance(model) - Measure cluster compactness
Section 17: Gaussian Mixture Model (GMM)
Expectation-Maximization algorithm:
• fitGMM(X, k, maxIter, tolerance) - Soft clustering with probabilities
• predictProbaGMM(model, X) - Returns membership probabilities
• Models data as mixture of Gaussian distributions
Section 18: Kalman Filter
Linear state estimation (see the sketch after this list):
• createKalman1D(processNoise, measurementNoise, ...) - 1D filter
• createKalman2D(processNoise, measurementNoise) - Position + velocity tracking
• kalmanStep(state, measurement) - Predict-update cycle
• Optimal filtering for noisy measurements
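A scalar predict-update sketch showing the cycle the library implements (illustrative constants, not the library's exact code):
```
//@version=6
indicator("1D Kalman sketch", overlay=true)
float q = 0.01        // process noise
float r = 1.0         // measurement noise
var float est = close // state estimate
var float p = 1.0     // estimate variance
p += q                         // predict: uncertainty grows
float k = p / (p + r)          // Kalman gain
est := est + k * (close - est) // update toward the new measurement
p := (1 - k) * p               // shrink uncertainty after the update
plot(est, "Filtered price")
```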
Section 19: K-Nearest Neighbors (KNN)
Instance-based learning:
• fitKNN(X, y) - Store training data
• predictKNN(model, X, k) - Classify by majority vote
• predictKNNRegression(model, X, k) - Average of k neighbors
• predictKNNWeighted(model, X, k) - Distance-weighted voting
Section 20: Neural Network (Feedforward)
Multi-layer perceptron:
• createNeuralNetwork(architecture) - Define layer sizes
• trainNeuralNetwork(nn, X, y, learningRate, epochs) - Backpropagation
• predictNN(nn, X) - Forward pass
• Supports configurable hidden layers
Section 21: Naive Bayes Classifier
Gaussian Naive Bayes:
• fitNaiveBayes(X, y) - Estimate class-conditional distributions
• predictNaiveBayes(model, X) - Maximum a posteriori classification
• Assumes feature independence given class
Section 22: Anomaly Detection
Statistical outlier detection:
• fitAnomalyDetector(X, contamination) - Mahalanobis distance-based
• detectAnomalies(model, X) - Returns anomaly scores
• isAnomaly(model, X, threshold) - Binary classification
Section 23: Dynamic Time Warping (DTW)
Time series similarity:
• dtw(series1, series2) - Compute DTW distance
• Handles sequences of different lengths
• Useful for pattern matching
Section 24: Markov Chain / Regime Detection
Discrete state transitions:
• fitMarkovChain(states, nStates) - Estimate transition matrix
• predictNextState(transitionMatrix, currentState) - Most likely next state
• stationaryDistribution(transitionMatrix) - Long-run probabilities
Section 25: Hidden Markov Model (Simple)
Baum-Welch algorithm:
• fitHMM(observations, nStates, maxIter) - EM training
• viterbi(model, observations) - Most likely state sequence
• Useful for regime detection
Section 26: Exponential Smoothing & Holt-Winters
Time series smoothing:
• exponentialSmooth(data, alpha) - Simple exponential smoothing
• holtWinters(data, alpha, beta, gamma, seasonLength) - Triple smoothing
• Captures trend and seasonality
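A sketch over a rolling close-price window; the array<float> return of both smoothers and the 20-bar season length are assumptions:
```
//@version=6
indicator("Holt-Winters sketch")
import YourUsername/MLMatrixLib/1 as ml

if barstate.islast
    array<float> closes = array.new<float>()
    for i = 0 to 99
        array.push(closes, close[i])
    array.reverse(closes)                            // oldest first for time-series input
    smooth = ml.exponentialSmooth(closes, 0.3)       // alpha = 0.3; return assumed array<float>
    hw = ml.holtWinters(closes, 0.3, 0.1, 0.1, 20)   // 20-bar season length (illustrative)
    label.new(bar_index, high, "last HW value: " + str.tostring(array.get(hw, array.size(hw) - 1)))
```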
Section 27: Entropy and Information Theory
Information measures:
• entropy(probabilities) - Shannon entropy in bits
• conditionalEntropy(jointProbs, marginalProbs) - H(X|Y)
• mutualInformation(probsX, probsY, jointProbs) - I(X;Y)
• kldivergence(p, q) - Kullback-Leibler divergence
Section 28: Hurst Exponent
Long-range dependence measure:
• hurstExponent(data) - R/S analysis
• H < 0.5: mean-reverting, H = 0.5: random walk, H > 0.5: trending
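For example, estimating H over the last 200 closes (the window length is illustrative):
```
//@version=6
indicator("Hurst sketch")
import YourUsername/MLMatrixLib/1 as ml

if barstate.islast
    array<float> closes = array.new<float>()
    for i = 0 to 199
        array.push(closes, close[i])
    array.reverse(closes)                 // oldest first
    float h = ml.hurstExponent(closes)
    // h > 0.5 suggests trend persistence; h < 0.5 suggests mean reversion
    label.new(bar_index, high, "H = " + str.tostring(h, "#.##"))
```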
Section 29: Change Detection (CUSUM)
Cumulative sum control chart:
• cusumChangeDetection(data, threshold, drift) - Detect regime changes
• cusumOnline(value, prevCusumPos, prevCusumNeg, target, drift) - Streaming version
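A streaming sketch of the online variant; the tuple return shape (updated positive and negative sums) and the drift value are assumptions:
```
//@version=6
indicator("CUSUM sketch")
import YourUsername/MLMatrixLib/1 as ml

var float cPos = 0.0
var float cNeg = 0.0
if bar_index > 0
    float r = math.log(close / close[1])
    // Return shape is an assumption: updated positive and negative cumulative sums
    [p, n] = ml.cusumOnline(r, cPos, cNeg, 0.0, 0.005)
    cPos := p
    cNeg := n
plot(cPos, "CUSUM+", color.green)
plot(cNeg, "CUSUM-", color.red)
```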
Section 30: Autocorrelation
Serial dependence analysis:
• autocorrelation(data, maxLag) - ACF for all lags
• partialAutocorrelation(data, maxLag) - PACF via Durbin-Levinson
• Useful for time series model identification
Section 31: Ensemble Methods
Model combination:
• baggingPredict(models, X) - Average predictions
• votingClassify(models, X) - Majority vote
• Improves robustness through aggregation
Section 32: Model Evaluation Metrics
Performance assessment:
• mse(actual, predicted) / rmse / mae / mape - Regression metrics
• accuracy(actual, predicted) - Classification accuracy
• precision / recall / f1Score - Binary classification metrics
• confusionMatrix(actual, predicted, nClasses) - Multi-class evaluation
• rSquared(actual, predicted) / adjustedRSquared - Goodness of fit
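A tiny worked example with hand-made arrays, so the numbers are checkable:
```
//@version=6
indicator("Metrics sketch")
import YourUsername/MLMatrixLib/1 as ml

if barstate.islast
    array<float> actual    = array.from(1.0, 2.0, 3.0, 4.0)
    array<float> predicted = array.from(1.1, 1.9, 3.2, 3.9)
    float e  = ml.rmse(actual, predicted)       // sqrt(0.07 / 4) ≈ 0.132
    float r2 = ml.rSquared(actual, predicted)   // close to 1.0 for a good fit
    label.new(bar_index, high, "RMSE " + str.tostring(e) + ", R² " + str.tostring(r2))
```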
Section 33: Cross-Validation
Model validation:
• trainTestSplit(X, y, trainRatio) - Random split
• Foundation for walk-forward validation
Section 34: Trading Convenience Functions
Trading-specific utilities:
• priceMatrix(length) - OHLC data as matrix
• logReturns(length) - Log return series
• rollingSlope(src, length) - Linear trend strength
• kalmanFilter(src, processNoise, measurementNoise) - Filtered price
• kalmanFilter2D(src, ...) - Price with velocity estimate
• adaptiveMA(src, sensitivity) - Kalman-based adaptive moving average
• volAdjMomentum(src, length) - Volatility-normalized momentum
• detectSRLevels(length, nLevels) - K-means based S/R detection
• buildFeatures(src, lengths) - Multi-timeframe feature construction
• technicalFeatures(length) - Standard indicator feature set (RSI, MACD, BB, ATR, etc.)
• lagFeatures(src, lags) - Time-lagged features
• sharpeRatio(returns) - Risk-adjusted return measure
• sortinoRatio(returns) - Downside risk-adjusted return
• maxDrawdown(equity) - Maximum peak-to-trough decline
• calmarRatio(returns, equity) - Return/drawdown ratio
• kellyCriterion(winRate, avgWin, avgLoss) - Optimal position sizing
• fractionalKelly(...) - Conservative Kelly sizing
• rollingBeta(assetReturns, benchmarkReturns) - Market exposure
• fractalDimension(data) - Market complexity measure
---
Usage Example
```
//@version=6
indicator("MLMatrixLib demo")
import YourUsername/MLMatrixLib/1 as ml

// Create feature matrix (y, X_new, features and newFeature below are
// placeholders you must supply from your own script)
matrix<float> X = ml.priceMatrix(50)
X := ml.standardize(X)

// Fit linear regression
ml.LinearRegressionModel model = ml.fitLinearRegression(X, y)
float prediction = ml.predictLinear(model, X_new)

// Kalman filter for smoothing
float smoothedPrice = ml.kalmanFilter(close, 0.01, 1.0)

// Detect support/resistance levels
array<float> levels = ml.detectSRLevels(100, 3)

// K-means clustering for regime detection
ml.KMeansModel km = ml.fitKMeans(features, 3)
int cluster = ml.predictCluster(km, newFeature)
```
---
Technical Notes
• All matrix operations use Pine Script's native matrix type
• Numerical stability ensured through:
- Clamping exponential arguments to prevent overflow
- Division by zero protection with epsilon thresholds
- Iterative algorithms with convergence tolerance
• Designed for bar-by-bar execution in Pine Script's event-driven model
• Compatible with Pine Script v6
---
Disclaimer
This library provides mathematical tools for quantitative analysis. It does not constitute financial advice. Past performance of any algorithm does not guarantee future results. Users are responsible for validating models on their specific use cases and understanding the limitations of each method.
BarCore
Library "BarCore"
BarCore is a foundational library for technical analysis, providing essential functions for evaluating the structural properties of candlesticks and inter-bar relationships.
It prioritizes ratio-based metrics (0.0 to 1.0) over absolute prices, making it asset-agnostic and ideal for robust pattern recognition, momentum analysis, and volume-weighted pressure evaluation.
Key modules:
- Structure & Range: High-precision bar and body metrics with relative positioning.
- Wick Dynamics: Absolute and relative wick analysis for identifying price rejection.
- Inter-bar Logic: Containment, coverage, and quantitative price overlap (Ratio-based).
- Gap Intelligence: Real body and price gaps with customizable significance thresholds.
- Flow & Pressure: Volume-weighted buying/selling pressure and Money Flow metrics.
isBuyingBar()
Checks if the bar is a bullish (up) bar, where close is greater than open.
Returns: bool True if the bar closed higher than it opened.
isSellingBar()
Checks if the bar is a bearish (down) bar, where close is less than open.
Returns: bool True if the bar closed lower than it opened.
barMidpoint()
Calculates the absolute midpoint of the bar's total range (High + Low) / 2.
Returns: float The midpoint price of the bar.
barRange()
Calculates the absolute size of the bar's total range (High to Low).
Returns: float The absolute difference between high and low.
barRangeMidpoint()
Calculates half of the bar's total range size.
Returns: float Half the bar's range size.
realBodyHigh()
Returns the higher price between the open and close.
Returns: float The top of the real body.
realBodyLow()
Returns the lower price between the open and close.
Returns: float The bottom of the real body.
realBodyMidpoint()
Calculates the absolute midpoint of the bar's real body.
Returns: float The midpoint price of the real body.
realBodyRange()
Calculates the absolute size of the bar's real body.
Returns: float The absolute difference between open and close.
realBodyRangeMidpoint()
Calculates half of the bar's real body size.
Returns: float Half the real body size.
upperWickRange()
Calculates the absolute size of the upper wick.
Returns: float The range from high to the real body high.
lowerWickRange()
Calculates the absolute size of the lower wick.
Returns: float The range from the real body low to low.
openRatio()
Returns the location of the open price relative to the bar's total range (0.0 at low to 1.0 at high).
Returns: float The ratio of the distance from low to open, divided by the total range.
closeRatio()
Returns the location of the close price relative to the bar's total range (0.0 at low to 1.0 at high).
Returns: float The ratio of the distance from low to close, divided by the total range.
realBodyRatio()
Calculates the ratio of the real body size to the total bar range.
Returns: float The real body size divided by the bar range. Returns 0 if barRange is 0.
upperWickRatio()
Calculates the ratio of the upper wick size to the total bar range.
Returns: float The upper wick size divided by the bar range. Returns 0 if barRange is 0.
lowerWickRatio()
Calculates the ratio of the lower wick size to the total bar range.
Returns: float The lower wick size divided by the bar range. Returns 0 if barRange is 0.
upperWickToBodyRatio()
Calculates the ratio of the upper wick size to the real body size.
Returns: float The upper wick size divided by the real body size. Returns 0 if realBodyRange is 0.
lowerWickToBodyRatio()
Calculates the ratio of the lower wick size to the real body size.
Returns: float The lower wick size divided by the real body size. Returns 0 if realBodyRange is 0.
totalWickRatio()
Calculates the ratio of the total wick range (Upper Wick + Lower Wick) to the total bar range.
Returns: float The total wick range expressed as a ratio of the bar's total range. Returns 0 if barRange is 0.
isBodyExpansion()
Checks if the current bar's real body range is larger than the previous bar's real body range (body expansion).
Returns: bool True if the current realBodyRange() is greater than the previous bar's realBodyRange().
isBodyContraction()
Checks if the current bar's real body range is smaller than the previous bar's real body range (body contraction).
Returns: bool True if the current realBodyRange() is smaller than the previous bar's realBodyRange().
isWithinPrevBar(inclusive)
Checks if the current bar's range is entirely within the previous bar's range.
Parameters:
inclusive (bool) : If true, allows equality (<=, >=). Default is false.
Returns: bool True if the current high is below the previous high AND the current low is above the previous low.
isCoveringPrevBar(inclusive)
Checks if the current bar's range fully covers the entire previous bar's range.
Parameters:
inclusive (bool) : If true, allows equality (<=, >=). Default is false.
Returns: bool True if the current high is above the previous high AND the current low is below the previous low.
isWithinPrevBody(inclusive)
Checks if the current bar's real body is entirely inside the previous bar's real body.
Parameters:
inclusive (bool) : If true, allows equality (<=, >=). Default is false.
Returns: bool True if the current body is contained inside the previous body.
isCoveringPrevBody(inclusive)
Checks if the current bar's real body fully covers the previous bar's real body.
Parameters:
inclusive (bool) : If true, allows equality (<=, >=). Default is false.
Returns: bool True if the current body fully covers the previous body.
isOpenWithinPrevBody(inclusive)
Checks if the current bar's open price falls within the real body range of the previous bar.
Parameters:
inclusive (bool) : If true, includes the boundary prices. Default is false.
Returns: bool True if the open price is between the previous bar's real body high and real body low.
isCloseWithinPrevBody(inclusive)
Checks if the current bar's close price falls within the real body range of the previous bar.
Parameters:
inclusive (bool) : If true, includes the boundary prices. Default is false.
Returns: bool True if the close price is between the previous bar's real body high and real body low.
isPrevOpenWithinBody(inclusive)
Checks if the previous bar's open price falls within the current bar's real body range.
Parameters:
inclusive (bool) : If true, includes the boundary prices. Default is false.
Returns: bool True if the previous bar's open is between the current bar's real body high and real body low.
isPrevCloseWithinBody(inclusive)
Checks if the previous bar's closing price falls within the current bar's real body range.
Parameters:
inclusive (bool) : If true, includes the boundary prices. Default is false.
Returns: bool True if the previous bar's close is between the current bar's real body high and real body low.
isOverlappingPrevBar()
Checks if there is any price overlap between the current bar's range and the previous bar's range.
Returns: bool True if the current bar's range has any intersection with the previous bar's range.
bodyOverlapRatio()
Calculates the percentage of the current real body that overlaps with the previous real body.
Returns: float The overlap ratio (0.0 to 1.0). 1.0 means the current body is entirely within the previous body's price range.
isCompletePriceGapUp()
Checks for a complete price gap up where the current bar's low is strictly above the previous bar's high, meaning there is zero price overlap between the two bars.
Returns: bool True if the current low is greater than the previous high.
isCompletePriceGapDown()
Checks for a complete price gap down where the current bar's high is strictly below the previous bar's low, meaning there is zero price overlap between the two bars.
Returns: bool True if the current high is less than the previous low.
isRealBodyGapUp()
Checks for a gap between the current and previous real bodies.
Returns: bool True if the current body is completely above the previous body.
isRealBodyGapDown()
Checks for a gap between the current and previous real bodies.
Returns: bool True if the current body is completely below the previous body.
gapRatio()
Calculates the gap between the current open and the previous close, expressed as a decimal ratio of the previous close.
Returns: float The gap ratio (positive for gap up, negative for gap down). Returns 0 if the previous close is 0.
gapPercentage()
Calculates the percentage difference between the current open and the previous close.
Returns: float The gap percentage (positive for gap up, negative for gap down). Returns 0 if previous close is 0.
isGapUp()
Checks for a basic gap up, where the current bar's open is strictly higher than the previous bar's close. This is the minimum condition for a gap up.
Returns: bool True if the current open is greater than the previous close (i.e., gapRatio is positive).
isGapDown()
Checks for a basic gap down, where the current bar's open is strictly lower than the previous bar's close. This is the minimum condition for a gap down.
Returns: bool True if the current open is less than the previous close (i.e., gapRatio is negative).
isSignificantGapUp(minRatio)
Checks if the current bar opened significantly higher than the previous close, as defined by a minimum percentage ratio.
Parameters:
minRatio (float) : The minimum required gap percentage ratio. Default is 0.03 (3%).
Returns: bool True if the gap ratio (open vs. previous close) is greater than or equal to the minimum ratio.
isSignificantGapDown(minRatio)
Checks if the current bar opened significantly lower than the previous close, as defined by a minimum percentage ratio.
Parameters:
minRatio (float) : The minimum required gap percentage ratio. Default is 0.03 (3%).
Returns: bool True if the absolute value of the gap ratio (open vs. previous close) is greater than or equal to the minimum ratio.
trueRangeComponentHigh()
Calculates the absolute distance from the current bar's High to the previous bar's Close, representing one of the components of the True Range.
Returns: float The absolute difference |high - previous close|.
trueRangeComponentLow()
Calculates the absolute distance from the current bar's Low to the previous bar's Close, representing one of the components of the True Range.
Returns: float The absolute difference |low - previous close|.
isUpperWickDominant(minRatio)
Checks if the upper wick is significantly long relative to the total range.
Parameters:
minRatio (float) : Minimum ratio of the wick to the total bar range. Default is 0.7 (70%).
Returns: bool True if the upper wick dominates the bar's range.
isUpperWickNegligible(maxRatio)
Checks if the upper wick is very small relative to the total range.
Parameters:
maxRatio (float) : Maximum ratio of the wick to the total bar range. Default is 0.05 (5%).
Returns: bool True if the upper wick is negligible.
isLowerWickDominant(minRatio)
Checks if the lower wick is significantly long relative to the total range.
Parameters:
minRatio (float) : Minimum ratio of the wick to the total bar range. Default is 0.7 (70%).
Returns: bool True if the lower wick dominates the bar's range.
isLowerWickNegligible(maxRatio)
Checks if the lower wick is very small relative to the total range.
Parameters:
maxRatio (float) : Maximum ratio of the wick to the total bar range. Default is 0.05 (5%).
Returns: bool True if the lower wick is negligible.
isSymmetric(maxTolerance)
Checks if the upper and lower wicks are roughly equal in length.
Parameters:
maxTolerance (float) : Maximum allowable percentage difference between the two wicks. Default is 0.15 (15%).
Returns: bool True if wicks are symmetric within the tolerance level.
isMarubozuBody(minRatio)
Candle with a very large body relative to the total range (minimal wicks).
Parameters:
minRatio (float) : Minimum body size ratio. Default is 0.9 (90%).
Returns: bool True if the bar has minimal wicks (Marubozu body).
isLargeBody(minRatio)
Candle with a large body relative to the total range.
Parameters:
minRatio (float) : Minimum body size ratio. Default is 0.6 (60%).
Returns: bool True if the bar has a large body.
isSmallBody(maxRatio)
Candle with a small body relative to the total range.
Parameters:
maxRatio (float) : Maximum body size ratio. Default is 0.4 (40%).
Returns: bool True if the bar has a small body.
isDojiBody(maxRatio)
Candle with a very small body relative to the total range (indecision).
Parameters:
maxRatio (float) : Maximum body size ratio. Default is 0.1 (10%).
Returns: bool True if the bar has a very small body.
isLowerWickExtended(minRatio)
Checks if the lower wick is significantly extended relative to the real body size.
Parameters:
minRatio (float) : Minimum required ratio of the lower wick length to the real body size. Default is 2.0 (Lower wick must be at least twice the body's size).
Returns: bool True if the lower wick's length is at least `minRatio` times the size of the real body.
isUpperWickExtended(minRatio)
Checks if the upper wick is significantly extended relative to the real body size.
Parameters:
minRatio (float) : Minimum required ratio of the upper wick length to the real body size. Default is 2.0 (Upper wick must be at least twice the body's size).
Returns: bool True if the upper wick's length is at least `minRatio` times the size of the real body.
isStrongBuyingBar(minCloseRatio, maxOpenRatio)
Checks for a bar with strong bullish momentum (open near low, close near high), indicating high conviction.
Parameters:
minCloseRatio (float) : Minimum required ratio for the close location (relative to range, e.g., 0.7 means close must be in the top 30%). Default is 0.7 (70%).
maxOpenRatio (float) : Maximum allowed ratio for the open location (relative to range, e.g., 0.3 means open must be in the bottom 30%). Default is 0.3 (30%).
Returns: bool True if the bar is bullish, opened in the low extreme, and closed in the high extreme.
isStrongSellingBar(maxCloseRatio, minOpenRatio)
Checks for a bar with strong bearish momentum (open near high, close near low), indicating high conviction.
Parameters:
maxCloseRatio (float) : Maximum allowed ratio for the close location (relative to range, e.g., 0.3 means close must be in the bottom 30%). Default is 0.3 (30%).
minOpenRatio (float) : Minimum required ratio for the open location (relative to range, e.g., 0.7 means open must be in the top 30%). Default is 0.7 (70%).
Returns: bool True if the bar is bearish, opened in the high extreme, and closed in the low extreme.
isWeakBuyingBar(maxCloseRatio, maxBodyRatio)
Identifies a bar that is technically bullish but shows significant weakness, characterized by a failure to close near the high and a small body size.
Parameters:
maxCloseRatio (float) : Maximum allowed ratio for the close location relative to the range (e.g., 0.6 means the close must be in the bottom 60% of the bar's range). Default is 0.6 (60%).
maxBodyRatio (float) : Maximum allowed ratio for the real body size relative to the bar's range (e.g., 0.4 means the body is small). Default is 0.4 (40%).
Returns: bool True if the bar is bullish, but its close is weak and its body is small.
isWeakSellingBar(minCloseRatio, maxBodyRatio)
Identifies a bar that is technically bearish but shows significant weakness, characterized by a failure to close near the low and a small body size.
Parameters:
minCloseRatio (float) : Minimum required ratio for the close location relative to the range (e.g., 0.4 means the close must be in the top 60% of the bar's range). Default is 0.4 (40%).
maxBodyRatio (float) : Maximum allowed ratio for the real body size relative to the bar's range (e.g., 0.4 means the body is small). Default is 0.4 (40%).
Returns: bool True if the bar is bearish, but its close is weak and its body is small.
balanceOfPower()
Measures the net pressure of buyers vs. sellers within the bar, normalized to the bar's range.
Returns: float A value between -1.0 (strong selling) and +1.0 (strong buying), representing the strength and direction of the close relative to the open.
buyingPressure()
Measures the net buying volume pressure based on the close location and volume.
Returns: float A numerical value representing the volume weighted buying pressure.
sellingPressure()
Measures the net selling volume pressure based on the close location and volume.
Returns: float A numerical value representing the volume weighted selling pressure.
moneyFlowMultiplier()
Calculates the Money Flow Multiplier (MFM), which is the price component of Money Flow and CMF.
Returns: float A normalized value from -1.0 (strong selling) to +1.0 (strong buying), representing the net directional pressure.
moneyFlowVolume()
Calculates the Money Flow Volume (MFV), which is the Money Flow Multiplier weighted by the bar's volume.
Returns: float A numerical value representing the volume-weighted money flow. Positive = buying dominance; negative = selling dominance.
isAccumulationBar()
Checks for basic accumulation on the current bar, requiring both positive Money Flow Volume and a buying bar (closing higher than opening).
Returns: bool True if the bar exhibits buying dominance through its internal range location and is a buying bar.
isDistributionBar()
Checks for basic distribution on the current bar, requiring both negative Money Flow Volume and a selling bar (closing lower than opening).
Returns: bool True if the bar exhibits selling dominance through its internal range location and is a selling bar.
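To illustrate how these ratio-based primitives compose, here is a minimal sketch; the import path is a placeholder and the thresholds are illustrative, not recommendations:
```
//@version=6
indicator("BarCore sketch", overlay=true)
import YourUsername/BarCore/1 as bc   // placeholder import path

// Hammer-style rejection: small body, extended lower wick, close near the high
bool hammer = bc.isSmallBody(0.3) and bc.isLowerWickExtended(2.0) and bc.closeRatio() > 0.7
// Conviction bar confirmed by volume-weighted money flow
bool strongBuy = bc.isStrongBuyingBar(0.7, 0.3) and bc.moneyFlowVolume() > 0
plotshape(hammer, style=shape.triangleup, location=location.belowbar, color=color.teal)
plotshape(strongBuy, style=shape.labelup, location=location.belowbar, color=color.green)
```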
CausalityLib - Granger causality and transfer entropy helpers
Library "CausalityLib"
Causality Analysis Library - Transfer Entropy, Granger Causality, and Causality Filtering
f_shannon_entropy(data, num_bins)
Calculate Shannon entropy of data distribution
Parameters:
data (array) : Array of continuous values
num_bins (int) : Number of bins for discretization
Returns: Entropy value (higher = more randomness)
f_calculate_te_score(primary_arr, ticker_arr, window, bins, lag)
Calculate Transfer Entropy from source to target
Parameters:
primary_arr (array) : Target series (e.g., primary ticker returns)
ticker_arr (array) : Source series (e.g., basket ticker returns)
window (int) : Window size for TE calculation
bins (int) : Number of bins for discretization
lag (int) : Lag for source series
Returns: TE score and direction (-1 or 1)
f_correlation_at_lag(primary_arr, ticker_arr, lag, window, correlation_method)
Calculate Pearson correlation at specific lag
Parameters:
primary_arr (array) : Primary series
ticker_arr (array) : Ticker series
lag (int) : Lag value (positive = ticker lags primary)
window (int) : Window size for correlation
correlation_method (string) : Correlation method to use ("Pearson", "Spearman", "Kendall")
Returns: Correlation coefficient
f_calculate_granger_score(primary_arr, ticker_arr, window, max_lag, correlation_method)
Calculate Granger causality score with lag testing
Parameters:
primary_arr (array) : Primary series
ticker_arr (array) : Ticker series
window (int) : Window size for correlation
max_lag (int) : Maximum lag to test
correlation_method (string) : Correlation method to use
Returns: Granger score and directional beta
f_partial_correlation(x_arr, y_arr, z_arr, window)
Calculate partial correlation between X and Y controlling for Z
Parameters:
x_arr (array) : First series
y_arr (array) : Second series
z_arr (array) : Mediator series
window (int) : Window size for correlation
Returns: Partial correlation coefficient
f_pcmci_filter_score(raw_score, primary_arr, ticker_arr, mediator1, mediator2, mediator3, mediator4, window)
PCMCI Filter: Adjust Granger score by checking for mediating tickers
Parameters:
raw_score (float) : Original Granger score
primary_arr (array) : Primary series
ticker_arr (array) : Ticker series
mediator1 (array) : First potential mediator series
mediator2 (array) : Second potential mediator series
mediator3 (array) : Third potential mediator series
mediator4 (array) : Fourth potential mediator series
window (int) : Window size for correlation
Returns: Filtered score (reduced if causality is indirect/spurious)
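A sketch of feeding rolling return arrays into the Granger scorer; the import path and driver symbol are placeholders, and the tuple destructuring follows the documented "score and directional beta" return:
```
//@version=6
indicator("CausalityLib sketch")
import YourUsername/CausalityLib/1 as ca   // placeholder import path

// Rolling log-return arrays for the chart symbol and a candidate driver
var array<float> primary = array.new<float>()
var array<float> driver  = array.new<float>()
float drv = request.security("AMEX:SPY", timeframe.period, math.log(close / close[1]))
if bar_index > 0
    array.push(primary, math.log(close / close[1]))
    array.push(driver, drv)
    if array.size(primary) > 120
        array.shift(primary)
        array.shift(driver)
if barstate.islast and array.size(primary) >= 60
    // 50-bar window, lags 1..5, Pearson correlation
    [score, beta] = ca.f_calculate_granger_score(primary, driver, 50, 5, "Pearson")
    label.new(bar_index, high, "Granger score: " + str.tostring(score))
```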
demark_utils
Library "demark_utils"
f_grade(score)
Parameters:
score (float)
f_clampScore(score)
Parameters:
score (float)
f_px(v)
Parameters:
v (float)
f_pxOrDash(v)
Parameters:
v (float)
f_sum(src, length)
Parameters:
src (float)
length (int)
f_hasAnyBits(bus, mask)
Parameters:
bus (int)
mask (int)
f_busSetMask(bus, mask)
Parameters:
bus (int)
mask (int)
f_evSet(bus, flag)
Parameters:
bus (int)
flag (int)
f_evSet2(bus, flag)
Parameters:
bus (int)
flag (int)
LunarSolver
LunarSolver Library
Implements analytical approximations from Éphéméride Lunaire Parisienne (ELP2000-82B) lunar theory (Chapront-Touzé & Chapront). Uses truncated Fourier series of the main problem in Delaunay arguments D, l, l', F.
Exported functions:
delta_t(t_ms) → ΔT (seconds); polynomial fit valid ~1950–2050.
julian_day_tt(t_ms) → JD in Terrestrial Time from UTC millisecond timestamp.
jde_tt_to_utc_ms(jde_tt) → Approximate UTC millisecond timestamp from JD TT.
elp_true_distance_km_50(jd_tt) → Geocentric distance (km); 50 largest-amplitude terms.
elp_new_moon_solver_50(k) → JDE TT of new moon nearest lunation number k (k=0 ≈ 2000-01-06); 50-term longitude series + Newton-Raphson iteration (convergence <0.005 days).
elp_full_moon_solver_50(k) → JDE TT of full moon nearest k (k=0 ≈ 2000-01-21); 50-term longitude series + nutation correction + damped iteration (convergence <0.001 days).
Accuracy (truncation-limited):
- Distance: typically ±10–100 km.
- Syzygy times: typically ± a few minutes.
Import:
```
import telephonejack/LunarSolver/1 as lunar
```
Usage example:
```
//@version=6
indicator("Lunar Distance Demo")
import telephonejack/LunarSolver/1 as lunar

float jd_tt = lunar.julian_day_tt(time)
float dist_km = lunar.elp_true_distance_km_50(jd_tt)
plot(dist_km, "Distance (km)")

// Approximate lunation k for the current bar
float k_approx = (jd_tt - 2451550.25977) / 29.530588861
int k = math.round(k_approx)
float new_jde = lunar.elp_new_moon_solver_50(k)
float full_jde = lunar.elp_full_moon_solver_50(k)
```
Coefficients: top 50 terms by amplitude from ELP main problem series (radius vector & longitude). Phase solvers use longitude terms only. Library contains no latitude series.
Disclaimer: The library was developed with assistance from Grok 4.1, always under human supervision and decision-making.
[GYTS] VolatilityToolkit Library
VolatilityToolkit Library
🌸 Part of GoemonYae Trading System (GYTS) 🌸
🌸 --------- INTRODUCTION --------- 🌸
💮 What Does This Library Contain?
VolatilityToolkit provides a comprehensive suite of volatility estimation functions derived from academic research in financial econometrics. Rather than relying on simplistic measures, this library implements range-based estimators that extract maximum information from OHLC data — delivering estimates that are 5–14× more efficient than traditional close-to-close methods.
The library spans the full volatility workflow: estimation, smoothing, and regime detection.
💮 Key Categories
• Range-Based Estimators — Parkinson, Garman-Klass, Rogers-Satchell, Yang-Zhang (academically-grounded variance estimators)
• Classical Measures — Close-to-Close, ATR, Chaikin Volatility (baseline and price-unit measures)
• Smoothing & Post-Processing — Asymmetric EWMA for differential decay rates
• Aggregation & Regime Detection — Multi-horizon blending, MTF aggregation, Volatility Burst Ratio
💮 Originality
To the best of our knowledge, no other TradingView script combines range-based estimators (Parkinson, Garman-Klass, Rogers-Satchell, Yang-Zhang), classical measures, and regime detection tools in a single package. Unlike typical volatility implementations that offer only a single method, this library:
• Implements four academically-grounded range-based estimators with proper mathematical foundations
• Handles drift bias and overnight gaps, issues that plague simpler estimators in trending markets
• Integrates with GYTS FiltersToolkit for advanced smoothing (10 filter types vs. typical SMA-only)
• Provides regime detection tools (Burst Ratio, MTF aggregation) for systematic strategy integration
• Standardises output units for seamless estimator comparison and swapping
🌸 --------- ADDED VALUE --------- 🌸
💮 Academic Rigour
Each estimator implements peer-reviewed methodologies with proper mathematical foundations. The library handles aspects that are easily missed, e.g. drift independence, overnight gap adjustment, and optimal weighting factors. All functions include guards against edge cases (division by zero, negative variance floors, warmup handling).
💮 Statistical Efficiency
Range-based estimators extract more information from the same data. Yang-Zhang achieves up to 14× the efficiency of close-to-close variance, meaning you can achieve the same estimation accuracy with far fewer bars — critical for adapting quickly to changing market conditions.
💮 Flexible Smoothing
All estimators support configurable smoothing via the GYTS FiltersToolkit integration. Choose from 10 filter types to balance responsiveness against noise reduction:
• Ultimate Smoother (2-Pole / 3-Pole) — Near-zero lag; the 3-pole variant is a GYTS design with tunable overshoot
• Super Smoother (2-Pole / 3-Pole) — Excellent noise reduction with minimal lag
• BiQuad — Second-order IIR filter with quality factor control
• ADXvma — Adaptive smoothing based on directional volatility
• MAMA — Cycle-adaptive moving average
• A2RMA — Adaptive autonomous recursive moving average
• SMA / EMA — Classical averages (SMA is default for most estimators)
Using Infinite Impulse Response (IIR) filters (e.g. Super Smoother, Ultimate Smoother) instead of SMA avoids the "drop-off artefact" where volatility readings crash when old spikes exit the window.
💮 Plug-and-Play Integration
Standardised output units (per-bar log-return volatility) make it trivial to swap estimators. The annualize() helper converts to yearly volatility with a single call. All functions work seamlessly with other GYTS components.
🌸 --------- RANGE-BASED ESTIMATORS --------- 🌸
These estimators utilise High, Low, Open, and Close prices to extract significantly more information about the underlying diffusion process than close-only methods.
💮 parkinson()
The Extreme Value Method -- approximately 5× more efficient than close-to-close, requiring about 80% less data for equivalent accuracy. Uses only the High-Low range, making it simple and robust.
• Assumption: Zero drift (random walk). May be biased in strongly trending markets.
• Best for: Quick volatility reads when drift is minimal.
• Parameters: smoothing_length (default 14), filter_type (default SMA), smoothing_factor (default 0.7)
Source: Parkinson, M. (1980). The Extreme Value Method for Estimating the Variance of the Rate of Return. Journal of Business, 53 (1), 61–65. DOI
💮 garman_klass()
Extends Parkinson by incorporating Open and Close prices, achieving approximately 7.4× efficiency over close-to-close. Implements the "practical" analytic estimator (σ̂²₅) which avoids cross-product terms whilst maintaining near-optimal efficiency.
• Assumption: Zero drift, continuous trading (no gaps).
• Best for: Markets with minimal overnight gaps and ranging conditions.
• Parameters: smoothing_length (default 14), filter_type (default SMA), smoothing_factor (default 0.7)
Source: Garman, M.B. & Klass, M.J. (1980). On the Estimation of Security Price Volatilities from Historical Data. Journal of Business, 53 (1), 67–78. DOI
💮 rogers_satchell()
The drift-independent estimator correctly isolates variance even in strongly trending markets where Parkinson and Garman-Klass become significantly biased. Uses the formula: ln(H/C)·ln(H/O) + ln(L/C)·ln(L/O).
• Key advantage: Unbiased regardless of trend direction or magnitude.
• Best for: Trending markets, crypto (24/7 trading with minimal gaps), general-purpose use.
• Parameters: smoothing_length (default 14), filter_type (default SMA), smoothing_factor (default 0.7)
Source: Rogers, L.C.G. & Satchell, S.E. (1991). Estimating Variance from High, Low and Closing Prices. Annals of Applied Probability, 1 (4), 504–512. DOI
💮 yang_zhang()
The minimum-variance composite estimator — both drift-independent AND gap-aware. Combines overnight returns, open-to-close returns, and the Rogers-Satchell component with optimal weighting to minimise estimator variance. Up to 14× more efficient than close-to-close.
• Parameters: lookback (default 14, minimum 2), alpha (default 1.34, optimised for equities).
• Best for: Equity markets with significant overnight gaps, highest-quality volatility estimation.
• Note: Unlike other estimators, Yang-Zhang does not support custom filter types — it uses rolling sample variance internally.
Source: Yang, D. & Zhang, Q. (2000). Drift-Independent Volatility Estimation Based on High, Low, Open, and Close Prices. Journal of Business, 73 (3), 477–491. DOI
🌸 --------- CLASSICAL MEASURES --------- 🌸
💮 close_to_close()
Classical sample variance of logarithmic returns. Provided primarily as a baseline benchmark — it is approximately 5–8× less efficient than range-based estimators, requiring proportionally more data for the same accuracy.
• Parameters: lookback (default 14), filter_type (default SMA), smoothing_factor (default 0.7)
• Use case: Comparison baseline, situations requiring strict methodological consistency with academic literature.
💮 atr()
Average True Range -- measures volatility in price units rather than log-returns. Directly interpretable for stop-loss placement (e.g., "2× ATR trailing stop") and handles gaps naturally via the True Range formula.
• Output: Price units (not comparable across different price levels).
• Parameters: smoothing_length (default 14), filter_type (default SMA), smoothing_factor (default 0.7)
• Best for: Position sizing, trailing stops, any application requiring volatility in currency terms.
Source: Wilder, J.W. (1978). New Concepts in Technical Trading Systems . Trend Research.
💮 chaikin_volatility()
Rate of Change of the smoothed trading range. Unlike level-based measures, Chaikin Volatility shows whether volatility is expanding or contracting relative to recent history.
• Output: Percentage change (oscillates around zero).
• Parameters: length (default 10), roc_length (default 10), filter_type (default EMA), smoothing_factor (default 0.7)
• Interpretation: High values suggest nervous, wide-ranging markets; low values indicate compression.
• Best for: Detecting volatility regime shifts, breakout anticipation.
🌸 --------- SMOOTHING & POST-PROCESSING --------- 🌸
💮 asymmetric_ewma()
Differential smoothing with separate alphas for rising versus falling volatility. Allows volatility to spike quickly (fast reaction to shocks) whilst decaying slowly (stability). Essential for trailing stops that should widen rapidly during turbulence but narrow gradually.
• Parameters: alpha_up (default 0.1), alpha_down (default 0.02).
• Note: Stateful function — call exactly once per bar.
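A minimal sketch of the fast-attack/slow-decay pattern; the import path is a placeholder and the single-argument estimator call assumes the documented defaults are optional:
```
//@version=6
indicator("Asymmetric EWMA sketch")
import YourUsername/VolatilityToolkit/1 as vt   // placeholder import path

// Raw per-bar volatility; filter type and smoothing factor left at documented defaults
float rawVol = vt.rogers_satchell(14)
// Fast attack (alpha_up = 0.1), slow decay (alpha_down = 0.02); called exactly once per bar
float stopVol = vt.asymmetric_ewma(rawVol, 0.1, 0.02)
plot(stopVol, "Asymmetric vol")
```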
💮 annualize()
Converts per-bar volatility to annualised volatility using the square-root-of-time rule: σ_annual = σ_bar × √(periods_per_year).
• Parameters: vol (series float), periods (default 252 for daily equity bars).
• Common values: 365 (crypto), 52 (weekly), 12 (monthly).
🌸 --------- AGGREGATION & REGIME DETECTION --------- 🌸
💮 weighted_horizon_volatility()
Blends volatility readings across short, medium, and long lookback horizons. Inspired by the Heterogeneous Autoregressive (HAR-RV) model's recognition that market participants operate on different time scales.
• Default horizons: 1-bar (short), 5-bar (medium), 22-bar (long).
• Default weights: 0.5, 0.3, 0.2.
• Note: This is a weighted trailing average, not a forecasting regression. True HAR-RV forecasting would require fitting regression coefficients.
Inspired by: Corsi, F. (2009). A Simple Approximate Long-Memory Model of Realized Volatility. Journal of Financial Econometrics .
💮 volatility_mtf()
Multi-timeframe aggregation for intraday charts. Combines base volatility with higher-timeframe (Daily, Weekly, Monthly) readings, automatically scaling HTF volatilities down to the current timeframe's magnitude using the square-root-of-time rule.
• Usage: Calculate HTF volatilities via request.security() externally, then pass to this function.
• Behaviour: Returns base volatility unchanged on Daily+ timeframes (MTF aggregation not applicable).
💮 volatility_burst_ratio()
Regime shift detector comparing short-term to long-term volatility.
• Parameters: short_period (default 8), long_period (default 50), filter_type (default Super Smoother 2-Pole), smoothing_factor (default 0.7)
• Interpretation: Ratio > 1.0 indicates expanding volatility; values > 1.5 often precede or accompany explosive breakouts.
• Best for: Filtering entries (e.g., "only enter if volatility is expanding"), dynamic risk adjustment, breakout confirmation.
🌸 --------- PRACTICAL USAGE NOTES --------- 🌸
💮 Choosing an Estimator
• Trending equities with gaps: yang_zhang() — handles both drift and overnight gaps optimally.
• Crypto (24/7 trading): rogers_satchell() — drift-independent without the lag of Yang-Zhang's multi-period window.
• Ranging markets: garman_klass() or parkinson() — simpler, no drift adjustment needed.
• Price-based stops: atr() — output in price units, directly usable for stop distances.
• Regime detection: Combine any estimator with volatility_burst_ratio().
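Putting that selection guidance together, a sketch combining an estimator with the burst-ratio regime filter (placeholder import path; short-form calls assume the documented defaults are optional):
```
//@version=6
indicator("Regime-filtered volatility sketch")
import YourUsername/VolatilityToolkit/1 as vt   // placeholder import path

float vol   = vt.yang_zhang(14)                  // drift- and gap-aware estimate
float burst = vt.volatility_burst_ratio(8, 50)   // short/long volatility ratio
plot(vt.annualize(vol, 252) * 100, "Annualised vol %")
bgcolor(burst > 1.0 ? color.new(color.orange, 85) : na)  // highlight expanding regimes
```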
💮 Output Units
All range-based estimators output per-bar volatility in log-return units (standard deviation). To convert to annualised percentage volatility (the convention in options and risk management), use:
```
vol_annual = annualize(yang_zhang(14), 252)  // for daily bars
vol_percent = vol_annual * 100               // express as a percentage
```
💮 Smoothing Selection
The library integrates with FiltersToolkit for flexible smoothing. General guidance:
• SMA: Classical, statistically valid, but suffers from "drop-off" artefacts when spikes exit the window.
• Super Smoother / Ultimate Smoother / BiQuad: Natural decay, reduced lag — preferred for trading applications.
• MAMA / ADXvma / A2RMA: Adaptive smoothing, sometimes interesting for highly dynamic environments.
💮 Edge Cases and Limitations
• Flat candles: Guards prevent log(0) errors, but single-tick bars produce near-zero variance readings.
• Illiquid assets: Discretisation bias causes underestimation when ticks-per-bar is small. Use higher timeframes for more reliable estimates.
• Yang-Zhang minimum: Requires lookback ≥ 2 (enforced internally). Cannot produce instantaneous readings.
• Drift in Parkinson/GK: These estimators overestimate variance in trending conditions — switch to Rogers-Satchell or Yang-Zhang.
Note: This library is actively maintained. Suggestions for additional estimators or improvements are welcome.
GaussianWavePacket
Library "GaussianWavePacket"
gaussianEnvelope(gamma, t, t0)
Parameters:
gamma (float)
t (int)
t0 (int)
oscillatorReal(omega, t, phase)
Parameters:
omega (float)
t (int)
phase (float)
oscillatorImag(omega, t, phase)
Parameters:
omega (float)
t (int)
phase (float)
wavePacket(amplitude, gamma, omega, t, t0, phase)
Parameters:
amplitude (float)
gamma (float)
omega (float)
t (int)
t0 (int)
phase (float)
estimateGamma(amplitudeCurrent, amplitudePast, timeDelta)
Parameters:
amplitudeCurrent (float)
amplitudePast (float)
timeDelta (int)
periodToOmega(period)
Parameters:
period (float)
omegaToPeriod(omega)
Parameters:
omega (float)
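Since only the signatures are documented, here is a hedged usage sketch; the import path, the gamma value (envelope width), and the choice of bar_index as the time variable are all assumptions:
```
//@version=6
indicator("GaussianWavePacket sketch")
import YourUsername/GaussianWavePacket/1 as gwp   // placeholder import path

// A packet with a 20-bar cycle, centred 50 bars back; gamma sets the envelope width
float omega = gwp.periodToOmega(20.0)
float y = gwp.wavePacket(1.0, 0.005, omega, bar_index, bar_index - 50, 0.0)
plot(y, "Wave packet")
```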