ema_stoploss_lib
Library "ema_stoploss_lib"
This library derives stop-loss levels from a dynamic list of EMA lengths. It computes each EMA internally (so dynamic lengths are allowed), keeps strict side filtering (long: only EMAs below the source; short: only EMAs above), sorts by distance to the source, and returns the n-th nearest value plus the original index of that EMA length.
get_stop_loss(index)
Initializes (once) a default length list:
21, 50, 100, 200, 250, 500, 750, 1000.
Returns:
sl_buy / sl_sell: selected EMA values
nearest_buy_idx / nearest_sell_idx: 0-based indices in the original lensArr
Parameters & Notes
Index (input in the example; default 2) is 0-based:
0 = nearest, 1 = second nearest, 2 = third, etc.
If there aren’t enough EMAs on the requested side, the value becomes na (plot will skip that bar).
Strict filtering means no fallback to the opposite side.
Performance:
EMA updates are O(n) per bar (n = number of lengths).
Sorting is O(k²) (k = candidates on the chosen side) — negligible for small lists.
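A minimal usage sketch (the publisher namespace in the import path and the exact tuple order are assumptions; check the library's publication page):
//@version=6
indicator("EMA stop-loss example", overlay = true)
import username/ema_stoploss_lib/1 as sl // hypothetical publisher namespace
[sl_buy, sl_sell, nearest_buy_idx, nearest_sell_idx] = sl.get_stop_loss(2) // 0-based: third nearest on each side
plot(sl_buy, "Stop-Loss Long", color = color.green, style = plot.style_linebr) // linebr skips na bars
plot(sl_sell, "Stop-Loss Short", color = color.red, style = plot.style_linebr)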
ema_stoploss
Library "ema_stoploss"
What it does
A small library that builds stop-loss levels from dynamically computed EMAs. It finds EMAs strictly on the desired side of price (long: below; short: above), sorts them by distance to price, and returns the n-th nearest as your stop.
How it works
sortEMAsByDistanceStrictDyn(signal, lensArr, src, ascending)
Computes each EMA internally with the alpha formula (alpha = 2/(len+1)), so you can pass a dynamic array of lengths.
Strict side filter:
signal = 1 → only EMAs < src (below)
signal = -1 → only EMAs > src (above)
Sorts candidates by distance to src (default: nearest → farthest) and returns two arrays: EMA values and their lengths.
get_stop_loss(index) (exported)
Builds a default length array: 21, 50, 100, 200, 250, 500, 750, 1000.
Long side uses low to find the index-th nearest lower EMA.
Short side uses high to find the index-th nearest upper EMA.
Returns: the selected EMA values (sl_buy, sl_sell) and the 0-based indices of their lengths (nearest_buy_idx, nearest_sell_idx).
Plots
Stop-Loss Long (green): the selected lower EMA (based on low).
Stop-Loss Short (red): the selected upper EMA (based on high).
Input
Index (default 2): 0-based.
0 = nearest, 1 = second nearest, 2 = third, etc.
If there aren’t enough EMAs on the required side, the function returns na (no plot).
Why internal EMA calc?
ta.ema() doesn’t accept a series length; by updating each EMA with its alpha step every bar, the library supports arbitrary dynamic length arrays and stays bar-consistent.
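In isolation, that per-bar update looks like the following sketch (illustrative names; the library's internals may differ):
//@version=6
indicator("Dynamic EMA sketch", overlay = true)
var array<int> lens = array.from(21, 50, 100, 200, 250, 500, 750, 1000)
var array<float> emas = array.new_float(array.size(lens), na)
for i = 0 to array.size(lens) - 1
    len = array.get(lens, i)
    alpha = 2.0 / (len + 1) // alpha = 2/(len+1), as above
    prev = array.get(emas, i)
    array.set(emas, i, na(prev) ? close : alpha * close + (1 - alpha) * prev)
plot(array.get(emas, 0), "EMA 21")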
Customize
Edit the list in get_stop_loss() to use your own EMA lengths.
Change ascending in sortEMAsByDistanceStrictDyn if you prefer farthest → nearest.
Use a different src if needed (e.g., close, hlc3, etc.).
The example intentionally uses low for long stops and high for short stops.
Notes
Strict side filtering: EMAs on the wrong side are ignored (no fallback).
If no EMA qualifies on a side, you’ll get na for that side.
Complexity is O(n²) for sorting, which is negligible for small EMA lists.
Adaptive FoS Library
This library provides Adaptive Functions that I use in my scripts. For calculations, I use the max_bars_back function with a fixed length of 200 bars to prevent errors when a script tries to access data beyond its available history. This is a key difference from most other adaptive libraries — if you don’t need it, you don’t have to use it.
Some of the adaptive length functions are normalized. In addition to the adaptive length functions, this library includes various methods for calculating moving averages, normalized differences between fast and slow MAs, as well as several normalized oscillators.
TimeSeriesBenchmarkMeasures
Library "TimeSeriesBenchmarkMeasures"
Time Series Benchmark Metrics. \
Provides a comprehensive set of functions for benchmarking time series data, allowing you to evaluate the accuracy, stability, and risk characteristics of various models or strategies. The functions cover a wide range of statistical measures, including accuracy metrics (MAE, MSE, RMSE, NRMSE, MAPE, SMAPE), autocorrelation analysis (ACF, ADF), and risk measures (Theil's Inequality, Sharpness, Resolution, Coverage, and Pinball).
___
Reference:
- github.com .
- medium.com .
- www.salesforce.com .
- towardsdatascience.com .
- github.com .
mae(actual, forecasts)
In statistics, mean absolute error (MAE) is a measure of errors between paired observations expressing the same phenomenon. Examples of Y versus X include comparisons of predicted versus observed, subsequent time versus initial time, and one technique of measurement versus an alternative technique of measurement.
Parameters:
actual (array) : List of actual values.
forecasts (array) : List of forecasts values.
Returns: - Mean Absolute Error (MAE).
___
Reference:
- en.wikipedia.org .
- The Orange Book of Machine Learning - Carl McBride Ellis .
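For reference, the definition above transcribes directly into Pine (a sketch, not the library's internal code):
mae(array<float> actual, array<float> forecasts) =>
    float sum = 0.0
    for i = 0 to array.size(actual) - 1
        sum += math.abs(array.get(actual, i) - array.get(forecasts, i))
    sum / array.size(actual)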
mse(actual, forecasts)
The Mean Squared Error (MSE) is a measure of the quality of an estimator. As it is derived from the square of Euclidean distance, it is always a positive value that decreases as the error approaches zero.
Parameters:
actual (array) : List of actual values.
forecasts (array) : List of forecasts values.
Returns: - Mean Squared Error (MSE).
___
Reference:
- en.wikipedia.org .
rmse(targets, forecasts, order, offset)
Calculates the Root Mean Squared Error (RMSE) between target observations and forecasts. RMSE is a standard measure of the differences between values predicted by a model and the values actually observed.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
order (int) : Model order parameter that determines the starting position in the targets array, `default=0`.
offset (int) : Forecast offset related to target, `default=0`.
Returns: - RMSE value.
nmrse(targets, forecasts, order, offset)
Normalised Root Mean Squared Error.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
order (int) : Model order parameter that determines the starting position in the targets array, `default=0`.
offset (int) : Forecast offset related to target, `default=0`.
Returns: - NRMSE value.
rmse_interval(targets, forecasts)
Root Mean Squared Error for a set of interval windows. Computes RMSE by converting interval forecasts (with min/max bounds) into point forecasts using the mean of the interval bounds, then compares against actual target values.
Parameters:
targets (array) : List of target observations.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - RMSE value for the combined interval list.
mape(targets, forecasts)
Mean Absolute Percentage Error (MAPE).
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
Returns: - MAPE value.
smape(targets, forecasts, mode)
Symmetric Mean Absolute Percentage Error (SMAPE). A symmetric variant of MAPE and a common metric for evaluating forecast accuracy, expressed as a percentage; lower values indicate better forecast accuracy.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
mode (int) : Type of method: default=0:`sum(abs(Fi-Ti)) / sum(Fi+Ti)` , 1:`mean(abs(Fi-Ti) / ((Fi + Ti) / 2))` , 2:`mean(abs(Fi-Ti) / (abs(Fi) + abs(Ti))) * 100`
Returns: - SMAPE value.
mape_interval(targets, forecasts)
Mean Absolute Percentage Error (MAPE) for a set of interval windows.
Parameters:
targets (array) : List of target observations.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - MAPE value for the combined interval list.
acf(data, k)
Autocorrelation Function (ACF) for a time series at a specified lag.
Parameters:
data (array) : Sample data of the observations.
k (int) : The lag period for which to calculate the autocorrelation. Must be a non-negative integer.
Returns: - The autocorrelation value at the specified lag, ranging from -1 to 1.
___
The autocorrelation function measures the linear dependence between observations in a time series
at different time lags. It quantifies how well the series correlates with itself at different
time intervals, which is useful for identifying patterns, seasonality, and the appropriate
lag structure for time series models.
ACF values close to 1 indicate strong positive correlation, values close to -1 indicate
strong negative correlation, and values near 0 indicate no linear correlation.
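Under that standard definition, a lag-k ACF can be sketched as follows (assumes 0 <= k < n; the library's implementation may differ):
acf(array<float> data, int k) =>
    int n = array.size(data)
    float m = array.avg(data)
    float num = 0.0
    float den = 0.0
    for i = 0 to n - 1
        d = array.get(data, i) - m
        den += d * d // variance term
        if i + k < n
            num += d * (array.get(data, i + k) - m) // lag-k covariance term
    num / den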
___
Reference:
- statisticsbyjim.com
acf_multiple(data, k)
Autocorrelation function (ACF) for a time series at a set of specified lags.
Parameters:
data (array) : Sample data of the observations.
k (array) : List of lag periods for which to calculate the autocorrelation. Each must be a non-negative integer.
Returns: - List of ACF values for provided lags.
___
The autocorrelation function measures the linear dependence between observations in a time series
at different time lags. It quantifies how well the series correlates with itself at different
time intervals, which is useful for identifying patterns, seasonality, and the appropriate
lag structure for time series models.
ACF values close to 1 indicate strong positive correlation, values close to -1 indicate
strong negative correlation, and values near 0 indicate no linear correlation.
___
Reference:
- statisticsbyjim.com
adfuller(data, n_lag, conf)
Augmented Dickey-Fuller test for stationarity.
Parameters:
data (array) : Data series.
n_lag (int) : Maximum lag.
conf (string) : Confidence Probability level used to test for critical value, (`90%`, `95%`, `99%`).
Returns: - `adf` The test statistic.
- `crit` Critical value for the test statistic at the selected confidence level.
- `nobs` Number of observations used for the ADF regression and calculation of the critical values.
___
The Augmented Dickey-Fuller test is used to determine whether a time series is stationary
or contains a unit root (non-stationary). The null hypothesis is that the series has a unit root
(is non-stationary), while the alternative hypothesis is that the series is stationary.
A stationary time series has statistical properties that do not change over time, making it
suitable for many time series forecasting models. If the test statistic is less than the
critical value, we reject the null hypothesis and conclude the series is stationary.
___
Reference:
- www.jstor.org
- en.wikipedia.org
theils_inequality(targets, forecasts)
Calculates Theil's Inequality Coefficient, a measure of forecast accuracy that quantifies the relative difference between actual and predicted values.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecast values.
Returns: - Theil's Inequality Coefficient value, value closer to 0 is better.
___
Theil's Inequality Coefficient is calculated as: `sqrt(Sum((y_i - f_i)^2)) / (sqrt(Sum(y_i^2)) + sqrt(Sum(f_i^2)))`
where `y_i` represents actual values and `f_i` represents forecast values.
This metric ranges from 0 to 1, with 0 indicating perfect forecast accuracy.
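That formula transcribes directly (sketch, not the library's internal code):
theils_inequality(array<float> targets, array<float> forecasts) =>
    float se = 0.0 // Sum((y_i - f_i)^2)
    float st = 0.0 // Sum(y_i^2)
    float sf = 0.0 // Sum(f_i^2)
    for i = 0 to array.size(targets) - 1
        t = array.get(targets, i)
        f = array.get(forecasts, i)
        se += (t - f) * (t - f)
        st += t * t
        sf += f * f
    math.sqrt(se) / (math.sqrt(st) + math.sqrt(sf))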
___
Reference:
- en.wikipedia.org
sharpness(forecasts)
The average width of the forecast intervals across all observations, representing the sharpness or precision of the predictive intervals.
Parameters:
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - Sharpness The sharpness level, which is the average width of all prediction intervals across the forecast horizon.
___
Sharpness is an important metric for evaluating forecast quality. It measures how narrow or wide the
prediction intervals are. Higher sharpness (narrower intervals) indicates greater precision in the
forecast intervals, while lower sharpness (wider intervals) suggests less precision.
The sharpness metric is calculated as the mean of the interval widths across all observations, where
each interval width is the difference between the upper and lower bounds of the prediction interval.
Note: This function assumes that the forecasts matrix has at least 2 columns, with the first column
representing the lower bounds and the second column representing the upper bounds of prediction intervals.
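As a sketch of that calculation (column 0 = min, column 1 = max, per the note above):
sharpness(matrix<float> forecasts) =>
    float widths = 0.0
    int rows = matrix.rows(forecasts)
    for i = 0 to rows - 1
        widths += matrix.get(forecasts, i, 1) - matrix.get(forecasts, i, 0) // upper - lower
    widths / rows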
___
Reference:
- Hyndman, R. J., & Athanasopoulos, G. (2018). Forecasting: principles and practice. OTexts. otexts.com
resolution(forecasts)
Calculates the resolution of forecast intervals, measuring the average absolute difference between individual forecast interval widths and the overall sharpness measure.
Parameters:
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - The average absolute difference between individual forecast interval widths and the overall sharpness measure, representing the resolution of the forecasts.
___
Resolution is a key metric for evaluating forecast quality that measures the consistency of prediction
interval widths. It quantifies how much the individual forecast intervals vary from the average interval
width (sharpness). High resolution indicates that the forecast intervals are relatively consistent
across observations, while low resolution suggests significant variation in interval widths.
The resolution is calculated as the mean absolute deviation of individual interval widths from the
overall sharpness value. This provides insight into the uniformity of the forecast uncertainty
estimates across the forecast horizon.
Note: This function requires the forecasts matrix to have at least 2 columns (min, max) representing
the lower and upper bounds of prediction intervals.
___
Reference:
- (sites.stat.washington.edu)
- (www.jstor.org)
coverage(targets, forecasts)
Calculates the coverage probability, which is the percentage of target values that fall within the corresponding forecasted prediction intervals.
Parameters:
targets (array) : List of target values.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - Percent of target values that fall within their corresponding forecast intervals, expressed as a decimal value between 0 and 1 (or 0% and 100%).
___
Coverage probability is a crucial metric for evaluating the reliability of prediction intervals.
It measures how well the forecast intervals capture the actual observed values. An ideal forecast
should have a coverage probability close to the nominal confidence level (e.g., 90%, 95%, or 99%).
For example, if a 95% prediction interval is used, we expect approximately 95% of the actual
target values to fall within those intervals. If the coverage is significantly lower than the
nominal level, the intervals may be too narrow; if it's significantly higher, the intervals may
be too wide.
Note: This function requires the targets array and forecasts matrix to have the same number of
observations, and the forecasts matrix must have at least 2 columns (min, max) representing
the lower and upper bounds of prediction intervals.
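A sketch of the described computation (column 0 = min, column 1 = max assumed):
coverage(array<float> targets, matrix<float> forecasts) =>
    float hits = 0.0
    for i = 0 to array.size(targets) - 1
        t = array.get(targets, i)
        if t >= matrix.get(forecasts, i, 0) and t <= matrix.get(forecasts, i, 1)
            hits += 1.0
    hits / array.size(targets) // fraction between 0 and 1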
___
Reference:
- (www.jstor.org)
pinball(tau, target, forecast)
Pinball loss function, measures the asymmetric loss for quantile forecasts.
Parameters:
tau (float) : The quantile level (between 0 and 1), where 0.5 represents the median.
target (float) : The actual observed value to compare against.
forecast (float) : The forecasted value.
Returns: - The Pinball loss value, which quantifies the distance between the forecast and target relative to the specified quantile level.
___
The Pinball loss function is specifically designed for evaluating quantile forecasts. It is
asymmetric, meaning it penalizes underestimates and overestimates differently depending on the
quantile level being evaluated.
For a given quantile τ, the loss function is defined as:
- If target >= forecast: (target - forecast) * τ
- If target < forecast: (forecast - target) * (1 - τ)
This loss function is commonly used in quantile regression and probabilistic forecasting
to evaluate how well forecasts capture specific quantiles of the target distribution.
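That piecewise definition translates one-to-one into Pine:
pinball(float tau, float target, float forecast) =>
    target >= forecast ? (target - forecast) * tau : (forecast - target) * (1 - tau)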
___
Reference:
- (www.otexts.com)
pinball_mean(tau, targets, forecasts)
Calculates the mean pinball loss for quantile regression.
Parameters:
tau (float) : The quantile level (between 0 and 1), where 0.5 represents the median.
targets (array) : The actual observed values to compare against.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - The mean pinball loss value across all observations.
___
The pinball_mean() function computes the average Pinball loss across multiple observations,
making it suitable for evaluating overall forecast performance in quantile regression tasks.
This function leverages the asymmetric Pinball loss function to evaluate how well forecasts
capture specific quantiles of the target distribution. The choice of which column from the
forecasts matrix to use depends on the quantile level:
- For τ ≤ 0.5: Uses the first column (min) of forecasts
- For τ > 0.5: Uses the second column (max) of forecasts
This loss function is commonly used in quantile regression and probabilistic forecasting
to evaluate how well forecasts capture specific quantiles of the target distribution.
___
Reference:
- (www.otexts.com)
Correlation HeatMap Matrix Data [TradingFinder]
🔵 Introduction
Correlation is a statistical measure that shows the degree and direction of a linear relationship between two assets.
Its value ranges from -1 to +1 : +1 means perfect positive correlation, 0 means no linear relationship, and -1 means perfect negative correlation.
In financial markets, correlation is used for portfolio diversification, risk management, pairs trading, intermarket analysis, and identifying divergences.
Correlation HeatMap Matrix Data TradingFinder is a Pine Script v6 library that calculates and returns raw correlation matrix data between up to 20 symbols. It only provides the data – it does not draw or render the heatmap – making it ideal for use in other scripts that handle visualization or further analysis. The library uses ta.correlation for fast and accurate calculations.
It also includes two helper functions for visual styling :
CorrelationColor(corr) : takes the correlation value as input and generates a smooth gradient color, ranging from strong negative to strong positive correlation.
CorrelationTextColor(corr) : takes the correlation value as input and returns a text color that ensures optimal contrast over the background color.
Library
"Correlation_HeatMap_Matrix_Data_TradingFinder"
CorrelationColor(corr)
Parameters:
corr (float)
CorrelationTextColor(corr)
Parameters:
corr (float)
Data_Matrix(Corr_Period, Sym_1, Sym_2, Sym_3, Sym_4, Sym_5, Sym_6, Sym_7, Sym_8, Sym_9, Sym_10, Sym_11, Sym_12, Sym_13, Sym_14, Sym_15, Sym_16, Sym_17, Sym_18, Sym_19, Sym_20)
Parameters:
Corr_Period (int)
Sym_1 (string)
Sym_2 (string)
Sym_3 (string)
Sym_4 (string)
Sym_5 (string)
Sym_6 (string)
Sym_7 (string)
Sym_8 (string)
Sym_9 (string)
Sym_10 (string)
Sym_11 (string)
Sym_12 (string)
Sym_13 (string)
Sym_14 (string)
Sym_15 (string)
Sym_16 (string)
Sym_17 (string)
Sym_18 (string)
Sym_19 (string)
Sym_20 (string)
🔵 How to use
Import the library into your Pine Script using the import keyword and its full namespace.
Decide how many symbols you want to include in your correlation matrix (up to 20). Each symbol must be provided as a string, for example FX:EURUSD .
Choose the correlation period (Corr_Period) in bars. This is the lookback window used for the calculation, such as 20, 50, or 100 bars.
Call Data_Matrix(Corr_Period, Sym_1, ..., Sym_20) with your selected parameters. The function will return an array containing the correlation values for every symbol pair (upper triangle of the matrix plus diagonal).
For example :
var string Sym_1 = '' , var string Sym_2 = '' , var string Sym_3 = '' , var string Sym_4 = '' , var string Sym_5 = '' , var string Sym_6 = '' , var string Sym_7 = '' , var string Sym_8 = '' , var string Sym_9 = '' , var string Sym_10 = ''
var string Sym_11 = '', var string Sym_12 = '', var string Sym_13 = '', var string Sym_14 = '', var string Sym_15 = '', var string Sym_16 = '', var string Sym_17 = '', var string Sym_18 = '', var string Sym_19 = '', var string Sym_20 = ''
switch Market
'Forex' => Sym_1 := 'EURUSD' , Sym_2 := 'GBPUSD' , Sym_3 := 'USDJPY' , Sym_4 := 'USDCHF' , Sym_5 := 'USDCAD' , Sym_6 := 'AUDUSD' , Sym_7 := 'NZDUSD' , Sym_8 := 'EURJPY' , Sym_9 := 'EURGBP' , Sym_10 := 'GBPJPY'
,Sym_11 := 'AUDJPY', Sym_12 := 'EURCHF', Sym_13 := 'EURCAD', Sym_14 := 'GBPCAD', Sym_15 := 'CADJPY', Sym_16 := 'CHFJPY', Sym_17 := 'NZDJPY', Sym_18 := 'AUDNZD', Sym_19 := 'USDSEK' , Sym_20 := 'USDNOK'
'Stock' => Sym_1 := 'NVDA' , Sym_2 := 'AAPL' , Sym_3 := 'GOOGL' , Sym_4 := 'GOOG' , Sym_5 := 'META' , Sym_6 := 'MSFT' , Sym_7 := 'AMZN' , Sym_8 := 'AVGO' , Sym_9 := 'TSLA' , Sym_10 := 'BRK.B'
,Sym_11 := 'UNH' , Sym_12 := 'V' , Sym_13 := 'JPM' , Sym_14 := 'WMT' , Sym_15 := 'LLY' , Sym_16 := 'ORCL', Sym_17 := 'HD' , Sym_18 := 'JNJ' , Sym_19 := 'MA' , Sym_20 := 'COST'
'Crypto' => Sym_1 := 'BTCUSD' , Sym_2 := 'ETHUSD' , Sym_3 := 'BNBUSD' , Sym_4 := 'XRPUSD' , Sym_5 := 'SOLUSD' , Sym_6 := 'ADAUSD' , Sym_7 := 'DOGEUSD' , Sym_8 := 'AVAXUSD' , Sym_9 := 'DOTUSD' , Sym_10 := 'TRXUSD'
,Sym_11 := 'LTCUSD' , Sym_12 := 'LINKUSD', Sym_13 := 'UNIUSD', Sym_14 := 'ATOMUSD', Sym_15 := 'ICPUSD', Sym_16 := 'ARBUSD', Sym_17 := 'APTUSD', Sym_18 := 'FILUSD', Sym_19 := 'OPUSD' , Sym_20 := 'USDT.D'
'Custom' => Sym_1 := Sym_1_C , Sym_2 := Sym_2_C , Sym_3 := Sym_3_C , Sym_4 := Sym_4_C , Sym_5 := Sym_5_C , Sym_6 := Sym_6_C , Sym_7 := Sym_7_C , Sym_8 := Sym_8_C , Sym_9 := Sym_9_C , Sym_10 := Sym_10_C
,Sym_11 := Sym_11_C, Sym_12 := Sym_12_C, Sym_13 := Sym_13_C, Sym_14 := Sym_14_C, Sym_15 := Sym_15_C, Sym_16 := Sym_16_C, Sym_17 := Sym_17_C, Sym_18 := Sym_18_C, Sym_19 := Sym_19_C , Sym_20 := Sym_20_C
= Corr.Data_Matrix(Corr_period, Sym_1 ,Sym_2 ,Sym_3 ,Sym_4 ,Sym_5 ,Sym_6 ,Sym_7 ,Sym_8 ,Sym_9 ,Sym_10,Sym_11,Sym_12,Sym_13,Sym_14,Sym_15,Sym_16,Sym_17,Sym_18,Sym_19,Sym_20)
Loop through or index into this array to retrieve each correlation value for your custom layout or logic.
Pass each correlation value to CorrelationColor() to get the corresponding gradient background color, which reflects the correlation’s strength and direction (negative to positive).
For example :
Corr.CorrelationColor(SYM_3_10)
Pass the same correlation value to CorrelationTextColor() to get the correct text color for readability against that background.
For example :
Corr.CorrelationTextColor(SYM_1_1)
Use these colors in a table or label to render your own heatmap or any other visualization you need.
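For instance, a single cell could be rendered like this (assumes the library was imported as Corr, as in the examples above; the 0.75 value and cell position are illustrative):
var table tb = table.new(position.top_right, 21, 21)
corrVal = 0.75 // one value read from the Data_Matrix() result
table.cell(tb, 0, 0, str.tostring(corrVal, "#.##"),
     bgcolor = Corr.CorrelationColor(corrVal),
     text_color = Corr.CorrelationTextColor(corrVal))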
Primes_4
These libraries (Primes_1 -> Primes_4) contain arrays of reduced Prime Numbers to minimize the amount of tokens, allowing more information to be exported.
Values, for example:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121
are reduced to:
7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21
With the restoreValues() function found in this library, the reduced values can be restored back to their original state.
7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21
is restored back to:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121
The libraries contain all Prime Numbers from 2 to 1.340.011
------------------------------------------------------------
Library "Primes_4"
Prime Numbers 1.096.031 - 1.340.011
primes_a()
Prime numbers 1.096.031 - 1.205.999
primes_b()
Prime numbers 1.206.013 - 1.317.989
primes_c()
Prime numbers 1.318.003 - 1.340.011
method restoreValues(iArray, iShow, iFrom, iTo)
restoreValues : Restores reduced prime number values in an array to their original state, for example `7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21` is restored to `7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121`
Namespace types: array
Parameters:
iArray (array)
iShow (bool)
iFrom (int)
iTo (int)
Returns: Initial array with restored prime number values
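The restore logic can be sketched as follows. This is an assumed reconstruction of the scheme described above, not the library's source: each reduced value replaces the trailing digits of the previous prime, carrying into the next leading-digit block when needed.
restore(array<int> reduced) =>
    out = array.new_int()
    int prev = 0
    for v in reduced
        int base = int(math.pow(10, str.length(str.tostring(v)))) // 10^(digit count of v)
        int r = v > prev ? v : prev - prev % base + v // keep full values, graft reduced ones
        if r <= prev
            r += base // carry when the trailing digits wrap
        array.push(out, r)
        prev := r
    out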
Primes_3
These libraries (Primes_1 -> Primes_4) contain arrays of reduced Prime Numbers to minimize the amount of tokens, allowing more information to be exported.
Values, for example:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121
are reduced to:
7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21
With the restoreValues() function found in the Primes_4 library, the reduced values can be restored back to their original state.
7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21
is restored back to:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121
The libraries contain all Prime Numbers from 2 to 1.340.011
------------------------------------------------------------
Library "Primes_3"
Prime Numbers 713.021 - 1.095.989
primes_a()
Prime numbers 713.021 - 820.997
primes_b()
Prime numbers 821.003 - 928.979
primes_c()
Prime numbers 929.003 - 1.038.953
primes_d()
Prime numbers 1.039.001 - 1.095.989
Primes_2
These libraries (Primes_1 -> Primes_4) contain arrays of reduced Prime Numbers to minimize the amount of tokens, allowing more information to be exported.
Values, for example:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121
are reduced to:
7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21
With the restoreValues() function found in the Primes_4 library, the reduced values can be restored back to their original state.
7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21
is restored back to:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121
The libraries contain all Prime Numbers from 2 to 1.340.011
------------------------------------------------------------
Library "Primes_2"
Prime Numbers 340.007 - 712.981
primes_a()
Prime numbers 340.007 - 441.971
primes_b()
Prime numbers 442.003 - 545.959
primes_c()
Prime numbers 546.001 - 650.987
primes_d()
Prime numbers 651.017 - 712.981
Primes_1
These libraries (Primes_1 -> Primes_4) contain arrays of reduced Prime Numbers to minimize the amount of tokens, allowing more information to be exported.
Values, for example:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121
are reduced to:
7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21
With the restoreValues() function found in the Primes_4 library, the reduced values can be restored back to their original state.
7001, 13, 19, 27, 39, 43, 57, 69, 79, 7103, 9, 21
is restored back to:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121
The libraries contain all Prime Numbers from 2 to 1.340.011
------------------------------------------------------------
Library "Primes_1"
Prime Numbers 2 - 339.991
primes_a()
Prime numbers 2 - 81.689
primes_b()
Prime numbers 81.701 - 175.897
primes_c()
Prime numbers 175.909 - 273.997
primes_d()
Prime numbers 274.007 - 339.991
SITFX_FuturesSpec_v17 – Universal Futures Contract Library
Full-scale futures contract specification library for Pine Script v6. Covers CME, CBOT, NYMEX, COMEX, CFE, Eurex, ICE, and more – including minis, micros, metals, energies, FX, and bonds.
Key Features:
✅ Instrument‑agnostic: ES/MES, NQ/MNQ, YM/MYM, RTY/M2K, metals, energies, FX, bonds
✅ Full contract data: Tick size, tick value, point value, margins
✅ Continuation‑safe: Single‑line logic, no arrays or continuation errors
✅ Foundation for SITFX tools: Gann, Fibs, structure, and risk modules
Usage example:
import SITFX_FuturesSpec_v17/1 as fs
spec = fs.get(syminfo.root)
label.new(bar_index, high, str.format("{0}: Tick={1}, Value=${2}", spec.name, spec.tickSize, spec.tickValue))
FunctionADF
Library "FunctionADF"
Augmented Dickey-Fuller test (ADF), The ADF test is a statistical method used to assess whether a time series is stationary – meaning its statistical properties (like mean and variance) do not change over time. A time series with a unit root is considered non-stationary and often exhibits non-mean-reverting behavior, which is a key concept in technical analysis.
Reference:
- rtmath.net
- en.wikipedia.org
adftest(data, n_lag, conf)
Augmented Dickey-Fuller test for stationarity.
Parameters:
data (array) : Data series.
n_lag (int) : Maximum lag.
conf (string) : Confidence Probability level used to test for critical value, (`90%`, `95%`, `99%`).
Returns: `adf` The test statistic. \
`crit` Critical value for the test statistic at the selected confidence level. \
`nobs` Number of observations used for the ADF regression and calculation of the critical values.
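A hypothetical usage sketch (placeholder namespace; collects a rolling window of closes and tests it on the last bar):
//@version=6
indicator("ADF example")
import username/FunctionADF/1 as fadf // placeholder publisher namespace
var array<float> prices = array.new_float()
array.push(prices, close)
if array.size(prices) > 100
    array.shift(prices) // keep a 100-bar window
if barstate.islast
    [adf, crit, nobs] = fadf.adftest(prices, 1, '95%')
    label.new(bar_index, high, str.format("ADF={0} crit={1} nobs={2}", adf, crit, nobs))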
DoublePatterns
Detects Double Top and Double Bottom patterns from pivot points using structural symmetry, valley/peak depth, and extreme validation. Returns a detailed result object including similarity score, target price, and breakout quality.
WedgePatterns
Detects Rising and Falling Wedge chart patterns using pivot points, trendline convergence, and volume confirmation. Includes adaptive wedge length analysis and a quality score for each match. Returns full wedge geometry and classification via WedgeResult.
HeadShouldersPatterns
Detects Head & Shoulders and Inverse Head & Shoulders chart patterns from pivot point arrays. Includes neckline validation, shoulder symmetry checks, and head extremeness filtering. Returns a detailed result object with structure points, bar indices, and projected price target.
XABCD_Harmonics
Library for detecting harmonic patterns using ZigZag pivots or custom swing points. Supports Butterfly, Gartley, Bat, and Crab patterns with automatic Fibonacci ratio validation and optional D-point projection using extremes. Returns detailed PatternResult including structure points and target projection. Ideal for technical analysis, algorithmic detection, or overlay visualizations.
FastMetrix
Library "FastMetrix"
This is a library I've been tweaking and working with for a while, and I find it useful for getting valuable technical analysis metrics faster (hence the name FastMetrix). A lot of it is personal to my trading style, so sorry if it does not have everything you want. The way I get my variables from library to script is by copying the return function into my new script.
TODO: Volatility and short term price analysis functions
slope(source, smoothing)
Parameters:
source (float)
smoothing (int)
integral(topfunction, bottomfunction, start, end)
Parameters:
topfunction (float)
bottomfunction (float)
start (int)
end (int)
deviation(x, y)
Parameters:
x (float)
y (float)
getema(len)
TODO: return important exponential long term moving averages and derivatives/variables
Parameters:
len (simple int)
getsma(len)
TODO: return requested sma
Parameters:
len (int)
kc(mult, len)
TODO: Return Keltner Channels variables and calculations
Parameters:
mult (simple float)
len (simple int)
bollinger(len, mult)
TODO: returns bollinger bands with optimal settings
Parameters:
len (int)
mult (simple float)
volatility(atrlen, smoothing)
TODO: Returns volatility indicators based on atr
Parameters:
atrlen (simple int)
smoothing (int)
premarketfib()
countinday(xcondition)
Parameters:
xcondition (bool)
countinsession(condition, n)
Parameters:
condition (bool)
n (int)
MathConstantsSolarSystem
Library "MathConstantsSolarSystem"
Properties and data for the celestial objects in the Solar System.
LMAs
Library "LMAs"
Credits
Thank you to @QuantraSystems for dynamic calculations.
Introduction
This lightweight library offers dynamic implementations of popular moving averages that adapt their length automatically as new bars are added to the chart.
Each function is built on a dynamic length formula:
len = math.min(maxLength, bar_index + 1)
This approach ensures that calculations begin as early as the first bar, allowing for smoother initialization and more consistent behavior across all timeframes. It’s especially useful in custom scripts that run from bar 0 or when historical data is limited.
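Applied to a plain SMA, the pattern is simply:
dynSMA(float src, int maxLength) =>
    len = math.min(maxLength, bar_index + 1) // grows until maxLength bars exist
    ta.sma(src, len)

plot(dynSMA(close, 200), "Dynamic SMA")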
Usage
You can use this library as a drop-in replacement for standard moving averages. It provides more flexibility and stability in live or backtesting environments where fixed-length indicators may delay or fail to initialize properly.
Why Use This?
• Works from the very first bar
• Avoids na values during early bars
• Great for real-time indicators, strategies, and bar-replay
• Clean and efficient code with dynamic behavior
How to Use
Import the library into your script and call any of the included functions just like you would with their native counterparts.
Summary
A lightweight Pine Script™ library offering dynamic moving averages that work seamlessly from the very first bar. Ideal for strategies and indicators requiring robust initialization and adaptive behavior.
SMA(sourceData, maxLength)
Dynamic SMA
Parameters:
sourceData (float)
maxLength (int)
EMA(src, length)
Dynamic EMA
Parameters:
src (float)
length (int)
DEMA(src, length)
Dynamic DEMA
Parameters:
src (float)
length (int)
TEMA(src, length)
Dynamic TEMA
Parameters:
src (float)
length (int)
WMA(src, length)
Dynamic WMA
Parameters:
src (float)
length (int)
HMA(src, length)
Dynamic HMA
Parameters:
src (float)
length (int)
VWMA(src, volsrc, length)
Dynamic VWMA
Parameters:
src (float)
volsrc (float)
length (int)
SMMA(src, length)
Dynamic SMMA
Parameters:
src (float)
length (int)
LSMA(src, length, offset)
Dynamic LSMA
Parameters:
src (float)
length (int)
offset (int)
RMA(src, length)
Dynamic RMA
Parameters:
src (float)
length (int)
ALMA(src, length, offset_sigma, sigma)
Dynamic ALMA
Parameters:
src (float)
length (int)
offset_sigma (float)
sigma (float)
ZLSMA(src, length)
Dynamic ZLSMA
Parameters:
src (float)
length (int)
FRAMA(src, length)
Parameters:
src (float)
length (int)
KAMA(src, length)
Dynamic KAMA
Parameters:
src (float)
length (int)
JMA(src, length, phase)
Dynamic JMA
Parameters:
src (float)
length (int)
phase (float)
T3(src, length, volumeFactor)
Dynamic T3
Parameters:
src (float)
length (int)
volumeFactor (float)
FvgCalculations
█ OVERVIEW
This library provides the core calculation engine for identifying Fair Value Gaps (FVGs) across different timeframes and for processing their interaction with price. It includes functions to detect FVGs on both the current chart and higher timeframes, as well as to check for their full or partial mitigation.
█ CONCEPTS
The library's primary functions revolve around the concept of Fair Value Gaps and their lifecycle.
Fair Value Gap (FVG) Identification
An FVG, or imbalance, represents a price range where buying or selling pressure was significant enough to cause a rapid price movement, leaving an "inefficiency" in the market. This library identifies FVGs based on three-bar patterns:
Bullish FVG: Forms when the low of the current bar (bar 3) is higher than the high of the bar two periods prior (bar 1). The FVG is the space between the high of bar 1 and the low of bar 3.
Bearish FVG: Forms when the high of the current bar (bar 3) is lower than the low of the bar two periods prior (bar 1). The FVG is the space between the low of bar 1 and the high of bar 3.
The library provides distinct functions for detecting FVGs on the current (Low Timeframe - LTF) and specified higher timeframes (Medium Timeframe - MTF / High Timeframe - HTF).
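On the current chart those two patterns reduce to a pair of comparisons; the sketch below only visualizes the detected gap and is not the library's code:
//@version=6
indicator("FVG sketch", overlay = true)
bullFvg = barstate.isconfirmed and low > high[2] // bullish: gap between high[2] and low
bearFvg = barstate.isconfirmed and high < low[2] // bearish: gap between high and low[2]
if bullFvg
    box.new(bar_index - 2, low, bar_index, high[2], bgcolor = color.new(color.green, 80))
if bearFvg
    box.new(bar_index - 2, low[2], bar_index, high, bgcolor = color.new(color.red, 80))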
FVG Mitigation
Mitigation refers to price revisiting an FVG.
Full Mitigation: An FVG is considered fully mitigated when price completely closes the gap. For a bullish FVG, this occurs if the current low price moves below or touches the FVG's bottom. For a bearish FVG, it occurs if the current high price moves above or touches the FVG's top.
Partial Mitigation (Entry/Fill): An FVG is partially mitigated when price enters the FVG's range but does not fully close it. The library tracks the extent of this fill. For a bullish FVG, if the current low price enters the FVG from above, that low becomes the new effective top of the remaining FVG. For a bearish FVG, if the current high price enters the FVG from below, that high becomes the new effective bottom of the remaining FVG.
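In code form, the bullish rules read as follows (illustrative names; a bearish FVG mirrors them using high):
// Returns [fullyMitigated, newTop] for a bullish FVG on the current bar.
checkBullishFill(float fvgTop, float fvgBottom) =>
    bool full = low <= fvgBottom // price closed the whole gap
    float newTop = not full and low < fvgTop ? low : fvgTop // a partial fill lowers the top
    [full, newTop]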
FVG Interaction
This refers to any instance where the current bar's price range (high to low) touches or crosses into the currently unfilled portion of an active (visible and not fully mitigated) FVG.
Multi-Timeframe Data Acquisition
To detect FVGs on higher timeframes, specific historical bar data (high, low, and time of the bars at indices 1 and 3 relative to the higher timeframe’s last completed bar) is required. The requestMultiTFBarData function is designed to fetch this data efficiently.
█ CALCULATIONS AND USE
The functions in this library are typically used in a sequence to manage FVGs:
1. Data Retrieval (for MTF/HTF FVGs):
Call requestMultiTFBarData() with the desired higher timeframe string (e.g., "60", "D").
This returns a tuple of htfHigh1, htfLow1, htfTime1, htfHigh3, htfLow3, htfTime3.
2. FVG Detection:
For LTF FVGs: Call detectFvg() on each confirmed bar. It uses high[2], low[2], low, and high along with barstate.isconfirmed.
For MTF/HTF FVGs: Call detectMultiTFFvg() using the data obtained from requestMultiTFBarData().
Both detection functions return an fvgObject (defined in FvgTypes) if an FVG is found, otherwise na. They also can classify FVGs as "Large Volume" (LV) if classifyLV is true and the FVG size (top - bottom) relative to the tfAtr (Average True Range of the respective timeframe) meets the lvAtrMultiplier.
3. FVG State Updates (on each new bar for existing FVGs):
First, check for overall price interaction using fvgInteractionCheck(). This function determines if the current bar's high/low has touched or entered the FVG's currentTop or currentBottom.
If interaction occurs and the FVG is not already mitigated:
Call checkMitigation() to determine if the FVG has been fully mitigated by the current bar's currentHigh and currentLow. If true, the FVG's isMitigated status is updated.
If not fully mitigated, call checkPartialMitigation() to see if the price has further entered the FVG. This function returns the newLevel to which the FVG has been filled (e.g., currentLow for a bullish FVG, currentHigh for bearish). This newLevel is then used to update the FVG's currentTop or currentBottom.
The calling script (e.g., fvgMain.c) is responsible for storing and managing the array of fvgObject instances and passing them to these update functions.
█ NOTES
Bar State for LTF Detection: The detectFvg() function relies on barstate.isconfirmed to ensure FVG detection is based on closed bars, preventing FVGs from being detected prematurely on the currently forming bar.
Higher Timeframe Data (lookahead): The requestMultiTFBarData() function uses lookahead = barmerge.lookahead_on. This means it can access historical data from the higher timeframe that corresponds to the current bar on the chart, even if the higher timeframe bar has not officially closed. This is standard for multi-timeframe analysis aiming to plot historical HTF data accurately on a lower timeframe chart.
Parameter Typing: Functions like detectMultiTFFvg and detectFvg infer the type for boolean (classifyLV) and numeric (lvAtrMultiplier) parameters passed from the main script, while explicitly typed series parameters (like htfHigh1, currentAtr) expect series data.
fvgObject Dependency: The FVG detection functions return fvgObject instances, and fvgInteractionCheck takes an fvgObject as a parameter. This UDT is defined in the FvgTypes library, making it a dependency for using FvgCalculations.
ATR for LV Classification: The tfAtr (for MTF/HTF) and currentAtr (for LTF) parameters are expected to be the Average True Range values for the respective timeframes. These are used, if classifyLV is enabled, to determine if an FVG's size qualifies it as a "Large Volume" FVG based on the lvAtrMultiplier.
MTF/HTF FVG Appearance Timing: When displaying FVGs from a higher timeframe (MTF/HTF) on a lower timeframe (LTF) chart, users might observe that the most recent MTF/HTF FVG appears one LTF bar later compared to its appearance on a native MTF/HTF chart. This is an expected behavior due to the detection mechanism in `detectMultiTFFvg`. This function uses historical bar data from the MTF/HTF (specifically, data equivalent to `HTF_bar[1]` and `HTF_bar[3]`) to identify an FVG. Therefore, all three bars forming the FVG on the MTF/HTF must be fully closed and have shifted into these historical index positions relative to the `request.security` call from the LTF chart before the FVG can be detected and displayed on the LTF. This ensures that the MTF/HTF FVG is identified based on confirmed, closed bars from the higher timeframe.
█ EXPORTED FUNCTIONS
requestMultiTFBarData(timeframe)
Requests historical bar data for specific previous bars from a specified higher timeframe.
It fetches H[1], L[1], T[1] (for the bar before last) and H[3], L[3], T[3] (for the bar three periods prior)
from the requested timeframe.
This is typically used to identify FVG patterns on MTF/HTF.
Parameters:
timeframe (simple string) : The higher timeframe to request data from (e.g., "60" for 1-hour, "D" for Daily).
Returns: A tuple containing:
- htfHigh1 (series float): High of the bar at index 1 (one bar before the last completed bar on timeframe).
- htfLow1 (series float): Low of the bar at index 1.
- htfTime1 (series int) : Time of the bar at index 1.
- htfHigh3 (series float): High of the bar at index 3 (three bars before the last completed bar on timeframe).
- htfLow3 (series float): Low of the bar at index 3.
- htfTime3 (series int) : Time of the bar at index 3.
detectMultiTFFvg(htfHigh1, htfLow1, htfTime1, htfHigh3, htfLow3, htfTime3, tfAtr, classifyLV, lvAtrMultiplier, tfType)
Detects a Fair Value Gap (FVG) on a higher timeframe (MTF/HTF) using pre-fetched bar data.
Parameters:
htfHigh1 (float) : High of the first relevant bar (typically high[1]) from the higher timeframe.
htfLow1 (float) : Low of the first relevant bar (typically low[1]) from the higher timeframe.
htfTime1 (int) : Time of the first relevant bar (typically time[1]) from the higher timeframe.
htfHigh3 (float) : High of the third relevant bar (typically high[3]) from the higher timeframe.
htfLow3 (float) : Low of the third relevant bar (typically low[3]) from the higher timeframe.
htfTime3 (int) : Time of the third relevant bar (typically time[3]) from the higher timeframe.
tfAtr (float) : ATR value for the higher timeframe, used for Large Volume (LV) FVG classification.
classifyLV (bool) : If true, FVGs will be assessed to see if they qualify as Large Volume.
lvAtrMultiplier (float) : The ATR multiplier used to define if an FVG is Large Volume.
tfType (series tfType enum from no1x/FvgTypes/1) : The timeframe type (e.g., types.tfType.MTF, types.tfType.HTF) of the FVG being detected.
Returns: An fvgObject instance if an FVG is detected, otherwise na.
detectFvg(classifyLV, lvAtrMultiplier, currentAtr)
Detects a Fair Value Gap (FVG) on the current (LTF - Low Timeframe) chart.
Parameters:
classifyLV (bool) : If true, FVGs will be assessed to see if they qualify as Large Volume.
lvAtrMultiplier (float) : The ATR multiplier used to define if an FVG is Large Volume.
currentAtr (float) : ATR value for the current timeframe, used for LV FVG classification.
Returns: An fvgObject instance if an FVG is detected, otherwise na.
checkMitigation(isBullish, fvgTop, fvgBottom, currentHigh, currentLow)
Checks if an FVG has been fully mitigated by the current bar's price action.
Parameters:
isBullish (bool) : True if the FVG being checked is bullish, false if bearish.
fvgTop (float) : The top price level of the FVG.
fvgBottom (float) : The bottom price level of the FVG.
currentHigh (float) : The high price of the current bar.
currentLow (float) : The low price of the current bar.
Returns: True if the FVG is considered fully mitigated, false otherwise.
checkPartialMitigation(isBullish, currentBoxTop, currentBoxBottom, currentHigh, currentLow)
Checks for partial mitigation of an FVG by the current bar's price action.
It determines if the price has entered the FVG and returns the new fill level.
Parameters:
isBullish (bool) : True if the FVG being checked is bullish, false if bearish.
currentBoxTop (float) : The current top of the FVG box (this might have been adjusted by previous partial fills).
currentBoxBottom (float) : The current bottom of the FVG box (similarly, might be adjusted).
currentHigh (float) : The high price of the current bar.
currentLow (float) : The low price of the current bar.
Returns: The new price level to which the FVG has been filled (e.g., currentLow for a bullish FVG).
Returns na if no new partial fill occurred on this bar.
fvgInteractionCheck(fvg, highVal, lowVal)
Checks if the current bar's price interacts with the given FVG.
Interaction means the price touches or crosses into the FVG's
current (possibly partially filled) range.
Parameters:
fvg (fvgObject type from no1x/FvgTypes/1) : The FVG object to check.
Its isMitigated, isVisible, isBullish, currentTop, and currentBottom fields are used.
highVal (float) : The high price of the current bar.
lowVal (float) : The low price of the current bar.
Returns: True if price interacts with the FVG, false otherwise.
WeightedVolumeUtils
Library "WeightedVolumeUtils"
fun(x)
Returns the input value (placeholder function).
Parameters:
x (float) : A float value.
Returns: The same float value passed as input.
weightedBSEVolume()
Calculates the weighted volume for BSE index based on top constituent stocks.
Returns: Weighted volume value based on fixed weights for BSE SENSEX stocks.
getAdjustedVolume()
Returns the adjusted volume for SENSEX or regular volume otherwise.
Returns: Weighted BSE volume if current symbol is SENSEX, else raw volume.
MathSpecialFunctionsConvolve1D
Library "MathSpecialFunctionsConvolve1D"
Convolution is one of the most important mathematical operations used in signal processing. This simple mathematical operation pops up in many scientific and industrial applications, from its use in a billion-layer large CNN to simple image denoising.
___
Reference:
www.algorithm-archive.org
numpy.org
lloydrochester.com
www.geeksforgeeks.org
f(signal, filter)
Convolve
Parameters:
signal (array) : List with signal data.
filter (array) : List with weights to apply to the signal data.
Returns: Discrete, linear convolution of `signal` and `filter`.
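As a sketch, discrete linear convolution in "full" mode (output length n + m - 1), consistent with the references above; the library's f() may handle edges differently:
convolve(array<float> signal, array<float> filter) =>
    int n = array.size(signal)
    int m = array.size(filter)
    out = array.new_float(n + m - 1, 0.0)
    for i = 0 to n - 1
        for j = 0 to m - 1
            array.set(out, i + j, array.get(out, i + j) + array.get(signal, i) * array.get(filter, j))
    out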
ErrorFunctions
Library "ErrorFunctions"
A collection of functions used to approximate the area beneath a Gaussian curve.
Because an ERF (Error Function) is an integral, there is no closed-form solution to calculating the area beneath the curve. Meaning all ERFs are approximations; precisely wrong, but mostly accurate. How close you need to get to the actual area depends entirely on your use case, with more precision being less efficient.
The internal precision of floats in Pine Script is 1e-16 (16 decimals, aka. double precision). This library adapts well known algorithms designed to efficiently reach double precision. Single precision alternates are also included. All of them were made free to use, modify, and distribute by their original authors.
HASTINGS
Adaptation of a single precision ERF by Cecil Hastings Jr, published through Princeton University in 1955. It was later documented by Abramowitz and Stegun as equation 7.1.26 in their 1972 Handbook of Mathematical Functions. Fast, efficient, and ideal when precision beyond a few decimals is unnecessary.
GILES
Adaptation of a single precision Inverse ERF by Michael Giles, published through the University of Oxford in 2012. It reverses the ERF, estimating an X coordinate from an area. It too is fast, efficient, and ideal when precision beyond a few decimals is unnecessary.
LIBC
Adaptation of the double precision ERF & ERFC in the standard C library (aka. libc). It is also the same ERF & ERFC that SciPy uses. While not quite as efficient as the Hastings approximation, it's still very fast and fully maximizes Pine's precision.
BOOST
Adaptation of the double precision Inverse ERF & Inverse ERFC in the Boost Math C++ library. SciPy uses these as well. These reverse the ERF & ERFC, estimating an X coordinate from an area. It too isn't quite as efficient as the Giles approximation, but still fast and fully maximizes Pine's precision.
While these algorithms are not exported directly, they are available through their exported counterparts.
- - -
ERROR FUNCTIONS
erf(x, precise)
An Error Function estimates the theoretical error of a measurement.
Parameters:
x (float) : (float) Upper limit of the integration.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between -1 and 1.
erfc(x, precise)
A Complementary Error Function estimates the difference between a theoretical error and infinity.
Parameters:
x (float) : (float) Lower limit of the integration.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and 2.
erfinv(x, precise)
An Inverse Error Function reverses the erf() by estimating the original measurement from the theoretical error.
Parameters:
x (float) : (float) Theoretical error.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and ± infinity.
erfcinv(x, precise)
An Inverse Complementary Error Function reverses the erfc() by estimating the original measurement from the difference between the theoretical error and infinity.
Parameters:
x (float) : (float) Difference between the theoretical error and infinity.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and ± infinity.
- - -
DISTRIBUTION FUNCTIONS
pdf(x, m, s)
A Probability Density Function estimates the probability density. For clarity, density is not a probability.
Parameters:
x (float) : (float) X coordinate for which a density will be estimated.
m (float) : (float) Mean
s (float) : (float) Sigma
Returns: (float) Between 0 and ∞.
cdf(z, precise)
A Cumulative Distribution Function estimates the area under a Gaussian curve between negative infinity and the Z Score.
Parameters:
z (float) : (float) Z Score.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and 1.
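A hypothetical usage sketch (placeholder namespace):
//@version=6
indicator("CDF example")
import username/ErrorFunctions/1 as ef // placeholder publisher namespace
p = ef.cdf(1.5, true) // P(Z < 1.5) ≈ 0.9332 for a standard normal
plot(p)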
cdfinv(a, precise)
An Inverse Cumulative Distribution Function reverses the cdf() by estimating the Z Score from an area.
Parameters:
a (float) : (float) Area between 0 and 1.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between -∞ and +∞
cdfab(z1, z2, precise)
A Cumulative Distribution Function from A to B estimates the area under a Gaussian curve between two Z Scores (A and B).
Parameters:
z1 (float) : (float) First Z Score.
z2 (float) : (float) Second Z Score.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and 1.
ttt(z, precise)
A Two-Tailed Test estimates the area under a Gaussian curve between symmetrical ± Z scores and ± infinity.
Parameters:
z (float) : (float) One of the symmetrical Z Scores.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and 1.
tttinv(a, precise)
An Inverse Two-Tailed Test reverses the ttt() by estimating the absolute Z Score from an area.
Parameters:
a (float) : (float) Area between 0 and 1.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and ∞.
ott(z, precise)
A One-Tailed Test estimates the area under a Gaussian curve between an absolute Z Score and infinity.
Parameters:
z (float) : (float) Z Score.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and 1.
ottinv(a, precise)
An Inverse One-Tailed Test reverses the ott() by estimating the Z Score from an area.
Parameters:
a (float) : (float) Area between 0 and 1.
precise (bool) : Double precision (true) or single precision (false).
Returns: (float) Between 0 and ∞.