PubLibSwing
Library "PubLibSwing"
swing high and swing low conditions, prices, bar indices and range ratios for indicator and strategy development
sh()
swing high condition
Returns: bool
sl()
swing low condition
Returns: bool
shbi(occ)
swing high bar index, condition occurrence n
Parameters:
occ (simple int)
Returns: int
slbi(occ)
swing low bar index, condition occurrence n
Parameters:
occ (simple int)
Returns: int
shcp(occ)
swing high close price, condition occurrence n
Parameters:
occ (simple int)
Returns: float
slcp(occ)
swing low close price, condition occurrence n
Parameters:
occ (simple int)
Returns: float
shp(occ)
swing high price, condition occurrence n
Parameters:
occ (simple int)
Returns: float
slp(occ)
swing low price, condition occurrence n
Parameters:
occ (simple int)
Returns: float
shpbi(occ)
swing high price bar index, condition occurrence n
Parameters:
occ (simple int)
Returns: int
slpbi(occ)
swing low price bar index, condition occurrence n
Parameters:
occ (simple int)
Returns: int
shrr(occ)
swing high range ratio, condition occurrence n
Parameters:
occ (simple int)
Returns: float
slrr(occ)
swing low range ratio, condition occurrence n
Parameters:
occ (simple int)
Returns: float
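A minimal usage sketch (the import path "USER/PubLibSwing/1" is a placeholder, and the meaning of the occurrence argument is an assumption to verify against the published library) :
//@version=5
indicator("PubLibSwing example", overlay = true)
import USER/PubLibSwing/1 as sw // hypothetical import path - replace with the library's publisher path
// occurrence 1 is assumed to mean the most recent swing; check how the library counts occurrences
shPrice = sw.shp(1)
slPrice = sw.slp(1)
// mark new swing high/low conditions and track the latest swing prices
plotshape(sw.sh(), style = shape.triangledown, location = location.abovebar, color = color.red)
plotshape(sw.sl(), style = shape.triangleup, location = location.belowbar, color = color.green)
plot(shPrice, "Swing high price", color.red, style = plot.style_linebr)
plot(slPrice, "Swing low price", color.green, style = plot.style_linebr)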
PubLibCandleTrend
Library "PubLibCandleTrend"
candle trend, multi-part candle trend, multi-part green/red candle trend, double candle trend and multi-part double candle trend conditions for indicator and strategy development
chh()
candle higher high condition
Returns: bool
chl()
candle higher low condition
Returns: bool
clh()
candle lower high condition
Returns: bool
cll()
candle lower low condition
Returns: bool
cdt()
candle double top condition
Returns: bool
cdb()
candle double bottom condition
Returns: bool
gc()
green candle condition
Returns: bool
gchh()
green candle higher high condition
Returns: bool
gchl()
green candle higher low condition
Returns: bool
gclh()
green candle lower high condition
Returns: bool
gcll()
green candle lower low condition
Returns: bool
gcdt()
green candle double top condition
Returns: bool
gcdb()
green candle double bottom condition
Returns: bool
rc()
red candle condition
Returns: bool
rchh()
red candle higher high condition
Returns: bool
rchl()
red candle higher low condition
Returns: bool
rclh()
red candle lower high condition
Returns: bool
rcll()
red candle lower low condition
Returns: bool
rcdt()
red candle double top condition
Returns: bool
rcdb()
red candle double bottom condition
Returns: bool
chh_1p()
1-part candle higher high condition
Returns: bool
chh_2p()
2-part candle higher high condition
Returns: bool
chh_3p()
3-part candle higher high condition
Returns: bool
chh_4p()
4-part candle higher high condition
Returns: bool
chh_5p()
5-part candle higher high condition
Returns: bool
chh_6p()
6-part candle higher high condition
Returns: bool
chh_7p()
7-part candle higher high condition
Returns: bool
chh_8p()
8-part candle higher high condition
Returns: bool
chh_9p()
9-part candle higher high condition
Returns: bool
chh_10p()
10-part candle higher high condition
Returns: bool
chh_11p()
11-part candle higher high condition
Returns: bool
chh_12p()
12-part candle higher high condition
Returns: bool
chh_13p()
13-part candle higher high condition
Returns: bool
chh_14p()
14-part candle higher high condition
Returns: bool
chh_15p()
15-part candle higher high condition
Returns: bool
chh_16p()
16-part candle higher high condition
Returns: bool
chh_17p()
17-part candle higher high condition
Returns: bool
chh_18p()
18-part candle higher high condition
Returns: bool
chh_19p()
19-part candle higher high condition
Returns: bool
chh_20p()
20-part candle higher high condition
Returns: bool
chh_21p()
21-part candle higher high condition
Returns: bool
chh_22p()
22-part candle higher high condition
Returns: bool
chh_23p()
23-part candle higher high condition
Returns: bool
chh_24p()
24-part candle higher high condition
Returns: bool
chh_25p()
25-part candle higher high condition
Returns: bool
chh_26p()
26-part candle higher high condition
Returns: bool
chh_27p()
27-part candle higher high condition
Returns: bool
chh_28p()
28-part candle higher high condition
Returns: bool
chh_29p()
29-part candle higher high condition
Returns: bool
chh_30p()
30-part candle higher high condition
Returns: bool
chl_1p()
1-part candle higher low condition
Returns: bool
chl_2p()
2-part candle higher low condition
Returns: bool
chl_3p()
3-part candle higher low condition
Returns: bool
chl_4p()
4-part candle higher low condition
Returns: bool
chl_5p()
5-part candle higher low condition
Returns: bool
chl_6p()
6-part candle higher low condition
Returns: bool
chl_7p()
7-part candle higher low condition
Returns: bool
chl_8p()
8-part candle higher low condition
Returns: bool
chl_9p()
9-part candle higher low condition
Returns: bool
chl_10p()
10-part candle higher low condition
Returns: bool
chl_11p()
11-part candle higher low condition
Returns: bool
chl_12p()
12-part candle higher low condition
Returns: bool
chl_13p()
13-part candle higher low condition
Returns: bool
chl_14p()
14-part candle higher low condition
Returns: bool
chl_15p()
15-part candle higher low condition
Returns: bool
chl_16p()
16-part candle higher low condition
Returns: bool
chl_17p()
17-part candle higher low condition
Returns: bool
chl_18p()
18-part candle higher low condition
Returns: bool
chl_19p()
19-part candle higher low condition
Returns: bool
chl_20p()
20-part candle higher low condition
Returns: bool
chl_21p()
21-part candle higher low condition
Returns: bool
chl_22p()
22-part candle higher low condition
Returns: bool
chl_23p()
23-part candle higher low condition
Returns: bool
chl_24p()
24-part candle higher low condition
Returns: bool
chl_25p()
25-part candle higher low condition
Returns: bool
chl_26p()
26-part candle higher low condition
Returns: bool
chl_27p()
27-part candle higher low condition
Returns: bool
chl_28p()
28-part candle higher low condition
Returns: bool
chl_29p()
29-part candle higher low condition
Returns: bool
chl_30p()
30-part candle higher low condition
Returns: bool
clh_1p()
1-part candle lower high condition
Returns: bool
clh_2p()
2-part candle lower high condition
Returns: bool
clh_3p()
3-part candle lower high condition
Returns: bool
clh_4p()
4-part candle lower high condition
Returns: bool
clh_5p()
5-part candle lower high condition
Returns: bool
clh_6p()
6-part candle lower high condition
Returns: bool
clh_7p()
7-part candle lower high condition
Returns: bool
clh_8p()
8-part candle lower high condition
Returns: bool
clh_9p()
9-part candle lower high condition
Returns: bool
clh_10p()
10-part candle lower high condition
Returns: bool
clh_11p()
11-part candle lower high condition
Returns: bool
clh_12p()
12-part candle lower high condition
Returns: bool
clh_13p()
13-part candle lower high condition
Returns: bool
clh_14p()
14-part candle lower high condition
Returns: bool
clh_15p()
15-part candle lower high condition
Returns: bool
clh_16p()
16-part candle lower high condition
Returns: bool
clh_17p()
17-part candle lower high condition
Returns: bool
clh_18p()
18-part candle lower high condition
Returns: bool
clh_19p()
19-part candle lower high condition
Returns: bool
clh_20p()
20-part candle lower high condition
Returns: bool
clh_21p()
21-part candle lower high condition
Returns: bool
clh_22p()
22-part candle lower high condition
Returns: bool
clh_23p()
23-part candle lower high condition
Returns: bool
clh_24p()
24-part candle lower high condition
Returns: bool
clh_25p()
25-part candle lower high condition
Returns: bool
clh_26p()
26-part candle lower high condition
Returns: bool
clh_27p()
27-part candle lower high condition
Returns: bool
clh_28p()
28-part candle lower high condition
Returns: bool
clh_29p()
29-part candle lower high condition
Returns: bool
clh_30p()
30-part candle lower high condition
Returns: bool
cll_1p()
1-part candle lower low condition
Returns: bool
cll_2p()
2-part candle lower low condition
Returns: bool
cll_3p()
3-part candle lower low condition
Returns: bool
cll_4p()
4-part candle lower low condition
Returns: bool
cll_5p()
5-part candle lower low condition
Returns: bool
cll_6p()
6-part candle lower low condition
Returns: bool
cll_7p()
7-part candle lower low condition
Returns: bool
cll_8p()
8-part candle lower low condition
Returns: bool
cll_9p()
9-part candle lower low condition
Returns: bool
cll_10p()
10-part candle lower low condition
Returns: bool
cll_11p()
11-part candle lower low condition
Returns: bool
cll_12p()
12-part candle lower low condition
Returns: bool
cll_13p()
13-part candle lower low condition
Returns: bool
cll_14p()
14-part candle lower low condition
Returns: bool
cll_15p()
15-part candle lower low condition
Returns: bool
cll_16p()
16-part candle lower low condition
Returns: bool
cll_17p()
17-part candle lower low condition
Returns: bool
cll_18p()
18-part candle lower low condition
Returns: bool
cll_19p()
19-part candle lower low condition
Returns: bool
cll_20p()
20-part candle lower low condition
Returns: bool
cll_21p()
21-part candle lower low condition
Returns: bool
cll_22p()
22-part candle lower low condition
Returns: bool
cll_23p()
23-part candle lower low condition
Returns: bool
cll_24p()
24-part candle lower low condition
Returns: bool
cll_25p()
25-part candle lower low condition
Returns: bool
cll_26p()
26-part candle lower low condition
Returns: bool
cll_27p()
27-part candle lower low condition
Returns: bool
cll_28p()
28-part candle lower low condition
Returns: bool
cll_29p()
29-part candle lower low condition
Returns: bool
cll_30p()
30-part candle lower low condition
Returns: bool
gc_1p()
1-part green candle condition
Returns: bool
gc_2p()
2-part green candle condition
Returns: bool
gc_3p()
3-part green candle condition
Returns: bool
gc_4p()
4-part green candle condition
Returns: bool
gc_5p()
5-part green candle condition
Returns: bool
gc_6p()
6-part green candle condition
Returns: bool
gc_7p()
7-part green candle condition
Returns: bool
gc_8p()
8-part green candle condition
Returns: bool
gc_9p()
9-part green candle condition
Returns: bool
gc_10p()
10-part green candle condition
Returns: bool
gc_11p()
11-part green candle condition
Returns: bool
gc_12p()
12-part green candle condition
Returns: bool
gc_13p()
13-part green candle condition
Returns: bool
gc_14p()
14-part green candle condition
Returns: bool
gc_15p()
15-part green candle condition
Returns: bool
gc_16p()
16-part green candle condition
Returns: bool
gc_17p()
17-part green candle condition
Returns: bool
gc_18p()
18-part green candle condition
Returns: bool
gc_19p()
19-part green candle condition
Returns: bool
gc_20p()
20-part green candle condition
Returns: bool
gc_21p()
21-part green candle condition
Returns: bool
gc_22p()
22-part green candle condition
Returns: bool
gc_23p()
23-part green candle condition
Returns: bool
gc_24p()
24-part green candle condition
Returns: bool
gc_25p()
25-part green candle condition
Returns: bool
gc_26p()
26-part green candle condition
Returns: bool
gc_27p()
27-part green candle condition
Returns: bool
gc_28p()
28-part green candle condition
Returns: bool
gc_29p()
29-part green candle condition
Returns: bool
gc_30p()
30-part green candle condition
Returns: bool
rc_1p()
1-part red candle condition
Returns: bool
rc_2p()
2-part red candle condition
Returns: bool
rc_3p()
3-part red candle condition
Returns: bool
rc_4p()
4-part red candle condition
Returns: bool
rc_5p()
5-part red candle condition
Returns: bool
rc_6p()
6-part red candle condition
Returns: bool
rc_7p()
7-part red candle condition
Returns: bool
rc_8p()
8-part red candle condition
Returns: bool
rc_9p()
9-part red candle condition
Returns: bool
rc_10p()
10-part red candle condition
Returns: bool
rc_11p()
11-part red candle condition
Returns: bool
rc_12p()
12-part red candle condition
Returns: bool
rc_13p()
13-part red candle condition
Returns: bool
rc_14p()
14-part red candle condition
Returns: bool
rc_15p()
15-part red candle condition
Returns: bool
rc_16p()
16-part red candle condition
Returns: bool
rc_17p()
17-part red candle condition
Returns: bool
rc_18p()
18-part red candle condition
Returns: bool
rc_19p()
19-part red candle condition
Returns: bool
rc_20p()
20-part red candle condition
Returns: bool
rc_21p()
21-part red candle condition
Returns: bool
rc_22p()
22-part red candle condition
Returns: bool
rc_23p()
23-part red candle condition
Returns: bool
rc_24p()
24-part red candle condition
Returns: bool
rc_25p()
25-part red candle condition
Returns: bool
rc_26p()
26-part red candle condition
Returns: bool
rc_27p()
27-part red candle condition
Returns: bool
rc_28p()
28-part red candle condition
Returns: bool
rc_29p()
29-part red candle condition
Returns: bool
rc_30p()
30-part red candle condition
Returns: bool
cdut()
candle double uptrend condition
Returns: bool
cddt()
candle double downtrend condition
Returns: bool
cdut_1p()
1-part candle double uptrend condition
Returns: bool
cdut_2p()
2-part candle double uptrend condition
Returns: bool
cdut_3p()
3-part candle double uptrend condition
Returns: bool
cdut_4p()
4-part candle double uptrend condition
Returns: bool
cdut_5p()
5-part candle double uptrend condition
Returns: bool
cdut_6p()
6-part candle double uptrend condition
Returns: bool
cdut_7p()
7-part candle double uptrend condition
Returns: bool
cdut_8p()
8-part candle double uptrend condition
Returns: bool
cdut_9p()
9-part candle double uptrend condition
Returns: bool
cdut_10p()
10-part candle double uptrend condition
Returns: bool
cdut_11p()
11-part candle double uptrend condition
Returns: bool
cdut_12p()
12-part candle double uptrend condition
Returns: bool
cdut_13p()
13-part candle double uptrend condition
Returns: bool
cdut_14p()
14-part candle double uptrend condition
Returns: bool
cdut_15p()
15-part candle double uptrend condition
Returns: bool
cdut_16p()
16-part candle double uptrend condition
Returns: bool
cdut_17p()
17-part candle double uptrend condition
Returns: bool
cdut_18p()
18-part candle double uptrend condition
Returns: bool
cdut_19p()
19-part candle double uptrend condition
Returns: bool
cdut_20p()
20-part candle double uptrend condition
Returns: bool
cdut_21p()
21-part candle double uptrend condition
Returns: bool
cdut_22p()
22-part candle double uptrend condition
Returns: bool
cdut_23p()
23-part candle double uptrend condition
Returns: bool
cdut_24p()
24-part candle double uptrend condition
Returns: bool
cdut_25p()
25-part candle double uptrend condition
Returns: bool
cdut_26p()
26-part candle double uptrend condition
Returns: bool
cdut_27p()
27-part candle double uptrend condition
Returns: bool
cdut_28p()
28-part candle double uptrend condition
Returns: bool
cdut_29p()
29-part candle double uptrend condition
Returns: bool
cdut_30p()
30-part candle double uptrend condition
Returns: bool
cddt_1p()
1-part candle double downtrend condition
Returns: bool
cddt_2p()
2-part candle double downtrend condition
Returns: bool
cddt_3p()
3-part candle double downtrend condition
Returns: bool
cddt_4p()
4-part candle double downtrend condition
Returns: bool
cddt_5p()
5-part candle double downtrend condition
Returns: bool
cddt_6p()
6-part candle double downtrend condition
Returns: bool
cddt_7p()
7-part candle double downtrend condition
Returns: bool
cddt_8p()
8-part candle double downtrend condition
Returns: bool
cddt_9p()
9-part candle double downtrend condition
Returns: bool
cddt_10p()
10-part candle double downtrend condition
Returns: bool
cddt_11p()
11-part candle double downtrend condition
Returns: bool
cddt_12p()
12-part candle double downtrend condition
Returns: bool
cddt_13p()
13-part candle double downtrend condition
Returns: bool
cddt_14p()
14-part candle double downtrend condition
Returns: bool
cddt_15p()
15-part candle double downtrend condition
Returns: bool
cddt_16p()
16-part candle double downtrend condition
Returns: bool
cddt_17p()
17-part candle double downtrend condition
Returns: bool
cddt_18p()
18-part candle double downtrend condition
Returns: bool
cddt_19p()
19-part candle double downtrend condition
Returns: bool
cddt_20p()
20-part candle double downtrend condition
Returns: bool
cddt_21p()
21-part candle double downtrend condition
Returns: bool
cddt_22p()
22-part candle double downtrend condition
Returns: bool
cddt_23p()
23-part candle double downtrend condition
Returns: bool
cddt_24p()
24-part candle double downtrend condition
Returns: bool
cddt_25p()
25-part candle double downtrend condition
Returns: bool
cddt_26p()
26-part candle double downtrend condition
Returns: bool
cddt_27p()
27-part candle double downtrend condition
Returns: bool
cddt_28p()
28-part candle double downtrend condition
Returns: bool
cddt_29p()
29-part candle double downtrend condition
Returns: bool
cddt_30p()
30-part candle double downtrend condition
Returns: bool
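A minimal usage sketch for a few of these conditions (the import path is a placeholder, and reading "n-part" as n consecutive bars meeting the condition is an assumption) :
//@version=5
indicator("PubLibCandleTrend example", overlay = true)
import USER/PubLibCandleTrend/1 as ct // hypothetical import path - replace with the library's publisher path
// highlight 3-part green/red candle conditions (assumed: three consecutive green or red candles)
bgcolor(ct.gc_3p() ? color.new(color.green, 85) : na)
bgcolor(ct.rc_3p() ? color.new(color.red, 85) : na)
// mark single-bar higher-high and lower-low conditions
plotchar(ct.chh(), "Candle higher high", "▲", location.abovebar, color.teal, size = size.tiny)
plotchar(ct.cll(), "Candle lower low", "▼", location.belowbar, color.maroon, size = size.tiny)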
Harmonic Patterns Library [TradingFinder]
🔵 Introduction
Harmonic patterns blend geometric shapes with Fibonacci numbers, making these numbers fundamental to understanding the patterns.
Scott Carney is one researcher who has studied harmonic patterns extensively. His research in technical analysis focuses on precise price structures based on Fibonacci ratios to identify market reversals.
Key patterns include the Gartley, Bat, Butterfly, and Crab, each with specific alignment criteria. These patterns help traders anticipate potential market turning points and make informed trading decisions, enhancing the predictability of technical analysis.
🟣 Understanding 5-Point Harmonic Patterns
In the current library version, you can easily draw and customize most XABCD patterns. These patterns often form M or W shapes, or a combination of both. By calculating the Fibonacci ratios between key points, you can estimate potential price movements.
All five-point patterns share a similar structure, differing only in line lengths and Fibonacci ratios. Learning one pattern simplifies understanding others.
🟣 Exploring the Gartley Pattern
The Gartley pattern appears in both bullish (M shape) and bearish (W shape) forms. In the bullish Gartley, point X is below point D, and point A surpasses point C. Point D marks the start of a strong upward trend, making it an optimal point to place a buy order.
The bearish Gartley mirrors the bullish pattern with inverted Fibonacci ratios. In this scenario, point D indicates the start of a significant price drop. Traders can place sell orders at this point and buy at lower prices for profit in two-way markets.
🟣 Analyzing the Butterfly Pattern
The Butterfly pattern also manifests in bullish (M shape) and bearish (W shape) forms. It resembles the Gartley pattern but with point D lower than point X in the bullish version.
The Butterfly pattern involves deeper price corrections than the Gartley, leading to more significant price fluctuations. Point D in the bullish Butterfly indicates the beginning of a sharp price rise, making it an entry point for buy orders.
The bearish Butterfly has inverted Fibonacci ratios, with point D marking the start of a sharp price decline, ideal for sell orders followed by buying at lower prices in two-way markets.
🟣 Insights into the Bat Pattern
The Bat pattern, appearing in bullish (M shape) and bearish (W shape) forms, is one of the most precise harmonic patterns. It closely resembles the Butterfly and Gartley patterns, differing mainly in Fibonacci levels.
The bearish Bat pattern shares the Fibonacci ratios with the bullish Bat, with an inverted structure. Point D in the bearish Bat marks the start of a significant price drop, suitable for sell orders followed by buying at lower prices for profit.
🟣 The Crab Pattern Explained
The Crab pattern, found in both bullish (M shape) and bearish (W shape) forms, is highly favored by analysts. Discovered in 2000, the Crab pattern features a larger final wave correction compared to other harmonic patterns.
The bearish Crab shares Fibonacci ratios with the bullish version but in an inverted form. Point D in the bearish Crab signifies the start of a sharp price decline, making it an ideal point for sell orders followed by buying at lower prices for profitable trades.
🟣 Understanding the Shark Pattern
The Shark pattern appears in bullish (M shape) and bearish (W shape) forms. It differs from previous patterns as point C in the bullish Shark surpasses point A, with unique level measurements.
The bearish Shark pattern mirrors the Fibonacci ratios of the bullish Shark but is inverted. Point D in the bearish Shark indicates the start of a sharp price drop, ideal for placing sell orders and buying at lower prices to capitalize on the pattern.
🟣 The Cypher Pattern Overview
The Cypher pattern is another that appears in both bullish (M shape) and bearish (W shape) forms. It resembles the Shark pattern, with point C in the bullish Cypher extending beyond point A, and point D forming within the XA line.
The bearish Cypher shares the Fibonacci ratios with the bullish Cypher but in an inverted structure. Point D in the bearish Cypher marks the start of a significant price drop, perfect for sell orders followed by buying at lower prices.
🟣 Introducing the Nen-Star Pattern
The Nen-Star pattern appears in both bullish (M shape) and bearish (W shape) forms. In the bullish Nen-Star, point C extends beyond point A, and point D, the final point, forms outside the XA line, making CD the longest wave.
The bearish Nen-Star has inverted Fibonacci ratios, with point D indicating the start of a significant price drop. Traders can place sell orders at point D and buy at lower prices to profit from this pattern in two-way markets.
The 5-point harmonic patterns, commonly referred to as XABCD patterns, are specific geometric price structures identified in financial markets. These patterns are used by traders to predict potential price movements based on historical price data and Fibonacci retracement levels.
Here are the main 5-point harmonic patterns :
Gartley Pattern
Anti-Gartley Pattern
Bat Pattern
Anti-Bat Pattern
Alternate Bat Pattern
Butterfly Pattern
Anti-Butterfly Pattern
Crab Pattern
Anti-Crab Pattern
Deep Crab Pattern
Shark Pattern
Anti-Shark Pattern
Anti-Alternate Shark Pattern
Cypher Pattern
Anti-Cypher Pattern
🔵 How to Use
To add "Order Block Refiner Library", you must first add the following code to your script.
import TFlab/Harmonic_Chart_Pattern_Library_TradingFinder/1 as HP
🟣 Parameters
XABCD(Name, Type, Show, Color, LineWidth, LabelSize, ShVF, FLPC, FLPCPeriod, Pivot, ABXAmin, ABXAmax, BCABmin, BCABmax, CDBCmin, CDBCmax, CDXAmin, CDXAmax) =>
Parameters:
Name (string)
Type (string)
Show (bool)
Color (color)
LineWidth (int)
LabelSize (string)
ShVF (bool)
FLPC (bool)
FLPCPeriod (int)
Pivot (int)
ABXAmin (float)
ABXAmax (float)
BCABmin (float)
BCABmax (float)
CDBCmin (float)
CDBCmax (float)
CDXAmin (float)
CDXAmax (float)
🟣 General Parameters
Name : The name of the pattern.
Type : Enter "Bullish" to draw a bullish pattern and "Bearish" to draw a bearish pattern.
Show : Enter "true" to display the pattern and "false" to hide it.
Color : The color used to draw the pattern.
LineWidth : The thickness of the drawing lines. Enter an integer of 1 or greater; larger values draw thicker lines.
LabelSize : The size of the labels. Accepted values are "size.auto", "size.tiny", "size.small", "size.normal", "size.large" and "size.huge".
🟣 Logical Parameters
ShVF : If set to "true", only patterns with an exact, noise-free format are displayed. If set to "false", patterns that are noisier and do not correspond exactly to the original pattern may also be displayed.
FLPC : If turned on, patterns are only reported once their last pivot is confirmed. If turned off, patterns are shown as soon as they form. Enabling this option produces fewer failed patterns, but patterns are detected later and the reward-to-risk ratio is sharply reduced.
FLPCPeriod : The pivot period used to confirm the last pivot.
Pivot : The period of the zigzag indicator. This is the most important parameter for pattern recognition.
ABXAmin : Minimum retracement of "AB" line compared to "XA" line.
ABXAmax : Maximum retracement of "AB" line compared to "XA" line.
BCABmin : Minimum retracement of "BC" line compared to "AB" line.
BCABmax : Maximum retracement of "BC" line compared to "AB" line.
CDBCmin : Minimum retracement of "CD" line compared to "BC" line.
CDBCmax : Maximum retracement of "CD" line compared to "BC" line.
CDXAmin : Minimum retracement of "CD" line compared to "XA" line.
CDXAmax : Maximum retracement of "CD" line compared to "XA" line.
🟣 Function Outputs
Each function call has two outputs. The first output is an alert condition that signals the formation of a new pattern. The second output is a candlestick confirmation condition, which you can draw on the chart with the "plotshape" tool.
Candle Confirmation Logic :
Example :
import TFlab/Harmonic_Chart_Pattern_Library_TradingFinder/1 as HP
PP = input.int(3, 'ZigZag Pivot Period')
ShowBull = input.bool(true, 'Show Bullish Pattern')
ShowBear = input.bool(true, 'Show Bearish Pattern')
ColorBull = input.color(#0609bb, 'Color Bullish Pattern')
ColorBear = input.color(#0609bb, 'Color Bearish Pattern')
LineWidth = input.int(1 , 'Width Line')
LabelSize = input.string(size.small , 'Label size' , options = [size.auto, size.tiny, size.small, size.normal, size.large, size.huge])
ShVF = input.bool(false , 'Show Valid Format')
FLPC = input.bool(false , 'Show Formation Last Pivot Confirm')
FLPCPeriod =input.int(2, 'Period of Formation Last Pivot')
//Call function
[BullAlert, BullCandleConfirm] = HP.XABCD('Bullish Bat', 'Bullish', ShowBull, ColorBull , LineWidth, LabelSize ,ShVF, FLPC, FLPCPeriod, PP, 0.382, 0.50, 0.382, 0.886, 1.618, 2.618, 0.85, 0.9)
[BearAlert, BearCandleConfirm] = HP.XABCD('Bearish Bat', 'Bearish', ShowBear, ColorBear , LineWidth, LabelSize ,ShVF, FLPC, FLPCPeriod, PP, 0.382, 0.50, 0.382, 0.886, 1.618, 2.618, 0.85, 0.9)
//Alert
if BearAlert
alert('Bearish Harmonic')
if BullAlert
alert('Bullish Harmonic')
//CandleStick Confirm
plotshape(BearCandleConfirm, style = shape.arrowdown, color = color.red)
plotshape(BullCandleConfirm, style = shape.arrowup, color = color.green, location = location.belowbar )
MarketAnalysis
Library "MarketAnalysis"
A collection of frequently used market analysis functions in my scripts.
bullFibRet(priceLow, priceHigh, fibLevel)
Calculates a bullish fibonacci retracement value.
Parameters:
priceLow (float) : (float) The lowest price point.
priceHigh (float) : (float) The highest price point.
fibLevel (float) : (float) The fibonacci level to calculate.
Returns: The fibonacci value of the given retracement level.
bearFibRet(priceLow, priceHigh, fibLevel)
Calculates a bearish fibonacci retracement value.
Parameters:
priceLow (float) : (float) The lowest price point.
priceHigh (float) : (float) The highest price point.
fibLevel (float) : (float) The fibonacci level to calculate.
Returns: The fibonacci value of the given retracement level.
bullFibExt(priceLow, priceHigh, thirdPivot, fibLevel)
Calculates a bullish fibonacci extension value.
Parameters:
priceLow (float) : (float) The lowest price point.
priceHigh (float) : (float) The highest price point.
thirdPivot (float) : (float) The third price point.
fibLevel (float) : (float) The fibonacci level to calculate.
Returns: The fibonacci value of the given extension level.
bearFibExt(priceLow, priceHigh, thirdPivot, fibLevel)
Calculates a bearish fibonacci extension value.
Parameters:
priceLow (float) : (float) The lowest price point.
priceHigh (float) : (float) The highest price point.
thirdPivot (float) : (float) The third price point.
fibLevel (float) : (float) The fibonacci level to calculate.
Returns: The fibonacci value of the given extension level.
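A minimal usage sketch (the import path and the chosen window and levels are placeholders) :
//@version=5
indicator("MarketAnalysis example", overlay = true)
import USER/MarketAnalysis/1 as ma // hypothetical import path - replace with the library's publisher path
// swing range over the last 50 bars
lo = ta.lowest(low, 50)
hi = ta.highest(high, 50)
// 0.618 retracement of the bullish move from lo to hi
bullRet = ma.bullFibRet(lo, hi, 0.618)
// 1.272 extension of the same move, measured from a third pivot (here the latest low)
bullExt = ma.bullFibExt(lo, hi, low, 1.272)
plot(bullRet, "Bullish 0.618 retracement", color.green)
plot(bullExt, "Bullish 1.272 extension", color.blue)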
Thuvien_publish
Library "Thuvien_publish"
A strategy-building library.
entry_volume_func(risk, entry, sl)
Calculates the position size (entry volume) for an order.
Parameters:
risk (float)
entry (float)
sl (float)
Returns: entry_volume: the position size to enter.
tp_sl_func(sl, entry, rr)
Calculates TP/SL for a given risk-reward ratio (RR).
Parameters:
sl (float)
entry (float)
rr (float)
Returns: The take-profit value.
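A minimal usage sketch (the import path, stop distance and risk amount are placeholders) :
//@version=5
strategy("Thuvien_publish example")
import USER/Thuvien_publish/1 as tv // hypothetical import path - replace with the library's publisher path
entryPrice = close
stopPrice  = close * 0.98           // assumed 2% stop, for illustration only
riskAmount = strategy.equity * 0.01 // risk 1% of equity
// position size for the given risk, entry and stop
qty = tv.entry_volume_func(riskAmount, entryPrice, stopPrice)
// take-profit level for a 2R target
takeProfit = tv.tp_sl_func(stopPrice, entryPrice, 2.0)
plot(qty, "Position size")
plot(takeProfit, "2R take profit")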
Crypto
Library "Crypto"
This library includes functions related to cryptocurrencies and their blockchains.
btcBlockReward(t)
Delivers the BTC block reward for a specific date/time
Parameters:
t (int) : Time of the current candle
Returns: blockRewardBtc
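A minimal usage sketch (the import path is a placeholder) :
//@version=5
indicator("BTC block reward example", overlay = false)
import USER/Crypto/1 as cr // hypothetical import path - replace with the library's publisher path
// block reward in BTC at the current bar's open time
reward = cr.btcBlockReward(time)
plot(reward, "BTC block reward", color.orange, 2, plot.style_stepline)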
MathTransform
Library "MathTransform"
Auxiliary functions for transforming data using mathematical and statistical methods
scaler_zscore(x, lookback_window)
Calculates Z-Score normalization of a series.
Parameters:
x (float) : : floating point series to normalize
lookback_window (int) : : lookback period for calculating mean and standard deviation
Returns: Z-Score normalized series
scaler_min_max(x, lookback_window, min_val, max_val, empiric_min, empiric_max, empiric_mid)
Performs Min-Max scaling of a series within a given window, user-defined bounds, and optional midpoint
Parameters:
x (float) : : floating point series to transform
lookback_window (int) : : int : optional lookback window size to consider for scaling.
min_val (float) : : float : minimum value of the scaled range. Default is 0.0.
max_val (float) : : float : maximum value of the scaled range. Default is 1.0.
empiric_min (float) : : float : user-defined minimum value of the input data. This means that the output could exceed the `min_val` bound if there is data in `x` lesser than `empiric_min`. If na, it's calculated from `x` and `lookback_window`.
empiric_max (float) : : float : user-defined maximum value of the input data. This means that the output could exceed the `max_val` bound if there is data in `x` greater than `empiric_max`. If na, it's calculated from `x` and `lookback_window`.
empiric_mid (float) : : float : user-defined midpoint value of the input data. If na, it's calculated from `empiric_min` and `empiric_max`.
Returns: rescaled series
log(x, base)
Applies logarithmic transformation to a value, base can be user-defined.
Parameters:
x (float) : : floating point value to transform
base (float) : : logarithmic base, must be greater than 0
Returns: logarithm of the value to the given base, if x <= 0, returns logarithm of 1 to the given base
exp(x, base)
Applies exponential transformation to a value, base can be user-defined.
Parameters:
x (float) : : floating point value to transform
base (float) : : base of the exponentiation, must be greater than 0
Returns: the result of raising the base to the power of the value
power(x, exponent)
Applies power transformation to a value, exponent can be user-defined.
Parameters:
x (float) : : floating point value to transform
exponent (float) : : exponent for the transformation
Returns: the value raised to the given exponent, preserving the sign of the original value
tanh(x, scale)
The hyperbolic tangent is the ratio of the hyperbolic sine and hyperbolic cosine. It limits an output to a range of −1 to 1.
Parameters:
x (float) : : floating point series
scale (float)
sigmoid(x, scale, offset)
Applies the sigmoid function to a series.
Parameters:
x (float) : : floating point series to transform
scale (float) : : scaling factor for the sigmoid function
offset (float) : : offset for the sigmoid function
Returns: transformed series using the sigmoid function
sigmoid_double(x, scale, offset)
Applies a double sigmoid function to a series, handling positive and negative values differently.
Parameters:
x (float) : : floating point series to transform
scale (float) : : scaling factor for the sigmoid function
offset (float) : : offset for the sigmoid function
Returns: transformed series using the double sigmoid function
logistic_decay(a, b, c, t)
Calculates logistic decay based on given parameters.
Parameters:
a (float) : : parameter affecting the steepness of the curve
b (float) : : parameter affecting the direction of the decay
c (float) : : the upper bound of the function's output
t (float) : : time variable
Returns: value of the logistic decay function at time t
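A minimal usage sketch (the import path, window length and sigmoid settings are placeholders) :
//@version=5
indicator("MathTransform example", overlay = false)
import USER/MathTransform/1 as mt // hypothetical import path - replace with the library's publisher path
// Z-Score of close over the last 100 bars, then squashed with a sigmoid (scale 1 and offset 0 assumed)
z = mt.scaler_zscore(close, 100)
s = mt.sigmoid(z, 1.0, 0.0)
// min-max scaling of close into the 0..1 range; na lets the function derive the empiric bounds itself
mm = mt.scaler_min_max(close, 100, 0.0, 1.0, float(na), float(na), float(na))
plot(z, "Z-Score", color.blue)
plot(s, "Sigmoid(Z-Score)", color.orange)
plot(mm, "Min-max", color.green)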
RiskMetrics
█ OVERVIEW
This library is a tool for Pine programmers that provides functions for calculating risk-adjusted performance metrics on periodic price returns. The calculations used by this library's functions closely mirror those the Broker Emulator uses to calculate strategy performance metrics (e.g., Sharpe and Sortino ratios) without depending on strategy-specific functionality.
█ CONCEPTS
Returns, risk, and volatility
The return on an investment is the relative gain or loss over a period, often expressed as a percentage. Investment returns can originate from several sources, including capital gains, dividends, and interest income. Many investors seek the highest returns possible in the quest for profit. However, prudent investing and trading entails evaluating such returns against the associated risks (i.e., the uncertainty of returns and the potential for financial losses) for a clearer perspective on overall performance and sustainability.
One way investors and analysts assess the risk of an investment is by analyzing its volatility , i.e., the statistical dispersion of historical returns. Investors often use volatility in risk estimation because it provides a quantifiable way to gauge the expected extent of fluctuation in returns. Elevated volatility implies heightened uncertainty in the market, which suggests higher expected risk. Conversely, low volatility implies relatively stable returns with relatively minimal fluctuations, thus suggesting lower expected risk. Several risk-adjusted performance metrics utilize volatility in their calculations for this reason.
Risk-free rate
The risk-free rate represents the rate of return on a hypothetical investment carrying no risk of financial loss. This theoretical rate provides a benchmark for comparing the returns on a risky investment and evaluating whether its excess returns justify the risks. If an investment's returns are at or below the theoretical risk-free rate or the risk premium is below a desired amount, it may suggest that the returns do not compensate for the extra risk, which might be a call to reassess the investment.
Since the risk-free rate is a theoretical concept, investors often utilize proxies for the rate in practice, such as Treasury bills and other government bonds. Conventionally, analysts consider such instruments "risk-free" for a domestic holder, as they are a form of government obligation with a low perceived likelihood of default.
The average yield on short-term Treasury bills, influenced by economic conditions, monetary policies, and inflation expectations, has historically hovered around 2-3% over the long term. This range also aligns with central banks' inflation targets. As such, one may interpret a value within this range as a minimum proxy for the risk-free rate, as it may correspond to the minimum rate required to maintain purchasing power over time.
The built-in Sharpe and Sortino ratios that strategies calculate and display in the Performance Summary tab use a default risk-free rate of 2%, and the metrics in this library's example code use the same default rate. Users can adjust this value to fit their analysis needs.
Risk-adjusted performance
Risk-adjusted performance metrics gauge the effectiveness of an investment by considering its returns relative to the perceived risk. They aim to provide a more well-rounded picture of performance by factoring in the level of risk taken to achieve returns. Investors can utilize such metrics to help determine whether the returns from an investment justify the risks and make informed decisions.
The two most commonly used risk-adjusted performance metrics are the Sharpe ratio and the Sortino ratio.
1. Sharpe ratio
The Sharpe ratio , developed by Nobel laureate William F. Sharpe, measures the performance of an investment compared to a theoretically risk-free asset, adjusted for the investment risk. The ratio uses the following formula:
Sharpe Ratio = (𝑅𝑎 − 𝑅𝑓) / 𝜎𝑎
Where:
• 𝑅𝑎 = Average return of the investment
• 𝑅𝑓 = Theoretical risk-free rate of return
• 𝜎𝑎 = Standard deviation of the investment's returns (volatility)
A higher Sharpe ratio indicates a more favorable risk-adjusted return, as it signifies that the investment produced higher excess returns per unit of increase in total perceived risk.
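As an illustration with hypothetical figures: if an investment's monthly returns average 1.5% with a standard deviation of 3%, and the monthly risk-free rate is 2% / 12 ≈ 0.167%, the Sharpe ratio is (1.5 − 0.167) / 3 ≈ 0.44.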
2. Sortino ratio
The Sortino ratio is a modified form of the Sharpe ratio that only considers downside volatility , i.e., the volatility of returns below the theoretical risk-free benchmark. Although it shares close similarities with the Sharpe ratio, it can produce very different values, especially when the returns do not have a symmetrical distribution, since it does not penalize upside and downside volatility equally. The ratio uses the following formula:
Sortino Ratio = (𝑅𝑎 − 𝑅𝑓) / 𝜎𝑑
Where:
• 𝑅𝑎 = Average return of the investment
• 𝑅𝑓 = Theoretical risk-free rate of return
• 𝜎𝑑 = Downside deviation (standard deviation of negative excess returns, or downside volatility)
The Sortino ratio offers an alternative perspective on an investment's return-generating efficiency since it does not consider upside volatility in its calculation. A higher Sortino ratio signifies that the investment produced higher excess returns per unit of increase in perceived downside risk.
█ CALCULATIONS
Return period detection
Calculating risk-adjusted performance metrics requires collecting returns across several periods of a given size. Analysts may use different period sizes based on the context and their preferences. However, two widely used standards are monthly or daily periods, depending on the available data and the investment's duration. The built-in ratios displayed in the Strategy Tester utilize returns from either monthly or daily periods in their calculations based on the following logic:
• Use monthly returns if the history of closed trades spans at least two months.
• Use daily returns if the trades span at least two days but less than two months.
• Do not calculate the ratios if the trade data spans fewer than two days.
This library's `detectPeriod()` function applies related logic to available chart data rather than trade data to determine which period is appropriate:
• It returns true if the chart's data spans at least two months, indicating that it's sufficient to use monthly periods.
• It returns false if the chart's data spans at least two days but not two months, suggesting the use of daily periods.
• It returns na if the length of the chart's data covers less than two days, signifying that the data is insufficient for meaningful ratio calculations.
It's important to note that programmers should only call `detectPeriod()` from a script's global scope or within the outermost scope of a function called from the global scope, as it requires the time value from the first bar to accurately measure the amount of time covered by the chart's data.
Collecting periodic returns
This library's `getPeriodicReturns()` function tracks price return data within monthly or daily periods and stores the periodic values in an array . It uses a `detectPeriod()` call as the condition to determine whether each element in the array represents the return over a monthly or daily period.
The `getPeriodicReturns()` function has two overloads. The first overload requires two arguments and outputs an array of monthly or daily returns for use in the `sharpe()` and `sortino()` methods. To calculate these returns:
1. The `percentChange` argument should be a series that represents percentage gains or losses. The values can be bar-to-bar return percentages on the chart timeframe or percentages requested from a higher timeframe.
2. The function compounds all non-na `percentChange` values within each monthly or daily period to calculate the period's total return percentage. When the `percentChange` represents returns from a higher timeframe, ensure the requested data includes gaps to avoid compounding redundant values.
3. After a period ends, the function queues the compounded return into the array , removing the oldest element from the array when its size exceeds the `maxPeriods` argument.
The resulting array represents the sequence of closed returns over up to `maxPeriods` months or days, depending on the available data.
The second overload of the function includes an additional `benchmark` parameter. Unlike the first overload, this version tracks and collects differences between the `percentChange` and the specified `benchmark` values. The resulting array represents the sequence of excess returns over up to `maxPeriods` months or days. Passing this array to the `sharpe()` and `sortino()` methods calculates generalized Information ratios , which represent the risk-adjustment performance of a sequence of returns compared to a risky benchmark instead of a risk-free rate. For consistency, ensure the non-na times of the `benchmark` values align with the times of the `percentChange` values.
Ratio methods
This library's `sharpe()` and `sortino()` methods respectively calculate the Sharpe and Sortino ratios based on an array of returns compared to a specified annual benchmark. Both methods adjust the annual benchmark based on the number of periods per year to suit the frequency of the returns:
• If the method call does not include a `periodsPerYear` argument, it uses `detectPeriod()` to determine whether the returns represent monthly or daily values based on the chart's history. If monthly, the method divides the `annualBenchmark` value by 12. If daily, it divides the value by 365.
• If the method call does specify a `periodsPerYear` argument, the argument's value supersedes the automatic calculation, facilitating custom benchmark adjustments, such as dividing by 252 when analyzing collected daily stock returns.
When the array passed to these methods represents a sequence of excess returns , such as the result from the second overload of `getPeriodicReturns()`, use an `annualBenchmark` value of 0 to avoid comparing those excess returns to a separate rate.
By default, these methods only calculate the ratios on the last available bar to minimize their resource usage. Users can override this behavior with the `forceCalc` parameter. When the value is true , the method calculates the ratio on each call if sufficient data is available, regardless of the bar index.
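The sketch below ties these pieces together (the import path is a placeholder, and the method names follow the function reference further below) :
//@version=5
indicator("RiskMetrics example", overlay = false)
import USER/RiskMetrics/1 as rm // hypothetical import path - replace with the library's publisher path
// bar-to-bar percentage change of the chart symbol
percentChange = 100 * (close - close[1]) / close[1]
// collect up to 60 monthly or daily returns, depending on the chart's coverage
returns = rm.getPeriodicReturns(percentChange, 60)
// compare against a 2% annual risk-free benchmark (the default mentioned above)
sharpe  = rm.sharpeRatio(returns, 2.0)
sortino = rm.sortinoRatio(returns, 2.0)
plot(sharpe, "Sharpe", color.teal)
plot(sortino, "Sortino", color.orange)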
Look first. Then leap.
█ FUNCTIONS & METHODS
This library contains the following functions:
detectPeriod()
Determines whether the chart data has sufficient coverage to use monthly or daily returns
for risk metric calculations.
Returns: (bool) `true` if the period spans more than two months, `false` if it otherwise spans more
than two days, and `na` if the data is insufficient.
getPeriodicReturns(percentChange, maxPeriods)
(Overload 1 of 2) Tracks periodic return percentages and queues them into an array for ratio
calculations. The span of the chart's historical data determines whether the function uses
daily or monthly periods in its calculations. If the chart spans more than two months,
it uses "1M" periods. Otherwise, if the chart spans more than two days, it uses "1D"
periods. If the chart covers less than two days, it does not store changes.
Parameters:
percentChange (float) : (series float) The change percentage. The function compounds non-na values from each
chart bar within monthly or daily periods to calculate the periodic changes.
maxPeriods (simple int) : (simple int) The maximum number of periodic returns to store in the returned array.
Returns: (array) An array containing the overall percentage changes for each period, limited
to the maximum specified by `maxPeriods`.
getPeriodicReturns(percentChange, benchmark, maxPeriods)
(Overload 2 of 2) Tracks periodic excess return percentages and queues the values into an
array. The span of the chart's historical data determines whether the function uses
daily or monthly periods in its calculations. If the chart spans more than two months,
it uses "1M" periods. Otherwise, if the chart spans more than two days, it uses "1D"
periods. If the chart covers less than two days, it does not store changes.
Parameters:
percentChange (float) : (series float) The change percentage. The function compounds non-na values from each
chart bar within monthly or daily periods to calculate the periodic changes.
benchmark (float) : (series float) The benchmark percentage to compare against `percentChange` values.
The function compounds non-na values from each bar within monthly or
daily periods and subtracts the results from the compounded `percentChange` values to
calculate the excess returns. For consistency, ensure this series has a similar history
length to the `percentChange` with aligned non-na value times.
maxPeriods (simple int) : (simple int) The maximum number of periodic excess returns to store in the returned array.
Returns: (array) An array containing monthly or daily excess returns, limited
to the maximum specified by `maxPeriods`.
method sharpeRatio(returnsArray, annualBenchmark, forceCalc, periodsPerYear)
Calculates the Sharpe ratio for an array of periodic returns.
Callable as a method or a function.
Namespace types: array
Parameters:
returnsArray (array) : (array) An array of periodic return percentages, e.g., returns over monthly or
daily periods.
annualBenchmark (float) : (series float) The annual rate of return to compare against `returnsArray` values. When
`periodsPerYear` is `na`, the function divides this value by 12 to calculate a
monthly benchmark if the chart's data spans at least two months or 365 for a daily
benchmark if the data otherwise spans at least two days. If `periodsPerYear`
has a specified value, the function divides the rate by that value instead.
forceCalc (bool) : (series bool) If `true`, calculates the ratio on every call. Otherwise, ratio calculation
only occurs on the last available bar. Optional. The default is `false`.
periodsPerYear (simple int) : (simple int) If specified, divides the annual rate by this value instead of the value
determined by the time span of the chart's data.
Returns: (float) The Sharpe ratio, which estimates the excess return per unit of total volatility.
method sortinoRatio(returnsArray, annualBenchmark, forceCalc, periodsPerYear)
Calculates the Sortino ratio for an array of periodic returns.
Callable as a method or a function.
Namespace types: array
Parameters:
returnsArray (array) : (array) An array of periodic return percentages, e.g., returns over monthly or
daily periods.
annualBenchmark (float) : (series float) The annual rate of return to compare against `returnsArray` values. When
`periodsPerYear` is `na`, the function divides this value by 12 to calculate a
monthly benchmark if the chart's data spans at least two months or 365 for a daily
benchmark if the data otherwise spans at least two days. If `periodsPerYear`
has a specified value, the function divides the rate by that value instead.
forceCalc (bool) : (series bool) If `true`, calculates the ratio on every call. Otherwise, ratio calculation
only occurs on the last available bar. Optional. The default is `false`.
periodsPerYear (simple int) : (simple int) If specified, divides the annual rate by this value instead of the value
determined by the time span of the chart's data.
Returns: (float) The Sortino ratio, which estimates the excess return per unit of downside
volatility.
signalLib_yashgode9
Signal Generation Library = "signalLib_yashgode9"
This library, named "signalLib_yashgode9", is designed to generate buy and sell signals based on the price action of a financial instrument. It utilizes various technical indicators and parameters to determine the market direction and provide actionable signals for traders.
Key Features:-
1.Trend Direction Identification: The library calculates the trend direction by comparing the number of bars since the highest and lowest prices within a specified depth. This allows the library to determine the overall market direction, whether it's bullish or bearish.
2.Dynamic Price Tracking: The library maintains two chart points, zee1 and zee2, which dynamically track the price levels based on the identified market direction. These points serve as reference levels for generating buy and sell signals.
3.Customizable Parameters: The library allows users to adjust several parameters, including the depth of the price analysis, the deviation threshold, and the number of bars to consider for the trend direction. This flexibility enables users to fine-tune the library's behavior to suit their trading strategies.
4.Visual Representation: The library provides a visual representation of the buy and sell signals by drawing a line between the zee1 and zee2 chart points. The line's color changes based on the identified market direction, with red indicating a bearish signal and green indicating a bullish signal.
Usage and Integration:
To use this library, call the "signalLib_yashgode9" function and pass in the necessary parameters, such as the lower and higher prices, the depth of the analysis, the deviation threshold, and the number of bars to consider for the trend direction. The function returns the direction of the market (1 for bullish, -1 for bearish), as well as the zee1 and zee2 chart points. You can then use these values to generate buy and sell signals in your trading strategy. For example, you could use the direction value to determine when to enter or exit a trade, and the zee1 and zee2 chart points to set stop-loss or take-profit levels.
Potential Use Cases:
This library can be particularly useful for traders who:
1.Trend-following Strategies: The library's ability to identify the market direction can be beneficial for traders who employ trend-following strategies, as it can help them identify the dominant trend and time their entries and exits accordingly.
2.Swing Trading: The dynamic price tracking provided by the zee1 and zee2 chart points can be useful for swing traders, who aim to capture medium-term price movements.
3.Automated Trading Systems: The library's functionality can be integrated into automated trading systems, allowing for the development of more sophisticated and rule-based trading strategies.
4.Educational Purposes: The library can also be used for educational purposes, as it provides a clear and concise way to demonstrate the application of technical analysis concepts in a trading context.
Important Notice: This library works most effectively on the 5-minute and 15-minute timeframes.
Useful_lib_public
Library "Useful_lib_public"
Useful functions
CountBarsOfDay()
Counts the bars of one day for the different timeframes
Returns: number of bars for one day
LastBarsOfDay()
Identifies the last bar of the day
Returns: TRUE if the current bar is the last bar of the day
isTuesday()
Checks whether the current bar falls on a Tuesday
Returns: TRUE if Tuesday, otherwise FALSE
Rsi(src, len)
RSI calculation
Parameters:
src (float) : RSI Source
len (simple int) : RSI Length
Returns: RSI Value
CalcIndex(netPos, weeks)
Index calculation
Parameters:
netPos (float) : Source
weeks (simple int) : Length
Returns: "COT Index"
RsiStock(src, len, smoothK)
Stochastic RSI calculation
Parameters:
src (float)
len (simple int)
smoothK (int)
Returns: Stochastic RSI value
Offset()
Offset to use for the daily timeframe
Returns: Offset
PercentChange(Data, LastData)
Calculates the difference in percent
Parameters:
Data (float)
LastData (float)
Returns: Change in percent
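A minimal usage sketch (the import path and argument choices are placeholders) :
//@version=5
indicator("Useful_lib_public example", overlay = false)
import USER/Useful_lib_public/1 as ul // hypothetical import path - replace with the library's publisher path
// 14-period RSI, its stochastic (smoothed with K = 3), and the bar-over-bar percent change
rsiVal   = ul.Rsi(close, 14)
stochRsi = ul.RsiStock(close, 14, 3)
chg      = ul.PercentChange(close, close[1])
plot(rsiVal, "RSI", color.blue)
plot(stochRsi, "Stochastic RSI", color.orange)
plot(chg, "Percent change", color.gray)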
strategy_helpers
This library is designed to aid traders and developers in calculating risk metrics efficiently across different asset types like equities, futures, and forex. It includes comprehensive functions that calculate the number of units or contracts to trade, the value at risk, and the total value of the position based on provided entry prices, stop levels, and risk percentages. Whether you're managing a portfolio or developing trading strategies, this library provides essential tools for risk management. Functions also automatically select the appropriate risk calculation method based on asset type, calculate leverage levels, and determine potential liquidation points for leveraged positions. Perfect for enhancing the precision and effectiveness of your trading strategies.
Library "strategy_helpers"
Provides tools for calculating risk metrics across different types of trading strategies including equities, futures, and forex. Functions allow for precise control over risk management by calculating the number of units or contracts to trade, the value at risk, and the total position value based on entry prices, stop levels, and desired risk percentage. Additional utilities include automatic risk calculation based on asset type, leverage level calculations, and determination of liquidation levels for leveraged trades.
calculate_risk(entry, stop_level, stop_range, capital, risk_percent, trade_direction, whole_number_buy)
Calculates risk metrics for equity trades based on entry, stop level, and risk percent
Parameters:
entry (float) : The price at which the position is entered. Use close if you aren't adding to a position. Use the original entry price if you are adding to a position.
stop_level (float) : The price level where the stop loss is placed
stop_range (float) : The price range from entry to stop level
capital (float) : The total capital available for trading
risk_percent (float) : The percentage of capital risked on the trade. 100% is represented by 100.
trade_direction (bool) : True for long trades, false for short trades
whole_number_buy (bool) : True to adjust the quantity to whole numbers
Returns: A tuple containing the number of units to trade, the value at risk, and the total value of the position:
calculate_risk_futures(risk_capital, stop_range)
Calculates risk metrics for futures trades based on the risk capital and stop range
Parameters:
risk_capital (float) : The capital allocated for the trade
stop_range (float) : The price range from entry to stop level
Returns: A tuple containing the number of contracts to trade, the value at risk, and the total value of the position:
calculate_risk_forex(entry, stop_level, stop_range, capital, risk_percent, trade_direction)
Calculates risk metrics for forex trades based on entry, stop level, and risk percent
Parameters:
entry (float) : The price at which the position is entered. Use close if you aren't adding to a position. Use the original entry price if you are adding to a position.
stop_level (float) : The price level where the stop loss is placed
stop_range (float) : The price range from entry to stop level
capital (float) : The total capital available for trading
risk_percent (float) : The percentage of capital risked on the trade. 100% is represented by 100.
trade_direction (bool) : True for long trades, false for short trades
Returns: A tuple containing the number of lots to trade, the value at risk, and the total value of the position:
calculate_risk_auto(entry, stop_level, stop_range, capital, risk_percent, trade_direction, whole_number_buy)
Automatically selects the risk calculation method based on the asset type and calculates risk metrics
Parameters:
entry (float) : The price at which the position is entered. Use close if you aren't adding to a position. Use the original entry price if you are adding to a position.
stop_level (float) : The price level where the stop loss is placed
stop_range (float) : The price range from entry to stop level
capital (float) : The total capital available for trading
risk_percent (float) : The percentage of capital risked on the trade. 100% is represented by 100.
trade_direction (bool) : True for long trades, false for short trades
whole_number_buy (bool) : True to adjust the quantity to whole numbers, applicable only for non-futures and non-forex trades
Returns: A tuple containing the number of units or contracts to trade, the value at risk, and the total value of the position:
leverage_level(account_equity, position_value)
Calculates the leverage level used based on account equity and position value
Parameters:
account_equity (float) : Total equity in the trading account
position_value (float) : Total value of the position taken
Returns: The leverage level used in the trade
calculate_liquidation_level(entry, leverage, trade_direction, maintenance_margine)
Calculates the liquidation price level for a leveraged trade
Parameters:
entry (float) : The price at which the position is entered
leverage (float) : The leverage level used in the trade
trade_direction (bool) : True for long trades, false for short trades
maintenance_margine (float) : The maintenance margin requirement, expressed as a percentage
Returns: The price level at which the position would be liquidated, or na if leverage is zero
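A minimal usage sketch (the import path, stop distance, risk percent and maintenance margin are placeholders) :
//@version=5
strategy("strategy_helpers example")
import USER/strategy_helpers/1 as rh // hypothetical import path - replace with the library's publisher path
entryPrice = close
stopLevel  = close * 0.97 // assumed 3% stop, for illustration only
stopRange  = math.abs(entryPrice - stopLevel)
// auto-selects the equity/futures/forex calculation for the current symbol; long trade, whole-number quantity
[qty, valueAtRisk, positionValue] = rh.calculate_risk_auto(entryPrice, stopLevel, stopRange, strategy.equity, 1.0, true, true)
// effective leverage and the theoretical liquidation level for a long position (0.5% maintenance margin assumed)
lev = rh.leverage_level(strategy.equity, positionValue)
liq = rh.calculate_liquidation_level(entryPrice, lev, true, 0.5)
plot(qty, "Position size")
plot(liq, "Estimated liquidation level")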
mathLibrary "math"
It's a library of discrete aproximations of a price or Series float it uses Fourier Discrete transform, Laplace Discrete Original and Modified transform and Euler's Theoreum for Homogenus White noice operations. Calling functions without source value it automatically take close as the default source value.
Here is a picture of Laplace and Fourier approximated close prices from this library:
Copy this indicator and try it yourself:
//@version=5
indicator("Close Price with Approximations", shorttitle="Close and Approximations", overlay=false)
import AutomatedTradingAlgorithms/math/1 as math
// Sample input data (replace this with your own data)
inputData = close
// Plot Close Price
plot(inputData, color=color.blue, title="Close Price")
ltf32_result = math.LTF32(a=0.01)
plot(ltf32_result, color=color.green, title="LTF32 Aproximation")
fft_result = math.FFT()
plot(fft_result, color=color.red, title="Fourier Aproximation")
wavelet_result = math.Wavelet()
plot(wavelet_result, color=color.orange, title="Wavelet Aproximation")
wavelet_std_result = math.Wavelet_std()
plot(wavelet_std_result, color=color.yellow, title="Wavelet_std Aproximation")
DFT3(xval, _dir)
Discrete Fourier Transform with last 3 points
Parameters:
xval (float) : Source series
_dir (int) : Direction parameter
Returns: Approximated source value
DFT2(xval, _dir)
Discrete Fourier Transform with last 2 points
Parameters:
xval (float) : Source series
_dir (int) : Direction parameter
Returns: Approximated source value
FFT(xval)
Fast Fourier Transform once. It approximates using the last 3 points.
Parameters:
xval (float) : Source series
Returns: Approximated source value
DFT32(xval)
Combined Discrete Fourier Transforms of DFT3 and DFT2. It approximates the last point by first
approximating the last 3 points and then using the last 2 points of the previous result.
Parameters:
xval (float) : Source series
Returns: Approximated source value
DTF32(xval)
Combined Discrete Fourier Transforms of DFT3 and DFT2. It approximates the last point by first
approximating the last 3 points and then using the last 2 points of the previous result.
Parameters:
xval (float) : Source series
Returns: Approximated source value
LFT3(xval, _dir, a)
Discrete Laplace Transform with last 3 points
Parameters:
xval (float) : Source series
_dir (int) : Direction parameter
a (float) : Laplace coefficient
Returns: Approximated source value
LFT2(xval, _dir, a)
Discrete Laplace Transform with last 2 points
Parameters:
xval (float) : Source series
_dir (int) : Direction parameter
a (float) : Laplace coefficient
Returns: Approximated source value
LFT(xval, a)
Fast Laplace Transform once. It approximates using the last 3 points.
Parameters:
xval (float) : Source series
a (float) : Laplace coefficient
Returns: Approximated source value
LFT32(xval, a)
Combined Discrete Laplace Transforms of LFT3 and LFT2. It approximates the last point by first
approximating the last 3 points and then using the last 2 points of the previous result.
Parameters:
xval (float) : Source series
a (float) : Laplace coefficient
Returns: Approximated source value
LTF32(xval, a)
Combined Discrete Laplace Transforms of LFT3 and LFT2. It approximates the last point by first
approximating the last 3 points and then using the last 2 points of the previous result.
Parameters:
xval (float) : Source series
a (float) : Laplace coefficient
Returns: Approximated source value
whitenoise(indic_, _devided, minEmaLength, maxEmaLength, src)
Ehlers' Universal Oscillator with white noise, without an extra approximated src.
It uses a dynamic EMA to approximate the indicator and thus reduce noise.
Parameters:
indic_ (float) : Input series for the indicator values to be smoothed
_devided (int) : Divisor for oscillator calculations
minEmaLength (int) : Minimum EMA length
maxEmaLength (int) : Maximum EMA length
src (float) : Source series
Returns: Smoothed indicator value
whitenoise(indic_, dft1, _devided, minEmaLength, maxEmaLength, src)
Ehlers' Universal Oscillator with white noise and DFT1.
It uses src and the approximated src (dft1) to clearly define the white noise.
It uses a dynamic EMA to approximate the indicator and thus reduce noise.
Parameters:
indic_ (float) : Input series for the indicator values to be smoothed
dft1 (float) : Approximated src value for the white noise calculation
_devided (int) : Divisor for oscillator calculations
minEmaLength (int) : Minimum EMA length
maxEmaLength (int) : Maximum EMA length
src (float) : Source series
Returns: Smoothed indicator value
smooth(dft1, indic__, _devided, minEmaLength, maxEmaLength, src)
Smooths a source value with the help of an indicator series and an approximated source value.
It uses src and the approximated src (dft1) to clearly define the white noise.
It uses a dynamic EMA to approximate src and thus reduce noise.
Parameters:
dft1 (float) : Value to be smoothed.
indic__ (float) : Optional input for indicator to help smooth dft1 (default is FFT)
_devided (int) : Divisor for smoothing calculations
minEmaLength (int) : Minimum EMA length
maxEmaLength (int) : Maximum EMA length
src (float) : Source series
Returns: Smoothed source (src) series
smooth(indic__, _devided, minEmaLength, maxEmaLength, src)
Smooths a source value with the help of an indicator series.
It uses a dynamic EMA to approximate src and thus reduce noise.
Parameters:
indic__ (float) : Optional input for indicator to help smooth dft1 (default is FFT)
_devided (int) : Divisor for smoothing calculations
minEmaLength (int) : Minimum EMA length
maxEmaLength (int) : Maximum EMA length
src (float) : Source series
Returns: Smoothed src series
vzo_ema(src, len)
Volume Zone Oscillator with EMA smoothing
Parameters:
src (float) : Source series
len (simple int) : Length parameter for EMA
Returns: VZO value
vzo_sma(src, len)
Volume Zone Oscillator with SMA smoothing
Parameters:
src (float) : Source series
len (int) : Length parameter for SMA
Returns: VZO value
vzo_wma(src, len)
Volume Zone Oscillator with WMA smoothing
Parameters:
src (float) : Source series
len (int) : Length parameter for WMA
Returns: VZO value
alma2(series, windowsize, offset, sigma)
Arnaud Legoux Moving Average 2; accepts sigma as a series float
Parameters:
series (float) : Input series
windowsize (int) : Size of the moving average window
offset (float) : Offset parameter
sigma (float) : Sigma parameter
Returns: ALMA value
Wavelet(src, len, offset, sigma)
Approximates src using a discrete wavelet transform.
Parameters:
src (float) : Source series
len (int) : Length parameter for ALMA
offset (simple float)
sigma (simple float)
Returns: Wavelet-transformed series
Wavelet_std(src, len, offset, mag)
Approximates src using a discrete wavelet transform with standard deviation as a magnitude.
Parameters:
src (float) : Source series
len (int) : Length parameter for ALMA
offset (float) : Offset parameter for ALMA
mag (int) : Magnitude parameter for standard deviation
Returns: Wavelet-transformed series
LaplaceTransform(xval, N, a)
Original Laplace Transform over a set of N close prices
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
Returns: Approximated source value
NLaplaceTransform(xval, N, a, repeat)
Repeated applications of the original Laplace Transform over a set of N close prices, each repetition using an N-k set of close prices
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
repeat (int) : number of repetitions
Returns: Approximated source value
LaplaceTransformsum(xval, N, a, b)
Sum of 2 exponent coefficients of the Laplace Transform over a set of N close prices
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
b (float) : second Laplace coefficient
Returns: Approximated source value
NLaplaceTransformdiff(xval, N, a, b, repeat)
Difference of 2 exponent coefficients of the Laplace Transform over a set of N close prices
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
b (float) : second Laplace coefficient
repeat (int) : number of repetitions
Returns: Approximated source value
N_divLaplaceTransformdiff(xval, N, a, b, repeat)
N repetitions of the difference of 2 exponent coefficients of the Laplace Transform over a set of N close prices, with dynamic rotation
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
b (float) : second Laplace coefficient
repeat (int) : number of repetitions
Returns: Approximated source value
LaplaceTransformdiff(xval, N, a, b)
Difference of 2 exponent coefficients of the Laplace Transform over a set of N close prices
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
b (float) : second Laplace coefficient
Returns: Approximated source value
NLaplaceTransformdiffFrom2(xval, N, a, b, repeat)
N repetitions of the difference of 2 exponent coefficients of the Laplace Transform over a set of N close prices; the second element has an exponent factor higher by 1
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
b (float) : second Laplace coefficient
repeat (int) : number of repetitions
Returns: Approximated source value
N_divLaplaceTransformdiffFrom2(xval, N, a, b, repeat)
N repetitions of the difference of 2 exponent coefficients of the Laplace Transform over a set of N close prices; the second element has an exponent factor higher by 1, with dynamic rotation
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
b (float) : second Laplace coefficient
repeat (int) : number of repetitions
Returns: Approximated source value
LaplaceTransformdiffFrom2(xval, N, a, b)
Difference of 2 exponent coefficients of the Laplace Transform over a set of N close prices; the second element has an exponent factor higher by 1
Parameters:
xval (float) : Series to approximate
N (int) : number of close prices in calculations
a (float) : Laplace coefficient
b (float) : second Laplace coefficient
Returns: Approximated source value
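The windowed transforms above take an explicit sample size N. A short usage sketch with the same import as the example at the top of this library; the parameter values are illustrative only.
//@version=5
indicator("Laplace over N closes", overlay=true)
import AutomatedTradingAlgorithms/math/1 as math
// Approximate close with the original Laplace transform over the last 20 closes,
// then repeat the transform 3 times on shrinking windows.
lap  = math.LaplaceTransform(close, 20, 0.01)
lapN = math.NLaplaceTransform(close, 20, 0.01, 3)
plot(lap, "Laplace (N=20)", color.green)
plot(lapN, "Repeated Laplace", color.red)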
Order Block Refiner [TradingFinder]🔵 Introduction
The "Refinement" feature allows you to adjust the width of the order block according to your strategy. There are two modes, "Aggressive" and "Defensive," in the "Order Block Refine". The difference between "Aggressive" and "Defensive" lies in the width of the order block.
For risk-averse traders, the "Defensive" mode is suitable as it provides a lower loss limit and a greater reward-to-risk ratio. For risk-taking traders, the "Aggressive" mode is more appropriate. These traders prefer to enter trades at higher prices, and this mode, which has a wider order block width, is more suitable for this group of individuals.
Important :
One of the advantages of using this library is increased code accuracy. Not only does it have the capability to create order blocks, but you can also simply define the condition for order block creation (true/false) and "bar_index," and you'll find the primary range without applying any filters.
🟣 Order Block Refinement Algorithm
The order block ranges are filtered in two stages. In the first stage, the "Open," "High," "Low," and "Close" of the current order block candle, its two or three previous candles, and one subsequent candle (if available) are examined. In this stage, minimum and maximum distances are calculated, and logical range filters are applied.
In the second stage, two modes, "Aggressive" and "Defensive," are calculated.
For the "Defensive" mode, the width of these ranges is compared with the "ATR" (Average True Range) of period 55, and if they are smaller than "ATR" or 1 to more than 4 times "ATR," the width of the range is reduced from 0 to 80 percent.
For the "Aggressive" mode, you get the same output as the first filter, which usually has a wider width than the "Defensive" mode.
• Order Block Refiner : Off
• Order Block Refiner : On / "Aggressive Mode"
• Order Block Refiner : On / "Defensive Mode"
🔵 How to Use
OBRefiner(string OBType, string OBRefine, string RefineMethod, bool TriggerCondition, int Index) =>
Parameters:
• OBType (string)
• OBRefine (string)
• RefineMethod (string)
• TriggerCondition (bool)
• Index (int)
To add "Order Block Refiner Library", you must first add the following code to your script.
import TFlab/OrderBlockRefiner_TradingFinder/1 as Refiner
OBType : This parameter receives 2 inputs. If the order block you want to "Refine" is of type demand, you should enter "Demand," and if it's of type supply, you should enter "Supply."
OBRefine : Set to "On" if you want the "Refine" operation to be performed. Otherwise, set to "Off."
RefineMethod : This input receives 2 modes, "Aggressive" and "Defensive." You can switch between these modes according to your needs.
TriggerCondition : Enter the condition with which the order block is formed in this parameter.
Index : Enter the "bar_index" of the candle where the order block is formed in this parameter.
🟣 Function Outputs
This function has 6 outputs: "bar_index" at the beginning of the "Distal" line, "bar_index+1" at the end of the "Distal" line, "Price" at the "Distal" line, "bar_index" at the beginning of the "Proximal" line, "bar_index+1" at the end of the "Proximal" line, and "Price" at the "Proximal" line, which can be used to draw order blocks.
Sample :
[BuMChMain_Xd1, BuMChMain_Xd2, BuMChMain_Yd12, BuMChMain_Xp1, BuMChMain_Xp2, BuMChMain_Yp12] = Refiner.OBRefiner('Demand', 'Off', 'Aggressive', BuMChMain_Trigger, BuMChMain_Index) // tuple order follows the six outputs listed above; the two *_X*2 names (unused below) are illustrative
if BuMChMain_Trigger
BuMChHlineMain := line.new(BuMChMain_Xp1 , BuMChMain_Yp12 , bar_index , BuMChMain_Yp12, color = color.black , style = line.style_dotted)
BuMChLlineMain := line.new(BuMChMain_Xd1 , BuMChMain_Yd12 , bar_index , BuMChMain_Yd12, color = color.black , style = line.style_dotted)
BuMChFilineMain := linefill.new(BuMChHlineMain ,BuMChLlineMain , color = color.rgb(76, 175, 80 , 75 ) )
Monty3192_LibraryLibrary "Monty3192_Library"
Monty3192 Library - MontyTrader
calc_func(inversion1, inversion2, inversion3, inversion4, inversion5, inversion6, inversion7, inversion8, inversion9, inversion10, precio1, precio2, precio3, precio4, precio5, precio6, precio7, precio8, precio9, precio10, act_1, act_2, act_3, act_4, act_5, act_6, act_7, act_8, act_9, act_10)
Parameters:
inversion1 (float)
inversion2 (float)
inversion3 (float)
inversion4 (float)
inversion5 (float)
inversion6 (float)
inversion7 (float)
inversion8 (float)
inversion9 (float)
inversion10 (float)
precio1 (float)
precio2 (float)
precio3 (float)
precio4 (float)
precio5 (float)
precio6 (float)
precio7 (float)
precio8 (float)
precio9 (float)
precio10 (float)
act_1 (bool)
act_2 (bool)
act_3 (bool)
act_4 (bool)
act_5 (bool)
act_6 (bool)
act_7 (bool)
act_8 (bool)
act_9 (bool)
act_10 (bool)
rend_func(p1, p2, p3, p4, p5, p6, p7, p8, p9, p10, po)
Parameters:
p1 (float)
p2 (float)
p3 (float)
p4 (float)
p5 (float)
p6 (float)
p7 (float)
p8 (float)
p9 (float)
p10 (float)
po (float)
f_drawLine(cond, x1, y1, x2, y2, colorr, txt, act, offset, txtc, txts)
Parameters:
cond (bool)
x1 (int)
y1 (float)
x2 (int)
y2 (float)
colorr (color)
txt (string)
act (bool)
offset (int)
txtc (color)
txts (string)
f_Vline(cond, x1, y1, x2, y2, colorr, txt, sel, txts, txtc)
Parameters:
cond (bool)
x1 (int)
y1 (float)
x2 (int)
y2 (float)
colorr (color)
txt (string)
sel (bool)
txts (string)
txtc (color)
get_all_time_high()
regressionsLibrary "regressions"
This library computes least square regression models for polynomials of any form for a given data set of x and y values.
fit(X, y, reg_type, degrees)
Takes a list of X and y values and the degrees of the polynomial and returns a least square regression for the given polynomial on the dataset.
Parameters:
X (array) : (float ) X inputs for regression fit.
y (array) : (float ) y outputs for regression fit.
reg_type (string) : (string) The type of regression. If passing value for degrees use reg.type_custom
degrees (array) : (int ) The degrees of the polynomial which will be fit to the data. ex: passing array.from(0, 3) would be a polynomial of form c1x^0 + c2x^3 where c2 and c1 will be coefficients of the best fitting polynomial.
Returns: (regression) returns a regression with the best fitting coefficients for the selected polynomial
regress(reg, x)
Regress one x input.
Parameters:
reg (regression) : (regression) The fitted regression which the y_pred will be calculated with.
x (float) : (float) The input value corresponding to the y_pred.
Returns: (float) The best fit y value for the given x input and regression.
predict(reg, X)
Predict y values for a new set of X inputs with a fitted regression; an x value of -1 corresponds to one bar ahead of the realtime bar.
Parameters:
reg (regression) : (regression) The fitted regression which the y_pred will be calculated with.
X (array)
Returns: (float ) The best fit y values for the given x input and regression.
generate_points(reg, x, y, left_index, right_index)
Takes a regression object and creates chart points which can be used for plotting visuals like lines and labels.
Parameters:
reg (regression) : (regression) Regression which has been fitted to a data set.
x (array) : (float ) x values which correspond to the passed y values
y (array) : (float ) y values which correspond to the passed x values
left_index (int) : (int) The offset of the bar farthest from the realtime bar; should be larger than the right_index value.
right_index (int) : (int) The offset of the bar closest to the realtime bar; should be less than the left_index value.
Returns: (chart.point ) Returns an array of chart points
plot_reg(reg, x, y, left_index, right_index, curved, close, line_color, line_width)
Simple plotting function for a regression; for more custom plotting, use generate_points() to create the points and then create your own plotting function.
Parameters:
reg (regression) : (regression) Regression which has been fitted to a data set.
x (array)
y (array)
left_index (int) : (int) The offset of the bar farthest from the realtime bar; should be larger than the right_index value.
right_index (int) : (int) The offset of the bar closest to the realtime bar; should be less than the left_index value.
curved (bool) : (bool) If the polyline is curved or not.
close (bool) : (bool) If true the polyline will be closed.
line_color (color) : (color) The color of the line.
line_width (int) : (int) The width of the line.
Returns: (polyline) The polyline for the regression.
series_to_list(src, left_index, right_index)
Convert a series to a list. Creates a list of all the corresponding source values
from left_index to right_index. This should be called at the highest scope for consistency.
Parameters:
src (float) : (float ) The source the list will be comprised of.
left_index (int) : (float ) The left-most bar (farthest back historical bar) which the corresponding source value will be taken for.
right_index (int) : (float ) The right-most bar (closest to the realtime bar) which the corresponding source value will be taken for.
Returns: (float ) An array of size left_index-right_index
range_list(start, stop, step)
Creates an array from the start value to the stop value.
Parameters:
start (int) : (int) The start value of the range.
stop (int) : (int) The stop value of the range.
step (int) : (int) Positive integer. The spacing between the values. ex: start=1, stop=6, step=2 yields 1, 3, 5.
Returns: (float ) An array of size stop-start
regression
Fields:
coeffs (array__float)
degrees (array__float)
type_linear (series__string)
type_quadratic (series__string)
type_cubic (series__string)
type_custom (series__string)
_squared_error (series__float)
X (array__float)
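A usage sketch tying the pieces together: sample a window of closes, fit a line through them and draw it. The import path and the reg_type string are assumptions; substitute the library's actual publisher/version and the string expected for a custom polynomial (see reg.type_custom above).
//@version=5
indicator("Regression fit sketch", overlay=true)
import PublisherName/regressions/1 as regs    // hypothetical import path
int window = 50
// series_to_list() should be called at the highest scope (see its description above).
y = regs.series_to_list(close, window - 1, 0)
if barstate.islast
    x = regs.range_list(0, window, 1)
    // degrees (0, 1) request a polynomial of form c1*x^0 + c2*x^1, i.e. a straight line;
    // the 'custom' reg_type string here is an assumption.
    r = regs.fit(x, y, 'custom', array.from(0, 1))
    regs.plot_reg(r, x, y, window - 1, 0, false, false, color.orange, 2)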
FunctionDiscreteCosineTransformLibrary "FunctionDiscreteCosineTransform"
Discrete Cosine Transform (DCT)
The Discrete Cosine Transform (DCT) is a mathematical algorithm that converts a series of samples of a signal, typically in the time domain, into another domain called the frequency or spectral domain. It's commonly used for data compression and image/video coding applications such as JPEG and MPEG standards.
The DCT works by multiplying the input sequence with specific cosine functions that are pre-defined and then summing up these products to obtain a new series of values, which represent the frequency components of the original signal. The main advantage of the DCT over other transforms like Fourier Transform is its ability to handle non-stationary signals (i.e., signals with varying statistical properties) more effectively due to its localized basis functions.
In simple terms, the DCT can be thought of as a way to break down an image or video into different frequency components and then compress them without losing too much information. This compression technique is essential for efficient transmission and storage of digital media files over the internet or on devices with limited memory capacity.
~Mixtral4x7b
___
Reference:
lcamtuf.substack.com
dct(data, len)
Discrete Cosine Transform.
Parameters:
data (array) : Data source.
len (int) : Length of the sampling window.
Returns: List with frequency domain transformed information.
dct(data, len)
Discrete Cosine Transform.
Parameters:
data (float) : Data source.
len (int) : Length of the sampling window.
Returns: List with frequency domain transformed information.
idct(data, len)
Inverse Discrete Cosine Transform.
Parameters:
data (array) : Data source.
len (int) : Length of the sampling window.
Returns: List with time domain transformed information.
idct(data, len)
Inverse Discrete Cosine Transform.
Parameters:
data (float) : Data source.
len (int) : Length of the sampling window.
Returns: List with time domain transformed information.
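To make the idea concrete, here is a standalone DCT-II in Pine over a small sample of closes. It only illustrates what the library computes; the library's own dct()/idct() normalization and windowing may differ.
//@version=5
indicator("DCT-II sketch")
// Textbook DCT-II: X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k)
dct2(x) =>
    int n = array.size(x)
    result = array.new<float>()
    for k = 0 to n - 1
        float s = 0.0
        for i = 0 to n - 1
            s += array.get(x, i) * math.cos(math.pi / n * (i + 0.5) * k)
        array.push(result, s)
    result
var samples = array.new<float>()
if barstate.islast
    array.clear(samples)
    for i = 0 to 15
        array.push(samples, close[15 - i])    // oldest first
    freq = dct2(samples)
    label.new(bar_index, high, "DC term: " + str.tostring(array.get(freq, 0)))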
Kernels©2024, GoemonYae; copied from @jdehorty's "KernelFunctions" on 2024-03-09 to ensure future dependency compatibility. Will also add more functions to this script.
Library "KernelFunctions"
This library provides non-repainting kernel functions for Nadaraya-Watson estimator implementations. This allows for easy substitution/comparison of different kernel functions for one another in indicators. Furthermore, kernels can easily be combined with other kernels to create newer, more customized kernels.
rationalQuadratic(_src, _lookback, _relativeWeight, startAtBar)
Rational Quadratic Kernel - An infinite sum of Gaussian Kernels of different length scales.
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_relativeWeight (simple float) : Relative weighting of time frames. Smaller values result in a more stretched-out curve and larger values will result in a more wiggly curve. As this value approaches zero, the longer time frames will exert more influence on the estimation. As this value approaches infinity, the behavior of the Rational Quadratic Kernel will become identical to the Gaussian kernel.
startAtBar (simple int)
Returns: yhat The estimated values according to the Rational Quadratic Kernel.
gaussian(_src, _lookback, startAtBar)
Gaussian Kernel - A weighted average of the source series. The weights are determined by the Radial Basis Function (RBF).
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
startAtBar (simple int)
Returns: yhat The estimated values according to the Gaussian Kernel.
periodic(_src, _lookback, _period, startAtBar)
Periodic Kernel - The periodic kernel (derived by David Mackay) allows one to model functions which repeat themselves exactly.
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_period (simple int) : The distance between repetitions of the function.
startAtBar (simple int)
Returns: yhat The estimated values according to the Periodic Kernel.
locallyPeriodic(_src, _lookback, _period, startAtBar)
Locally Periodic Kernel - The locally periodic kernel is a periodic function that slowly varies with time. It is the product of the Periodic Kernel and the Gaussian Kernel.
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_period (simple int) : The distance between repetitions of the function.
startAtBar (simple int)
Returns: yhat The estimated values according to the Locally Periodic Kernel.
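For intuition, the rational quadratic estimate can be sketched as a weighted average over a recent window, with weights w(i) = (1 + i^2 / (2 * alpha * h^2))^(-alpha). This is a simplified illustration; the library's handling of startAtBar and its window length may differ.
//@version=5
indicator("Rational quadratic kernel sketch", overlay=true)
float h     = 8.0    // lookback scale (roughly _lookback)
float alpha = 1.0    // relative weighting (roughly _relativeWeight)
float num = 0.0
float den = 0.0
for i = 0 to 49      // a fixed 50-bar window stands in for the library's windowing
    float w = math.pow(1 + math.pow(i, 2) / (2 * alpha * math.pow(h, 2)), -alpha)
    num += close[i] * w
    den += w
plot(num / den, "Rational quadratic estimate", color.purple)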
NormalDistributionFunctionsLibrary "NormalDistributionFunctions"
The NormalDistributionFunctions library encompasses a comprehensive suite of statistical tools for financial market analysis. It provides functions to calculate essential statistical measures such as mean, standard deviation, skewness, and kurtosis, alongside advanced functionalities for computing the probability density function (PDF), cumulative distribution function (CDF), Z-score, and confidence intervals. This library is designed to assist in the assessment of market volatility, distribution characteristics of asset returns, and risk management calculations, making it an invaluable resource for traders and financial analysts.
meanAndStdDev(source, length)
Calculates and returns the mean and standard deviation for a given data series over a specified period.
Parameters:
source (float) : float: The data series to analyze.
length (int) : int: The lookback period for the calculation.
Returns: Returns an array where the first element is the mean and the second element is the standard deviation of the data series for the given period.
skewness(source, mean, stdDev, length)
Calculates and returns skewness for a given data series over a specified period.
Parameters:
source (float) : float: The data series to analyze.
mean (float) : float: The mean of the distribution.
stdDev (float) : float: The standard deviation of the distribution.
length (int) : int: The lookback period for the calculation.
Returns: Returns skewness value
kurtosis(source, mean, stdDev, length)
Calculates and returns kurtosis for a given data series over a specified period.
Parameters:
source (float) : float: The data series to analyze.
mean (float) : float: The mean of the distribution.
stdDev (float) : float: The standard deviation of the distribution.
length (int) : int: The lookback period for the calculation.
Returns: Returns kurtosis value
pdf(x, mean, stdDev)
pdf: Calculates the probability density function for a given value within a normal distribution.
Parameters:
x (float) : float: The value to evaluate the PDF at.
mean (float) : float: The mean of the distribution.
stdDev (float) : float: The standard deviation of the distribution.
Returns: Returns the probability density function value for x.
cdf(x, mean, stdDev)
cdf: Calculates the cumulative distribution function for a given value within a normal distribution.
Parameters:
x (float) : float: The value to evaluate the CDF at.
mean (float) : float: The mean of the distribution.
stdDev (float) : float: The standard deviation of the distribution.
Returns: Returns the cumulative distribution function value for x.
confidenceInterval(mean, stdDev, size, confidenceLevel)
Calculates the confidence interval for a data series mean.
Parameters:
mean (float) : float: The mean of the data series.
stdDev (float) : float: The standard deviation of the data series.
size (int) : int: The sample size.
confidenceLevel (float) : float: The confidence level (e.g., 0.95 for 95% confidence).
Returns: Returns the lower and upper bounds of the confidence interval.
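As a quick illustration of the confidence-interval arithmetic (not the library's code): for a 95% confidence level the bounds are mean ± 1.96 * stdDev / sqrt(n).
//@version=5
indicator("Confidence interval sketch")
int   length = 30
float mean   = ta.sma(close, length)
float stdDev = ta.stdev(close, length)
float z95    = 1.96                        // z-score for a 95% confidence level
float se     = stdDev / math.sqrt(length)  // standard error of the mean
plot(mean, "Mean")
plot(mean - z95 * se, "Lower bound")
plot(mean + z95 * se, "Upper bound")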
ApproximateGaussianSmoothingLibrary "ApproximateGaussianSmoothing"
This library provides a novel smoothing function for time-series data, serving as an alternative to SMA and EMA. Additionally, it provides some statistical processing, using moving averages as expected values in statistics.
'Approximate Gaussian Smoothing' (AGS) is designed to apply weights to time-series data that closely resemble Gaussian smoothing weights. It is easier to calculate than the similar ALMA.
In case AGS is used as a moving average, I named it 'Approximate Gaussian Weighted Moving Average' (AGWMA).
The formula is:
AGWMA = (EMA + EMA(EMA) + EMA(EMA(EMA)) + EMA(EMA(EMA(EMA)))) / 4
The EMA parameter alpha is 5 / (N + 4) , using time period N (or length).
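The formula above translates directly into Pine. The sketch below re-implements it for illustration only; in practice you would call ma() from this library instead.
//@version=5
indicator("AGWMA sketch", overlay=true)
int length = input.int(20, "Length")
// EMA with alpha = 5 / (N + 4), as stated above (ta.ema uses 2 / (N + 1), so a custom EMA is needed).
agwma_ema(src, n) =>
    float alpha = 5.0 / (n + 4)
    var float e = na
    e := na(e) ? src : alpha * src + (1 - alpha) * e
    e
e1 = agwma_ema(close, length)
e2 = agwma_ema(e1, length)
e3 = agwma_ema(e2, length)
e4 = agwma_ema(e3, length)
plot((e1 + e2 + e3 + e4) / 4, "AGWMA", color.teal)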
ma(src, length)
Calculate moving average using AGS (AGWMA).
Parameters:
src (float) : Series of values to process.
length (simple int) : Number of bars (length).
Returns: Moving average.
analyse(src, length)
Calculate mean and variance using AGS.
Parameters:
src (float) : Series of values to process.
length (simple int) : Number of bars (length).
Returns: Mean and variance.
analyse(dimensions, sources, length)
Calculate mean and variance covariance matrix using AGS.
Parameters:
dimensions (simple int) : Dimensions of sources to process.
sources (array) : Series of values to process.
length (simple int) : Number of bars (length).
Returns: Mean and variance covariance matrix.
trend(src, length)
Calculate intercept (LSMA) and slope using AGS.
Parameters:
src (float) : Series of values to process.
length (simple int) : Number of bars (length).
Returns: Intercept and slope.
FVG Detector LibraryLibrary "FVG Detector Library"
🔵 Introduction
To save time and improve accuracy in your scripts for identifying Fair Value Gaps (FVGs), you can utilize this library. Apart from detecting and plotting FVGs, one of the most significant advantages of this script is the ability to filter FVGs, which you'll learn more about below. Additionally, the plotting of each FVG continues until either a new FVG occurs or the current FVG is mitigated.
🔵 Definition
Fair Value Gap (FVG) refers to a situation where three consecutive candlesticks do not overlap. Based on this definition, the minimum conditions for detecting a fair gap in the ascending scenario are that the minimum price of the last candlestick should be greater than the maximum price of the third candlestick, and in the descending scenario, the maximum price of the last candlestick should be smaller than the minimum price of the third candlestick.
If the filter is turned off, all FVGs that meet at least the minimum conditions are identified. This mode is simplistic and results in a high number of identified FVGs.
If the filter is turned on, you have four options to filter FVGs :
1. Very Aggressive : In addition to the initial condition, another condition is added. For ascending FVGs, the maximum price of the last candlestick should be greater than the maximum price of the middle candlestick. Similarly, for descending FVGs, the minimum price of the last candlestick should be smaller than the minimum price of the middle candlestick. In this mode, a very small number of FVGs are eliminated.
2. Aggressive : In addition to the conditions of the Very Aggressive mode, in this mode, the size of the middle candlestick should not be small. This mode eliminates more FVGs compared to the Very Aggressive mode.
3. Defensive : In addition to the conditions of the Very Aggressive mode, in this mode, the size of the middle candlestick should be relatively large, and most of it should consist of the body. Also, for identifying ascending FVGs, the second and third candlesticks must be positive, and for identifying descending FVGs, the second and third candlesticks must be negative. In this mode, a significant number of FVGs are eliminated, and the remaining FVGs have a decent quality.
4. Very Defensive : In addition to the conditions of the Defensive mode, the first and third candlesticks should not resemble very small-bodied doji candlesticks. In this mode, the majority of FVGs are filtered out, and the remaining ones are of higher quality.
By default, we recommend using the Defensive mode.
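For reference, the unfiltered conditions described in the definition above reduce to two one-line comparisons; the library then layers the filtering, trend display and plotting on top of this. An illustrative sketch:
//@version=5
indicator("Minimal FVG conditions", overlay=true)
// Ascending FVG: the low of the last candle is above the high of the candle two bars back.
bullishFVG = low > high[2]
// Descending FVG: the high of the last candle is below the low of the candle two bars back.
bearishFVG = high < low[2]
plotshape(bullishFVG, "Bullish FVG", shape.triangleup, location.belowbar, color.teal)
plotshape(bearishFVG, "Bearish FVG", shape.triangledown, location.abovebar, color.red)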
🔵 How to Use
🟣 Parameters
To utilize this library, you need to provide four input parameters to the function.
"FVGFilter" determines whether you wish to apply a filter on FVGs or not. The possible inputs for this parameter are "On" and "Off", provided as strings.
"FVGFilterType" determines the type of filter to be applied to the found FVGs. These filters include four modes: "Very Defensive", "Defensive", "Aggressive", and "Very Aggressive", respectively exhibiting decreasing sensitivity and indicating a higher number of Fair Value Gaps (FVG).
The parameter "ShowDeFVG" is a Boolean value defined as either "true" or "false". If this value is "true", FVGs are shown during the Bullish Trend; however, if it is "false", they are not displayed.
The parameter "ShowSuFVG" is a Boolean value defined as either "true" or "false". If this value is "true", FVGs are displayed during the Bearish Trend; however, if it is "false", they are not displayed.
FVGDetector(FVGFilter, FVGFilterType, ShowDeFVG, ShowSuFVG)
Parameters:
FVGFilter (string)
FVGFilterType (string)
ShowDeFVG (bool)
ShowSuFVG (bool)
🟣 Import Library
You can use the "FVG Detector" library in your script using the following expression:
import TFlab/FVGDetectorLibrary/1 as FVG
🟣 Input Parameters
The descriptions related to the input parameters were provided in the "Parameter" section. In this section, for your convenience, the code related to the inputs is also included, and you can copy and paste it into your script.
PFVGFilter = input.string('On', 'FVG Filter', options = ['On', 'Off'])
PFVGFilterType = input.string('Defensive', 'FVG Filter Type', options = ['Very Defensive', 'Defensive', 'Aggressive', 'Very Aggressive'])
PShowDeFVG = input.bool(true, ' Show Demand FVG')
PShowSuFVG = input.bool(true, ' Show Supply FVG')
🟣 Call Function
You can copy the following code into your script to call the FVG function. This code is based on the naming conventions provided in the "Input Parameter" section, so if you want to use exactly this code, you should have similar parameter names or have copied the "Input Parameter" values.
FVG.FVGDetector(PFVGFilter, PFVGFilterType, PShowDeFVG, PShowSuFVG)
TimeSeriesRecurrencePlotLibrary "TimeSeriesRecurrencePlot"
In descriptive statistics and chaos theory, a recurrence plot (RP) is a plot showing, for each moment i in time, the times at which the state of a dynamical system returns to the previous state at `i`, i.e., when the phase space trajectory visits roughly the same area in the phase space as at time `j`.
```
A recurrence plot (RP) is a graphical representation used in the analysis of time series data and dynamical systems. It visualizes recurring states or events over time by transforming the original time series into a binary matrix, where each element represents whether two consecutive points are above or below a specified threshold. The resulting Recurrence Plot Matrix reveals patterns, structures, and correlations within the data while providing insights into underlying mechanisms of complex systems.
```
~starling7b
___
Reference:
en.wikipedia.org
github.com
github.com
github.com
github.com
juliadynamics.github.io
distance_matrix(series1, series2, max_freq, norm)
Generate distance matrix between two series.
Parameters:
series1 (float) : Source series 1.
series2 (float) : Source series 2.
max_freq (int) : Maximum frequency to inspect, or the size of the generated matrix.
norm (string) : Norm of the distance metric, default=`euclidean`, options=`euclidean`, `manhattan`, `max`.
Returns: Matrix with distance values.
method normalize_distance(M)
Normalizes a matrix within its Min-Max range.
Namespace types: matrix
Parameters:
M (matrix) : Source matrix.
Returns: Normalized matrix.
method threshold(M, threshold)
Updates the matrix with the condition `M(i,j) > threshold ? 1 : 0`.
Namespace types: matrix
Parameters:
M (matrix) : Source matrix.
threshold (float)
Returns: Cross matrix.
rolling_window(a, b, sample_size)
An experimental alternative method to plot a recurrence_plot.
Parameters:
a (array) : Array with data.
b (array) : Array with data.
sample_size (int)
Returns: Recurrence_plot matrix.
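A minimal usage sketch, assuming a hypothetical import path and arbitrary parameter values; swap in the library's actual publisher/version.
//@version=5
indicator("Recurrence plot sketch")
import PublisherName/TimeSeriesRecurrencePlot/1 as rp    // hypothetical import path
// Distance matrix between close and a 10-bar lagged copy, normalized to its Min-Max range, then binarized.
D = rp.distance_matrix(close, close[10], 20, 'euclidean')
R = rp.threshold(rp.normalize_distance(D), 0.5)
plot(matrix.avg(R), "Recurrence rate")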
TimeSeriesGrammianAngularFieldLibrary "TimeSeriesGrammianAngularField"
provides Grammian angular field and associated utility functions.
___
Reference:
*Time Series Classification: A review of Algorithms and Implementations*.
www.researchgate.net
method normalize(data, a, b)
Normalize the series to an optional range, usually within `(-1, 1)` or `(0, 1)`.
Namespace types: array
Parameters:
data (array) : Sample data to normalize.
a (float) : Minimum target range value, `default=-1.0`.
b (float) : Maximum target range value, `default= 1.0`.
Returns: Normalized array within new range.
___
Reference:
*Time Series Classification: A review of Algorithms and Implementations*.
normalize_series(source, length, a, b)
Normalize the series to an optional range, usually within `(-1, 1)` or `(0, 1)`.\
*Note that this may provide a different result than the array version due to rolling range*.
Parameters:
source (float) : Series to normalize.
length (int) : Number of bars to sample the range.
a (float) : Minimum target range value, `default=-1.0`.
b (float) : Maximum target range value, `default= 1.0`.
Returns: Normalized series within new range.
method polar(data)
Turns a normalized sample array into polar coordinates.
Namespace types: array
Parameters:
data (array) : Sampled data values.
Returns: Converted array into polar coordinates.
polar_series(source)
Turns a normalized series into polar coordinates.
Parameters:
source (float) : Source series.
Returns: Converted series into polar coordinates.
method gasf(data)
Gramian Angular Summation Field *`GASF`*.
Namespace types: array
Parameters:
data (array) : Sampled data values.
Returns: Matrix with *`GASF`* values.
method gasf_id(data)
Trig. identity of Gramian Angular Summation Field *`GASF`*.
Namespace types: array
Parameters:
data (array) : Sampled data values.
Returns: Matrix with *`GASF`* values.
Reference:
*Time Series Classification: A review of Algorithms and Implementations*.
method gadf(data)
Gramian Angular Difference Field *`GADF`*.
Namespace types: array
Parameters:
data (array) : Sampled data values.
Returns: Matrix with *`GADF`* values.
method gadf_id(data)
Trig. identity of Gramian Angular Difference Field *`GADF`*.
Namespace types: array
Parameters:
data (array) : Sampled data values.
Returns: Matrix with *`GADF`* values.
Reference:
*Time Series Classification: A review of Algorithms and Implementations*.
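A minimal usage sketch, assuming a hypothetical import path; the window size and chaining below are illustrative only.
//@version=5
indicator("GASF sketch")
import PublisherName/TimeSeriesGrammianAngularField/1 as gaf    // hypothetical import path
var samples = array.new<float>()
if barstate.islast
    array.clear(samples)
    for i = 0 to 15
        array.push(samples, close[15 - i])                      // oldest first
    // Normalize to (-1, 1), convert to polar coordinates, then build the GASF matrix.
    m = gaf.gasf(gaf.polar(gaf.normalize(samples, -1.0, 1.0)))
    label.new(bar_index, high, "Mean GASF value: " + str.tostring(matrix.avg(m)))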
CCOMET_Scanner_LibraryLibrary "CCOMET_Scanner_Library"
- A Trader's Edge (ATE)_Library was created to assist in constructing CCOMET Scanners
Loc_tIDs_Col(_string, _firstLocation)
TickerIDs: You must form this single tickerID input string exactly as described in the scripts info panel (little gray 'i' that
is circled at the end of the settings in the settings/input panel that you can hover your cursor over this 'i' to read the
details of that particular input). IF the string is formed correctly then it will break up this single string parameter into
a total of 40 separate strings which will be all of the tickerIDs that the script is using in your CCOMET Scanner.
Locations: This function is used when there's a desire to print an asset's ALERT LABELS. A set Location on the scale is assigned to each asset.
This is created so that if a lot of alerts are triggered, they will stay relatively visible and not overlap each other.
If you set your '_firstLocation' parameter as 1, since there are a max of 40 assets that can be scanned, the 1st asset's location
is assigned the value in the '_firstLocation' parameter, the 2nd asset's location is the (1st asset's location+1)...and so on.
Parameters:
_string (simple string) : (string)
A maximum of 40 Tickers (ALL joined as 1 string for the input parameter) that is formulated EXACTLY as described
within the tooltips of the TickerID inputs in my CCOMET Scanner scripts:
assets = input.text_area(tIDset1, title="TickerID (MUST READ TOOLTIP)", tooltip="Accepts 40 TICKERID's for each
copy of the script on the chart. TEXT FORMATTING RULES FOR TICKERID'S:
(1) To exclude the EXCHANGE NAME in the Labels, de-select the next input option.
(2) MUST have a space (' ') AFTER each TickerID.
(3) Capitalization in the Labels will match cap of these TickerID's.
(4) If your asset has a BaseCurrency & QuoteCurrency (ie. ADAUSDT ) BUT you ONLY want Labels
to show BaseCurrency(ie.'ADA'), include a FORWARD SLASH ('/') between the Base & Quote (ie.'ADA/USDT')", display=display.none)
_firstLocation (simple int) : (simple int)
Optional (starts at 1 if no parameter added).
Location that you want the first asset to print its label if is triggered to do so.
ie. loc2=loc1+1, loc3=loc2+1, etc.
Returns: Returns 40 output variables in the tuple (ie. between the ' ') with the TickerIDs, 40 variables for the locations for alert labels, and 40 Colors for labels/plots
TickeridForLabelsAndSecurity(_ticker, _includeExchange)
This function accepts the TickerID Name as its parameter and produces a single string that will be used in all of your labels.
Parameters:
_ticker (simple string) : (string)
For this parameter, input the varible named '_coin' from your 'f_main()' function for this parameter. It is the raw
Ticker ID name that will be processed.
_includeExchange (simple bool) : (bool)
Optional (if parameter not included in function it defaults to false ).
Used to determine if the Exchange name will be included in all labels/triggers/alerts.
Returns: ( )
Returns 2 output variables:
1st ('_securityTickerid') is to be used in the 'request.security()' function as this string will contain everything
TV needs to pull the correct assets data.
2nd ('lblTicker') is to be used in all of the labels in your CCOMET Scanner as it will only contain what you want your labels
to show as determined by how the tickerID is formulated in the CCOMET Scanner's input.
InvalID_LblSz(_barCnt, _close, _securityTickerid, _invalidArray, _tablePosition, _stackVertical, _lblSzRfrnce)
INVALID TICKERIDs: This is to add a table in the middle right of your chart that prints all the TickerID's that were either not formulated
correctly in the '_source' input or that is not a valid symbol and should be changed.
LABEL SIZES: This function sizes your Alert Trigger Labels according to the amount of Printed Bars the chart has printed within
a set time period, while also keeping in mind the smallest relative reference size you input in the 'lblSzRfrnceInput'
parameter of this function. A HIGHER % of Printed Bars(aka...more trades occurring for that asset on the exchange),
the LARGER the Name Label will print, potentially showing you the better opportunities on the exchange to avoid
exchange manipulation liquidations.
*** SHOULD NOT be used as size of labels that are your asset Name Labels next to each asset's Line Plot...
if your CCOMET Scanner includes these as you want these to be the same size for every asset so the larger ones don't cover the
smaller ones if the plots are all close to each other ***
Parameters:
_barCnt (float) : (float)
Get the 1st variable('barCnt') from the Security function's tuple and input it as this functions 1st input
parameter which will directly affect the size of the 2nd output variable ('alertTrigLabel') that is also outputted by this function.
_close (float) : (float)
Put your 'close' variable named '_close' from the security function here.
_securityTickerid (string) : (string)
Throughout the entire charts updates, if a '_close' value is never registered then the logic counts the asset as INVALID.
This will be the 1st TickerID variable (named _securityTickerid) outputted from the tuple of the TickeridForLabels()
function above this one.
_invalidArray (array) : (array string)
Input the array from the original script that houses all of the invalidArray strings.
_tablePosition (simple string) : (string)
Optional (if parameter not included, it defaults to position.middle_right). Location on the chart you want the table printed.
Possible strings include: position.top_center, position.top_left, position.top_right, position.middle_center,
position.middle_left, position.middle_right, position.bottom_center, position.bottom_left, position.bottom_right.
_stackVertical (simple bool) : (bool)
Optional (if parameter not included, it defaults to true). All of the assets that are counted as INVALID will be
created in a list. If you want this list to be printed as a column then input 'true' here, otherwise they will all be in a row.
_lblSzRfrnce (string) : (string)
Optional (if parameter not included, it defaults to size.small). This will be the size of the variable outputted
by this function named 'assetNameLabel' BUT also affects the size of the output variable 'alertTrigLabel' as it uses this parameter's size
as the smallest size for 'alertTrigLabel' then uses the '_barCnt' parameter to determine the next sizes up depending on the "_barCnt" value.
Returns: ( )
Returns 2 variables:
1st output variable ('AssetNameLabel') is assigned to the size of the 'lblSzRfrnceInput' parameter.
2nd output variable('alertTrigLabel') can be of varying sizes depending on the 'barCnt' parameter...BUT the smallest
size possible for the 2nd output variable ('alertTrigLabel') will be the size set in the 'lblSzRfrnceInput' parameter.
PrintedBarCount(_time, _barCntLength, _barCntPercentMin)
The Printed BarCount Filter looks back a User Defined amount of minutes and calculates the % of bars that have printed
out of the TOTAL amount of bars that COULD HAVE been printed within the same amount of time.
Parameters:
_time (int) : (int)
The time associated with the chart of the particular asset that is being screened at that point.
_barCntLength (int) : (int)
The amount of time (IN MINUTES) that you want the logic to look back at to calculate the % of bars that have actually
printed in the span of time you input into this parameter.
_barCntPercentMin (int) : (int)
The minimum % of Printed Bars of the asset being screened has to be GREATER than the value set in this parameter
for the output variable 'bc_gtg' to be true.
Returns: ( )
Returns 2 outputs:
1st is the % of Printed Bars that have printed within the span of time you input in the '_barCntLength' parameter.
2nd is true/false according to if the Printed BarCount % is above the threshold that you input into the '_barCntPercentMin' parameter.
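To make the Printed BarCount idea concrete, here is an illustrative standalone sketch (not the library's code): count how many bars actually printed over the last N minutes and compare that with how many the timeframe could have printed.
//@version=5
indicator("Printed bar count sketch")
max_bars_back(time, 1000)
int lookbackMinutes = input.int(60, "Lookback (minutes)")
float barsPossible  = lookbackMinutes * 60 / timeframe.in_seconds()
int   barsPrinted   = 0
for i = 0 to math.min(1000, bar_index)
    if time[i] < time - lookbackMinutes * 60 * 1000
        break
    barsPrinted += 1
float pctPrinted = 100 * barsPrinted / barsPossible
plot(pctPrinted, "% of possible bars printed")
plot(pctPrinted > 50 ? 1 : 0, "Above 50% threshold")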