Fourier Smoothed Hybrid Volume Spread Analysis
Indicator id:
USER;91bdff47320b4284a375f428f683b21e
(only relevant to those that use API requests)
MEANINGFUL DESCRIPTION:
The Fourier Smoothed Hybrid Volume Spread Analysis (FSHVSA) indicator is an innovative trading tool designed to fuse volume analysis with trend detection capabilities, offering traders a comprehensive view of market dynamics.
This indicator stands apart by integrating the principles of the Discrete Fourier Transform (DFT) and volume spread analysis, enhanced with a layer of Fourier smoothing to distill market noise and highlight trend directions with unprecedented clarity.
This smoothing process allows traders to discern the true underlying patterns in volume and price action, stripped of the distractions of short-term fluctuations and noise.
The core functionality of the FSHVSA revolves around the innovative combination of volume change analysis, spread determination (calculated from the open and close price difference), and the strategic use of the EMA (default 10) to fine-tune the analysis of spread by incorporating volume changes.
Trend direction is validated through a moving average (MA) of the histogram, which acts analogously to the Volume MA found in traditional volume indicators. This MA serves as a pivotal reference point, enabling traders to confidently engage with the market when the histogram's movement concurs with the trend direction, particularly when it crosses the Trend MA line, signalling optimal entry points.
It returns 0 when the MA of the histogram and the EMA of the price spread are not aligned.
HOW TO USE THE INDICATOR:
The FSHVSA plots a positive trend when the volume-smoothed spread and the EMA of the volume-smoothed price are both above 0, and a negative trend when both are below 0. When these conditions are not met, it plots 0.
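For readers who prefer code, a minimal Pine Script sketch of this three-state logic is shown below. It is an illustration only: the variable names and the exact smoothing inputs are assumptions, not the published source.
```
//@version=5
indicator("FSHVSA logic sketch")

// Illustrative recreation of the three-state output; names are not from the published source
spreadSrc = close - open                       // Open-Close spread
volSpread = spreadSrc * nz(ta.change(volume))  // spread weighted by the change in volume
spreadEma = ta.ema(volSpread, input.int(10, "Spread EMA length"))
histMA    = ta.sma(spreadEma, input.int(14, "Histogram MA length"))

// +1 only when both agree above 0, -1 only when both agree below 0, otherwise 0
trendState = spreadEma > 0 and histMA > 0 ? 1 : spreadEma < 0 and histMA < 0 ? -1 : 0
plot(trendState, "position", style = plot.style_histogram)
```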
ORIGINALITY & USEFULNESS:
The FSHVSA is unique because it applies DFT for data smoothing, effectively filtering out the minor fluctuations and leaving traders with a clear picture of the market's true movements. The DFT's ability to break down market signals into constituent frequencies offers a granular view of market dynamics, highlighting the amplitude and phase of each frequency component. This, combined with the strategic application of Ehler's Universal Oscillator principles via a histogram, furnishes traders with a nuanced understanding of market volatility and noise levels, thereby facilitating more informed trading decisions.
DETAILED DESCRIPTION:
My detailed description of the indicator and use cases which I find very valuable.
What is the meaning of price spread?
In finance, a spread refers to the difference between two prices, rates, or yields. One of the most common types is the bid-ask spread, which refers to the gap between the bid (from buyers) and the ask (from sellers) prices of a security or asset.
We are going to use Open-Close spread.
What is Volume spread analysis?
Volume spread analysis (VSA) is a method of technical analysis that compares the volume per candle, range spread, and closing price to determine price direction.
What does this mean?
We need a positive Volume Price Spread and a positive moving average of the Volume Price Spread for a positive trend, or, vice versa, a negative Volume Price Spread and a negative moving average of the Volume Price Spread for a negative trend.
What if we have a positive Volume Price Spread and a negative Moving average of Volume Price Spread ?
It results in a neutral, not trending price action.
Thus the indicator returns 0.
In the next image you can see that the trend is negative on 4h, neutral on 12h and neutral on 1D. That means the overall trend is negative.
I am sorry, the chart is a bit messy. The idea is to use the indicator over more than one timeframe.
What is approximation and smoothing?
They are mathematical concepts for turning a discrete set of numbers into a continuous curved line.
The Fourier and Euler approximations of the spread are taken from the aprox library.
Key Features:
Noise Reduction leverages Euler's White noise capabilities for effective Volume smoothing, providing a cleaner and more accurate representation of market dynamics.
Choose between the innovative Double Discrete Fourier Transform (DTF32) and Regular Open & Close price series.
Mathematical equations presented in Pinescript:
Fourier of the real (x axis) discrete:
x_0 = array.get(x, 0) + array.get(x, 1) + array.get(x, 2)
x_1 = array.get(x, 0) + array.get(x, 1) * math.cos( -2 * math.pi * _dir / 3 ) - array.get(y, 1) * math.sin( -2 * math.pi * _dir / 3 ) + array.get(x, 2) * math.cos( -4 * math.pi * _dir / 3 ) - array.get(y, 2) * math.sin( -4 * math.pi * _dir / 3 )
x_2 = array.get(x, 0) + array.get(x, 1) * math.cos( -4 * math.pi * _dir / 3 ) - array.get(y, 1) * math.sin( -4 * math.pi * _dir / 3 ) + array.get(x, 2) * math.cos( -8 * math.pi * _dir / 3 ) - array.get(y, 2) * math.sin( -8 * math.pi * _dir / 3 )
Fourier of the imaginary (y axis) discrete:
y_0 = array.get(x, 0) + array.get(x, 1) + array.get(x, 2)
y_1 = array.get(x, 0) + array.get(x, 1) * math.sin( -2 * math.pi * _dir / 3 ) + array.get(y, 1) * math.cos( -2 * math.pi * _dir / 3 ) + array.get(x, 2) * math.sin( -4 * math.pi * _dir / 3 ) + array.get(y, 2) * math.cos( -4 * math.pi * _dir / 3 )
y_2 = array.get(x, 0) + array.get(x, 1) * math.sin( -4 * math.pi * _dir / 3 ) + array.get(y, 1) * math.cos( -4 * math.pi * _dir / 3 ) + array.get(x, 2) * math.sin( -8 * math.pi * _dir / 3 ) + array.get(y, 2) * math.cos( -8 * math.pi * _dir / 3 )
Euler's smoothing with Discrete Fourier approximated volume:
a = math.sqrt(2) * math.pi / _devided
b = math.cos(math.sqrt(2) * 180 / _devided)
c2 = 2 * math.pow(a, 2) * b
c3 = math.pow(a, 4)
c1 = 1 - 2 * math.pow(a, 2) * math.cos(b) + math.pow(a, 4)
filt := na(filt[1]) ? 0 : c1 * (w + nz(w[1])) / 2.0 + c2 * nz(filt[1]) + c3 * nz(filt[2])
Usecase:
First option:
Leverage the script to identify bullish and bearish trends, shown with green and red triangles.
Combine Different Timeframes to accurately determine market trend.
Second option:
Pull the data with API sockets to automate your trading journey.
plot(close, title="ClosePrice", display=display.status_line)
plot(open, title="OpenPrice", display=display.status_line)
plot(greencon ? 1 : redcon ? -1 : 0, title="position", display=display.status_line)
Use ClosePrice, OpenPrice and "position" titles to easily read and backtest your strategy utilising more than 1 Time Frame.
Indicator id:
USER;91bdff47320b4284a375f428f683b21e
(only relevant to those that use API requests)
Search scripts for "curve"
Price and Volume Stochastic Divergence [MW]
Introduction
This indicator creates signals of interest for entering and exiting long and short positions on equities. It primarily uses up and down trends defined by the change in cumulative volume with some filtering provided by a short period exponential moving average (9 EMA by default).
Settings
Moving Average Period : The moving average over which the cumulative volume delta is calculated. Default: 14
Short Period EMA : The EMA used to represent price action, and is used to generate the EMA Delta line. Default: 27 (3*3*3)
Long Period EMA : The second EMA used to calculate the EMA Delta line. Default: 108 (2*2*3*3*3)
Stochastic K Value : The value used for stochastic curve smoothing. Default: 3
Dot Size : The diameter of the larger indicator. Default: 10
Dot Transparency : The transparency level of the outer ring of the primary BUY/SELL signal. Default: 50 (0 is opaque, 100 is transparent)
Band Distance from 0 to 100 : The upper and lower band distance. Default: 20
Calculations
The cumulative volume delta (CVD) is calculated using candle bodies and wicks. For a red candle, buying volume is calculated by multiplying the volume by the spread percentage of the average of the top and bottom wicks, while Selling Volume is calculated by multiplying the volume by the spread percentage of the average of the top and bottom wicks, in addition to the spread percentage of the candle body.
For a green candle, buying volume is calculated by multiplying the volume by the spread percentage of the average of the top and bottom wicks - plus the spread percentage of the candle body - while Selling Volume is calculated using only the spread percentage average of the top and bottom wicks.
Once we have the CVD, we can then perform a stochastic calculation of the CVD value.
stochastic calculation = (current value - lowest value in period) / (highest value in period - lowest value in period)
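As a concrete illustration, the normalization above could be written as a small Pine Script helper. This is a sketch only; the cumulative-volume-delta series used here is a crude sign-of-candle proxy rather than the wick/body-weighted split described above.
```
//@version=5
indicator("Stochastic normalization sketch")

// Generic 0..100 stochastic of any series (guards the flat-range case)
stochOf(src, len) =>
    lo = ta.lowest(src, len)
    hi = ta.highest(src, len)
    hi == lo ? 50.0 : 100.0 * (src - lo) / (hi - lo)

// Crude CVD proxy for demonstration only (the indicator uses a wick/body-weighted split)
cvdProxy = ta.cum(math.sign(close - open) * volume)
plot(stochOf(cvdProxy, input.int(14, "Period")), "Stoch CVD (proxy)")
```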
We’ll do the same stochastic calculation for the short term EMA (27 EMA default) as well as for the difference between the short term and long term EMA.
When the stochastic CVD value is rising from zero and the short term EMA stochastic value equals 100, then it’s a major bullish signal. When the stochastic CVD value is falling from 100 and the short term EMA stochastic value equals 0, then it’s a major bearish signal.
Sometimes, after a bullish or bearish signal, the stochastic CVD will reverse direction triggering a new opposing signal.
How to Interpret
The CVD indicates when there is either more buying than selling or vice versa. A value over 50 for the stochastic CVD curve represents more buying taking place. A value below 50 represents more selling. One might intuitively believe that when there is more buying volume than selling volume that the price would follow suit. This is not always the case.
Most of the time buying volume will precede consistent price movement upwards, and selling volume will precede consistent price movement downwards. When this divergence occurs, the indicator generates a signal. When this divergence begins to fail, and buying or selling volume reverses, then another signal is generated indicating that the buying/selling impulse is headed back into the direction of price action.
These interactions are visually represented on the chart with the coral line that represents CVD, and the yellow line that represents the EMA, or the average price. When the coral line goes up and the yellow line stays down, that’s the BUY signal. When the coral line goes down and the yellow line stays up, that’s the sell signal. When the coral line switches direction, the chart generates another signal showing that volume is moving in a direction that supports the price.
The orange line represents the stochastic representation of the difference between the short EMA (27 by default) and the long EMA (108 by default). EMA differences is a method that can be used to define a trend. When a short term EMA is above a longer term EMA, that may represent a bullish trend. When it is below, that may represent a bearish trend. When all 3 lines are rising or falling in the same direction at the same time, it tends to indicate a movement that has the potential to continue.
Other Usage Notes and Limitations
It's important for traders to be aware of the limitations of any indicator and to use them as part of a broader, well-rounded trading strategy that includes risk management, fundamental analysis, and other tools that can help with reducing false signals, determining trend direction, and providing additional confirmation for a trade decision. Diversifying strategies and not relying solely on one type of indicator or analysis can help mitigate some of these risks.
This indicator can be paired with the MW Volume Impulse indicator if it is desired to see the actual buying and selling cumulative volume deltas. Also, in many cases, the BUY and SELL signals tend to correspond with Keltner Bands (ATR Bands) becoming extended. Lastly, volume weighted average price (VWAP) along with other macro events can impact price and negate signals. To view VWAP lines, you may choose to use the Multi VWAP or Multi VWAP for Gaps indicator to help ensure that the signals you see in this indicator are not being affected by VWAP lines.
ZenTrend Price Cycles
ZenTrend attempts to plot the cycles that occur as the price cycles between the top and bottom of long- and short-term price linear regression channels.
The indicator observes a fast (35-period) and a slow (100-period) linear regression channel and plots their slopes on an oscillator. When the slope of the fast channel crosses above or below the slope of the slow channel, a signal is plotted.
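A hedged sketch of that idea in Pine Script, approximating each channel's slope as the bar-to-bar change of its linear regression value (the published script's exact slope calculation may differ):
```
//@version=5
indicator("Regression-slope cross sketch")

// Slope of a length-`len` linear regression: current fit value minus the previous one
regSlope(src, len) =>
    ta.linreg(src, len, 0) - ta.linreg(src, len, 1)

fastSlope = regSlope(close, 35)   // fast (35-period) channel slope
slowSlope = regSlope(close, 100)  // slow (100-period) channel slope

plot(fastSlope, "Fast slope", color = color.red)
plot(slowSlope, "Slow slope", color = color.blue)
plotshape(ta.crossover(fastSlope, slowSlope), style = shape.circle, location = location.bottom, color = color.green)
plotshape(ta.crossunder(fastSlope, slowSlope), style = shape.circle, location = location.bottom, color = color.red)
```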
The red line is the slope of the fast channel; blue is the slope of the slow channel
A green dot and background indicates the slope of recent price action has crossed above the slope of long-term price action.
A red dot and background indicates the slope of recent price action has crossed below the slope of long-term price action.
A gray dot indicates the slope of recent price action is slowing. The difference between the long- and short-term slopes is narrowing.
Here are things I look for when observing price cycles
Where does the cross occur? Crosses high above or below the 'zero line' indicate a more extreme change in price channel slopes.
Flat line: crosses that occur while the lines are flat often indicate chop.
"Curve" of the line - a cross that occurs as the slope lines are starting to curve up/down indicates a sharper and more extreme change in price channel slope.
Volume Profile Histogram [SS]
I usually (and by usually, I mean the past year xD) release a significant indicator as my Christmas gift to the community on Christmas Eve. Last year, it was the Z-Score buy and sell signal; this year, it's something a little more conventional. So here is this year’s gift—hope you like it! 🎁
Seems like everyone has their take on Volume Profiles (aka SVP or VSP). I decided to create one, and in true Steversteves fashion, you can expect to find all the goodies that come with most of my stuff, including a volume profile presented in a bell-curve/histogram style (chart above) and statistical frequency tables showing the cases by ranges:
And it wouldn't be a true Steversteves indicator without some kind of ATR thing:
So, what does it do?
At the end of the day, it is a form of an SVP indicator. However, it is meant to operate on a larger scale, sorting volume in a traditional bell-curve style. In addition to displaying volume, it breaks down buying vs. selling volume. Selling volume is classified as such when the open is greater than close, while buying is when close is greater than open. This breakdown allows you to see the distribution, by price range, of where selling and buying occur.
This permits the indicator to provide 2 Points of Control (POCs). A POC is defined as an area of high volume activity. Because buying and selling volumes are broken down into two, we can identify areas with high selling and areas with high buying. Sometimes they coincide, sometimes they differ.
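A minimal sketch of that per-bar classification in Pine Script (the actual script additionally bins these volumes into price rows to build the profile and POCs, which is omitted here):
```
//@version=5
indicator("Buy/sell volume split sketch")

// Per-bar classification as described: close > open counts as buying, open > close as selling
buyVol  = close > open ? volume : 0.0
sellVol = open > close ? volume : 0.0

plot(buyVol, "Buying volume", color = color.teal, style = plot.style_columns)
plot(-sellVol, "Selling volume", color = color.red, style = plot.style_columns)
```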
If we look at SQQQ, for example:
We can see that the bearish point of control is one point below the bullish POC. This is interesting because it essentially shows where people may be "panic selling" or setting their stop-outs. If SQQQ drops below 18.8, then it's likely to trigger panic selling, as indicated by the histogram.
Conversely, we can observe that traders tend to position long between $18 and $24. The POC is noted in the stats table and also displayed on the chart. Bullish POC is shown in purple, bearish in yellow. These, of course, can be toggled off.
The Frequency Table:
The frequency table shows how many observations were obtained in each price range. The histogram illustrates the cumulative volume traded, while the frequency simply counts how many cases occurred over the lookback period.
ATR Range Analytics by Volume:
The indicator also has the ability to display range analytics by volume. When you toggle on the range analytics by volume option, a range chart will appear:
The range chart goes from the minimum recorded volume to the maximum recorded volume in the period, showing the average range and direction associated with this volume. This is crucial to pay attention to because not all stocks behave the same way.
For example, in the chart above (AMD), we can see that low volume produces a general bearish bias, and high volume produces a general bullish bias. However, if we look at the range analytics for SPY:
Low volume has the inverse effect. Low volume is associated with a more bullish bias, and high volume indicates a more bearish bias. In the ATR chart, the threshold volume to transition from bullish bias to bearish bias is approximately > 78,607,268 traded shares.
The Stats Table:
The stats table can be toggled on or off. It simply displays the POCs and the time range for the VSP. The default time range is 1 trading year (252 days), assuming you are on the daily timeframe. However, you can use this on any timeframe.
The percentages displayed in the histogram are the cumulative percentages of buying and selling volume, calculated independently. So when you see the percentage on the selling histogram, it's the percentage of cumulative selling only. The same applies to buying.
And that's the indicator! I hope you enjoy it. Let me know your thoughts. I hope you all have safe holidays, a Merry Christmas for you North Americans, and a Happy Christmas for you UKers, and whatever else you celebrate/care about and do! Safe trades, everyone, and enjoy your holidays! 🎁🎄🎄🎄⭐⭐⭐ 🕎 🕎 🕎
savitzkyGolay, KAMA, HP
Overview
This trading indicator integrates three distinct analytical tools: the Savitzky-Golay Filter, Kaufman Adaptive Moving Average (KAMA), and Hodrick-Prescott (HP) Filter. It is designed to provide a comprehensive analysis of market trends and potential trading signals.
Components
Hodrick-Prescott (HP) Filter
Purpose: Smooths out the price data to identify the underlying trend.
Parameters: Lambda: Controls the smoothness. Range: 50 to 1600.
Impact of Parameters:
Increasing Lambda: This makes the trend line more responsive to short-term market fluctuations, suitable for short-term analysis. A higher Lambda value decreases the degree of smoothing, making the trend line follow recent market movements more closely.
Decreasing Lambda: A lower Lambda value makes the trend line smoother and less responsive to short-term market fluctuations, ideal for longer-term trend analysis. Decreasing Lambda increases the degree of smoothing, thereby filtering out minor market movements and focusing more on the long-term trend.
Kaufman Adaptive Moving Average (KAMA):
Purpose: An adaptive moving average that adjusts to price volatility.
Parameters: Length, Fast Length, Slow Length: Define the sensitivity and adaptiveness of KAMA.
Impact of Parameters:
Adjusting Length affects the base period for efficiency ratio, altering the overall sensitivity.
Fast Length and Slow Length control the speed of KAMA’s adaptation. A smaller Fast Length makes KAMA more sensitive to price changes, while a larger Slow Length makes it less sensitive.
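For reference, a common textbook formulation of KAMA is sketched below; the indicator's exact implementation may differ, and the input names are illustrative.
```
//@version=5
indicator("KAMA sketch", overlay = true)

length  = input.int(14, "Length")
fastLen = input.int(2, "Fast Length")
slowLen = input.int(30, "Slow Length")

// Efficiency ratio: net move over the window divided by the sum of absolute bar-to-bar moves
netMove   = math.abs(close - close[length])
totalMove = math.sum(math.abs(ta.change(close)), length)
er        = totalMove == 0 ? 0.0 : netMove / totalMove

fastSC = 2.0 / (fastLen + 1)
slowSC = 2.0 / (slowLen + 1)
sc     = math.pow(er * (fastSC - slowSC) + slowSC, 2)

var float kama = na
kama := na(kama) or na(sc) ? close : kama + sc * (close - kama)
plot(kama, "KAMA")
```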
Savitzky-Golay Filter:
Purpose: Smooths the price data using polynomial regression.
Parameters: Window Size: Determines the size of the moving window (7, 9, 11, 15, 21).
Impact of Parameters:
A larger Window Size results in a smoother curve, which is more effective for identifying long-term trends but can delay reaction to recent market changes.
A smaller Window Size makes the curve more responsive to short-term price movements, suitable for short-term trading strategies.
General Impact of Parameters
Adjusting these parameters can significantly alter the signals generated by the indicator. Users should fine-tune these settings based on their trading style, the characteristics of the traded asset, and market conditions to optimize the indicator's performance.
Signal Logic
Buy Signal: The trend from the HP filter is below both the KAMA and the Savitzky-Golay SMA, and none of these indicators are flat.
Sell Signal: The trend from the HP filter is above both the KAMA and the Savitzky-Golay SMA, and none of these indicators are flat.
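Expressed in code, the two conditions could look roughly like the sketch below. The three series are stand-ins (simple EMAs/SMA of price) used only so the example compiles; the real script derives them from the HP filter, KAMA, and Savitzky-Golay calculations.
```
//@version=5
indicator("HP/KAMA/SG signal-logic sketch", overlay = true)

// Stand-ins for the three smoothed series the script computes (for illustration only)
hpTrend  = ta.ema(close, 100)
kamaLine = ta.ema(close, 20)
sgSma    = ta.sma(close, 11)

notFlat(s) => ta.change(s) != 0

buySignal  = hpTrend < kamaLine and hpTrend < sgSma and notFlat(hpTrend) and notFlat(kamaLine) and notFlat(sgSma)
sellSignal = hpTrend > kamaLine and hpTrend > sgSma and notFlat(hpTrend) and notFlat(kamaLine) and notFlat(sgSma)

plotshape(buySignal, style = shape.triangleup, location = location.belowbar, color = color.green)
plotshape(sellSignal, style = shape.triangledown, location = location.abovebar, color = color.red)
```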
Usage
Due to the combination of smoothing algorithms and adaptability, this indicator is highly effective at identifying emerging trends for both initiating long and short positions.
IMPORTANT : Although the code and user settings incorporate measures to limit false signals due to lateral (sideways) movement, they do not completely eliminate such occurrences. Users are strongly advised to avoid signals that emerge during simultaneous lateral movements of all three indicators.
Despite the indicator's success in historical data analysis using its signals alone, it is highly recommended to use this code in combination with other indicators, patterns, and zones. This is particularly important for determining exit points from positions, which can significantly enhance trading results.
Limitations and Recommendations
The indicator has shown excellent performance on the weekly time frame (TF) with the following settings:
Savitzky-Golay (SG): 11
Hodrick-Prescott (HP): 100
Kaufman Adaptive Moving Average (KAMA): 20, 2, 30
For the monthly TF, the recommended settings are:
SG: 15
HP: 100
KAMA: 30, 2, 35
Note: The monthly TF is quite variable. With these settings, there may be fewer signals, but they tend to be more relevant for long-term investors. Based on a sample of 40 different stocks from various countries and sectors, most exhibited an average trade return in the thousands of percent.
It's important to note that while these settings have been successful in past performance, market conditions vary and past performance is not indicative of future results. Users are encouraged to experiment with these settings and adjust them according to their individual needs and market analysis.
As this is my first developed trading indicator, I am very open to and appreciative of any suggestions or comments. Your feedback is invaluable in helping me refine and improve this tool. Please feel free to share your experiences, insights, or any recommendations you may have.
OI Visible Range Ladder [Kioseff Trading]
Hello!
This Script “OI Visible Range Ladder” calculates open interest profiles for the visible range alongside an OI ladder for the visible period!
Features
OI Profile Anchored to Visible Range
OI Ladder Anchored to Visible Range
Standard POC and Value Area Lines, in Addition to Separated POCs and Value Area Lines for each category of OI x Price
Configurable Value Area Targets
Curved Profiles
Up to 9999 Profile Rows per Visible Range
Stylistic Options for Profiles
Up to 9999 volume profile levels (Price levels) can be calculated for each profile, thanks to the new polyline feature, allowing for less aggregation / more precision of open interest at price.
The image above shows primary functionality!
Green profiles = Up OI / Up Price
Yellow profiles = Down OI / Up Price
Purple profiles = Up OI / Down Price
Red profiles = Down OI / Down Price
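A minimal sketch of how these four OI x price categories could be derived per bar (here volume merely stands in for an open-interest feed so the example compiles):
```
//@version=5
indicator("OI x price category sketch", overlay = true)

// `oi` stands in for an open-interest series; volume is used only so the sketch compiles
oi = volume

oiUp    = ta.change(oi) > 0
priceUp = ta.change(close) > 0

upOiUpPx = oiUp and priceUp          // Up OI / Up price   (green)
dnOiUpPx = not oiUp and priceUp      // Down OI / Up price (yellow)
upOiDnPx = oiUp and not priceUp      // Up OI / Down price (purple)
// remaining case: Down OI / Down price (red)

barcolor(upOiUpPx ? color.green : dnOiUpPx ? color.yellow : upOiDnPx ? color.purple : color.red)
```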
The image above shows POCs for each OI x Price category!
Profiles can be anchored on the left side for a more traditional look.
The indicator is robust enough to calculate on “small price periods”, or for a price period spanning your entire chart fully zoomed out!
That’s about it :D
This indicator is Part of a series titled “Bull vs. Bear” - a suite of profile-like indicators.
Thanks for checking this out!
If you have any suggestions please feel free to share!
Bull Vs Bear Visible Range VP [Kioseff Trading]
Hello!
This Script “Bull vs Bear Visible Range VP” Calculates Bull & Bear Volume Profiles for the Visible Range Alongside a Delta Ladder for the Visible Period!
Features
Volume Profile Anchored to Visible Range
Delta Ladder Anchored to Visible Range
Bull vs Bear Profiles!
Standard Poc and Value Area Lines, in Addition to Separated POCs and Value Area Lines for Bull Profiles and Bear Profiles
Configurable Value Area Target
Curved Profiles
Up to 9999 Profile Rows per Visible Range
Stylistic Options for Profiles
This Script Generates Bull vs. Bear Volume Profiles for the Visible Range!
Up to 9999 Volume Profile Levels (Price Levels) Can Be Calculated for Each Profile, Thanks to the New Polyline Feature, Allowing For Less Aggregation / More Precision of Volume at Price and Volume Delta.
Bull vs Bear Profiles
The Image Above Shows Primary Functionality!
Green Profiles = Buying Volume
Red Profiles = Selling Volume
Bullish & Bearish Pocs for the Visible Range Are Displayable!
Profiles Can Be Anchored on the Left Side for a More Traditional Look.
The indicator is robust enough to calculate on "small price periods", or for a price period spanning your entire chart fully zoomed out!
That’s About It :D
This Indicator Is Part of a Series Titled “Bull vs. Bear” - A Suite of Profile-Like Indicators I Will Be Releasing Over Coming Days. Thanks for Checking This Out!
If You Have Any Suggestions Please Feel Free to Share!
Zig-Zag Open Interest Footprint [Kioseff Trading]
Hello!
This script "Zig Zag Open Interest Footprint" calculates open interest x price values for zig zag trends!
Features
Open interest footprints anchored to zig zag trends
Summed OI x price level footprints
Total OI (for each category) for the entire trend shown
Standard POC lines, in addition to separated POC lines for each category of open interest x price possibility
Up to 9999 profile rows per zigzag trend
Stylistic options for profiles
Configurable zig zag - footprints generated for small to large trends
The zigzag indicator is configurable as normal; minor and major trend volume footprints are calculable. This indicator can be thought of as "Open Interest Footprint for Trends".
Up to 9999 open interest levels (price levels) can be calculated for each profile, thanks to the new polyline feature, allowing for less aggregation / more precision of open interest at price.
Zig Zag OI Footprints
The image above shows primary functionality!
Green = Higher OI + Higher Price
Yellow = Lower OI + Higher Price
Purple = Higher OI + Lower Price
Red = Lower OI + Lower Price
Profiles are generated for each trend identified by the zigzag indicator.
The image above shows the indicator calculating open interest x price for specific price blocks on the footprint. Aggregate open interest for the identified trend is displayed over the profile!
Neon highlighted values correspond to the highest open interest change for the category. This is a configurable option :D
The image above shows POC lines for each category of open interest x price!
Additionally, you can select to show a single POV for footprint - the single level the greatest amount of OI change occurred.
The indicator is robust enough to calculate on "long zig zags" and "short zig zags"; curved profiles can also be used!
The image above shows key levels, each OI footprint, and summed OI values for the current trend!
That's about it :D
This indicator is part of a series titled "Bull vs. Bear" - a suite of profile-like indicators I will be releasing over the coming days. Thanks for checking this out!
If you have any suggestions please feel free to share!
Zig-Zag Volume Profile (Bull vs. Bear) [Kioseff Trading]
Hello!
Thank you @Pinecoders and @TradingView for putting polylines in production and making this viable!!
This script "Zig Zag Volume Profile" implements the polyline feature for Pine Script!
Features
Volume Profile anchored to zig zag trends
Bull vs Bear profiles!
Delta x price level
Standard POC and value area lines, in addition to separated POCs and value area lines for bull profiles and bear profiles
Up to 9999 profile rows per zigzag trend
Stylistic options for profiles
Configurable zig zag - profiles generated for small to large trends
Polylines!
This script generates Bull vs. Bear volume profiles for zig zag trends!
The zigzag indicator is configurable as normal; minor and major trend volume profiles are calculable. This indicator can be thought of as "Volume Profile/Delta for Trends".
Up to 9999 volume profile levels (price levels) can be calculated for each profile, thanks to the new polyline feature, allowing for less aggregation / more precision of volume at price and volume delta.
Zig Zag Bull Vs Bear Profiles
The image above shows primary functionality!
Green profiles = buying volume
Red profiles = selling volume
Profiles are generated for each trend identified by the zigzag indicator.
The image above shows the indicator calculating volume delta for specific price blocks on the profile. Aggregate volume delta for the identified trend is displayed over the profile!
The image above shows Bull Profile POC lines and value area lines. Bear Profile POC lines and value area lines are also shown!
All colors and transparencies are configurable to the user's liking :D
Additionally, you can select to have the profiles drawn on contrasting sides. Bull Profile on left and Bear Profile on right.
For a more traditional look - you can select to draw the Bull & Bear profiles on the same x-point.
The indicator is robust enough to calculate on "long zig zags" and "short zig zags"; curved profiles can also be used!
The image above exemplifies usage of the indicator!
Bull & Bear volume profiles are calculated for trends on the 30-second timeframe.
The image above shows a more "utilitarian" presentation of the profiles. Once more, line and linefill colors/transparencies are all customizable; the indicator can look however you would like it to!
The image above shows key levels, the Bull vs. Bear profile, and volume delta for the current trend!
That's about it :D
This indicator is part of a series titled "Bull vs. Bear" - a suite of profile-like indicators I will be releasing over coming days. Thanks for checking this out!
Of course, a big thank you to @RicardoSantos for his MathOperator library that I use in every script.
If you have any suggestions please feel free to share!
ZWAP (ZigZag Anchored VWAP) [Kioseff Trading]
Hello!
Quick script showcasing the new polyline function for Pine Script!
Features
Up to 100 high/low pivot points auto anchored VWAP
Visible range auto anchored VWAP
Curved ZigZag (Adjustable!)
With the new polyline function, auto-anchored VWAP at specific price points is more viable.
When using line.new(), only 500 lines can exist on the chart concurrently and, since VWAP is calculated on every update, a "proper" VWAP drawn using line.new() can extend 500 bars at most, after which no additional VWAP lines can be drawn.
Of course, when using the plot() function a VWAP line will draw on every bar; however, this method isn't highly compatible with auto-anchoring VWAP lines.
However!
A polyline, from beginning to end and irrespective of the number of coordinates used, counts as a single drawing object; 100 polylines can exist simultaneously, with up to 10,000 x/y coordinates per line.
The image above shows an attempt to draw the same auto-anchored VWAP lines using the line.new() function. Not an ideal outcome!
The image above shows the same attempt using the polyline.new() function!
Very nice (:
The image above shows the indicator auto anchoring to zig zag turning points.
Subsequent to a new anchoring, VWAP is calculated for the following bars - up to the current bar.
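To illustrate the polyline approach described above, here is a minimal, hypothetical sketch that redraws a single polyline through the last N closes; it is not the published script's code.
```
//@version=5
indicator("Polyline sketch", overlay = true, max_bars_back = 500)

n = input.int(200, "Points", maxval = 490)

var polyline pl = na
if barstate.islast
    // Rebuild one polyline through the last `n` closes on every update of the last bar
    count = math.min(n, bar_index + 1)
    pts = array.new<chart.point>()
    for i = 0 to count - 1
        offset = count - 1 - i
        array.push(pts, chart.point.from_index(bar_index - offset, close[offset]))
    if not na(pl)
        polyline.delete(pl)   // drop the previous drawing before redrawing
    pl := polyline.new(pts, line_color = color.orange)
```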
Thank you for checking this out; if you have any ideas to spice it up feel free to comment!
ROCkin RSI
ROCkin RSI Indicator
Overview
The "ROCkin RSI" indicator combines the traditional Relative Strength Index (RSI) with an innovative approach using the Rate of Change (ROC) to offer a new way to visualize and interpret market momentum. By averaging the slope of the RSI over time and allowing for different types of moving averages, this indicator aims to help traders identify trending and reversal patterns more efficiently.
Features
RSI Calculations: The core of the indicator is based on the standard Relative Strength Index, an oscillator that measures the speed and change of price movements. The RSI oscillates between 0 and 100 and is usually used to identify overbought or oversold conditions.
Rate of Change of Price (ROC): Instead of simply plotting the RSI, this indicator calculates the Rate of Change of the closing price, essentially looking at how steep the RSI curve is over a user-defined period.
Smoothing: To reduce noise and make the curve smoother, the slope of the RSI is averaged over a given number of periods, which can either be a Simple Moving Average (SMA) or an Exponential Moving Average (EMA).
Column Plots: The smoothed RSI slope is plotted as columns, where the color of the columns (red or green) indicates whether the slope is positive or negative.
Optional RSI Moving Average: The indicator also offers an optional feature to plot a moving average of the smoothed RSI slope, aiding in trend identification.
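A hedged sketch of the core calculation chain (RSI, its rate of change, smoothing, and column colouring); the input names follow the list below, but the published script's internals may differ.
```
//@version=5
indicator("ROC-of-RSI sketch")

rsiLen   = input.int(14, "RSI Periods")
slopeLen = input.int(3, "Slope Periods")
avgLen   = input.int(5, "Average Periods")
maLen    = input.int(9, "Moving Average Period")
useEma   = input.string("EMA", "Type of Average", options = ["EMA", "SMA"]) == "EMA"

r      = ta.rsi(close, rsiLen)
slope  = ta.roc(r, slopeLen)   // rate of change of the RSI
smooth = useEma ? ta.ema(slope, avgLen) : ta.sma(slope, avgLen)

colUpDown = smooth >= 0 ? color.red : color.green   // red positive / green negative, per the interpretation section below
plot(smooth, "Smoothed RSI slope", color = colUpDown, style = plot.style_columns)
plot(ta.ema(smooth, maLen), "RSI Moving Average", color = color.blue)
```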
Inputs
RSI Periods: The number of periods used to calculate the RSI.
Slope Periods: The number of periods used for calculating the Rate of Change.
Average Periods: The number of periods used for smoothing the RSI slope.
Type of Average: Choose between EMA (Exponential Moving Average) and SMA (Simple Moving Average) for smoothing.
Show RSI Moving Average: Toggle this to either show or hide the moving average of the smoothed RSI slope.
Moving Average Period: The period used for calculating the RSI Moving Average.
Moving Average Type: Choose between EMA and SMA for the RSI Moving Average.
How to Interpret
Positive Slope (Red Columns): Indicates upward momentum in the RSI, which may imply a bullish trend.
Negative Slope (Green Columns): Indicates downward momentum in the RSI, suggesting a possible bearish trend.
RSI Moving Average: Acts as a signal line to confirm the trend. When the smoothed RSI slope is above its moving average, it confirms the bullish trend, and when it's below, it confirms the bearish trend.
Practical Use
Entry/Exit Signals: Consider entering a long position when the green histogram columns cross above the moving average. Conversely, consider entering a short position when the red columns cross under it. The higher the columns, the more likely the trade will be a good one.
Fine-Tuning and Optimization
It's crucial to understand that the default settings might not be optimal for all trading scenarios. The effectiveness of the ROCkin RSI indicator can vary based on the asset you're trading, the market conditions, and your trading style. Therefore, it's highly recommended to play with the settings and study the historical performance on the chart to grasp how the indicator behaves.
By experimenting with different periods for RSI, the Rate of Change, and the moving averages, you can tailor the indicator to better suit your needs. Studying how the indicator would have performed in the past can help you understand its potential strengths and weaknesses. Once you've got a feel for how it operates, you can then optimize the settings to align with your trading strategy and risk tolerance.
Alxuse Supertrend 4EMA Buy and Sell for tutorial
All the abilities of Supertrend, plus:
Draws a 4-EMA band with the ability to change values, change colors, and toggle visibility.
Sends buy and sell signals on multiple timeframes.
Can be used in the alert section to create customized alerts.
To receive valid alerts in the replay section, the timeframe of the chart must be the same as the timeframe of the indicator.
Supertrend with a simple EMA Filter can improve the performance of the signals during a strong trend.
To detect the continuation of upward and downward trends, we can use the 4 EMA colors.
In an upward trend, the EMA lines are ordered green, blue, red, yellow from bottom to top.
In a downward trend, the EMA lines are ordered yellow, red, blue, green from bottom to top.
How it works:
x1 = MA1 < MA2 and MA2 < MA3 and MA3 < MA4 and ta.crossunder(MA3, MA4)
x2 = MA1 < MA2 and MA2 < MA3 and MA3 < MA4 and ta.crossunder(MA2, MA3)
x3 = MA1 < MA2 and MA2 < MA3 and MA3 < MA4 and ta.crossunder(MA1, MA2)
y1 = MA4 < MA3 and MA3 < MA2 and MA2 < MA1 and ta.crossover(MA3, MA4)
y2 = MA4 < MA3 and MA3 < MA2 and MA2 < MA1 and ta.crossover(MA2, MA3)
y3 = MA4 < MA3 and MA3 < MA2 and MA2 < MA1 and ta.crossover(MA1, MA2)
Red triangle = x1 or x2 or x3
Green triangle = y1 or y2 or y3
Long = BUY signal and followed by a Green triangle
Exit Long = SELL signal
Short = SELL signal and followed by a Red triangle
Exit Short = BUY signal
It is also possible to get help from the Stochastic RSI and MACD indicators for confirmation.
To receive a signal based on these two or more conditions, I am making a video tutorial that I will release soon.
Supertrend
Definition
Supertrend is a trend-following indicator based on Average True Range (ATR). The calculation of its single line combines trend detection and volatility. It can be used to detect changes in trend direction and to position stops.
The basics
The Supertrend is a trend-following indicator. It is overlaid on the main chart and its plot indicates the current trend. A Supertrend can be used with varying periods (daily, weekly, intraday etc.) and on varying instruments.
The Supertrend has several inputs that you can adjust to match your trading strategy. Adjusting these settings allows you to make the indicator more or less sensitive to price changes.
For the Supertrend inputs, you can adjust atrLength and multiplier:
the atrLength setting is the lookback length for the ATR calculation;
multiplier is what the ATR is multiplied by to offset the bands from price.
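For reference, here is a minimal sketch using Pine Script's built-in ta.supertrend() with these two inputs; the published script adds the 4-EMA band and signal logic on top of this.
```
//@version=5
indicator("Supertrend inputs sketch", overlay = true)

atrLength  = input.int(10, "atrLength")
multiplier = input.float(3.0, "multiplier")

[st, direction] = ta.supertrend(multiplier, atrLength)
plot(direction < 0 ? st : na, "Up trend", color = color.green, style = plot.style_linebr)
plot(direction > 0 ? st : na, "Down trend", color = color.red, style = plot.style_linebr)
```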
When the price falls below the indicator curve, it turns red and indicates a downtrend. Conversely, when the price rises above the curve, the indicator turns green and indicates an uptrend. After each close above or below Supertrend, a new trend appears.
Summary
The Supertrend helps you make the right trading decisions. However, there are times when it generates false signals. Therefore, it is best to use the right combination of several indicators. Like any other indicator, Supertrend works best when used with other indicators such as MACD, Parabolic SAR, or RSI.
Exponential Moving Average
Definition
The Exponential Moving Average (EMA) is a specific type of moving average that points towards the importance of the most recent data and information from the market. The Exponential Moving Average is just like its name says - it's exponential, weighting the most recent prices more than the less recent prices. The EMA can be compared and contrasted with the simple moving average.
Similar to other moving averages, the EMA is a technical indicator that produces buy and sell signals based on data that shows evidence of divergence and crossovers from general and historical averages. Additionally, the EMA tries to amplify the importance that the most recent data points play in a calculation.
It is common to use more than one EMA length at once, to provide more in-depth and focused data. For example, by choosing 10-day and 200-day moving averages, a trader is able to determine more from the results in a long-term trade, than a trader who is only analyzing one EMA length.
It’s best to use the EMA when for trending markets, as it shows uptrends and downtrends when a market is strong and weak, respectively. An experienced trader will know to look both at the line the EMA projects, as well as the rate of change that comes from each bar as it moves to the next data point. Analyzing these points and data streams correctly will help the trader determine when they should buy, sell, or switch investments from bearish to bullish or vice versa.
Short-term averages, on the other hand, is a different story when analyzing Exponential Moving Average data. It is most common for traders to quote and utilize 12- and 26-day EMAs in the short-term. This is because they are used to create specific indicators. Look into Moving Average Convergence Divergence (MACD) for more information. Similarly, the 50- and 200-day moving averages are most common for analyzing long-term trends.
Moving averages can be very useful for traders using technical analysis for profit. It is important to identify and realize, however, their shortcomings, as all moving averages tend to suffer from recurring lag. It is difficult to modify the moving average to work in your favor at times, often having the preferred time to enter or exit the market pass before the moving average even shows changes in the trend or price movement for that matter.
All of this is true, however, the EMA strives to make this easier for traders. The EMA is unique because it places more emphasis on the most recent data. Therefore, price movement and trend reversals or changes are closely monitored, allowing for the EMA to react quicker than other moving averages.
Limitations
Although using the Exponential Moving Average has a lot of advantages when analyzing market trends, it is also uncertain whether or not the use of most recent data points truly affects technical and market analysis. In addition, the EMA relies on historical data as its basis for operating and because news, events, and other information can change rapidly the indicator can misinterpret this information by weighting the current prices higher than when the event actually occurred.
Summary
The Exponential Moving Average (EMA) is a moving average and technical indicator that reflects and projects the most recent data and information from the market to a trader and relies on a base of historical data. It is one of many different types of moving averages and has an easily calculable formula.
The features added to the indicator are intended for training; it is advisable to use it with caution in live trading.
Paytience Distribution
Paytience Distribution Indicator User Guide
Overview:
The Paytience Distribution indicator is designed to visualize the distribution of any chosen data source. By default, it visualizes the distribution of a built-in Relative Strength Index (RSI). This guide provides details on its functionality and settings.
Distribution Explanation:
A distribution in statistics and data analysis represents the way values or a set of data are spread out or distributed over a range. The distribution can show where values are concentrated, values are absent or infrequent, or any other patterns. Visualizing distributions helps users understand underlying patterns and tendencies in the data.
Settings and Parameters:
Main Settings:
Window Size
- Description: This dictates the amount of data used to calculate the distribution.
- Options: A whole number (integer).
- Tooltip: A window size of 0 means it uses all the available data.
Scale
- Description: Adjusts the height of the distribution visualization.
- Options: Any integer between 20 and 499.
Round Source
- Description: Rounds the chosen data source to a specified number of decimal places.
- Options: Any whole number (integer).
Minimum Value
- Description: Specifies the minimum value you wish to account for in the distribution.
- Options: Any integer from 0 to 100.
- Tooltip: 0 being the lowest and 100 being the highest.
Smoothing
- Description: Applies a smoothing function to the distribution visualization to simplify its appearance.
- Options: Any integer between 1 and 20.
Include 0
- Description: Dictates whether zero should be included in the distribution visualization.
- Options: True (include) or False (exclude).
Standard Deviation
- Description: Enables the visualization of standard deviation, which measures the amount of variation or dispersion in the chosen data set.
- Tooltip: This is best suited for a source that has a vaguely Gaussian (bell-curved) distribution.
- Options: True (enable) or False (disable).
Color Options
- High Color and Low Color: Specifies colors for high and low data points.
- Standard Deviation Color: Designates a color for the standard deviation lines.
Example Settings:
Example Usage RSI
- Description: Enables the use of RSI as the data source.
- Options: True (enable) or False (disable).
RSI Length
- Description: Determines the period over which the RSI is calculated.
- Options: Any integer greater than 1.
Using an External Source:
To visualize the distribution of an external source:
Select the "Move to" option in the dropdown menu for the Paytience Distribution indicator on your chart.
Set it to the existing panel where your external data source is placed.
Navigate to "Pin to Scale" and pin the indicator to the same scale as your external source.
Indicator Logic and Functions:
Sinc Function: Used in signal processing, the sinc function ensures the elimination of aliasing effects.
Sinc Filter: A filtering mechanism which uses sinc function to provide estimates on the data.
Weighted Mean & Standard Deviation: These are statistical measures used to capture the central tendency and variability in the data, respectively.
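As an illustration of the last item, here is a hedged Pine Script sketch of a weighted mean and weighted standard deviation over arrays; the indicator's own implementation may differ.
```
//@version=5
indicator("Weighted mean / stdev sketch")

// Weighted mean and weighted standard deviation of two equal-length, non-empty arrays
weightedStats(values, weights) =>
    float sw  = 0.0
    float swx = 0.0
    for i = 0 to array.size(values) - 1
        w = array.get(weights, i)
        sw  += w
        swx += w * array.get(values, i)
    mean = swx / sw
    float swd = 0.0
    for i = 0 to array.size(values) - 1
        d = array.get(values, i) - mean
        swd += array.get(weights, i) * d * d
    [mean, math.sqrt(swd / sw)]

// Example: close weighted by volume over a rolling 20-bar window
var vals = array.new<float>()
var wts  = array.new<float>()
array.push(vals, close)
array.push(wts, volume)
if array.size(vals) > 20
    array.shift(vals)
    array.shift(wts)
[wMean, wStdev] = weightedStats(vals, wts)
plot(wMean, "Weighted mean")
plot(wStdev, "Weighted stdev")
```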
Output and Visualization:
The indicator visualizes the distribution as a series of colored boxes, with the intensity of the color indicating the frequency of the data points in that range. Additionally, lines representing the standard deviation from the mean can be displayed if the "Standard Deviation" setting is enabled.
The example RSI, if enabled, is plotted along with its common threshold lines at 70 (upper) and 30 (lower).
Understanding the Paytience Distribution Indicator
1. What is a Distribution?
A distribution represents the spread of data points across different values, showing how frequently each value occurs. For instance, if you're looking at a stock's closing prices over a month, you may find that the stock closed most frequently around $100, occasionally around $105, and rarely around $110. Graphically visualizing this distribution can help you see the central tendencies, variability, and shape of your data distribution. This visualization can be essential in determining key trading points, understanding volatility, and getting an overview of the market sentiment.
2. The Rounding Mechanism
Every asset and dataset is unique. Some assets, especially cryptocurrencies or forex pairs, might have values that go up to many decimal places. Rounding these values is essential to generate a more readable and manageable distribution.
Why is Rounding Needed? If every unique value from a high-precision dataset was treated distinctly, the resulting distribution would be sparse and less informative. By rounding off, the values are grouped, making the distribution more consolidated and understandable.
Adjusting Rounding: The `Round Source` input allows users to determine the number of decimal places they'd like to consider. If you're working with an asset with many decimal places, adjust this setting to get a meaningful distribution. If the rounding is set too low for high precision assets, the distribution could lose its utility.
3. Standard Deviation and Oscillators
Standard deviation is a measure of the amount of variation or dispersion of a set of values. In the context of this indicator:
Use with Oscillators: When using oscillators like RSI, the standard deviation can provide insights into the oscillator's range. This means you can determine how much the oscillator typically deviates from its average value.
Setting Bounds: By understanding this deviation, traders can better set reasonable upper and lower bounds, identifying overbought or oversold conditions in relation to the oscillator's historical behavior.
4. Resampling
Resampling is the process of adjusting the time frame or value buckets of your data. In the context of this indicator, resampling ensures that the distribution is manageable and visually informative.
Resample Size vs. Window Size: The `Resample Resolution` dictates the number of bins or buckets the distribution will be divided into. On the other hand, the `Window Size` determines how much of the recent data will be considered. It's crucial to ensure that the resample size is smaller than the window size, or else the distribution will not accurately reflect the data's behavior.
Why Use Resampling? Especially for price-based sources, setting the window size around 500 (instead of 0) ensures that the distribution doesn't become too overloaded with data. When set to 0, the window size uses all available data, which may not always provide an actionable insight.
5. Uneven Sample Bins and Gaps
You might notice that the width of sample bins in the distribution is not uniform, and there can be gaps.
Reason for Uneven Widths: This happens because the indicator uses a 'resampled' distribution. The width represents the range of values in each bin, which might not be constant across bins. Some value ranges might have more data points, while others might have fewer.
Gaps in Distribution: Sometimes, there might be no data points in certain value ranges, leading to gaps in the distribution. These gaps are not flaws but indicate ranges where no values were observed.
In conclusion, the Paytience Distribution indicator offers a robust mechanism to visualize the distribution of data from various sources. By understanding its intricacies, users can make better-informed trading decisions based on the distribution and behavior of their chosen data source.
Standardized MACD Heikin-Ashi Transformed
The Standardized MACD Heikin-Ashi Transformed (St. MACD) is an advanced indicator designed to overcome the limitations of the traditional MACD. It offers a more robust and standardized measure of momentum, making it comparable across different timeframes and securities. By incorporating the Heikin-Ashi transformation, the St. MACD provides a smoother visualization of trends and potential reversals, enhancing its utility for traders seeking a clearer view of the underlying market direction.
Methodology:
The calculation of St. MACD begins with the traditional MACD, which computes the difference between two exponential moving averages (EMAs) of the price. To address the issue of non-comparability across assets, the St. MACD normalizes its values using the exponential average of the price's height. This normalization process ensures that the indicator's readings are not influenced by the absolute price levels, allowing for objective and quantitatively defined comparisons of momentum strength.
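A minimal sketch of this kind of normalization, dividing a raw MACD by an exponential average of the bar range; the lengths and the exact definition of the price "height" are assumptions, not the published formula.
```
//@version=5
indicator("Normalized MACD sketch")

fastLen = input.int(12, "Fast length")
slowLen = input.int(26, "Slow length")
normLen = input.int(50, "Normalization length")

macdRaw  = ta.ema(close, fastLen) - ta.ema(close, slowLen)
barRange = ta.ema(high - low, normLen)   // exponential average of the bar "height" (assumed definition)
macdStd  = barRange == 0 ? 0.0 : macdRaw / barRange

plot(macdStd, "Standardized MACD", style = plot.style_histogram)
```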
Furthermore, St. MACD utilizes the Heikin-Ashi transformation, which involves deriving candles from the price data. These Heikin-Ashi candles provide a smoother representation of trends and help filter out noise in the market. A predictive curve of Heikin-Ashi candles within the St. MACD turns blue or red, indicating the prevailing trend direction. This feature enables traders to easily identify trend shifts and make better informed trading decisions.
Advantages:
St. MACD offers several key advantages over the traditional MACD-
Standardization: By normalizing the indicator's values, St. MACD becomes comparable across different assets and timeframes. This makes it a valuable tool for traders analyzing various markets and seeking consistent momentum measurements.
Heikin-Ashi Transformation: The integration of the Heikin-Ashi transformation smoothes out the indicator's fluctuations and enhances trend visibility. Traders can more easily identify trends and potential reversal points, improving their market analysis.
Quantifiable Momentum: St. MACD's key levels represent the strength of momentum, providing traders with a quantifiable framework to gauge the intensity of market movements. This feature helps identify periods of increased or decreased momentum.
Utility:
The St. MACD indicator offers versatile utility for traders-
Trend Identification: Traders can use the color-coded predictive curve of Heikin-Ashi candles to swiftly determine the prevailing trend direction. This aids in identifying potential entry and exit points in the market.
Reversal Signals: Colored extremes within the St. MACD signal potential price reversals, alerting traders to potential turning points in the market. This assists in making timely decisions during market inflection points.
Overbought/Oversold Conditions: The histogram version of St. MACD can be used in conjunction with the bands to detect short-term overbought or oversold market conditions, allowing traders to adjust their strategies accordingly.
In conclusion, this tool addresses the limitations of the traditional MACD by providing a standardized and comparable momentum indicator. Its incorporation of the Heikin-Ashi transformation enhances trend visibility and assists traders in making more informed decisions. With its quantifiable momentum measurements and various utility features, the St. MACD is a valuable tool for traders seeking a clearer and more objective view of market trends and reversals.
Key Features:
Display Modes: MACD, Histogram or Hybrid
Reversion Triangles by adjustable thresholds
Bar Coloring Methods: MidLine, Candles, Signal Cross, Extremities, Reversions
Example Charts:
-Traditional limitations-
-Comparisons across time and securities-
-Showcase-
See Also:
-Other Heikin-Ashi Transforms-
MA Slope : New Method
1. Introduction
Hello, traders.
This indicator is designed to measure the slope of a moving average line.
I imagine many of you who use Pine Script have struggled with this; measuring the slope of a moving average line can be quite challenging.
Firstly, this is because while the x-axis is fixed to the 'number of candles', the price scale on the y-axis can be adjusted freely.
Secondly, while the concept of differentiation could simplify the measurement process, the resulting value will differ from the conventional derivative we are familiar with since 'delta x' is fixed to '1'.
Consequently, I've put a lot of thought into how to configure the x-axis and y-axis in order to measure a slope that aligns with our perception of 'slope'.
After some reflection, I, like many others, realized that many people measure the slope based on the pivot of the moving average line.
This indicator is the product of that reflection.
2. Description
A. Setting
First, select the moving average line for which you want to check the slope. While SMA is commonly used, T3 is set as the default because it best visualizes the slope.
If you check 'Show MA Slope Average Pivot Range?' in the input window, it displays the average of the recent 30 slope pivot highs and pivot lows.
In other words, it shows 'On average, this level of slope was produced in the recent 30 waves.'
B. Usage
A cross from 0 in the slope indicates a 'reversal in the slope of the curve', which is the most crucial value when observing the slope.
Thus, fundamentally, it's important to look at the points where the slope becomes "0". Furthermore, when the slope starts to curve after rising, it signifies a change in acceleration, suggesting an imminent slope reversal.
(Note that acceleration was omitted from the indicator representation due to its tendency to overly complicate the data.)
While a shorter length of the moving average line may provide more useful slope data for actual trading, a less smooth moving average line may cross around 0 too often, making it less useful.
Therefore, it's crucial to adjust the 'Smoothing Length' in the input values to find a value that you believe is appropriate.
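As a generic illustration (not the exact method used here, which anchors the scale to slope pivots), a minimal sketch of a smoothed MA slope with zero-cross marks:
```
//@version=5
indicator("MA slope sketch")

maLen     = input.int(50, "MA length")
smoothLen = input.int(5, "Smoothing Length")

ma    = ta.ema(close, maLen)
slope = ta.ema(ma - ma[1], smoothLen)   // per-bar change of the MA, smoothed

plot(slope, "MA slope", color = slope >= 0 ? color.teal : color.maroon)
hline(0)
plotshape(ta.cross(slope, 0), "Zero cross", style = shape.circle, location = location.bottom, color = color.gray)
```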
3. Conclusion
I always contemplate how to find a value in Pine Script that is similar to the perceived slope.
I made this script thinking that it might be a novel approach, but there are still many areas that need improvement.
If you have any innovative ideas about the slope, please feel free to provide feedback anytime.
Thank you.
Vector3
Library "Vector3"
Representation of 3D vectors and points.
This structure is used to pass 3D positions and directions around. It also contains functions for doing common vector operations.
Besides the functions listed below, other classes can be used to manipulate vectors and points as well.
For example the Quaternion and the Matrix4x4 classes are useful for rotating or transforming vectors and points.
___
**Reference:**
- github.com
- github.com
- github.com
- www.movable-type.co.uk
- docs.unity3d.com
- referencesource.microsoft.com
- github.com
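Before the function-by-function reference, a minimal combined usage sketch may help; the import path and version below are assumptions (replace them with the published library reference), and it presumes the exported methods can be called with dot notation as the usage snippets show.
```
//@version=5
indicator("Vector3 usage sketch")
// Hypothetical import path and version; substitute the actual published library reference.
import RicardoSantos/Vector3/1 as v3
a = v3.new(1.0, 2.0, 3.0)      // construct a vector from components
b = v3.up()                    // constant (0, 1, 0)
c = a.add(b)                   // exported methods can be called with dot notation
plot(c.magnitude(), "length of a + b")
plot(a.dot_product(b), "a dot b")
```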
new(x, y, z)
Create a new `Vector3`.
Parameters:
x (float) : `float` Property `x` value, (optional, default=na).
y (float) : `float` Property `y` value, (optional, default=na).
z (float) : `float` Property `z` value, (optional, default=na).
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.new(1.1, 1, 1)
```
from(value)
Create a new `Vector3` from a single value.
Parameters:
value (float) : `float` Properties positional value, (optional, default=na).
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.from(1.1)
```
from_Array(values, fill_na)
Create a new `Vector3` from a list of values, only reads up to the third item.
Parameters:
values (float ) : `array` Vector property values.
fill_na (float) : `float` Parameter value to replace missing indexes, (optional, default=na).
Returns: `Vector3` Generated new vector.
___
**Notes:**
- Supports any size of array; unavailable fields are filled with `na`.
___
**Usage:**
```
.from_Array(array.from(1.1), fill_na=33)
.from_Array(array.from(1.1, 2, 3))
```
from_Vector2(values)
Create a new `Vector3` from a `Vector2`.
Parameters:
values (Vector2 type from RicardoSantos/CommonTypesMath/1) : `Vector2` Vector property values.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.from_Vector2(.Vector2.new(1, 2.0))
```
___
**Notes:**
- Type `Vector2` from CommonTypesMath library.
from_Quaternion(values)
Create a new `Vector3` from a `Quaternion`'s `x, y, z` properties.
Parameters:
values (Quaternion type from RicardoSantos/CommonTypesMath/1) : `Quaternion` Vector property values.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.from_Quaternion(.Quaternion.new(1, 2, 3, 4))
```
___
**Notes:**
- Type `Quaternion` from CommonTypesMath library.
from_String(expression, separator, fill_na)
Create a new `Vector3` from a list of values in a formatted string.
Parameters:
expression (string) : `string` String with the list of vector properties.
separator (string) : `string` Separator between entries, (optional, default=`","`).
fill_na (float) : `float` Parameter value to replace missing indexes, (optional, default=na).
Returns: `Vector3` Generated new vector.
___
**Notes:**
- Supports any number of entries; unavailable fields are filled with `na`.
- `",,"` Empty fields will be ignored.
___
**Usage:**
```
.from_String("1.1", fill_na=33)
.from_String("(1.1,, 3)") // 1.1 , 3.0, NaN // empty field will be ignored!!
```
back()
Create a new `Vector3` object in the form `(0, 0, -1)`.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.back()
```
front()
Create a new `Vector3` object in the form `(0, 0, 1)`.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.front()
```
up()
Create a new `Vector3` object in the form `(0, 1, 0)`.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.up()
```
down()
Create a new `Vector3` object in the form `(0, -1, 0)`.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.down()
```
left()
Create a new `Vector3` object in the form `(-1, 0, 0)`.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.left()
```
right()
Create a new `Vector3` object in the form `(1, 0, 0)`.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.right()
```
zero()
Create a new `Vector3` object in the form `(0, 0, 0)`.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.zero()
```
one()
Create a new `Vector3` object in the form `(1, 1, 1)`.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.one()
```
minus_one()
Create a new `Vector3` object in the form `(-1, -1, -1)`.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.minus_one()
```
unit_x()
Create a new `Vector3` object in the form `(1, 0, 0)`.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.unit_x()
```
unit_y()
Create a new `Vector3` object in the form `(0, 1, 0)`.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.unit_y()
```
unit_z()
Create a new `Vector3` object in the form `(0, 0, 1)`.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.unit_z()
```
nan()
Create a new `Vector3` object in the form `(na, na, na)`.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.nan()
```
random(max, min)
Generate a vector with random properties.
Parameters:
max (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Maximum defined range of the vector properties.
min (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Minimum defined range of the vector properties.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.random(.from(math.pi), .from(-math.pi))
```
random(max)
Generate a vector with random properties (min set to 0.0).
Parameters:
max (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Maximum defined range of the vector properties.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
.random(.from(math.pi))
```
method copy(this)
Copy an existing `Vector3`.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .one().copy()
```
method i_add(this, other)
Modify an instance of a vector by adding another vector to it.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Other Vector.
Returns: `Vector3` Updated source vector.
___
**Usage:**
```
a = .from(1) , a.i_add(.up())
```
method i_add(this, value)
Modify an instance of a vector by adding a value to it.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
value (float) : `float` Value.
Returns: `Vector3` Updated source vector.
___
**Usage:**
```
a = .from(1) , a.i_add(3.2)
```
method i_subtract(this, other)
Modify an instance of a vector by subtracting another vector from it.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Other Vector.
Returns: `Vector3` Updated source vector.
___
**Usage:**
```
a = .from(1) , a.i_subtract(.down())
```
method i_subtract(this, value)
Modify an instance of a vector by subtracting a value from it.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
value (float) : `float` Value.
Returns: `Vector3` Updated source vector.
___
**Usage:**
```
a = .from(1) , a.i_subtract(3)
```
method i_multiply(this, other)
Modify an instance of a vector by multiplying it with another vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Other Vector.
Returns: `Vector3` Updated source vector.
___
**Usage:**
```
a = .from(1) , a.i_multiply(.left())
```
method i_multiply(this, value)
Modify an instance of a vector by multiplying it with a value.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
value (float) : `float` value.
Returns: `Vector3` Updated source vector.
___
**Usage:**
```
a = .from(1) , a.i_multiply(3)
```
method i_divide(this, other)
Modify an instance of a vector by dividing it by another vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Other Vector.
Returns: `Vector3` Updated source vector.
___
**Usage:**
```
a = .from(1) , a.i_divide(.front())
```
method i_divide(this, value)
Modify an instance of a vector by dividing it by a value.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
value (float) : `float` Value.
Returns: `Vector3` Updated source vector.
___
**Usage:**
```
a = .from(1) , a.i_divide(3)
```
method i_mod(this, other)
Modify an instance of a vector by modulo assignment with another vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Other Vector.
Returns: `Vector3` Updated source vector.
___
**Usage:**
```
a = .from(1) , a.i_mod(.back())
```
method i_mod(this, value)
Modify an instance of a vector by modulo assignment with a value.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
value (float) : `float` Value.
Returns: `Vector3` Updated source vector.
___
**Usage:**
```
a = .from(1) , a.i_mod(3)
```
method i_pow(this, exponent)
Modify an instance of a vector by raising it element-wise to the power of an exponent vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
exponent (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Exponent Vector.
Returns: `Vector3` Updated source vector.
___
**Usage:**
```
a = .from(1) , a.i_pow(.up())
```
method i_pow(this, exponent)
Modify an instance of a vector by raising each element to the power of an exponent value.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
exponent (float) : `float` Exponent Value.
Returns: `Vector3` Updated source vector.
___
**Usage:**
```
a = .from(1) , a.i_pow(2)
```
method length_squared(this)
Squared length of the vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `float` The squared length of this vector.
___
**Usage:**
```
a = .one().length_squared()
```
method magnitude_squared(this)
Squared magnitude of the vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `float` The length squared of this vector.
___
**Usage:**
```
a = .one().magnitude_squared()
```
method length(this)
Length of the vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `float` The length of this vector.
___
**Usage:**
```
a = .one().length()
```
method magnitude(this)
Magnitude of the vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `float` The Length of this vector.
___
**Usage:**
```
a = .one().magnitude()
```
method normalize(this, magnitude, eps)
Normalize a vector to a magnitude of 1, or to an optional target magnitude.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
magnitude (float) : `float` Value to manipulate the magnitude of normalization, (optional, default=1.0).
eps (float) : `float` Minimum resolution to avoid division by zero, (optional).
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .new(33, 50, 100).normalize() // (x=0.283, y=0.429, z=0.858)
a = .new(33, 50, 100).normalize(2) // (x=0.142, y=0.214, z=0.429)
```
method to_String(this, precision)
Converts source vector to a string format, in the form `"(x, y, z)"`.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
precision (string) : `string` Precision format to apply to values (optional, default='').
Returns: `string` Formatted string in a `"(x, y, z)"` format.
___
**Usage:**
```
a = .one().to_String("#.###")
```
method to_Array(this)
Converts source vector to an array format.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `array` List of the vector properties.
___
**Usage:**
```
a = .new(1, 2, 3).to_Array()
```
method to_Vector2(this)
Converts source vector to a Vector2 in the form `x, y`.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `Vector2` Generated new vector.
___
**Usage:**
```
a = .from(1).to_Vector2()
```
method to_Quaternion(this, w)
Converts source vector to a Quaternion in the form `x, y, z, w`.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
w (float) : `float` New value for the `w` property.
Returns: `Quaternion` Generated new quaternion.
___
**Usage:**
```
a = .from(1).to_Quaternion(w=1)
```
method add(this, other)
Add a vector to source vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Other vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).add(.unit_z())
```
method add(this, value)
Add a value to each property of the vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
value (float) : `float` Value.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).add(2.0)
```
add(value, other)
Add each property of a vector to a base value as a new vector.
Parameters:
value (float) : `float` Value.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(2) , b = .add(1.0, a)
```
method subtract(this, other)
Subtract vector from source vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Other vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).subtract(.left())
```
method subtract(this, value)
Subtract a value from each property in source vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
value (float) : `float` Value.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).subtract(2.0)
```
subtract(value, other)
Subtract each property in a vector from a base value and create a new vector.
Parameters:
value (float) : `float` Value.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .subtract(1.0, .right())
```
method multiply(this, other)
Multiply a vector by another.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Other vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).multiply(.up())
```
method multiply(this, value)
Multiply each element in source vector with a value.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
value (float) : `float` Value.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).multiply(2.0)
```
multiply(value, other)
Multiply a value with each property in a vector and create a new vector.
Parameters:
value (float) : `float` Value.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .multiply(1.0, .new(1, 2, 1))
```
method divide(this, other)
Divide a vector by another.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Other vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).divide(.from(2))
```
method divide(this, value)
Divide each property in a vector by a value.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
value (float) : `float` Value.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).divide(2.0)
```
divide(value, other)
Divide a base value by each property in a vector and create a new vector.
Parameters:
value (float) : `float` Value.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .divide(1.0, .from(2))
```
method mod(this, other)
Modulo a vector by another.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Other vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).mod(.from(2))
```
method mod(this, value)
Modulo each property in a vector by a value.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
value (float) : `float` Value.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).mod(2.0)
```
mod(value, other)
Modulo a base value by each property in a vector and create a new vector.
Parameters:
value (float) : `float` Value.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .mod(1.0, .from(2))
```
method negate(this)
Negate a vector in the form `(zero - this)`.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .one().negate()
```
method pow(this, other)
Raise a vector, element-wise, to the power of another vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Other vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(2).pow(.from(3))
```
method pow(this, exponent)
Raise the vector elements to an exponent.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
exponent (float) : `float` The exponent to raise the vector by.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).pow(2.0)
```
pow(value, exponent)
Raise a base value to the power of each element in the exponent vector, producing a new vector.
Parameters:
value (float) : `float` Base value.
exponent (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` The exponent to raise the vector of base value by.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .pow(1.0, .from(2))
```
method sqrt(this)
Square root of the elements in a vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).sqrt()
```
method abs(this)
Absolute properties of the vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).abs()
```
method max(this)
Highest property of the vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `float` Highest value amongst the vector properties.
___
**Usage:**
```
a = .new(1, 2, 3).max()
```
method min(this)
Lowest element of the vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `float` Lowest value amongst the vector properties.
___
**Usage:**
```
a = .new(1, 2, 3).min()
```
method floor(this)
Floor of vector a.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .new(1.33, 1.66, 1.99).floor()
```
method ceil(this)
Ceil of vector a.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .new(1.33, 1.66, 1.99).ceil()
```
method round(this)
Round of vector elements.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .new(1.33, 1.66, 1.99).round()
```
method round(this, precision)
Round of vector elements to n digits.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
precision (int) : `int` Number of digits to round the vector elements.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .new(1.33, 1.66, 1.99).round(1) // 1.3, 1.7, 2
```
method fractional(this)
Fractional parts of vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1.337).fractional() // 0.337
```
method dot_product(this, other)
Dot product of two vectors.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Other vector.
Returns: `float` Dot product.
___
**Usage:**
```
a = .from(2).dot_product(.left())
```
method cross_product(this, other)
Cross product of two vectors.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Other vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).cross_product(.right())
```
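For reference, these two methods correspond to the standard component formulas:
```
\vec{a}\cdot\vec{b} = a_x b_x + a_y b_y + a_z b_z
\qquad
\vec{a}\times\vec{b} = \left(a_y b_z - a_z b_y,\ a_z b_x - a_x b_z,\ a_x b_y - a_y b_x\right)
```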
method scale(this, scalar)
Scale vector by a scalar value.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
scalar (float) : `float` Value to scale the vector by.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).scale(2)
```
method rescale(this, magnitude)
Rescale a vector to a new magnitude.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
magnitude (float) : `float` Value to manipulate the magnitude of normalization.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(20).rescale(1)
```
method equals(this, other)
Compares two vectors.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
other (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Other vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).equals(.one())
```
method sin(this)
Sine of vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).sin()
```
method cos(this)
Cosine of vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).cos()
```
method tan(this)
Tangent of vector.
Namespace types: TMath.Vector3
Parameters:
this (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .from(1).tan()
```
vmax(a, b)
Highest elements of the properties from two vectors.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Vector.
b (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .vmax(.one(), .from(2))
```
vmax(a, b, c)
Highest elements of the properties from three vectors.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Vector.
b (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Vector.
c (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .vmax(.new(0.1, 2.5, 3.4), .from(2), .from(3))
```
vmin(a, b)
Lowest elements of the properties from two vectors.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Vector.
b (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .vmin(.one(), .from(2))
```
vmin(a, b, c)
Lowest elements of the properties from three vectors.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Vector.
b (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Vector.
c (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .vmin(.one(), .from(2), .new(3.3, 2.2, 0.5))
```
distance(a, b)
Distance between vector `a` and `b`.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
b (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Target vector.
Returns: `float` Distance between the two vectors.
___
**Usage:**
```
a = distance(.from(3), .unit_z())
```
clamp(a, min, max)
Restrict a vector between a min and max vector.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
min (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Minimum boundary vector.
max (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Maximum boundary vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .clamp(a=.new(2.9, 1.5, 3.9), min=.from(2), max=.new(2.5, 3.0, 3.5))
```
clamp_magnitude(a, radius)
Vector with its magnitude clamped to a radius.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector; the vector whose magnitude should be restricted to the radius.
radius (float) : `float` Maximum radius to restrict magnitude of vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .clamp_magnitude(.from(21), 7)
```
lerp_unclamped(a, b, rate)
`Unclamped` linearly interpolates between provided vectors by a rate.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
b (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Target vector.
rate (float) : `float` Rate of interpolation, range(0 > 1) where 0 == source vector and 1 == target vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .lerp_unclamped(.from(1), .from(2), 1.2)
```
lerp(a, b, rate)
Linearly interpolates between provided vectors by a rate.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
b (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Target vector.
rate (float) : `float` Rate of interpolation, range(0 > 1) where 0 == source vector and 1 == target vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = lerp(.one(), .from(2), 0.2)
```
herp(start, start_tangent, end, end_tangent, rate)
Hermite curve interpolation between provided vectors.
Parameters:
start (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Start vector.
start_tangent (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Start vector tangent.
end (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` End vector.
end_tangent (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` End vector tangent.
rate (int) : `float` Rate of the movement from `start` to `end` to get position, should be range(0 > 1).
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
s = .new(0, 0, 0) , st = .new(0, 1, 1)
e = .new(1, 2, 2) , et = .new(-1, -1, 3)
h = .herp(s, st, e, et, 0.3)
```
___
**Reference:** en.m.wikibooks.org
herp_2(a, b, rate)
Hermite curve interpolation between provided vectors.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
b (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Target vector.
rate (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Rate of the movement per component from `start` to `end` to get position, should be range(0 > 1).
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
h = .herp_2(.one(), .new(0.1, 3, 2), 0.6)
```
noise(a)
3D Noise based on Morgan McGuire @morgan3d
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = noise(.one())
```
___
**Reference:**
- thebookofshaders.com
- www.shadertoy.com
rotate(a, axis, angle)
Rotate a vector around an axis.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
axis (string) : `string` The axis to rotate around, `option="x", "y", "z"`.
angle (float) : `float` Angle in radians.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .rotate(.from(3), 'y', math.toradians(45.0))
```
rotate_x(a, angle)
Rotate a vector on a fixed `x`.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
angle (float) : `float` Angle in radians.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .rotate_x(.from(3), math.toradians(90.0))
```
rotate_y(a, angle)
Rotate a vector on a fixed `y`.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
angle (float) : `float` Angle in radians.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .rotate_y(.from(3), math.toradians(90.0))
```
rotate_yaw_pitch(a, yaw, pitch)
Rotate a vector by yaw and pitch values.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
yaw (float) : `float` Angle in radians.
pitch (float) : `float` Angle in radians.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .rotate_yaw_pitch(.from(3), math.toradians(90.0), math.toradians(45.0))
```
project(a, normal, eps)
Project a vector onto another vector (the plane normal).
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
normal (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` The vector to project onto.
eps (float) : `float` Minimum resolution to avoid division by zero (default=0.000001).
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .project(.one(), .down())
```
project_on_plane(a, normal, eps)
Projects a vector onto a plane defined by a normal orthogonal to the plane.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
normal (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` The normal orthogonal to the plane.
eps (float) : `float` Minimum resolution to avoid division by zero (default=0.000001).
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .project_on_plane(.one(), .left())
```
project_to_2d(a, camera_position, camera_target)
Project a vector onto a two dimensions plane.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
camera_position (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Camera position.
camera_target (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Camera target plane position.
Returns: `Vector2` Generated new vector.
___
**Usage:**
```
a = .project_to_2d(.one(), .new(2, 2, 3), .zero())
```
reflect(a, normal)
Reflects a vector off a plane defined by a normal.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
normal (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` The normal of the surface being reflected off.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .reflect(.one(), .right())
```
angle(a, b, eps)
Angle in degrees between two vectors.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
b (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Target vector.
eps (float) : `float` Minimum resolution to avoid division by zero (default=1.0e-15).
Returns: `float` Angle value in degrees.
___
**Usage:**
```
a = .angle(.one(), .up())
```
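Presumably this implements the standard relation, with `eps` guarding the denominator (stated here as an assumption about the implementation, not a quote of its code):
```
\theta = \frac{180}{\pi}\,\arccos\!\left(\frac{\vec{a}\cdot\vec{b}}{\max\left(\lVert\vec{a}\rVert\,\lVert\vec{b}\rVert,\ \varepsilon\right)}\right)
```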
angle_signed(a, b, axis)
Signed angle in degrees between two vectors.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
b (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Target vector.
axis (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Axis vector.
Returns: `float` Angle value in degrees.
___
**Usage:**
```
a = .angle_signed(.one(), .left(), .down())
```
___
**Notes:**
- The smaller of the two possible angles between the two vectors is returned, therefore the result will never
be greater than 180 degrees or smaller than -180 degrees.
- If you imagine the from and to vectors as lines on a piece of paper, both originating from the same point,
then the /axis/ vector would point up out of the paper.
- The measured angle between the two vectors would be positive in a clockwise direction and negative in an
anti-clockwise direction.
___
**Reference:**
- github.com
angle2d(a, b)
2D angle between two vectors.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
b (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Target vector.
Returns: `float` Angle value in degrees.
___
**Usage:**
```
a = .angle2d(.one(), .left())
```
transform_Matrix(a, M)
Transforms a vector by the given matrix.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
M (matrix) : `matrix` A 4x4 matrix. The transformation matrix.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
mat = matrix.new<float>(4, 0)
mat.add_row(0, array.from(0.0, 0.0, 0.0, 1.0))
mat.add_row(1, array.from(0.0, 0.0, 1.0, 0.0))
mat.add_row(2, array.from(0.0, 1.0, 0.0, 0.0))
mat.add_row(3, array.from(1.0, 0.0, 0.0, 0.0))
b = .transform_Matrix(.one(), mat)
```
transform_M44(a, M)
Transforms a vector by the given matrix.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
M (M44 type from RicardoSantos/CommonTypesMath/1) : `M44` A 4x4 matrix. The transformation matrix.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .transform_M44(.one(), .M44.new(0,0,0,1,0,0,1,0,0,1,0,0,1,0,0,0))
```
___
**Notes:**
- Type `M44` from `CommonTypesMath` library.
transform_normal_Matrix(a, M)
Transforms a vector by the given matrix.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
M (matrix) : `matrix` A 4x4 matrix. The transformation matrix.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
mat = matrix.new<float>(4, 0)
mat.add_row(0, array.from(0.0, 0.0, 0.0, 1.0))
mat.add_row(1, array.from(0.0, 0.0, 1.0, 0.0))
mat.add_row(2, array.from(0.0, 1.0, 0.0, 0.0))
mat.add_row(3, array.from(1.0, 0.0, 0.0, 0.0))
b = .transform_normal_Matrix(.one(), mat)
```
transform_normal_M44(a, M)
Transforms a vector by the given matrix.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector.
M (M44 type from RicardoSantos/CommonTypesMath/1) : `M44` A 4x4 matrix. The transformation matrix.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .transform_normal_M44(.one(), .M44.new(0,0,0,1,0,0,1,0,0,1,0,0,1,0,0,0))
```
___
**Notes:**
- Type `M44` from `CommonTypesMath` library.
transform_Array(a, rotation)
Transforms a vector by the given Quaternion rotation value.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector. The source vector to be rotated.
rotation (float ) : `array` A 4 element array. Quaternion. The rotation to apply.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .transform_Array(.one(), array.from(0.2, 0.2, 0.2, 1.0))
```
___
**Reference:**
- referencesource.microsoft.com
transform_Quaternion(a, rotation)
Transforms a vector by the given Quaternion rotation value.
Parameters:
a (Vector3 type from RicardoSantos/CommonTypesMath/1) : `Vector3` Source vector. The source vector to be rotated.
rotation (Quaternion type from RicardoSantos/CommonTypesMath/1) : `Quaternion` The rotation to apply.
Returns: `Vector3` Generated new vector.
___
**Usage:**
```
a = .transform_Quaternion(.one(), .Quaternion.new(0.2, 0.2, 0.2, 1.0))
```
___
**Notes:**
- Type `Quaternion` from `CommonTypesMath` library.
___
**Reference:**
- referencesource.microsoft.com
Quinn-Fernandes Fourier Transform of Filtered Price [Loxx]
Down the Rabbit Hole We Go: A Deep Dive into the Mysteries of Quinn-Fernandes Fast Fourier Transform and Hodrick-Prescott Filtering
In the ever-evolving landscape of financial markets, the ability to accurately identify and exploit underlying market patterns is of paramount importance. As market participants continuously search for innovative tools to gain an edge in their trading and investment strategies, advanced mathematical techniques, such as the Quinn-Fernandes Fourier Transform and the Hodrick-Prescott Filter, have emerged as powerful analytical tools. This comprehensive analysis aims to delve into the rich history and theoretical foundations of these techniques, exploring their applications in financial time series analysis, particularly in the context of a sophisticated trading indicator. Furthermore, we will critically assess the limitations and challenges associated with these transformative tools, while offering practical insights and recommendations for overcoming these hurdles to maximize their potential in the financial domain.
In the ever-evolving landscape of financial markets, the ability to accurately identify and exploit underlying market patterns is of paramount importance. As market participants continuously search for innovative tools to gain an edge in their trading and investment strategies, advanced mathematical techniques, such as the Quinn-Fernandes Fourier Transform and the Hodrick-Prescott Filter, have emerged as powerful analytical tools. This comprehensive analysis aims to delve into the rich history and theoretical foundations of these techniques, exploring their applications in financial time series analysis, particularly in the context of a sophisticated trading indicator. Furthermore, we will critically assess the limitations and challenges associated with these transformative tools, while offering practical insights and recommendations for overcoming these hurdles to maximize their potential in the financial domain.
Our investigation will begin with a comprehensive examination of the origins and development of both the Quinn-Fernandes Fourier Transform and the Hodrick-Prescott Filter. We will trace their roots from classical Fourier analysis and time series smoothing to their modern-day adaptive iterations. We will elucidate the key concepts and mathematical underpinnings of these techniques and demonstrate how they are synergistically used in the context of the trading indicator under study.
As we progress, we will carefully consider the potential drawbacks and challenges associated with using the Quinn-Fernandes Fourier Transform and the Hodrick-Prescott Filter as integral components of a trading indicator. By providing a critical evaluation of their computational complexity, sensitivity to input parameters, assumptions about data stationarity, performance in noisy environments, and their nature as lagging indicators, we aim to offer a balanced and comprehensive understanding of these powerful analytical tools.
In conclusion, this in-depth analysis of the Quinn-Fernandes Fourier Transform and the Hodrick-Prescott Filter aims to provide a solid foundation for financial market participants seeking to harness the potential of these advanced techniques in their trading and investment strategies. By shedding light on their history, applications, and limitations, we hope to equip traders and investors with the knowledge and insights necessary to make informed decisions and, ultimately, achieve greater success in the highly competitive world of finance.
█ Fourier Transform and Hodrick-Prescott Filter in Financial Time Series Analysis
Financial time series analysis plays a crucial role in making informed decisions about investments and trading strategies. Among the various methods used in this domain, the Fourier Transform and the Hodrick-Prescott (HP) Filter have emerged as powerful techniques for processing and analyzing financial data. This section aims to provide a comprehensive understanding of these two methodologies, their significance in financial time series analysis, and their combined application to enhance trading strategies.
█ The Quinn-Fernandes Fourier Transform: History, Applications, and Use in Financial Time Series Analysis
The Quinn-Fernandes Fourier Transform is an advanced spectral estimation technique developed by B. G. Quinn and J. M. Fernandes in the early 1990s. It builds upon the classical Fourier Transform by introducing an adaptive approach that improves the identification of dominant frequencies in noisy signals. This section will explore the history of the Quinn-Fernandes Fourier Transform, its applications in various domains, and its specific use in financial time series analysis.
History of the Quinn-Fernandes Fourier Transform
The Quinn-Fernandes Fourier Transform was introduced in a 1993 paper titled "The Application of Adaptive Estimation to the Interpolation of Missing Values in Noisy Signals." In this paper, Quinn and Fernandes developed an adaptive spectral estimation algorithm to address the limitations of the classical Fourier Transform when analyzing noisy signals.
The classical Fourier Transform is a powerful mathematical tool that decomposes a function or a time series into a sum of sinusoids, making it easier to identify underlying patterns and trends. However, its performance can be negatively impacted by noise and missing data points, leading to inaccurate frequency identification.
Quinn and Fernandes sought to address these issues by developing an adaptive algorithm that could more accurately identify the dominant frequencies in a noisy signal, even when data points were missing. This adaptive algorithm, now known as the Quinn-Fernandes Fourier Transform, employs an iterative approach to refine the frequency estimates, ultimately resulting in improved spectral estimation.
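In broad terms, this family of adaptive spectral estimators models the (detrended) series as a small sum of sinusoids and refines each frequency estimate iteratively; schematically, for K harmonics (a generic harmonic model, not the script's exact formulation):
```
y_t \approx \sum_{k=1}^{K} \Big( A_k \cos(\omega_k t) + B_k \sin(\omega_k t) \Big) + \varepsilon_t
```
Each pass re-estimates a dominant frequency ω_k and its amplitudes A_k, B_k before moving to the next harmonic, which is what the iterative refinement above refers to.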
Applications of the Quinn-Fernandes Fourier Transform
The Quinn-Fernandes Fourier Transform has found applications in various fields, including signal processing, telecommunications, geophysics, and biomedical engineering. Its ability to accurately identify dominant frequencies in noisy signals makes it a valuable tool for analyzing and interpreting data in these domains.
For example, in telecommunications, the Quinn-Fernandes Fourier Transform can be used to analyze the performance of communication systems and identify interference patterns. In geophysics, it can help detect and analyze seismic signals and vibrations, leading to improved understanding of geological processes. In biomedical engineering, the technique can be employed to analyze physiological signals, such as electrocardiograms, leading to more accurate diagnoses and better patient care.
Use of the Quinn-Fernandes Fourier Transform in Financial Time Series Analysis
In financial time series analysis, the Quinn-Fernandes Fourier Transform can be a powerful tool for isolating the dominant cycles and frequencies in asset price data. By more accurately identifying these critical cycles, traders can better understand the underlying dynamics of financial markets and develop more effective trading strategies.
The Quinn-Fernandes Fourier Transform is used in conjunction with the Hodrick-Prescott Filter, a technique that separates the underlying trend from the cyclical component in a time series. By first applying the Hodrick-Prescott Filter to the financial data, short-term fluctuations and noise are removed, resulting in a smoothed representation of the underlying trend. This smoothed data is then subjected to the Quinn-Fernandes Fourier Transform, allowing for more accurate identification of the dominant cycles and frequencies in the asset price data.
By employing the Quinn-Fernandes Fourier Transform in this manner, traders can gain a deeper understanding of the underlying dynamics of financial time series and develop more effective trading strategies. The enhanced knowledge of market cycles and frequencies can lead to improved risk management and ultimately, better investment performance.
The Quinn-Fernandes Fourier Transform is an advanced spectral estimation technique that has proven valuable in various domains, including financial time series analysis. Its adaptive approach to frequency identification addresses the limitations of the classical Fourier Transform when analyzing noisy signals, leading to more accurate and reliable analysis. By employing the Quinn-Fernandes Fourier Transform in financial time series analysis, traders can gain a deeper understanding of the underlying financial instrument.
Drawbacks to the Quinn-Fernandes algorithm
While the Quinn-Fernandes Fourier Transform is an effective tool for identifying dominant cycles and frequencies in financial time series, it is not without its drawbacks. Some of the limitations and challenges associated with this indicator include:
1. Computational complexity: The adaptive nature of the Quinn-Fernandes Fourier Transform requires iterative calculations, which can lead to increased computational complexity. This can be particularly challenging when analyzing large datasets or when the indicator is used in real-time trading environments.
2. Sensitivity to input parameters: The performance of the Quinn-Fernandes Fourier Transform is dependent on the choice of input parameters, such as the number of harmonic periods, frequency tolerance, and Hodrick-Prescott filter settings. Choosing inappropriate parameter values can lead to inaccurate frequency identification or reduced performance. Finding the optimal parameter settings can be challenging, and may require trial and error or a more sophisticated optimization process.
3. Assumption of stationary data: The Quinn-Fernandes Fourier Transform assumes that the underlying data is stationary, meaning that its statistical properties do not change over time. However, financial time series data is often non-stationary, with changing trends and volatility. This can limit the effectiveness of the indicator and may require additional preprocessing steps, such as detrending or differencing, to ensure the data meets the assumptions of the algorithm.
4. Limitations in noisy environments: Although the Quinn-Fernandes Fourier Transform is designed to handle noisy signals, its performance may still be negatively impacted by significant noise levels. In such cases, the identification of dominant frequencies may become less reliable, leading to suboptimal trading signals or strategies.
5. Lagging indicator: As with many technical analysis tools, the Quinn-Fernandes Fourier Transform is a lagging indicator, meaning that it is based on past data. While it can provide valuable insights into historical market dynamics, its ability to predict future price movements may be limited. This can result in false signals or late entries and exits, potentially reducing the effectiveness of trading strategies based on this indicator.
Despite these drawbacks, the Quinn-Fernandes Fourier Transform remains a valuable tool for financial time series analysis when used appropriately. By being aware of its limitations and adjusting input parameters or preprocessing steps as needed, traders can still benefit from its ability to identify dominant cycles and frequencies in financial data, and use this information to inform their trading strategies.
█ Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was introduced by economists Robert Hodrick and Edward Prescott, first circulated as a working paper in the early 1980s and formally published in 1997. It is a simple, single-parameter filter that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by minimizing the following objective function:
Minimize: (Sum of Squared Deviations) + λ (Sum of Squared Second Differences)
Where:
1. The first term represents the deviation of the data from the trend.
2. The second term represents the smoothness of the trend.
3. λ is a smoothing parameter that determines the degree of smoothness of the trend.
The smoothing parameter λ is typically set to a value between 100 and 1600, depending on the frequency of the data. Higher values of λ lead to a smoother trend, while lower values lead to a more volatile trend.
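Written out, the objective described above is (standard HP-filter notation, with y the observed series and τ the extracted trend):
```
\min_{\{\tau_t\}} \;\sum_{t=1}^{T} (y_t - \tau_t)^2 \;+\; \lambda \sum_{t=2}^{T-1} \big[(\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1})\big]^2
```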
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
Another significant advantage of the HP Filter is its ability to adapt to changes in the underlying trend. This feature makes it particularly well-suited for analyzing financial time series, which often exhibit non-stationary behavior. By employing the HP Filter to smooth financial data, traders can more accurately identify and analyze the long-term trends that drive asset prices, ultimately leading to better-informed investment decisions.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
█ Combined Application of Fourier Transform and Hodrick-Prescott Filter
The integration of the Fourier Transform and the Hodrick-Prescott Filter in financial time series analysis can offer several benefits. By first applying the HP Filter to the financial data, traders can remove short-term fluctuations and noise, effectively isolating the underlying trend. This smoothed data can then be subjected to the Fourier Transform, allowing for the identification of dominant cycles and frequencies with greater precision.
By combining these two powerful techniques, traders can gain a more comprehensive understanding of the underlying dynamics of financial time series. This enhanced knowledge can lead to the development of more effective trading strategies, better risk management, and ultimately, improved investment performance.
The Fourier Transform and the Hodrick-Prescott Filter are powerful tools for financial time series analysis. Each technique offers unique benefits, with the Fourier Transform being adept at identifying dominant cycles and frequencies, and the HP Filter excelling at isolating long-term trends from short-term noise. By combining these methodologies, traders can develop a deeper understanding of the underlying dynamics of financial time series, leading to more informed investment decisions and improved trading strategies. As the financial markets continue to evolve, the combined application of these techniques will undoubtedly remain an essential aspect of modern financial analysis.
█ Features
Endpointed and Non-repainting
This is an endpointed and non-repainting indicator. These are crucial factors that contribute to its usefulness and reliability in trading and investment strategies. Let us break down these concepts and discuss why they matter in the context of a financial indicator.
1. Endpoint nature: An endpoint indicator uses the most recent data points to calculate its values, ensuring that the output is timely and reflective of the current market conditions. This is in contrast to non-endpoint indicators, which may use earlier data points in their calculations, potentially leading to less timely or less relevant results. By utilizing the most recent data available, the endpoint nature of this indicator ensures that it remains up-to-date and relevant, providing traders and investors with valuable and actionable insights into the market dynamics.
2. Non-repainting characteristic: A non-repainting indicator is one that does not change its values or signals after they have been generated. This means that once a signal or a value has been plotted on the chart, it will remain there, and future data will not affect it. This is crucial for traders and investors, as it offers a sense of consistency and certainty when making decisions based on the indicator's output.
Repainting indicators, on the other hand, can change their values or signals as new data comes in, effectively "repainting" the past. This can be problematic for several reasons:
a. Misleading results: Repainting indicators can create the illusion of a highly accurate or successful trading system when backtesting, as the indicator may adapt its past signals to fit the historical price data. This can lead to overly optimistic performance results that may not hold up in real-time trading.
b. Decision-making uncertainty: When an indicator repaints, it becomes challenging for traders and investors to trust its signals, as the signal that prompted a trade may change or disappear after the fact. This can create confusion and indecision, making it difficult to execute a consistent trading strategy.
The endpoint and non-repainting characteristics of this indicator contribute to its overall reliability and effectiveness as a tool for trading and investment decision-making. By providing timely and consistent information, this indicator helps traders and investors make well-informed decisions that are less likely to be influenced by misleading or shifting data.
Inputs
Source: This input determines the source of the price data used in the calculations (e.g., close, open, high, or low). Changing the source alters the base data for the analysis and may lead to different patterns and cycles being identified.
Calculation Bars: This input sets the number of past bars used for the calculation. A higher value incorporates more historical data, giving a more comprehensive view of long-term trends but a slower response to recent price changes; a lower value focuses on recent data and makes the indicator more responsive to short-term fluctuations.
Harmonic Period: This input sets the number of harmonics used in the Fourier Transform. More harmonics can capture more complex cycles in the price data but may also introduce noise and make clear patterns harder to identify; fewer harmonics keep the analysis simpler and cleaner at the cost of missing more complex cycles.
Frequency Tolerance: This input determines how close the frequencies of harmonics must be to be considered part of the same cycle. A higher tolerance allows more variation between harmonics and can capture a broader range of cycles, but may introduce noise; a lower tolerance requires the frequencies to be more similar, which can make the analysis clearer but may miss some cycles.
Number of Bars to Render: This input determines how many bars are rendered on the chart. A higher value displays more historical context but slows down the computation because more data must be processed; a lower value speeds up the computation at the cost of historical context.
Smoothing Mode: This input selects how the source data is preprocessed before the Fourier Transform: no smoothing or Hodrick-Prescott (HP) smoothing. HP smoothing removes some short-term fluctuations and focuses the analysis on longer-term trends, while no smoothing retains the original fluctuations, preserving detail but also noise.
Hodrick-Prescott Filter Period: This input sets the HP filter period used when HP smoothing is enabled. A higher value produces a smoother curve that emphasizes longer-term trends; a lower value retains more of the original fluctuations, providing more detail but also more noise. A conceptual sketch of how these inputs fit together is given below.
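The following Python sketch is only an illustration of how the inputs above could feed a Fourier-smoothing pipeline; it is not the Pine source of this indicator. The function name, the lambda heuristic for the HP filter, and the choice to simply zero out harmonics above the harmonic period (ignoring the frequency-tolerance grouping) are all assumptions made for clarity.

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

def fourier_smooth(src, calculation_bars=300, harmonic_period=10,
                   use_hp_smoothing=True, hp_period=20):
    # Work only on the most recent `calculation_bars` values of the source series.
    window = np.asarray(src[-calculation_bars:], dtype=float)

    if use_hp_smoothing:
        # hpfilter returns (cycle, trend); scaling lambda with the square of the
        # chosen period is a common heuristic, not necessarily the script's rule.
        _, trend = hpfilter(window, lamb=100 * hp_period ** 2)
        window = np.asarray(trend)

    spectrum = np.fft.rfft(window)                 # one-sided DFT of the window
    spectrum[harmonic_period + 1:] = 0.0           # keep only the lowest harmonics
    return np.fft.irfft(spectrum, n=len(window))   # smoothed reconstruction
```

Rendering fewer bars (the "Number of Bars to Render" input) simply limits how many of these reconstructions are computed and drawn, which is why lowering it reduces the processing load.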
Alerts and signals
This indicator features alerts, signals, and bar coloring. You have the option to turn these on/off in the settings menu.
Maximum Bars Restriction
This indicator requires a large amount of processing power to render on the chart. To reduce overhead, the "Number of Bars to Render" setting defaults to 500 bars. You can adjust this to your liking.
█ Related Indicators and Libraries
Goertzel Cycle Composite Wave
Goertzel Browser
Fourier Spectrometer of Price w/ Extrapolation Forecast
Fourier Extrapolator of 'Caterpillar' SSA of Price
Normalized, Variety, Fast Fourier Transform Explorer
Real-Fast Fourier Transform of Price Oscillator
Real-Fast Fourier Transform of Price w/ Linear Regression
Fourier Extrapolation of Variety Moving Averages
Fourier Extrapolator of Variety RSI w/ Bollinger Bands
Fourier Extrapolator of Price w/ Projection Forecast
Fourier Extrapolator of Price
STD-Stepped Fast Cosine Transform Moving Average
Variety RSI of Fast Discrete Cosine Transform
loxfft
Rate Of Change [Hyperbolic]
Rate Of Change just got fixed!
Do note that you have to activate the "exotic calculations" inside the ROC-H settings.
A hyperbolic curve now transforms price. No more infinities on your indicators!
You may use the "exotic" function that is embedded in my script in your own scripts.
This formula basically transforms the input (which may be zero or negative) into a strictly positive one.
While the mathematicians out there would opt for alternative formulae (like the exponential for negative numbers), I used the hyperbolic curve for continuity purposes. Feel free to build upon my calculations, and make them even better!
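The sketch below illustrates the idea in Python; it is not the exact "exotic" formula used in ROC-H, whose source should be consulted for the real mapping. Here x + sqrt(x^2 + 1) stands in as one hyperbolic curve that is smooth, monotonic, and strictly positive for every real input, which keeps the ROC denominator away from zero.

```python
import math

def exotic(x: float) -> float:
    # Assumed stand-in formula: one branch of a hyperbola, strictly positive for all real x.
    return x + math.sqrt(x * x + 1.0)

def roc_hyperbolic(series, length: int):
    # Rate of change computed on the transformed series, so division by zero cannot occur.
    out = []
    for i in range(len(series)):
        if i < length:
            out.append(None)
        else:
            prev = exotic(series[i - length])
            out.append(100.0 * (exotic(series[i]) - prev) / prev)
    return out
```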
Tread lightly, for this is hallowed ground.
-Father Grigori
P.S. I cannot lock the source code. Science and knowledge belong to humanity. Knowledge must not be up for sale.
Channel Lookback: Average Moving Price (CLAMP)
How it works
This is a confirmation indicator based on moving averages. It compares the current price to a previous candle N periods ago, then smooths the result.
What makes this indicator novel is that it takes the smoothed curve and compares it to the previous value to see whether the slope is increasing or decreasing. Combined with a zero-cross baseline channel, we can compare the relative position of the curve, slope, or closing price to create entry signals. There are several hardcoded conditions that it checks, but this is easily changed.
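As a rough illustration only (the published script's exact conditions, inputs, and names will differ), the core logic described above might look like this in Python:

```python
import pandas as pd

def clamp_signal(close: pd.Series, lookback: int = 20, smooth: int = 10) -> pd.Series:
    change = close - close.shift(lookback)                # price vs. the candle N periods ago
    curve = change.ewm(span=smooth, adjust=False).mean()  # smoothed result
    slope = curve.diff()                                  # is the smoothed curve rising or falling?

    long_ok = (curve > 0) & (slope > 0)                   # above the zero baseline and sloping up
    short_ok = (curve < 0) & (slope < 0)                  # below the zero baseline and sloping down

    signal = pd.Series("gray", index=close.index)         # default: no trade
    signal[long_ok] = "green"
    signal[short_ok] = "red"
    return signal
```

The lookback of 20 and the smoothing span of 10 are placeholder values, not the indicator's defaults.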
Markets
The default values are best used on the SPY daily chart. With backtesting, it seemed to perform fairly well during the last year. It seems to be more accurate during choppy/bear markets, and very inaccurate during a trending market.
For example, if you look at the period of growth that occurred during 2020, it basically said to keep shorting for months at a time. Not good. If you look at other markets (such as gold or uranium), it worked, but only if you inverted what the signal told you to do (e.g., going long when it says to go short). This is something you will have to test yourself, since every system and market is different. Please don't use this indicator by itself.
How to use
When combined with other indicators, this tells you whether to go long (green), go short (red), or no trade (gray). It is meant to be used as a confirmation indicator, so it will help verify other trade signals.
You can sometimes ignore the first grey circle and reuse the previous colored signal. There are a few markets (such as gold) I noticed this was helpful on; this will depend on your own trade rules and indicator system.
If you enable "simple mode" on the settings, it will draw only the final signal (long, short, no signal). I included this because it helps to reduce visual clutter.
CMO with ATR and LagF Filtering - RevNR - 12-27-22
Rev NR of the CMO ATR, with LagF Filtering - Released 12-27-22 by @Hockeydude84
This code takes Chande Momentum Oscillator (CMO), adds a coded ATR option and then filters the result through a Laguerre Filter (LagF) to reduce erroneous signals.
This code also has an option for a self-adjusting alpha on the LagF, via a lookback table that monitors the price rate of change (ROC) over the lookback length.
A faster ROC allows the LagF to move faster, while slower price action slows down the LagF's reaction. Signals can also pause based on the rate of change of the LagF curve itself.
Both Aggressive and Base signaling are available: Aggressive bases its signals on the increase/decrease of the previous LagF curve value, while Base signals on whether the curve is greater or less than 0.
Original code credits: some attribution was lost over time and multiple script revisions, but I believe the CMO origin code is from @TradingView house code, and the LagF from @KıvançÖzbilgiç.
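For readers unfamiliar with the two building blocks, here is a simplified Python sketch of a CMO and a Laguerre filter; the published script's ATR option, adaptive-alpha lookback table, and signal-pausing logic are not reproduced, and the lengths shown are assumed defaults.

```python
import numpy as np

def cmo(close: np.ndarray, length: int = 9) -> np.ndarray:
    # Chande Momentum Oscillator: 100 * (sum of up moves - sum of down moves) / (their total).
    diff = np.diff(close, prepend=close[0])
    up = np.where(diff > 0, diff, 0.0)
    down = np.where(diff < 0, -diff, 0.0)
    out = np.full(len(close), np.nan)
    for i in range(length, len(close)):
        su = up[i - length + 1:i + 1].sum()
        sd = down[i - length + 1:i + 1].sum()
        out[i] = 100.0 * (su - sd) / (su + sd) if (su + sd) != 0 else 0.0
    return out

def laguerre(src: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    # Four-stage Laguerre filter (after Ehlers); a higher alpha reacts faster.
    l0 = l1 = l2 = l3 = 0.0
    out = np.full(len(src), np.nan)
    for i, x in enumerate(src):
        if np.isnan(x):
            continue
        p0, p1, p2 = l0, l1, l2
        l0 = alpha * x + (1 - alpha) * l0
        l1 = -(1 - alpha) * l0 + p0 + (1 - alpha) * l1
        l2 = -(1 - alpha) * l1 + p1 + (1 - alpha) * l2
        l3 = -(1 - alpha) * l2 + p2 + (1 - alpha) * l3
        out[i] = (l0 + 2 * l1 + 2 * l2 + l3) / 6.0
    return out

# Example: smoothed = laguerre(cmo(close_prices, 9), alpha=0.5)
```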
[blackcat] L3 Bull Bear Game
Level 3
Background
Trajectories of the bull-bear game between banker (whale) funds.
Function
This technical indicator draws a track diagram of the long-short power comparison through a custom trend line. The red curve represents the long side, and the green curve represents the short side. When the red line crosses above the green line, upward momentum is sufficient, the whale is controlling the market, and a rise is imminent, which is a buy signal. When the red line enters the strong zone, the whale is in control of the stock, and the stock is about to enter the pull-up stage. Conversely, if the green line turns upwards, the whale is washing out positions or retreating, and you should quickly reduce or clear your position. Sometimes the indicator issues a long entry signal while the whale still has to go through another round of washing, so I introduced a "golden pit" inflection-point filter that screens out these signals; a long entry is only valid when both signals appear at the same time, as in the sketch below.
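A minimal Python illustration of that gating (not the original formula; both inputs are assumed to be computed elsewhere in the script):

```python
def crossed_above(red_now: float, red_prev: float, green_now: float, green_prev: float) -> bool:
    # True when the red line moves from at-or-below the green line to above it.
    return red_prev <= green_prev and red_now > green_now

def long_entry(red_now, red_prev, green_now, green_prev, golden_pit_inflection: bool) -> bool:
    # A long entry is only valid when the crossover and the golden-pit filter fire on the same bar.
    return crossed_above(red_now, red_prev, green_now, green_prev) and golden_pit_inflection
```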
Remarks
Feedbacks are appreciated.
VIDYA Trend Strategy
One of the most common messages I get is people reaching out asking for quantitative strategies that trade cryptocurrency. This has compelled me to write this script and article, to help provide a quantitative/technical perspective on why I believe most strategies people write for crypto fail catastrophically, and how one might build measures within their strategies that help reduce the risk of that happening. For those that don't trade crypto, know that these approaches are applicable to any market.
I will start off by qualifying that I mainly trade stocks and ETFs, and I believe that if you trade crypto, you should only be playing with money you are okay with losing. Most published crypto strategies I have seen "work" when the market is going up, and fail catastrophically when it is not. There are far more people trying to sell you a strategy than there are people providing 5-10+ year backtest results on their strategies, with slippage and commissions included, showing how they generated alpha and beat buy/hold. I understand that this community has some really talented people that can create some really awesome things, but I am saying that the vast majority of what you find on the internet will not be strategies that create alpha over the long term.
So, why do so many of these strategies fail?
There is an assumption many people make that cryptocurrency will act just like stocks and ETFs, and it does not. ETF returns have more of a Gaussian probability distribution. Because of this, ETFs have a short term mean reverting behavior that can be capitalized on consistently. Many technical indicators are built to take advantage of this on the equities market. Many people apply them to crypto. Many of those people are drawn down 60-70% right now while there are mean reversion strategies up YTD on equities, even though the equities market is down. Crypto has many more "tail events" that occur 3-4+ standard deviations from the mean.
There is a correlation in many equities and ETF markets for how long an asset continues to do well when it is currently doing well. This is known as momentum, and that correlation and time-horizon is different for different assets. Many technical indicators are built based on this behavior, and then people apply them to cryptocurrency with little risk management, assuming they behave the same way and on the same time horizon, without pulling in the statistics to verify whether that is actually the case. They do not.
People do not take into account the brokerage commissions and slippage. Brokerage commissions are particularly high with cryptocurrency. The irony here isn't lost on me. When you factor in trading costs, it blows up most short-term trading strategies that might otherwise look profitable.
There is an assumption that it will "always come back" and that you "HODL" through the crash and "buy more." This is why Three Arrows Capital, a $10 billion crypto hedge fund, is now in bankruptcy, and no one can find the owners. This is also why many who trade crypto are drawn down 60-70% right now. There are bad risk practices in place, like thinking the martingale gambling strategy is the same as dollar cost averaging while using those terms interchangeably. They are not the same. The first will blow up your trade account, and the second will reduce timing risk. Many people are systematically blowing up their trade accounts/strategies by using martingale and calling it dollar cost averaging. The more risk you are exposed to, the more important your risk management strategy is.
There is an odd assumption some have that you can buy anything and win with technical/quantitative analysis. Technical analysis does not tell you what you should buy; it just tells you when. If you are running a strategy that goes long on an asset that lost 80% of its value in the last year, then your strategy is probably down. That same strategy might be up on a different asset. One might consider a different methodology for choosing assets to trade.
Lastly, most strategies are over-fit, or curve-fit. The more complicated and more parameters/settings you have in your model, the more likely it is just fit to historical data and will not perform similar in live trading. This is one of the reasons why I like simple models with few parameters. They are less likely to be over-fit to historical data. If the strategy only works with 1 set of parameters, and there isn't a range of parameters around it that create alpha, then your strategy is over-fit and is probably not suitable for live trading.
So, what can I do about all of this!?
I created the VIDYA Trend Strategy to provide an example of how one might create a basic model with a basic risk management strategy that might generate long term alpha on a volatile asset, like cryptocurrency. This is one (of many) risk management strategies that can reduce the volatility of your returns when trading any asset. I chose the Variable Index Dynamic Average (VIDYA) for this example because its calculation filters out some market noise by taking into account the volatility of the underlying asset. I chose a trend following strategy because regressions capture behaviors that are not just specific to the equities market.
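For reference, a hedged Python sketch of a VIDYA follows: the smoothing factor is scaled by the absolute CMO, so the average speeds up when price action is directional and slows down in noise. The lengths shown are illustrative defaults, not necessarily this strategy's settings.

```python
import numpy as np

def vidya(close: np.ndarray, length: int = 20, cmo_length: int = 9) -> np.ndarray:
    alpha = 2.0 / (length + 1)                      # EMA-style base factor
    diff = np.diff(close, prepend=close[0])
    up = np.where(diff > 0, diff, 0.0)
    down = np.where(diff < 0, -diff, 0.0)

    out = np.full(len(close), np.nan)
    prev = close[cmo_length]                        # seed the average
    for i in range(cmo_length, len(close)):
        su = up[i - cmo_length + 1:i + 1].sum()
        sd = down[i - cmo_length + 1:i + 1].sum()
        k = abs(su - sd) / (su + sd) if (su + sd) != 0 else 0.0  # |CMO| scaled to 0..1
        prev = alpha * k * close[i] + (1 - alpha * k) * prev     # volatility-adjusted EMA
        out[i] = prev
    return out
```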
The more volatile an asset, the more you have to back off the short term price movement to effectively trend-follow it. Otherwise, you are constantly buying into short term trends that don't represent the trend of the asset, and then they reverse and lose money. This is why I am applying a trend following strategy to a 4 hour chart and not a 4 minute chart. It is also important to note that following these long term trends on a volatile asset exposes you to additional risk. So, how might one mitigate some of that risk?
One of the ways of reducing timing risk is scaling into a trade. This is different from "doubling down" or "tripling down." It is really a basic application of dollar cost averaging to reduce timing risk, although DCA would typically happen over a longer time period. If it is really a trend you are following, it will probably still be a trend tomorrow. Trend following strategies have lower win rates because the beginning of a trend often reverses. The more volatile the asset, the more likely that is to happen. However, we can reduce the risk of buying into a reversal by slowly scaling into the trend with a small % of equity per trade.
Our example "VIDYA Trend Strategy" executes this by looking at a medium-term, volatility adjusted trend on a 4 hour chart. The script scales into it with 4% of the account equity every 4-hour bar that the trend is still up. This means you become fully invested after 25 trades/bars. It also means that early in the trade, when you might be more likely to experience a reversal, most of your account equity is not invested and those losses are much smaller. The script sells 100% of the position when it detects a trend reversal. The slower you scale into a trade, the less volatile your equity curve will be. This model also includes slippage and commissions that you can adjust under the "settings" menu.
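As a toy check of that arithmetic (illustrative code, not the strategy's implementation), adding 4% of equity per up-trend bar reaches full investment after exactly 25 bars, and the position is flattened on a reversal:

```python
def update_position(invested_fraction: float, trend_is_up: bool, step: float = 0.04) -> float:
    if not trend_is_up:
        return 0.0                                 # sell 100% of the position on a trend reversal
    return min(1.0, invested_fraction + step)      # scale in, capped at fully invested

# 25 consecutive up-trend bars take the position from 0% to 100% of equity.
frac = 0.0
for _ in range(25):
    frac = update_position(frac, trend_is_up=True)
assert abs(frac - 1.0) < 1e-9
```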
This fundamental concept of reducing timing risk by scaling into a trade can be applied to any market.
Disclaimer: This is not financial advice. Open-source scripts I publish in the community are largely meant to spark ideas that can be used as building blocks for part of a more robust trade management strategy. If you would like to implement a version of any script, I would recommend making significant additions/modifications to the strategy & risk management functions. If you don’t know how to program in Pine, then hire a Pine-coder. We can help!
Relative Aggregate Strength Oscillator
Credits to
@wolneyyy - "Mean Deviation Detector - Throw Out All Other Indicators"
And
@algomojo - "Responsive Coppock Curve"
And the default Relative Strength Index
The candles are the average of the MFI, CCI, MOM, and RSI values presented as candles. They seemed similar enough in style to me, so I created candles out of each, then took the sum of all the candles' OHLC values and divided by 4 to get an average.
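A minimal sketch of that aggregation step, assuming each oscillator (MFI, CCI, MOM, RSI) has already been rendered as its own OHLC candle for the current bar; the variable names in the final comment are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Candle:
    open: float
    high: float
    low: float
    close: float

def average_candle(candles: list[Candle]) -> Candle:
    # Element-wise mean of the OHLC values across the supplied oscillator candles.
    n = len(candles)
    return Candle(
        open=sum(c.open for c in candles) / n,
        high=sum(c.high for c in candles) / n,
        low=sum(c.low for c in candles) / n,
        close=sum(c.close for c in candles) / n,
    )

# e.g. composite = average_candle([mfi_candle, cci_candle, mom_candle, rsi_candle])
```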
In the background we have @wolneyyy's "Mean Deviation Detector - Throw Out All Other Indicators" in blue, along with @algomojo's "Responsive Coppock Curve" in red and green.