(IK) Base Break Buy
This strategy first calculates areas of support (bases), then enters trades if that support is broken. The idea is to profit from the retracement. Dollar-cost-averaging safety orders are key here. The strategy takes into account a 0.1% commission, and tests are done with an initial capital of 100.00 USD. It only goes long.
The strategy is highly customizable. I've set the default values to suit ETH/USD 15m. If you're trading this on another ticker or timeframe, make sure to play around with the settings. There is an explanation of each input in the script comments. I found this to be profitable across most 'common sense' values for settings, but tweaking led to some pretty promising results. I leaned more towards high risk/high trade volume.
Always remember, though: historical performance is no guarantee of future behavior. Keep settings within your personal risk tolerance, even if a riskier configuration promises better profit. Anyone can write a 100% profitable script if they assume price always eventually goes up.
Check the script comments for more details, but, briefly, you can customize:
-How many bases to keep track of at once
-How those bases are calculated
-What defines a 'base break'
-Order amounts
-Safety order count
-Stop loss
Here's the basic algorithm:
-Identify support.
--Have previous candles found bottoms in the same area as the current candle's bottom?
--Is this support unique enough from other areas of support?
-Determine if support is broken.
--Has the price crossed under support quickly and with certainty?
-Enter trade with a percentage of initial capital.
-Execute safety orders if price continues to drop.
-Exit trade at profit target or stop loss.
Take profit is dynamic and calculated on order entry. The bigger the 'break', the higher your take-profit percentage. This target percentage is applied to the average position price, so as safety orders are filled and the average comes down, the profit target becomes easier to reach.
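To illustrate that last point, here is a minimal Python sketch. The price averaging mirrors the arithmetic used in the stop-loss example further below, and the 2% target is an arbitrary placeholder rather than the script's actual value.

```python
# Illustrative only: why filled safety orders pull the profit target closer.
def price_average(fills):
    """fills: list of (dollars_spent, price) pairs, averaged by dollars spent."""
    return sum(d * p for d, p in fills) / sum(d for d, _ in fills)

tp_pct = 0.02                                # placeholder target percentage
fills = [(50, 1.00)]
print(price_average(fills) * (1 + tp_pct))   # 1.02  - target from the initial entry

fills.append((25, 0.90))                     # a safety order fills lower
print(price_average(fills) * (1 + tp_pct))   # ~0.986 - target drops with the average
```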
Stop loss can be calculated one of two ways: either a static level based on the initial entry, or a dynamic level based on the average position price. If you use the latter (default), be aware that your real losses will be greater than your stated stop-loss percentage. For example:
-stop loss = 15%, capital = 100.00, safety order threshold = 10%
-you buy $50 worth of shares at $1 - price average is $1
-you safety $25 worth of shares at $0.9 - price average is $0.966
-you safety $25 worth of shares at $0.8 - price average is $0.925
-you get stopped out at 0.925 * (1-.15) = $0.78625, and you're left with $78.62.
This is a realized loss of ~21.4% with a stop loss set to 15%. The larger your safety order threshold, the larger your real loss in comparison to your stop loss percentage, and vice versa.
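For reference, a small Python sketch that reproduces the arithmetic of the example above: it computes the dollar-weighted price average, the resulting dynamic stop level, and how far that exit sits below the initial $1 entry.

```python
# Same arithmetic as the worked example: the dynamic stop is measured from the
# (lower) price average, so the exit ends up much further below the first entry
# than the stated 15% stop loss.
fills = [(50, 1.00), (25, 0.90), (25, 0.80)]          # (dollars_spent, price)
price_average = sum(d * p for d, p in fills) / sum(d for d, _ in fills)
stop_price = price_average * (1 - 0.15)

print(price_average)           # 0.925
print(stop_price)              # 0.78625
print(1 - stop_price / 1.00)   # ~0.214 drop from the initial $1 entry
```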
Indicator plots show the calculated bases in white. The closest base below price is yellow. If that base is broken, it turns purple. Once a trade is entered, profit target is shown in silver and stop loss in red.
The Bayesian Q Oscillator
First of all, the biggest thanks to @tista and @KivancOzbilgic for publishing their open source public indicators Bayesian BBSMA + nQQE Oscillator. And a mighty round of applause for @MarkBench for once again being my superhero Pine Script guy that puts these awesome combination ideas and ES strategies in my head together. Now let me go ahead and explain what we have here.
I am going to call it the Bayesian Q Oscillator, I suppose. The goal of the script is to solve an issue both indicators suffer from on their own. QQE signals are not new, and the problem has often been false signals. They are good for scalping, but the difference between a quality move and a small to nearly nonexistent move following a signal is not so clear. Kivanc made his normalized version to help reduce this problem by adding colors to his histogram-type version that essentially represent whether price is in a trending move or a ranging structure. As you can see, I have kept this idea but instead opted for lines as the oscillator: two yellow lines (default color) indicate a ranging, sideways area, and red or green indicates trending up or down. I wanted to take this to the next level by combining it with the Bayesian probability oscillator that tista put together.
The Bayesian indicator has the opposite issue: it is a probability indicator that shows which candle or price movement is more likely to come next. Red rising means a possible down move soon, and green means an up move soon. I will not go into the complex details of this indicator, but I suggest others take a look at his and similar scripts to understand the idea behind them. The point I am driving at is that it shows probabilities or likelihood without an efficient signal device to match it. The original was in line form; here it is drawn as filled background colors.
The idea is that you can potentially get stronger and more accurate reversal signals with these two paired together. When you see a sell signal or cross with a towering or rising red background, it may be a good entry. The same goes for green. At the same time, it adds a second filter on top of the yellow lines that represent ranging: if you get a buy signal (for example) and have yellow lines along with a red rising or mountain-colored background, it is not only an indication of ranging, but also that a counter move may be coming based on the probabilities. Also, if you get into a good trade and see dual yellow QQE crosses with no color shown by the Bayesian background, it is possible it is only noise.
I have found them to work decently on the 1-hour timeframe. Let me know your experience.
I hope everyone takes a look at the originals to understand them. Full credit goes to those guys for this to be here. Let me know how it is working out for you.
Here are the original links.
bayesian
Normalized QQE
Larry Williams Strategies Indicator
This indicator is a trend following indicator. It plots some of the trend following strategies described by Larry Williams in his book 'Long Term Secrets to Short Term Trading'. Below are the types of trend following strategies you can trade using this indicator. These are notes taken directly from Larry Williams' book.
Short Term Low Strategy
Short Term Low - Any daily low with higher lows on each side of it.
Intermediate Term Low – Any short term low with higher short term lows on each side of it.
Long Term Low – Any intermediate term low with higher intermediate term lows on each side of it.
Conceptual pattern for best buying opportunity is when forming an intermediate term low higher than the last intermediate term low.
This setup can be used on all time frames. However since Larry Williams usually trades the daily chart, the daily chart is probably the best timeframe to trade using this strategy.
Entry point – High of the day that has a higher high on the right side of it.
(My interpretation: price crossing above the high of the previous day is the buy signal)
Target – Markets have a strong tendency to rally above the last intermediate term high by the same amount it moved from the last intermediate term high to the lowest point prior to advancing to new highs.
Trailing Stop – Set stop to most recent short term low, move up as new short term lows are formed. Can also use formation of next intermediate term high as an exit point.
A 'run' to the upside is over when price fails to move higher the next day and falls below the prior day's low.
Short Term High Strategy
Short Term High - Any daily high with lower highs on each side of it.
Intermediate Term High – Any short term high with lower short term highs on each side of it.
Long Term High – Any intermediate term high with lower intermediate term highs on each side of it.
Conceptual pattern for best selling opportunity is when forming an intermediate term high lower than the last intermediate term high.
This setup can be used on all time frames. However since Larry Williams usually trades the daily chart, the daily chart is probably the best timeframe to trade using this strategy.
Entry point – Low of the day that has a lower low on the right side of it.
(My interpretation: price crossing below the low of the previous day is the sell short signal)
Target – Markets have a strong tendency to fall below the last intermediate term low by the same amount it moved from the last intermediate term low to the highest point prior to declining to new lows.
Trailing Stop – Set stop to most recent short term high, move down as new short term highs are formed. Can also use formation of next intermediate term low as an exit point.
A 'run' to the downside is over when price fails to move lower the next day and rises above the prior day's high.
Trend Reversals
A trend change from down to up occurs when a short term high is exceeded on the upside, a trend change from up to down is identified by price going below the most recent low.
Can take these signals to make trades, but it is best to filter them with a confirmation or edge such as Trading Day of the Week, Trading Day of the Month, trendlines, etc. to cut down on false signals.
Three Bar High/Low System
Calculate a three bar moving average of the highs and a three bar moving average of the lows.
The strategy is to buy at the price of the three bar moving average of the lows - if the trend is positive according to the swing point trend identification technique - and take profits at the three bar moving average of the highs.
Selling is just the opposite. Sell short at the three bar moving average of the highs and take profits at the three bar moving average of the lows, using the trend identification technique above for confirmation.
This strategy can work on any timeframe, but was described as a daytrading system by Larry Williams.
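Below is a minimal Python sketch of this idea, assuming pandas OHLC data; the swing-point trend check is passed in as a boolean rather than re-implemented here.

```python
import pandas as pd

# Three Bar High/Low sketch: buy near the 3-bar average of the lows when the
# swing trend is up, take profit at the 3-bar average of the highs.
def three_bar_levels(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["buy_level"] = df["low"].rolling(3).mean()   # three-bar MA of the lows
    out["target"] = df["high"].rolling(3).mean()     # three-bar MA of the highs
    return out

def long_entry(bar, trend_is_up: bool) -> bool:
    # enter long if price trades down to the buy level while the trend is up
    return trend_is_up and bar["low"] <= bar["buy_level"]
```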
PRIME - ShadoW ZoneZ with RSI Levels
In this experimental study, we've taken RSI data, Volume Profile, and trend analysis, combining them into one unique package that allows a trader to analyze market trend lines and their proposed channels, trend momentum through candle color augmentation similar to "Pulse", and visible Volume index price levels on the chart for the current sequence. Below are explanations of each function within the system.
The Semafor is used to spot future multi-level Supports and Resistance zones.
It is also useful to spot HL or LL or HH or LH zones at different Depth settings.
The red zones are the extreme places where the market has a higher chance of reversing, while the green zones come from the lowest Depth setting, with lower chances of a market reversal.
Automatic Trend Lines
The indicator takes in 2 timeframes to detect High and Low values from which to draw the trend lines of each timeframe.
As the values change with price movement, the lines are updated. They are color coded for uptrend and downtrend based on the direction of each individual line. Trend lines can also be set to use only the default color via the configurations panel.
- Toggle on/off Color Coded
- Change Default, Uptrend, Downtrend color
- Change Line Width
- Change Line Style
- Toggle on/off Line Extensions
- Change Extended Line Width
- Change Extended Line Style
- Toggle On/Off labels for 7 data points of each timeframe
Automatic Trend Sights
This is a neat feature that may help you get a better feel for the direction the current movement is heading towards in correlation with the short or medium length timeframe trends. The sight draws a line from the middle vertical point of the trend coordinates towards the current price. They are toggled off by default but can be enabled in the configurations panel.
- Toggle on/off sight on each timeframe
- Change Width
- Change Line Style
Support & Resistance Levels, the main aim of the study. Level calculations are based on Relative Strength Index ( RSI ) threshold levels of oversold/overbought and bull/bear zones, where all threshold values are customizable through the user dialog box. Background of the levels can be colored optionally.
RSI Weighted Colored Bars and/or Mark Overbought/Oversold Bars: bar colors can be painted to better emphasize RSI values - darker colors when the oscillator is in oversold/overbought zones, lighter colors when readings are below/above the bull/bear zone respectively, and unchanged otherwise. Besides the colors, with the “Display RSI Overbought/Oversold Price Bars” option, little triangle shapes can be plotted on top of or below the bars when RSI is in oversold/overbought zones.
Disclaimer:
Trading success is all about following your trading strategy; the indicators should fit within your trading strategy and not be traded upon solely.
The script is for informational and educational purposes only. Use of the script does not constitute professional and/or financial advice. You alone have the sole responsibility of evaluating the script output and risks associated with the use of the script. In exchange for using the script, you agree not to hold dgtrd TradingView user liable for any possible claim for damages arising from any decision you make based on use of the script
Monte Carlo Range Forecast [DW]
This is an experimental study designed to forecast the range of price movement from a specified starting point using a Monte Carlo simulation.
Monte Carlo experiments are a broad class of computational algorithms that utilize random sampling to derive real world numerical results.
These types of algorithms have a number of applications in numerous fields of study including physics, engineering, behavioral sciences, climate forecasting, computer graphics, gaming AI, mathematics, and finance.
Although the applications vary, there is a typical process behind the majority of Monte Carlo methods:
-> First, a distribution of possible inputs is defined.
-> Next, values are generated randomly from the distribution.
-> The values are then fed through some form of deterministic algorithm.
-> And lastly, the results are aggregated over some number of iterations.
In this study, the Monte Carlo process generates a distribution of aggregate pseudorandom linear price returns summed over a user defined period, then plots standard deviations of the outcomes from the mean outcome to generate forecast regions.
The pseudorandom process used in this script relies on a modified Wichmann-Hill pseudorandom number generator (PRNG) algorithm.
Wichmann-Hill is a hybrid generator that uses three linear congruential generators (LCGs) with different prime moduli.
Each LCG within the generator produces an independent, uniformly distributed number between 0 and 1.
The three generated values are then summed and modulo 1 is taken to deliver the final uniformly distributed output.
Because of its long cycle length, Wichmann-Hill is a fantastic generator to use on TV since it's extremely unlikely that you'll ever see a cycle repeat.
The resulting pseudorandom output from this generator has a minimum repetition cycle length of 6,953,607,871,644.
Fun fact: Wichmann-Hill is a widely used PRNG in various software applications. For example, Excel 2003 and later uses this algorithm in its RAND function, and it was the default generator in Python up to v2.2.
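For readers unfamiliar with the algorithm, here is a minimal Python sketch of the textbook Wichmann-Hill generator; the script itself uses a modified variant, so treat this only as an illustration of the three-LCG, sum-modulo-1 idea described above.

```python
class WichmannHill:
    def __init__(self, s1, s2, s3):
        # three LCG states, each seeded with a nonzero integer
        self.s1, self.s2, self.s3 = s1, s2, s3

    def next(self):
        # three LCGs with different prime moduli
        self.s1 = (171 * self.s1) % 30269
        self.s2 = (172 * self.s2) % 30307
        self.s3 = (170 * self.s3) % 30323
        # sum the three uniform values and take modulo 1 for the final output
        return (self.s1 / 30269 + self.s2 / 30307 + self.s3 / 30323) % 1.0

rng = WichmannHill(123, 456, 789)
print([round(rng.next(), 6) for _ in range(5)])
```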
The generation algorithm in this script takes the Wichmann-Hill algorithm, and uses a multi-stage transformation process to generate the results.
First, a parent seed is selected. This can either be a fixed value, or a dynamic value.
The dynamic parent value is produced by taking advantage of Pine's timenow variable behavior. It produces a variable parent seed by using a frozen ratio of timenow/time.
Because timenow always reflects the current real time when frozen and the time variable reflects the chart's beginning time when frozen, the ratio of these values produces a new number every time the cache updates.
After a parent seed is selected, its value is then fed through a uniformly distributed seed array generator, which generates multiple arrays of pseudorandom "children" seeds.
The seeds produced in this step are then fed through the main generators to produce arrays of pseudorandom simulated outcomes, and a pseudorandom series to compare with the real series.
The main generators within this script are designed to (at least somewhat) model the stochastic nature of financial time series data.
The first step in this process is to transform the uniform outputs of the Wichmann-Hill into outputs that are normally distributed.
In this script, the transformation is done using an estimate of the normal distribution quantile function.
Quantile functions, otherwise known as percent-point or inverse cumulative distribution functions, specify the value of a random variable such that the probability of the variable being within the value's boundary equals the input probability.
The quantile equation for a normal probability distribution is μ + σ(√2)erf^-1(2(p - 0.5)) where μ is the mean of the distribution, σ is the standard deviation, erf^-1 is the inverse Gauss error function, and p is the probability.
Because erf^-1() does not have a simple, closed form interpretation, it must be approximated.
To keep things lightweight in this approximation, I used a truncated Maclaurin Series expansion for this function with precomputed coefficients and rolled out operations to avoid nested looping.
This method provides a decent approximation of the error function without completely breaking floating point limits or sucking up runtime memory.
Note that there are plenty of more robust techniques to approximate this function, but their memory needs vary. I chose this method specifically because of runtime favorability.
To generate a pseudorandom approximately normally distributed variable, the uniformly distributed variable from the Wichmann-Hill algorithm is used as the input probability for the quantile estimator.
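As a rough illustration of that transformation, here is a Python sketch of a normal quantile built on a truncated Maclaurin series for the inverse error function. The coefficient recurrence is the standard one; the truncation point and the precomputed-coefficient optimization used in the actual script are not reproduced here.

```python
from math import pi, sqrt

def erfinv_series(z, terms=12):
    # c_0 = 1, c_k = sum_{m=0}^{k-1} c_m * c_{k-1-m} / ((m + 1) * (2m + 1))
    c = [1.0]
    for k in range(1, terms):
        c.append(sum(c[m] * c[k - 1 - m] / ((m + 1) * (2 * m + 1)) for m in range(k)))
    x = sqrt(pi) * z / 2.0
    return sum(c[k] / (2 * k + 1) * x ** (2 * k + 1) for k in range(terms))

def normal_quantile(p, mu=0.0, sigma=1.0):
    # mu + sigma * sqrt(2) * erfinv(2(p - 0.5)), per the formula quoted above
    return mu + sigma * sqrt(2.0) * erfinv_series(2.0 * (p - 0.5))

print(normal_quantile(0.9))  # ~1.28; series accuracy degrades as p nears 0 or 1
```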
Now from here, we get a pretty decent output that could be used itself in the simulation process. Many Monte Carlo simulations and random price generators utilize a normal variable.
However, if you compare the outputs of this normal variable with the actual returns of the real time series, you'll find that the variability in shocks (random changes) doesn't quite behave like it does in real data.
This is because most real financial time series data is more complex. Its distribution may be approximately normal at times, but the variability of its distribution changes over time due to various underlying factors.
In light of this, I believe that returns behave more like a convoluted product distribution rather than just a raw normal.
So the next step to get our procedurally generated returns to more closely emulate the behavior of real returns is to introduce more complexity into our model.
Through experimentation, I've found that a return series more closely emulating real returns can be generated in a three step process:
-> First, generate multiple independent, normally distributed variables simultaneously.
-> Next, apply pseudorandom weighting to each variable ranging from -1 to 1, or some limits within those bounds. This modulates each series to provide more variability in the shocks by producing product distributions.
-> Lastly, add the results together to generate the final pseudorandom output with a convoluted distribution. This adds variable amounts of constructive and destructive interference to produce a more "natural" looking output.
In this script, I use three independent normally distributed variables multiplied by uniform product distributed variables.
The first variable is generated by multiplying a normal variable by one uniformly distributed variable. This produces a bit more tailedness (kurtosis) than a normal distribution, but nothing too extreme.
The second variable is generated by multiplying a normal variable by two uniformly distributed variables. This produces moderately greater tails in the distribution.
The third variable is generated by multiplying a normal variable by three uniformly distributed variables. This produces a distribution with heavier tails.
For additional control of the output distributions, the uniform product distributions are given optional limits.
These limits control the boundaries for the absolute value of the uniform product variables, which affects the tails. In other words, they limit the weighting applied to the normally distributed variables in this transformation.
All three sets are then multiplied by user defined amplitude factors to adjust presence, then added together to produce our final pseudorandom return series with a convoluted product distribution.
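A hedged Python sketch of this three-step generation process is shown below; the amplitude defaults, weight limits, and the use of Python's built-in RNG are stand-ins for illustration, not the script's internals.

```python
import random

def convoluted_return(rng, amp=(1.0, 1.0, 1.0), weight_limit=1.0):
    out = 0.0
    for i, a in enumerate(amp):
        normal = rng.gauss(0.0, 1.0)            # normally distributed variable
        weight = 1.0
        for _ in range(i + 1):                  # 1, 2, or 3 uniform weights -> heavier tails
            weight *= rng.uniform(-weight_limit, weight_limit)
        out += a * normal * weight              # amplitude-scaled, then summed together
    return out

rng = random.Random(42)
returns = [convoluted_return(rng) for _ in range(5000)]
# Summing these simple returns over the forecast horizon yields one simulated outcome.
```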
Once we have the final, more "natural" looking pseudorandom series, the values are recursively summed over the forecast period to generate a simulated result.
This process of generation, weighting, addition, and summation is repeated over the user defined number of simulations with different seeds generated from the parent to produce our array of initial simulated outcomes.
After the initial simulation array is generated, the max, min, mean and standard deviation of this array are calculated, and the values are stored in holding arrays on each iteration to be called upon later.
Reference difference series and price values are also stored in holding arrays to be used in our comparison plots.
In this script, I use a linear model with simple returns rather than compounding log returns to generate the output.
The reason for this is that in generating outputs this way, we're able to run our simulations recursively from the beginning of the chart, then apply scaling and anchoring post-process.
This allows a greater conservation of runtime memory than the alternative, making it more suitable for doing longer forecasts with heavier amounts of simulations in TV's runtime environment.
From our starting time, the previous bar's price, volatility, and optional drift (expected return) are factored into our holding arrays to generate the final forecast parameters.
After these parameters are computed, the range forecast is produced.
The basis value for the ranges is the mean outcome of the simulations that were run.
Then, quarter standard deviations of the simulated outcomes are added to and subtracted from the basis up to 3σ to generate the forecast ranges.
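In other words, the plotted levels are just the mean outcome plus and minus quarter-σ steps out to ±3σ, roughly as in this sketch:

```python
# Quarter-sigma bands around the mean simulated outcome, out to +/-3 sigma.
def forecast_ranges(mean_outcome, stdev_outcome):
    steps = [i * 0.25 for i in range(1, 13)]   # 0.25σ ... 3.0σ
    upper = [mean_outcome + s * stdev_outcome for s in steps]
    lower = [mean_outcome - s * stdev_outcome for s in steps]
    return lower[::-1] + [mean_outcome] + upper

print(forecast_ranges(100.0, 2.0))
```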
All of these values are plotted and colorized based on their theoretical probability density. The most likely areas are the warmest colors, and least likely areas are the coolest colors.
An information panel is also displayed at the starting time which shows the starting time and price, forecast type, parent seed value, simulations run, forecast bars, total drift, mean, standard deviation, max outcome, min outcome, and bars remaining.
The interesting thing about simulated outcomes is that although the probability distribution of each simulation is not normal, the distribution of different outcomes converges to a normal one with enough steps.
In light of this, the probability density of outcomes is highest near the initial value + total drift, and decreases the further away from this point you go.
This makes logical sense since the central path is the easiest one to travel.
Given the ever changing state of markets, I find this tool to be best suited for shorter term forecasts.
However, if the movements of price are expected to remain relatively stable, longer term forecasts may be equally as valid.
There are many possible ways for users to apply this tool to their analysis setups. For example, the forecast ranges may be used as a guide to help users set risk targets.
Or, the generated levels could be used in conjunction with other indicators for meaningful confluence signals.
More advanced users could even extrapolate the functions used within this script for various purposes, such as generating pseudorandom data to test systems on, perform integration and approximations, etc.
These are just a few examples of potential uses of this script. How you choose to use it to benefit your trading, analysis, and coding is entirely up to you.
If nothing else, I think this is a pretty neat script simply for the novelty of it.
----------
How To Use:
When you first add the script to your chart, you will be prompted to confirm the starting date and time, number of bars to forecast, number of simulations to run, and whether to include drift assumption.
You will also be prompted to confirm the forecast type. There are two types to choose from:
-> End Result - This uses the values from the end of the simulation throughout the forecast interval.
-> Developing - This uses the values that develop from bar to bar, providing a real-time outlook.
You can always update these settings after confirmation as well.
Once these inputs are confirmed, the script will boot up and automatically generate the forecast in a separate pane.
Note that if there is no bar of data at the time you wish to start the forecast, the script will automatically detect and use the next available bar after the specified start time.
From here, you can now control the rest of the settings.
The "Seeding Settings" section controls the initial seed value used to generate the children that produce the simulations.
In this section, you can control whether the seed is a fixed value, or a dynamic one.
Since selecting the dynamic parent option will change the seed value every time you change the settings or refresh your chart, there is a "Regenerate" input built into the script.
This input is a dummy input that isn't connected to any of the calculations. The purpose of this input is to force an update of the dynamic parent without affecting the generator or forecast settings.
Note that because we're running a limited number of simulations, different parent seeds will typically yield slightly different forecast ranges.
When using a small number of simulations, you will likely see a higher amount of variance between differently seeded results because smaller numbers of sampled simulations yield a heavier bias.
The more simulations you run, the smaller this variance will become since the outcomes become more convergent toward the same distribution, so the differences between differently seeded forecasts will become more marginal.
When using a dynamic parent, pay attention to the dispersion of ranges.
When you find a set of ranges that is dispersed how you like with your configuration, set your fixed parent value to the parent seed that shows in the info panel.
This will allow you to replicate that dispersion behavior again in the future.
An important thing to note when settings alerts on the plotted levels, or using them as components for signals in other scripts, is to decide on a fixed value for your parent seed to avoid minor repainting due to seed changes.
When the parent seed is fixed, no repainting occurs.
The "Amplitude Settings" section controls the amplitude coefficients for the three differently tailed generators.
These amplitude factors will change the difference series output for each simulation by controlling how aggressively each series moves.
When "Adjust Amplitude Coefficients" is disabled, all three coefficients are set to 1.
Note that if you expect volatility to significantly diverge from its historical values over the forecast interval, try experimenting with these factors to match your anticipation.
The "Weighting Settings" section controls the weighting boundaries for the three generators.
These weighting limits affect how tailed the distributions in each generator are, which in turn affects the final series outputs.
The maximum absolute value range for the weights is -1 to 1. When "Limit Generator Weights" is disabled, this is the range that is automatically used.
The last set of inputs is the "Display Settings", where you can control the visual outputs.
From here, you can select to display either "Forecast" or "Difference Comparison" via the "Output Display Type" dropdown tab.
"Forecast" is the type displayed by default. This plots the end result or developing forecast ranges.
There is an option with this display type to show the developing extremes of the simulations. This option is enabled by default.
There's also an option with this display type to show one of the simulated price series from the set alongside actual prices.
This allows you to visually compare simulated prices alongside the real prices.
"Difference Comparison" allows you to visually compare a synthetic difference series from the set alongside the actual difference series.
This display method is primarily useful for visually tuning the amplitude and weighting settings of the generators.
There are also info panel settings on the bottom, which allow you to control size, colors, and date format for the panel.
It's all pretty simple to use once you get the hang of it. So play around with the settings and see what kinds of forecasts you can generate!
----------
ADDITIONAL NOTES & DISCLAIMERS
Although I've done a number of things within this script to keep runtime demands as low as possible, the fact remains that this script is fairly computationally heavy.
Because of this, you may get random timeouts when using this script.
This could be due to either random drops in available runtime on the server, using too many simulations, or running the simulations over too many bars.
If it's just a random drop in runtime on the server, hide and unhide the script, re-add it to the chart, or simply refresh the page.
If the timeout persists after trying this, then you'll need to adjust your settings to a less demanding configuration.
Please note that no specific claims are being made in regards to this script's predictive accuracy.
It must be understood that this model is based on randomized price generation with assumed constant drift and dispersion from historical data before the starting point.
Models like these do not consider the real world factors that may influence price movement (economic changes, seasonality, macro-trends, instrument hype, etc.), nor the changes in sample distribution that may occur.
In light of this, it's perfectly possible for price data to exceed even the most extreme simulated outcomes.
The future is uncertain, and becomes increasingly uncertain with each passing point in time.
Predictive models of any type can vary significantly in performance at any point in time, and nobody can guarantee any specific type of future performance.
When using forecasts in making decisions, DO NOT treat them as any form of guarantee that values will fall within the predicted range.
When basing your trading decisions on any trading methodology or utility, predictive or not, you do so at your own risk.
No guarantee is being issued regarding the accuracy of this forecast model.
Forecasting is very far from an exact science, and the results from any forecast are designed to be interpreted as potential outcomes rather than anything concrete.
With that being said, when applied prudently and treated as "general case scenarios", forecast models like these may very well be potentially beneficial tools to have in the arsenal.
FIBS S/R Indicator
Hello,
I've decided to publish a new script. The previous version of this script was removed by admins for breaking community rules.
So I present to you the Fibonacci Support / Resistance.
1. How does it work
Ratio plots
I first take the pivot lookback input and search for pivot highs and lows.
Then it takes a second lookback to find the highest high and lowest low to establish the top/bottom range.
Then, using the top and bottom, I plot the ratios provided as input. It defaults to the 5 most relevant ratios I've found (Fibonacci):
Ratio 0 = 0 - can't be changed
Ratio 1 = 0.5
Ratio 2 = 0.618
Ratio 3 = 1
Ratio 4 = 1.618
Ratio 5 = 2.618
Any changes done to these ratios should be in order, otherwise conditions could get messed up. So R1 needs to be the lowest and R5 the highest.
Also the same ratios are used in reverse as negative ratios.
There is an option to plot all ratios, but that gets really confusing for me - maybe it works for you. By default, certain conditions are set so that as we go up new resistance ratios get displayed and as we go down we see new resistance plots.
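As a rough illustration of how such levels could be derived from the detected range, here is a hedged Python sketch; the exact ratio-to-price mapping used by the script is an assumption here.

```python
# Hypothetical mapping: positive ratios measured up from the bottom of the range,
# negative ratios mirrored below it.
def ratio_levels(top, bottom, ratios=(0.0, 0.5, 0.618, 1.0, 1.618, 2.618)):
    span = top - bottom
    positive = {f"R{i}": bottom + r * span for i, r in enumerate(ratios)}
    negative = {f"-R{i}": bottom - r * span for i, r in enumerate(ratios) if r > 0}
    return positive, negative

pos, neg = ratio_levels(top=60000.0, bottom=50000.0)
print(pos)  # R0=50000, R1=55000, R2=56180, R3=60000, R4=66180, R5=76180
print(neg)
```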
Trendlines
I've also added some automatic trendline plots with breakout warning labels based on the pivots high and low. Start and end for trendlines can be changed via inputs.
Labels can be deactivated via input. In an older version the trendlines and labels were not removed from the chart, but I felt like there was too much information.
Overcooked/Undercooked
I've also added some fills and background colors that indicate if the price action is above R5 or below the negative R5 ratio. This usually indicates some "overcooking" or "undercooking".
I've noticed that after a "crossunder"/"crossover" of the top/bottom ratios, price goes into consolidation or it dumps. So then I plot a bgcolor to signal that.
2. How to use it
Using the plot lines we can determine where we have support and resistance. I found that the best way to use the default ratio values is on the 1H chart. It is very good for trading crypto because of the current situation in the market, where a lot of new people are entering the space and volatility and sentiment make swings respect the Fibonacci ratios.
3. Examples
For instance lets look at BINANCE:BTCUSDT .
On the left we see that the price action between 20 and 21 February was "overcooked". So after we got the "crossunder" of R5, the signal was triggered and we got a small red candle, followed by a small dip, then a small bounce and a dump.
If we also look at MF-RSI we can also see we got multiple bear divs.
Lets entertain the idea that we went short at ~57.1k as soon as we get signaled and it starts dumping.
Where does it stop ?
We can see it went all the way down to Negative R5 ratio. Normally that should signal "undercooking" but this was not triggered as it did not close under it (signaled in green).
We can also see that previous support now becomes resistance (signaled in red).
If we take a look at BINANCE:ETHUSDT , we do see that the "undercooking" was triggered here.
I will be publishing a more detailed Idea with examples of using this on the BINANCE:BTCUSDT chart in combination with Volume and other technical analysis.
Use with caution; this is not a 100% accurate signal indicator, as the markets do what they want. But by using this in combination with other indicators like MF-RSI, EMAs and regular patterns, we can get some targets for support/resistance.
I'm trying to create a strategy based on this indicator but I'm not getting very good results. Best results were on the 15 min chart with gross profits around ~50%.
Please try to play around with the inputs and let me know if you find something interesting, maybe I can incorporate new features in the indicator.
You can find the MF-RSI indicator here
BOSCILLATOR. A BOSS OSCILLATOR
I would like to first say I do not own the indicator pieces. I would like to personally give thanks and credit to @MarkBench for coding this indicator and helping to get my vision for this system finally published and usable by anyone. I would also like to thank @lazy bear and @ChrisMoody for bringing the Firefly oscillator, the SCHAFF TREND and the PPO price percentage oscillator to TradingView, and @scilentor for his version of Godmode with LSMA. Thanks to @Shizaru for bringing the FRAMA moving average (which we have adopted into the PPO as one of the base selections for the first time, as well as the ALMA). Divergences have also been added, and components of the firefly have been removed, such as the histogram. I have added two oscillators in the picture. The bottom one uses the standard settings; the one above is how I prefer mine to look after tweaking the settings.
Before I get into explaining how it's used, I want to say all the indicators are open and none are privately owned, or at least they are owned by the individuals who brought them to TradingView. Any due permission is granted at my disclosure. I also want to say this is not your typical mashup of indicators, as there is a very clear way to view and use this specifically. Also, original tools from their original scripts have been improved. For example, for the PPO being used we have added the FRAMA and ALMA moving average basis options, which it did not have before. And now everything has clear divergences and some other minor changes. But here are the rules and examples.
THE BOSCILLATOR - A MULTI-LEVEL CONFLUENCE/CONFIRMATION FILTER VISUALIZATION
Some shorthand
(Main oscillator - firefly)
(background wave thing - PPO )
(the red vertical up and down line with red and green dots - STC )
(the blue, yellow and red dots - warning dots)
WHO IS THIS INDICATOR FOR? - This indicator itself is not meant to be a signal giver to buy or sell right now, even though it could be, and some of the original scripts are used as such. This indicator is actually meant to be a VISUAL CONFIRMATION & FILTER for trades taken with other methods outside of this indicator. What are some of those methods that may benefit from having this? Pivot point traders, FIB traders, Bollinger band traders, moving average traders... just to name a few. This indicator is meant to let the trader see, at a quick glance, the condition of many different elements outside of the main price and chart, and determine if that trade looks like it has too much risk or if it looks suitable. It also provides a series of confirmations that could be used for adding to a position at different levels at the trader's discretion.
OPTIMAL CONDITIONS FOR CONSIDERING A SHORT = The PPO is orange/red + the STC is at the TOP + the Firefly is above the midline. The warning dots are being printed at the top. There is regular or hidden bearish divergence present.
OPTIMAL CONDITIONS FOR CONSIDERING A LONG = The PPO is light/dark green + the STC is at the BOTTOM + the Firefly is below the midline. The warning dots are being printed at the bottom. There is regular or hidden bullish divergence present.
Triggers for scaling/adding into your position = Keeping in mind that this oscillator on its own is not meant to be the sole reason for taking a trade, here are some triggers you will see for getting into position (preferably with the optimal conditions being met): the firefly flips from a green line into a red line and vice versa; the firefly crosses the midline up or down; the STC begins going up/down and triggers a green or red dot while crossing one of the levels at 20 or 80; the warning dots begin to print lower/higher than the dot before last; the PPO shifts from one color to the next in the favored direction of the trade you wish to make.
Signs for taking profit and protecting your trade = The dots begin to print, the PPO changes colors at the top or bottom, the STC arrives at the top.
FILTER SITUATIONS TO AVOID TRADES = It is wise not to take a trade if the PPO and the firefly do not agree. For example, if the PPO is showing green yet the firefly is still red, it may be an indication that it is getting a bit late for you to enter the trade. The same goes for opposing divergences and warning dots contradicting the trade you are looking at. The STC already being at the bottom or top may be a small indication that the trade may already be a little too ripe, but on its own this is not always the case.
When selecting the PPO settings and moving average, you are going to want it to be in favor of what you are trying to accomplish. If you are on low timeframes and trying to swing or scalp trade, chances are you want a reactive MA setting that is responsive. I would recommend the HULL, ALMA, TEMA, DEMA. For the higher timeframes, the EMA or the T3 WDma can be quite patient and helpful as a constant reminder of caution.
Some notes - for swing and scalp trading, in my experience the PPO moving average basis sees more responsive changes with the FRAMA, ALMA, HULL settings. For entering a trade, having at least a couple of your triggers present increases the success rate by a lot.
This chart illustrates the usefulness of having a zero lag function for the firefly. The firefly should not be taken for signals or trades itself. However, it is the most precise finder of divergences within the system. It is always good to flip zero lag on and off just to take a quick look for divergences you might have missed.
This chart illustrates the general visual look and order of events to guide you along your way. It starts with the PPO turning green, red or orange, which is potentially time to get out of your current trade. Then it switches colors when a reversal begins, and that is when you want to see, at the same time, the STC, the firefly, and the lower caution dots coming in around the same area (highlighted in blue squares). Near the end you see a red box. This is a filter aspect. The PPO is green, yet the others are saying down/short. This does not mean it must be a long; however, it is a great warning to maybe avoid getting too bearish for the downside in that time. You want the PPO to line up with the others, and it should be visually apparent that they all want to go the same way.
Here is a list of some key elements (before changes this script made) of parts this oscillator includes. My original publication of my oscillation setup was blocked by the mods here.. this one however includes a large variety of items that have been altered from their original formats and a well-explained trading system to use it with.
// Firefly
Firefly Oscillator
// PPO
PPO PercentileRank Mkt Tops & Bottoms (@PuppyTherapy)
// Divergence
Divergence Indicator (any oscillator)
// Godmode
Godmode3.2+LSMA
// Schaff
Schaff Trend Cycle
// Frama
(FRAMA) Fractal Adaptive Moving Average
Better OBV
OBV with William C. Garrett's Approximation
In the classical OBV (On-Balance Volume) indicator, it simply takes the idea from traditional tape reading - treat the "up tick" as Buy, "down tick" as Sell, and it assumes no change in price as neutral* (*which is not the case in tape reading).
When it comes to interpreting the daily volume as such, errors will add up cumulatively. For example, there are days when a Doji Star with high volume closes merely one cent higher than yesterday's price, and the whole day's volume would be taken as BUY volume...
Here is a gentleman, William C. Garrett, who attempted to break down the daily volume into two parts in his book - "Torque Analysis of Stock Market Cycle".
Published indicator has two modes: Cumulative and Time Segmented. Time Segmented Volume (TSV) - performs a MACD operation on the Garrett Money Flow.
Note on Divergence:
When using an indicator such as Time Segmented Money Flow, divergences will surely occur on and off. This is where Wyckoff's third principle comes into play - "Effort vs Result" that is not matching, meaning that the accumulation of shares goes in one direction while the price goes in another.
On Balance Volume Fields
The On Balance Volume (OBV) indicator was developed by Joseph E. Granville and published first in his book "New Key to Stock Market Profits" in 1963. It uses volume to determine the momentum of an asset. The base concept of OBV is - in simple terms - you take a running total of the volume and either add or subtract the current timeframe's volume depending on whether the market goes up or down. The simplest use cases only use the line built that way to confirm the direction of price, but the possibilities and applications of OBV go far beyond that and are (at least to my knowledge) not found in existing indicators available on this platform.
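For reference, here is a minimal pandas sketch of that textbook running total; the field and trend analysis described below is built on top of this basic line.

```python
import pandas as pd

# Classic OBV: add the bar's volume when price closes up, subtract it when
# price closes down, leave it unchanged otherwise.
def on_balance_volume(close: pd.Series, volume: pd.Series) -> pd.Series:
    direction = close.diff().apply(lambda d: 1 if d > 0 else (-1 if d < 0 else 0))
    return (direction * volume).cumsum()

df = pd.DataFrame({
    "close":  [10.0, 10.5, 10.3, 10.3, 10.8],
    "volume": [1000, 1500, 1200,  900, 2000],
})
print(on_balance_volume(df["close"], df["volume"]))  # 0, 1500, 300, 300, 2300
```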
If you are interested in getting a deeper understanding of OBV, I recommend reading the above-mentioned book by Granville. All the features described below are taken directly from the book or are inspired by it (deviations will be marked accordingly). If you have no prior experience with OBV, I recommend starting simple: read an easy introduction (e.g. the On-Balance Volume (OBV) Definition from Investopedia) and start applying the basic concepts first before heading into the more advanced analysis of OBV fields and trends.
Markets and Timeframes
As the OBV is "just" a momentum indicator, it should be applicable to any market and timeframe.
As a long term investor, my experience is limited to the longer timeframes (primarily daily), which is also how Granville applies it. But that is most likely due to the time it was developed and the lack of lower timeframe data at that point in time. I don't see why it wouldn't be applicable to any timeframe, but cannot speak from experience here so do your own research and let me know. Likewise, I invest in the crypto markets almost exclusively and hence this is where my experience with this indicator comes from.
Feature List
As a general note before starting into the description of the individual features: I use the colors and values of the default settings of the indicator to describe it. The general look and feel obviously can be customized (and I highly recommend doing so, as this is a very visual representation of volume, and it should suit your way of looking at a chart) and I also tried to make the individual features as customizable as possible.
Also, all additions to the OBV itself can be turned off so that you're left with just the OBV line (although if that's what you want, I recommend a version of the indicator with less overhead).
Fields
Fields are defined as successive UPs or DOWNs on the OBV. An UP is any OBV reading above the last high pivot and subsequently a DOWN is any reading below the last low pivot. An UP-field is the time from the first UP after a DOWN-field to the first DOWN (not including). The same goes for a DOWN field but vice versa.
The field serves the same purpose as the OBV itself: to indicate momentum direction. I haven't found much use for the fields themselves other than serving as a more smoothed view of the current momentum. The real power of the fields emerges when you start to determine larger trends off of them (as you will see soon).
Therefore, the fields are displayed on the indicator as background colors (UP = green, DOWN = red), but only very faintly so as not to distract too much from the other parts of the indicator.
Major Volume Trend
The major volume trend - which Granville says is the one that tends to precede price - is determined as the succession of the highest highs and lowest lows of UP and DOWN fields. It is represented by the colors of the numbers printed at the highs and lows of the fields.
A "Rising" trend is defined as the highest high of an UP field being higher than the highest high of the last UP field, and the lowest low of the last DOWN field being higher than the lowest low of the prior DOWN field - and vice versa for a "Falling" trend. If the trend has neither a rising nor a falling pattern, it is said to be "Doubtful". The colors are indicated as follows (a small sketch of this classification follows the list):
Rising = green
Falling = red
Doubtful = blue
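A minimal sketch of that classification, given the extremes of the two most recent UP and DOWN fields:

```python
# Rising / Falling / Doubtful classification as described above.
def major_volume_trend(up_high, prev_up_high, down_low, prev_down_low):
    if up_high > prev_up_high and down_low > prev_down_low:
        return "Rising"    # green
    if up_high < prev_up_high and down_low < prev_down_low:
        return "Falling"   # red
    return "Doubtful"      # blue

print(major_volume_trend(120, 100, 90, 80))   # Rising
print(major_volume_trend(120, 100, 70, 80))   # Doubtful
```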
ZigZag Swing count
The swing count is determined by counting the number of swings within a trend (as described above) and is represented by the numbers above the highs and lows of the fields. It determines the length and thus strength of a trend.
In general there are two ways to determine the count. The first one is by counting the swings between pivots and the second one by counting the swings between highs and lows of fields. This indicator represents the SECOND one as it represents the longer term trend (which I'm more interested in as it denotes a longer term perspective).
However, the ZigZag count has three applications on the OBV. The "simple ZigZag" is a count of three swings which mainly tells you that the shorter term momentum of the market has changed and the current trend is weakening. This doesn't mean it will reverse. A count of three downs is still healthy if it occurs on a strong uptrend (and vice versa) and it should primarily serve as a sign of caution. If the count increases beyond three, the last trend is weakening considerably, and you should probably take action.
The second count to look out for is five swings - the "compound ZigZag". If this goes hand in hand with breaking a major support/resistance on the OBV it can offer a buying/selling opportunity in the direction of the trend. Otherwise, there's a good chance that this is a reversal signal.
The third count is nine. To quote Granville directly: "there is a very strong tendency FOR MAJOR REVERSAL OF TREND AFTER THE NINTH SWING" (emphasis by the author). This is something I look out for and get cautious about, although I have found the signal to be weak in an overextended market. I have observed counts of 10 and even 12 which did not result in a major reversal, and the market trended further after a short period of time. This is still a major sign of caution and should not be taken lightly.
Moving average
Although Granville talks only briefly about averages and the only mention of a specific one is the 10MA, I found moving averages to be a very valuable addition to my analysis of the OBV movements.
The indicator uses three Exponential Moving Averages: a long term one to determine the general direction and two short term ones to determine the momentum of the trend. Especially for the latter two, keep in mind that those are very indirect - they are indicators of an indicator, after all - and they should not necessarily be used as support or resistance (although that might sometimes be helpful). I recommend paying most attention to the long term average, as I've found it to be very accurate when determining the long term trend of a market (even better than the same indicator on price).
If the OBV is above the long term average, the space between OBV and average is filled green and filled red if below. The colors and defaults for the averages are:
long term, 144EMA, green
short term 1, 21EMA, blue
short term 2, 55EMA, red
Divergences
This is a very rudimentary adaptation of the standard TradingView "Divergence Indicator". I find it helpful to have these on the radar, but do not actively use them (as in having a strategy based on OBV/price divergence). This is something that I would eventually pick up in a later version of the indicator if there is any demand for it, or if I find the time to look into strategies based on this.
Comparison line
A small but very helpful addition to the indicator is a horizontal line that traces the current OBV value in real time, which makes it very easy to compare the current value of the OBV to historic values (which is a study I can highly recommend).
String Manipulation Framework [PineCoders FAQ]
█ OVERVIEW
This script provides string manipulation functions to help Pine coders.
█ FUNCTIONS PROVIDED
f_strLeft(_str, _n)
Function returning the leftmost `_n` characters in `_str`.
f_strRight(_str, _n)
Function returning the rightmost `_n` characters in `_str`.
f_strMid(_str, _from, _to)
Function returning the substring of `_str` from character position `_from` to `_to` inclusively.
f_strLeftOf(_str, _of)
Function returning the sub-string of `_str` to the left of the `_of` separating character.
f_strRightOf(_str, _of)
Function returning the sub-string of `_str` to the right of the `_of` separating character.
f_strCharPos(_str, _chr)
Function returning the position of the first occurrence of `_chr` in `_str`, where the first character position is 0. Returns -1 if the character is not found.
f_strReplace(_src, _pos, _str)
Function that replaces a character at position `_pos` in the `_src` string with the `_str` character or string.
f_tickFormat()
Function returning a format string usable with `tostring()` to round a value to the symbol's tick precision.
f_tostringPad(_val, _fmt)
Function returning a string representation of a numeric `_val` using a special `_fmt` string allowing all strings to be of the same width, to help align columns of values.
`f_tostringPad()`
Using the functions should be straightforward, but `f_tostringPad()` requires more explanations. Its purpose is to help coders produce columns of fixed-width string representations of numbers which can be used to produce columns of numbers that vertically align neatly in labels, something that comes in handy when, for example, you need to center columns, yet still produce numbers of various lengths that nonetheless align.
While the formatting string used with this function resembles the one used in tostring() , it has a few additional characteristics:
• The question mark (" ? ") is used to indicate that padding is needed.
• If negative numbers must be handled by the function, the first character of the formatting string must be a minus sign ("-"),
otherwise the unary minus sign of negative numbers will be stripped out.
• You will produce more predictable results by using "0" rather than "#" in the formatting string.
You can experiment with `f_tostringPad()` formatting strings by changing the one used in the script's inputs and see the results on the chart.
These are some valid examples of formatting strings that can be used with `f_tostringPad()`:
"???0": forces strings to be four units wide, in all-positive "int" format.
"-???0": forces strings to be four units wide, plus room for a unary minus sign in the first position, in "int" format.
"???0.0": forces strings to be four units wide to the left of the point, all-positive, with a decimal point and then a mantissa rounded to a single digit.
"-???0.0?": same as above, but adds a unary minus sign for negative values, and adds a space after the single-digit mantissa.
"?????????0.0": forces the left part of the float to occupy the space of 10 digits, with a decimal point and then a mantissa rounded to a single digit.
█ CHART
The information displayed by this indicator uses the values in the script's Inputs, so you can use them to play around.
The chart shows the following information:
• Column 0 : The numeric input values in a centered column, converted to strings using tostring() without a formatting argument.
• Column 1 : Shows the values formatted using `f_tostringPad()` with the formatting string from the inputs.
• Column 2 : Shows the values formatted using `f_tostringPad()` but with only the part of the formatting string left of the decimal point, if it contains one.
• Column 3 : Shows the values formatted using `f_tostringPad()` but with the part of the formatting string left of the decimal point,
to which is added the right part of the `f_tostringPad()` formatting string, to obtain the precision in ticks of the symbol the chart is on.
• Column 4 : Shows the result of using the other string manipulation functions in the script on the source string supplied in the inputs.
It also demonstrates how to split up a label in two distinct parts so that you can vertically align columns when the leftmost part contains strings with varying lengths.
You will see in our code how we construct this column in two steps.
█ LIMITATIONS
The Pine runtime is optimized for number crunching. Too many string manipulations will take a toll on the performance of your scripts, as can readily be seen with the running time of this script. To minimize the impact of using string manipulation functions in your scripts, consider limiting their calculation to the first or last bar of the dataset when possible. This can be achieved by using the var keyword when declaring variables containing the result of your string manipulations, or by enclosing blocks of code in if blocks using barstate.isfirst or barstate.islast .
█ NOTES
To understand the challenges we face when trying to align strings vertically, it is useful to know that:
• As is the case in many other places in the TradingView UI and other docs, the Pine runtime uses the MS Trebuchet font to display label text.
• Trebuchet uses proportionally-spaced letters (a "W" takes more horizontal space than an "I"), but fixed-space digits (a "1" takes the same horizontal space as a "3").
Digits all use a figure space width, and it is this property that allows us to align numbers vertically.
The fact that letters are proportionally spaced is the reason why we can't vertically align columns using a "legend" + ":" + value structure when the "legend" part varies in width.
• The unary minus sign is the width of a punctuation space. We use this property to pad the beginning of numbers
when you use a "-" as the first character of the `f_tostringPad()` formatting string.
Our script was written using the PineCoders Coding Conventions for Pine .
The description was formatted using the techniques explained in the How We Write and Format Script Descriptions PineCoders publication.
█ THANKS
Thanks to LonesomeTheBlue for the `f_strReplace()` function.
Look first. Then leap.
Modified Smoothed Heiken Ashi
This code is based on the Smoothed HA candle and will work on all chart types.
Conditions for BUY:
1. Close crosses above the Smoothed HA
2. Close should be inside the upper band
3. BBW must be greater than its average
Vice versa for SELL (see the sketch after this list).
this code takes data from HA chart so that it can be applied on all chart type.
Bollinger band and Bollinger band width conditions added for removal of unwanted signals
Alert added so that you can apply alert and check it in real time performance
thanks to The Secret Mindset You tube channel from where I got the idea to convert this into a pine script indicator
smooth HA taken from "Smoothed Heiken Ashi Candles v1" at //@jackvmk
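The exact code isn't reproduced here, but the listed conditions could be sketched in Pine v4 roughly as follows; the smoothing length, the 20/2 Bollinger settings and the reading of "inside the upper band" as simply being below it are assumptions, not the script's actual inputs.
//@version=4
study("Smoothed HA + BB filter (sketch)", overlay=true)
// Heikin Ashi close pulled from the HA ticker so the logic works on any chart type.
haClose  = security(heikinashi(syminfo.tickerid), timeframe.period, close)
haSmooth = ema(haClose, 10)              // placeholder smoothing length
// Bollinger Bands and Band Width on the chart's own close.
basis = sma(close, 20)
dev   = 2.0 * stdev(close, 20)
upper = basis + dev
lower = basis - dev
bbw    = (upper - lower) / basis
bbwAvg = sma(bbw, 20)
// 1) close crosses the smoothed HA, 2) close below/above the band, 3) BBW above its average
buy  = crossover(close, haSmooth) and close < upper and bbw > bbwAvg
sell = crossunder(close, haSmooth) and close > lower and bbw > bbwAvg
plotshape(buy,  style=shape.triangleup,   location=location.belowbar, color=color.green)
plotshape(sell, style=shape.triangledown, location=location.abovebar, color=color.red)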
RSI of Ultimate Oscillator [SHORT Selling] Strategy
This is a SHORT selling strategy using the Ultimate Oscillator. Instead of directly using the UO oscillator, I have used RSI on UO (as I did in my previous strategies).
Ultimate Oscillator settings are 5, 10 and 15
RSI of UO setting is 5
Short Sell
==========
I have used moving averages from the Williams Alligator indicator --- settings are 10 (Lips), 20 (Teeth) and 50 (Jaw)
When Lips, Teeth and Jaw are aligned to a downtrend (that means Lips < Teeth < Jaw),
look for RSIofUO dropping below 60 (the setting parameter is Sell Line)
Partial Exit
==========
When RSIofUO crosses up through the Oversold line, i.e. 30
Cover Short / Exit
=================
When RSIofUO crosses above the Overbought line, i.e. 70
StopLoss
========
StopLoss is defaulted to 3%. Though it is mentioned in the settings, it has not been used to calculate a stop-loss exit. The reason is that once RSIofUO has already crossed the 60 line (for SHORTING), it would take more effort for price to get back above 60. There is a saying that price takes the stairs to climb up but takes the elevator to go down. I have not purely depended on this for the stop-loss exit; however, I noticed the trades in this strategy did not get out with a loss higher than when RSIofUO reached the 70 level.
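Pine v4 has no built-in Ultimate Oscillator, so a rough sketch of the core signals described above might look like this; the UO is computed manually, the Alligator is approximated with smoothed MAs without the usual forward offsets, and the 50% partial-exit size is an assumption, not the strategy's actual setting.
//@version=4
strategy("RSI of UO short (sketch)", overlay=true)
// Ultimate Oscillator with the 5 / 10 / 15 settings from the description.
bp  = close - min(low, close[1])
rng = max(high, close[1]) - min(low, close[1])
avg1 = sum(bp, 5)  / sum(rng, 5)
avg2 = sum(bp, 10) / sum(rng, 10)
avg3 = sum(bp, 15) / sum(rng, 15)
uo = 100 * (4 * avg1 + 2 * avg2 + avg3) / 7
rsiUO = rsi(uo, 5)
// Williams Alligator approximated with smoothed MAs (10 / 20 / 50), offsets omitted.
lips  = rma(hl2, 10)
teeth = rma(hl2, 20)
jaw   = rma(hl2, 50)
downTrend = lips < teeth and teeth < jaw
// Short when the Alligator points down and RSIofUO drops below the Sell Line (60).
if downTrend and crossunder(rsiUO, 60)
    strategy.entry("Short", strategy.short)
// Partial exit on the cross up through 30, full cover above 70.
if crossover(rsiUO, 30)
    strategy.close("Short", qty_percent=50)   // assumed 50% partial exit
if crossover(rsiUO, 70)
    strategy.close("Short")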
Note
======
Williams Alligator is not drawn by the script. It is manually added to the chart for illustration purposes. Please add it when you are using this strategy, which would give an idea of how the strategy is taking short trades.
This is tested on Hourly chart for SPY
Bar color changes to purple when the strategy is in SHORT trade
Warning
========
For educational purposes only
Pyramiding Entries On Early Trends (by Coinrule)
Pyramiding the entries in a trading strategy may be risky but at the same time very profitable with a proper risk management approach. This strategy seeks to spot early signs of uptrends and increase the position's size while the right conditions persist.
Each trade comes with its stop-loss and take-profit to enforce a proportional risk/reward profile.
The strategy uses a mix of Moving Average based setups to define the buy-signal.
The Moving Average (200) is above the Moving Average (100), which prevents buying when the uptrend is already in its late stages.
The Moving Average (9) is above the Moving Average (100), indicating that the coin is not in a downtrend.
The price crossing above the Moving Average (9) confirms the potential upside used to fire the buy order.
Each entry comes with a stop-loss and a take-profit in a ratio of 1-to-1. After over 400 backtests, we opted for a 3% TP and 3% SL, which provides the best results.
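A minimal Pine v4 sketch of the rules above might look like the following; it assumes simple moving averages and approximates the per-entry 3% targets by converting them to ticks at entry time, which is not necessarily how Coinrule's engine handles them.
//@version=4
strategy("Early-trend pyramiding (sketch)", overlay=true, pyramiding=7, default_qty_type=strategy.percent_of_equity, default_qty_value=20, commission_type=strategy.commission.percent, commission_value=0.1)
ma9   = sma(close, 9)
ma100 = sma(close, 100)
ma200 = sma(close, 200)
// MA(200) above MA(100), MA(9) above MA(100), and price crossing above MA(9).
buySignal = ma200 > ma100 and ma9 > ma100 and crossover(close, ma9)
if buySignal
    strategy.entry("Long", strategy.long)
    // 3% of the entry price expressed in ticks for the symmetric take-profit / stop-loss.
    distTicks = close * 0.03 / syminfo.mintick
    strategy.exit("TP/SL", from_entry="Long", profit=distTicks, loss=distTicks)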
The strategy is optimized on a 1-hour time frame.
The Advantages of this strategy are:
It offers the possibility of adjusting the size of the position in proportion to the confidence that an uptrend will eventually form.
Low drawdowns. On average, the percentage of trades in profit is above 60%, and the stop-loss equal to the take-profit reduces the overall risk.
This strategy returned good results both on trading pairs with Fiat/stablecoins and on pairs with BTC. Considering the mixed trends that cryptocurrencies experienced during 2020 vs BTC, this strengthens the strategy's reliability.
The strategy assumes each order trades 20% of the available capital and pyramids the entries up to 7 times.
A trading fee of 0.1% is taken into account. The fee is aligned to the base fee applied on Binance, which is the largest cryptocurrency exchange.
Joseph Nemeth Heiken Ashi Renko MTF Strategy
For Educational Purposes. Results can differ on different markets and can fail at any time. Profit is not guaranteed. This only works in a few markets and in certain situations. Changing the settings can give better or worse results for other markets.
Nemeth is a forex trader that came up with a multi-time frame heiken ashi based strategy that he showed to an older audience crowd on a speaking event video. He seems to boast about his strategy having high success results and makes an astonishing claim that looking at heiken ashi bars instead of regular candlestick bar charts can show the direction of the trend better and simpler than many other slower non-price based indicators. He says pretty much every indicator is about the same and the most important indicator is price itself. He is pessimistic about the markets and seems to think it is rigged and there is a sort of cabal that created rules to favor themselves, such as the inability of traders to hedge in one broker account, and that to win you have to take advantage of the statistics involved in the game. He believes fundamentals, chart patterns such as cup and handle and head and shoulders, and fibonacci numbers don't matter, only price matters. The foundation of his trading strategy is based around heiken ashi bars because they show a statistical pattern that can supposedly be taken advantage of by them repeating around seventy or so percent of the time, and then combines this idea with others based on the lower time frames involved.
The first step he uses is to identify the trend direction in the higher time frame(daily or 4 hourly) using the color of the heiken ashi bar itself. If it is green then take only long position after the bar completes, if it is red then take only short position. Next, on a lower time frame(1 hour or 30 minutes) look for the slope of the 20 exponential moving average to be sloping upward if going long or the slope of the ema to be sloping downward if going short(the price being above the moving average can work too if it's too hard to visualize the slope). Then look for the last heiken ashi bar, similarly to the first step, if it is green take long position, if it is red take short position. Finally the entry indicator itself will decide the entry on the lowest time frame. Nemeth recommends using MACD or CCI or possibly combine the two indicators on a 5 min or 15 min or so time frame if one does not have access to renko or range bars. If renko bars are available, then he recommends a 5 or 10 tick bar for the size(although I'm not sure if it's really possible to remove the time frame from renko bars or if 5 or 10 ticks is universal enough for everything). The idea is that renko bars paint a bar when there is price movement and it's important to have movement in the market, plus it's a simple indicator to use visually. The exit strategy is when the renko or the lowest time frame indicator used gives off an exit signal or if the above conditions of the higher time frames are not being met(he was a bit vague on this). Enter trades with only one-fifth of your capital because the other fifths will be used in case the trades go against you by applying a hedging technique he calls "zero zone recovery". He is somewhat vague about the full workings(perhaps because he uses his own software to automate his strategy) but the idea is that the second fifth will be used to hedge a trade that isn't going well after following the above, and the other fifths will be used to enter on another entry condition or if the other hedges fail also. Supposedly this helps the trader always come out with a profit in a sort of bushido-like trading tactic of never accepting defeat. Some critics argue that this is simply a ploy by software automation to boost their trade wins or to sell their product. The other argument against this strategy is that trading while the heiken ashi bar has not completed yet can jack up the backtest results, but when it comes to trading in real time, the strategy can end up repainting, so who knows if Nemeth isn't involving repainting or not, however he does mention the trades are upon completion of the bar(it came from an audience member's question). Lastly, the 3 time frames in ascending or descending fashion seem to be spaced out by about factors of 4 if you want to trade other time frames other than 5/15min,30min/1hour, or 4hour/daily(he mentioned the higher time frame should be atleast a dozen times higher than the lower time frame).
Personally I have not had luck getting the seventy+ percent accuracy that he talks about, whether in forex or other things. I made the default on renko bars to an ATR size 1 setting because it looks like the most universal option if the traditional mode box size is too hard to guess, and I made it so that you can switch between ATR and Traditional mode just in case. I don't think the strategy repaints because I think TV set a default on the multi-time frame aspects of their code to not re-paint, but I could be wrong so you might want to watch out for that. The zero zone recovery technique is included in the code but I commented it out and/or remove it because TV does not let you apply hedging properly, as far as I know. If you do use a proper hedging strategy with this, you'll find a very interesting bushido type of trading style involved with the Japanese bars that can boost profits and win rates of around possibly atleast seventy percent on every trade but unfortunately I was not able to test this part out properly because of the limitation on hedging here, and who knows if the hedging part isn't just a plot to sell his product. If his strategy does involve the repainting feature of the heiken ashi bars then it's possible he might have been preaching fools-gold but it's hard to say because he did mention it is upon completion of the bars. If you find out if this strategy works or doesn't work or find out a good setting that I somehow didn't catch, please feel free to let me know, will gladly appreciate it. We are all here to make some money!
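For reference, a heavily simplified Pine v4 reading of the multi-timeframe part of the description could look like this; the daily and 1-hour resolutions are placeholders, the renko/range-bar trigger is replaced by a plain MACD cross on the chart timeframe, and the "zero zone recovery" hedging is left out entirely.
//@version=4
study("Nemeth MTF HA (sketch)", overlay=true)
haTicker = heikinashi(syminfo.tickerid)
// Higher-timeframe Heikin Ashi bar colour, taken from closed bars to avoid repainting.
haOpenD  = security(haTicker, "D", open[1],  lookahead=barmerge.lookahead_on)
haCloseD = security(haTicker, "D", close[1], lookahead=barmerge.lookahead_on)
htfBull = haCloseD > haOpenD
htfBear = haCloseD < haOpenD
// Middle timeframe: slope of the 20 EMA on the 1-hour resolution.
ema20H1 = security(syminfo.tickerid, "60", ema(close, 20)[1], lookahead=barmerge.lookahead_on)
slopeUp   = ema20H1 > ema20H1[1]
slopeDown = ema20H1 < ema20H1[1]
// Lowest timeframe trigger: MACD cross on the chart's own resolution.
[macdLine, signalLine, histLine] = macd(close, 12, 26, 9)
longSetup  = htfBull and slopeUp   and crossover(macdLine, signalLine)
shortSetup = htfBear and slopeDown and crossunder(macdLine, signalLine)
plotshape(longSetup,  style=shape.labelup,   location=location.belowbar, color=color.green)
plotshape(shortSetup, style=shape.labeldown, location=location.abovebar, color=color.red)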
Finnie's HL BREAKOUT
First the indicator takes a range, by default 22 candles, then finds the highest and lowest points of said range. At this point you're left with lines that follow your support and resistance in the given range (take a look by changing the 100 EMA in the settings to 1). To take things a step further, I took a 100-candle EMA of the highest highs and lowest lows to not only smooth things out, but also to provide visual cues for breakouts: when the closing price is above the top band, the asset is considered to be breaking out.
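A minimal Pine v4 sketch of that calculation (the input names here are mine, not necessarily the script's):
//@version=4
study("HL breakout bands (sketch)", overlay=true)
rangeLen  = input(22,  title="Range length")
smoothLen = input(100, title="EMA smoothing")
// Highest high / lowest low of the range, smoothed with a 100-candle EMA.
topBand = ema(highest(high, rangeLen), smoothLen)
botBand = ema(lowest(low, rangeLen),   smoothLen)
plot(topBand, color=color.green)
plot(botBand, color=color.red)
// Breakout: closing price above the smoothed top band.
barcolor(close > topBand ? color.lime : na)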
Plot Break-even Price
This indicator simply plots your entry price and the break-even point (green line). The area between the entry price and the break-even point will "eat" your profit through exchange fees. You can use the green line to lock in your break-even point. I do not recommend using this strategy for trading, because the entry logic is based on SMA crosses. However, this script could be used within your own strategy to plot the break-even point.
For example, there is a 0.1% Maker fee and a 0.1% Taker fee on the Binance spot exchange. You need to sum up those two fees to calculate the break-even point. Every exit above/below the green line will guarantee a profit (in our case that means 0.2% above the entry price for a long position and 0.2% below the entry price for a short position).
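The arithmetic behind the green line can be sketched like this in Pine v4, following the description's approach of simply summing the two fees; the entry price is supplied as an input here purely for illustration, it is not how the script obtains it.
//@version=4
study("Break-even level (sketch)", overlay=true)
makerFee = input(0.1,   title="Maker fee (%)") / 100
takerFee = input(0.1,   title="Taker fee (%)") / 100
entry    = input(100.0, title="Entry price")
isLong   = input(true,  title="Long position?")
feeTotal = makerFee + takerFee                       // 0.2% round trip in the Binance example
breakEven = isLong ? entry * (1 + feeTotal) : entry * (1 - feeTotal)
plot(entry,     color=color.gray,  title="Entry")
plot(breakEven, color=color.green, title="Break-even")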
Polynomial Regression Bands + Channel [DW]
This is an experimental study designed to calculate polynomial regression for any order polynomial that TV is able to support.
This study aims to educate users on polynomial curve fitting, and the derivation process of Least Squares Moving Averages (LSMAs).
I also designed this study with the intent of showcasing some of the capabilities and potential applications of TV's fantastic new array functions.
Polynomial regression is a form of regression analysis in which the relationship between the independent variable x and the dependent variable y is modeled as a polynomial of nth degree (order).
For clarification, linear regression can also be described as a first order polynomial regression. The process of deriving linear, quadratic, cubic, and higher order polynomial relationships is all the same.
In addition, although deriving a polynomial regression equation results in a nonlinear output, the process of solving for polynomials by least squares is actually a special case of multiple linear regression.
So, just like in multiple linear regression, polynomial regression can be solved in essentially the same way through a system of linear equations.
In this study, you are first given the option to smooth the input data using the 2 pole Super Smoother Filter from John Ehlers.
I chose this specific filter because I find it provides superior smoothing with low lag and fairly clean cutoff. You can, of course, implement your own filter functions to see how they compare if you feel like experimenting.
Filtering noise prior to regression calculation can be useful for providing a more stable estimation since least squares regression can be rather sensitive to noise.
This is especially true on lower sampling lengths and higher degree polynomials since the regression output becomes more "overfit" to the sample data.
Next, data arrays are populated for the x-axis and y-axis values. These are the main datasets utilized in the rest of the calculations.
To keep the calculations more numerically stable for higher periods and orders, the x array is filled with integers 1 through the sampling period rather than using current bar numbers.
This process can be thought of as shifting the origin of the x-axis as new data emerges.
This keeps the axis values significantly lower than the 10k+ bar values, thus maintaining more numerical stability at higher orders and sample lengths.
The data arrays are then used to create a pseudo 2D matrix of x power sums, and a vector of x power*y sums.
These matrices are a representation of the system of equations that needs to be solved in order to find the regression coefficients.
Below, you'll see some examples of the pattern of equations used to solve for our coefficients represented in augmented matrix form.
For example, the augmented matrix for the system equations required to solve a second order (quadratic) polynomial regression by least squares is formed like this:
(∑x^0 ∑x^1 ∑x^2 | ∑(x^0)y)
(∑x^1 ∑x^2 ∑x^3 | ∑(x^1)y)
(∑x^2 ∑x^3 ∑x^4 | ∑(x^2)y)
The augmented matrix for the third order (cubic) system is formed like this:
(∑x^0 ∑x^1 ∑x^2 ∑x^3 | ∑(x^0)y)
(∑x^1 ∑x^2 ∑x^3 ∑x^4 | ∑(x^1)y)
(∑x^2 ∑x^3 ∑x^4 ∑x^5 | ∑(x^2)y)
(∑x^3 ∑x^4 ∑x^5 ∑x^6 | ∑(x^3)y)
This pattern continues for any n ordered polynomial regression, in which the coefficient matrix is a n + 1 wide square matrix with the last term being ∑x^2n, and the last term of the result vector being ∑(x^n)y.
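As an illustration of how that pattern can be assembled with Pine v4 arrays (this is a sketch, not the script's actual code; the matrix is stored flattened in row-major order and the helper name `f_augmented()` is mine):
//@version=4
study("Power-sum augmented matrix (sketch)", overlay=true)
order = input(2,  title="Polynomial order")
len   = input(20, title="Sample length")
// Builds the (n+1) x (n+2) augmented matrix of power sums, flattened row-major.
f_augmented(xA, yA, n) =>
    rows = n + 1
    m = array.new_float(rows * (rows + 1), 0.0)
    for r = 0 to rows - 1
        for c = 0 to rows - 1
            s = 0.0
            for i = 0 to array.size(xA) - 1
                s := s + pow(array.get(xA, i), r + c)                    // ∑ x^(r+c)
            array.set(m, r * (rows + 1) + c, s)
        sy = 0.0
        for i = 0 to array.size(xA) - 1
            sy := sy + pow(array.get(xA, i), r) * array.get(yA, i)       // ∑ (x^r)y
        array.set(m, r * (rows + 1) + rows, sy)
    m
var label lbl = label.new(bar_index, high, "")
if barstate.islast
    xA = array.new_float(0)
    yA = array.new_float(0)
    for i = 0 to len - 1
        array.push(xA, i + 1)                  // x-axis shifted to 1..len as described
        array.push(yA, close[len - 1 - i])
    m = f_augmented(xA, yA, order)
    label.set_xy(lbl, bar_index, high)
    label.set_text(lbl, "∑x^0 = " + tostring(array.get(m, 0)))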
Thanks to this pattern, it's rather convenient to solve the for our regression coefficients of any nth degree polynomial by a number of different methods.
In this script, I utilize a process known as LU Decomposition to solve for the regression coefficients.
Lower-upper (LU) Decomposition is a neat form of matrix manipulation that expresses a 2D matrix as the product of lower and upper triangular matrices.
This decomposition method is incredibly handy for solving systems of equations, calculating determinants, and inverting matrices.
For a linear system Ax=b, where A is our coefficient matrix, x is our vector of unknowns, and b is our vector of results, LU Decomposition turns our system into LUx=b.
We can then factor this into two separate matrix equations and solve the system using these two simple steps:
1. Solve Ly=b for y, where y is a new vector of unknowns that satisfies the equation, using forward substitution.
2. Solve Ux=y for x using backward substitution. This gives us the values of our original unknowns - in this case, the coefficients for our regression equation.
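A compact Pine v4 sketch of those two substitution steps, assuming a Doolittle-style factorization with 1s on L's diagonal and with L and U stored as flattened row-major arrays; the tiny 2x2 demo at the end only shows the helpers at work and is not part of the published script.
//@version=4
study("LU substitution (sketch)", overlay=true)
// Step 1: solve Ly = b by forward substitution (L has a unit diagonal).
f_forwardSub(L, b, n) =>
    y = array.new_float(n, 0.0)
    for i = 0 to n - 1
        s = array.get(b, i)
        if i > 0
            for j = 0 to i - 1
                s := s - array.get(L, i * n + j) * array.get(y, j)
        array.set(y, i, s)
    y
// Step 2: solve Ux = y by backward substitution.
f_backwardSub(U, y, n) =>
    x = array.new_float(n, 0.0)
    for i = n - 1 to 0
        s = array.get(y, i)
        if i < n - 1
            for j = i + 1 to n - 1
                s := s - array.get(U, i * n + j) * array.get(x, j)
        array.set(x, i, s / array.get(U, i * n + i))
    x
// Demo: A = [[2,1],[4,3]] factors into L = [[1,0],[2,1]] and U = [[2,1],[0,1]]; with b = [3,7] the solution is x = [1,1].
var label lbl = label.new(bar_index, high, "")
if barstate.islast
    L = array.new_float(4, 0.0)
    U = array.new_float(4, 0.0)
    b = array.new_float(2, 0.0)
    array.set(L, 0, 1.0)
    array.set(L, 2, 2.0)
    array.set(L, 3, 1.0)
    array.set(U, 0, 2.0)
    array.set(U, 1, 1.0)
    array.set(U, 3, 1.0)
    array.set(b, 0, 3.0)
    array.set(b, 1, 7.0)
    x = f_backwardSub(U, f_forwardSub(L, b, 2), 2)
    label.set_xy(lbl, bar_index, high)
    label.set_text(lbl, "x0 = " + tostring(array.get(x, 0)) + ", x1 = " + tostring(array.get(x, 1)))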
After solving for the regression coefficients, the values are then plugged into our regression equation:
Y = a0 + a1*x + a2*x^2 + ... + an*x^n, where a() is the ()th coefficient in ascending order and n is the polynomial degree.
From here, an array of curve values for the period based on the current equation is populated, and standard deviation is added to and subtracted from the equation to calculate the channel high and low levels.
The calculated curve values can also be shifted to the left or right using the "Regression Offset" input.
Changing the offset parameter will move the curve left for negative values, and right for positive values.
This offset parameter shifts the curve points within our window while using the same equation, allowing you to use offset datapoints on the regression curve to calculate the LSMA and bands.
The curve and channel's appearance is optionally approximated using Pine's v4 line tools to draw segments.
Since there is a limitation on how many lines can be displayed per script, each curve consists of 10 segments with lengths determined by a user defined step size. In total, there are 30 lines displayed at once when active.
By default, the step size is 10, meaning each segment is 10 bars long. This is because the default sampling period is 100, so this step size will show the approximate curve for the entire period.
When adjusting your sampling period, be sure to adjust your step size accordingly when curve drawing is active if you want to see the full approximate curve for the period.
Note that when you have a larger step size, you will see more seemingly "sharp" turning points on the polynomial curve, especially on higher degree polynomials.
The polynomial functions that are calculated are continuous and differentiable across all points. The perceived sharpness is simply due to our limitation on available lines to draw them.
The approximate channel drawings also come equipped with style inputs, so you can control the type, color, and width of the regression, channel high, and channel low curves.
I also included an input to determine if the curves are updated continuously, or only upon the closing of a bar for reduced runtime demands. More about why this is important in the notes below.
For additional reference, I also included the option to display the current regression equation.
This allows you to easily track the polynomial function you're using, and to confirm that the polynomial is properly supported within Pine.
There are some cases that aren't supported properly due to Pine's limitations. More about this in the notes on the bottom.
In addition, I included a line of text beneath the equation to indicate how many bars left or right the calculated curve data is currently shifted.
The display label comes equipped with style editing inputs, so you can control the size, background color, and text color of the equation display.
The Polynomial LSMA, high band, and low band in this script are generated by tracking the current endpoints of the regression, channel high, and channel low curves respectively.
The output of these bands is similar in nature to Bollinger Bands, but with an obviously different derivation process.
By displaying the LSMA and bands in tandem with the polynomial channel, it's easy to visualize how LSMAs are derived, and how the process that goes into them is drastically different from a typical moving average.
The main difference between LSMA and other MAs is that LSMA is showing the value of the regression curve on the current bar, which is the result of a modelled relationship between x and the expected value of y.
With other MA / filter types, they are typically just averaging or frequency filtering the samples. This is an important distinction in interpretation. However, both can be applied similarly when trading.
An important distinction with the LSMA in this script is that since we can model higher degree polynomial relationships, the LSMA here is not limited to only linear as it is in TV's built in LSMA.
Bar colors are also included in this script. The color scheme is based on disparity between source and the LSMA.
This script is a great study for educating yourself on the process that goes into polynomial regression, as well as one of the many processes computers utilize to solve systems of equations.
Also, the Polynomial LSMA and bands are great components to try implementing into your own analysis setup.
I hope you all enjoy it!
--------------------------------------------------------
NOTES:
- Even though the algorithm used in this script can be implemented to find any order polynomial relationship, TV has a limit on the significant figures for its floating point outputs.
This means that as you increase your sampling period and / or polynomial order, some higher order coefficients will be output as 0 due to floating point round-off.
There is currently no viable workaround for this issue since there isn't a way to calculate more significant figures than the limit.
However, in my humble opinion, fitting a polynomial higher than cubic to most time series data is "overkill" due to bias-variance tradeoff.
Although, this tradeoff is also dependent on the sampling period. Keep that in mind. A good rule of thumb is to aim for a nice "middle ground" between bias and variance.
If TV ever chooses to expand its significant figure limits, then it will be possible to accurately calculate even higher order polynomials and periods if you feel the desire to do so.
To test if your polynomial is properly supported within Pine's constraints, check the equation label.
If you see a coefficient value of 0 in front of any of the x values, reduce your period and / or polynomial order.
- Although this algorithm has less computational complexity than most other linear system solving methods, this script itself can still be rather demanding on runtime resources - especially when drawing the curves.
In the event you find your current configuration is throwing back an error saying that the calculation takes too long, there are a few things you can try:
-> Refresh your chart or hide and unhide the indicator.
The runtime environment on TV is very dynamic and the allocation of available memory varies with collective server usage.
By refreshing, you can often get it to process since you're basically just waiting for your allotment to increase. This method works well in a lot of cases.
-> Change the curve update frequency to "Close Only".
If you've tried refreshing multiple times and still have the error, your configuration may simply be too demanding of resources.
v4 drawing objects, most notably lines, can be highly taxing on the servers. That's why Pine has a limit on how many can be displayed in the first place.
By limiting the curve updates to only bar closes, this will significantly reduce the runtime needs of the lines since they will only be calculated once per bar.
Note that doing this will only limit the visual output of the curve segments. It has no impact on regression calculation, equation display, or LSMA and band displays.
-> Uncheck the display boxes for the drawing objects.
If you still have troubles after trying the above options, then simply stop displaying the curve - unless it's important to you.
As I mentioned, v4 drawing objects can be rather resource intensive. So a simple fix that often works when other things fail is to just stop them from being displayed.
-> Reduce sampling period, polynomial order, or curve drawing step size.
If you're having runtime errors and don't want to sacrifice the curve drawings, then you'll need to reduce the calculation complexity.
If you're using a large sampling period, or high order polynomial, the operational complexity becomes significantly higher than lower periods and orders.
When you have larger step sizes, more historical referencing is used for x-axis locations, which does have an impact as well.
By reducing these parameters, the runtime issue will often be solved.
Another important detail to note with this is that you may have configurations that work just fine in real time, but struggle to load properly in replay mode.
This is because the replay framework also requires its own allotment of runtime, so that must be taken into consideration as well.
- Please note that the line and label objects are reprinted as new data emerges. That's simply the nature of drawing objects vs standard plots.
I do not recommend or endorse basing your trading decisions based on the drawn curve. That component is merely to serve as a visual reference of the current polynomial relationship.
No repainting occurs with the Polynomial LSMA and bands though. Once the bar is closed, that bar's calculated values are set.
So when using the LSMA and bands for trading purposes, you can rest easy knowing that history won't change on you when you come back to view them.
- For those who intend on utilizing or modifying the functions and calculations in this script for their own scripts, I included debug dialogues in the script for all of the arrays to make the process easier.
To use the debugs, see the "Debugs" section at the bottom. All dialogues are commented out by default.
The debugs are displayed using label objects. By default, I have them all located to the right of current price.
If you wish to display multiple debugs at once, it will be up to you to decide on display locations at your leisure.
When using the debugs, I recommend commenting out the other drawing objects (or even all plots) in the script to prevent runtime issues and overlapping displays.
Volatility Guppy
Based on my previous script "Turtle N Normalized," this script plots the CM SuperGuppy on the value of N to identify changing trends in the volatility of any instrument.
Turtle rules taken from an online PDF:
"The Turtles used a concept that Richard Dennis and Bill Eckhardt called N to represent the underlying volatility of a particular market.
N is simply the 20-day exponential moving average of the True Range, which is now more commonly known as the ATR. Conceptually, N represents the average range in price movement that a particular market makes in a single day, accounting for opening gaps. N was measured in the same points as the underlying contract.
The Turtles built positions in pieces which we called Units. Units were sized so that 1 N represented 1% of the account equity. Thus, a unit for a given market or commodity can be calculated using the following formula:
Unit = 1% of Account/(N x Dollars per Point)"
To normalize the Unit formula, this script instead takes the value of (close/N). Dollars per point = 1 for stocks and crypto, but will change depending on the contract specifications for individual futures.
"Since the Turtles used the Unit as the base measure for position size, and since those units were volatility risk adjusted, the Unit was a measure of both the risk of a position, and of the entire portfolio of positions."
When the EMA's are green, volatility is decreasing.
When the EMA's are red, volatility is increasing.
When the EMA's are grey, the trend is changing.
Turtle N Normalized
Simple script that calculates the normalized value of N. Rules taken from an online PDF containing the original Turtle system:
"The Turtles used a volatility-based constant percentage risk position sizing algorithm. The Turtles used a concept that Richard Dennis and Bill Eckhardt called N to represent the underlying volatility of a particular market.
N is simply the 20-day exponential moving average of the True Range, which is now more commonly known as the ATR. Conceptually, N represents the average range in price movement that a particular market makes in a single day, accounting for opening gaps. N was measured in the same points as the underlying contract.
The Turtles built positions in pieces which we called Units. Units were sized so that 1 N represented 1% of the account equity. Thus, a unit for a given market or commodity can be calculated using the following formula:
Unit = 1% of Account/(N x Dollars per Point)"
To normalize the Unit formula, this script instead takes the value of (close/N). Dollars per point = 1 for stocks and crypto, but will change depending on the contract specifications for individual futures.
"Since the Turtles used the Unit as the base measure for position size, and since those units were volatility risk adjusted, the Unit was a measure of both the risk of a position, and of the entire portfolio of positions."
When the normalized value (close/N) is high, volatility is low relative to price and you can be more risk-on.
When the normalized value (close/N) is low, volatility is high relative to price and you should be more risk-off.
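A minimal Pine v4 sketch of the calculation described above; the account-equity and dollars-per-point inputs are placeholders added for illustration.
//@version=4
study("Turtle N normalized (sketch)", overlay=false)
// N: the 20-period exponential moving average of the True Range.
N = ema(tr, 20)
// The normalized value plotted by the script: close divided by N.
nNormalized = close / N
plot(nNormalized, color=color.teal, title="close / N")
// Original Turtle unit size, for reference: 1% of account equity / (N x dollars per point).
accountEquity   = input(10000.0, title="Account equity")
dollarsPerPoint = input(1.0,     title="Dollars per point")
unitSize = (0.01 * accountEquity) / (N * dollarsPerPoint)
plot(unitSize, color=color.orange, title="Unit size", transp=100)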
PineScript v4 - Forex Pin-Bar Trading Strategy
PineScript v4, forex trading robot based on the commonly used bullish / bearish pin-bar piercing the moving averages strategy.
I coded this robot to stress-test the PineScript v4 language to see how advanced it is, and whether I could port a forex trading strategy from MT4 to TradingView.
In my opinion, PineScript v4 is still not a professional coding language; for example you cannot use IF-statements to modify the contents of global variables; this makes complex robot behaviour difficult to implement. In addition, it is unclear if the programmer can use nested IF-ELSE, or nested FOR within IF.
The sequence of program execution is also unclear, and although complex order entry and exit appears to function properly, I am not completely comfortable with it.
Recommended Chart Settings:
Asset Class: Forex
Time Frame: H1
Long Entry Conditions:
a) Moving Average up trend, fast crosses above slow
b) Presence of a Bullish Pin Bar
c) Pin Bar pierces either Moving Average
d) Moving Averages must be sloping up, angle threshold (optional)
Short Entry Conditions:
a) Moving Average down trend, fast crosses below slow
b) Presence of a Bearish Pin Bar
c) Pin Bar pierces either Moving Average
d) Moving Averages must be sloping down, angle threshold (optional)
Exit Conditions:
a) Stoploss level is hit
b) Takeprofit level is hit
c) Moving Averages cross-back (optional)
Default Robot Settings:
Equity Risk (%): 3 //how much account balance to risk per trade
Stop Loss (x*ATR, Float): 2.1 //stoploss = x * ATR, you can change x
Risk : Reward (1 : x*SL, Float): 3.1 //takeprofit = x * stop_loss_distance, you can change x
Fast MA (Period): 20 //fast moving average period
Slow MA (Period): 50 //slow moving average period
ATR (Period): 14 //average true range period
Use MA Slope (Boolean): true //toggle the requirement of the moving average slope
Bull Slope Angle (Deg): 1 //angle above which, moving average is considered to be sloping up
Bear Slope Angle (Deg): -1 //angle below which, moving average is considered to be sloping down
Exit When MA Re-Cross (Boolean): true //toggle, close trade if moving average crosses back
Cancel Entry After X Bars (Period): 3 //cancel the order after x bars not triggered, you can change x
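As a rough sketch of how those risk settings translate into order sizes and exit levels in Pine v4 (the pin-bar detection, slope filter and order-cancellation logic are omitted, and the placeholder entry condition below is only an MA cross, not the robot's full setup):
//@version=4
strategy("Pin-bar risk sizing (sketch)", overlay=true)
riskPct = input(3.0, title="Equity Risk (%)")
slMult  = input(2.1, title="Stop Loss (x*ATR)")
rrMult  = input(3.1, title="Risk : Reward (1 : x*SL)")
atrLen  = input(14,  title="ATR Period")
atrVal  = atr(atrLen)
// Stop-loss and take-profit distances as described in the settings.
slDist = slMult * atrVal
tpDist = rrMult * slDist
// Size the position so that hitting the stop loses roughly riskPct % of current equity.
qty = (strategy.equity * riskPct / 100) / slDist
// Placeholder long setup (the real script also checks the pin bar, the piercing and the MA slope).
longSetup = crossover(ema(close, 20), ema(close, 50))
if longSetup
    strategy.entry("Long", strategy.long, qty=qty)
    strategy.exit("Exit", from_entry="Long", stop=close - slDist, limit=close + tpDist)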
Backtest Results (2019 to 2020, H1, Default Settings):
EURJPY - 111% profit, 2.631 profit factor, 16.43% drawdown
EURUSD - 103% profit, 2.899 profit factor, 14.95% drawdown
EURAUD - 76.75% profit, 1.8 profit factor, 17.99% drawdown
NZDUSD - 64.62% profit, 1.727 profit factor, 19.14% drawdown
GBPUSD - 58.73% profit, 1.663 profit factor, 15.44% drawdown
AUDJPY - 48.71% profit, 1.635 profit factor, 11.81% drawdown
USDCHF - 30.72% profit, 1.36 profit factor, 22.63% drawdown
AUDUSD - 8.54% profit, 1.092 profit factor, 19.86% drawdown
EURGBP - 0.03% profit, 1.0 profit factor, 29.66% drawdown
USDJPY - 1.96% loss, 0.972 profit factor, 28.37% drawdown
USDCAD - 6.36% loss, 0.891 profit factor, 21.14% drawdown
GBPJPY - 28.27% loss, 0.461 profit factor, 39.13% drawdown
To reduce the possibility of curve-fitting, this robot was backtested on 12 popular forex currencies, as shown above. The robot was profitable on 8 out of 12 currencies, breakeven on 1, and made a loss on 3.
The default robot settings could be over-fitting for the EUR, as we can see out-sized performance for the EUR pairs, with the exception of the EURGBP. We can see that GBPJPY made the largest loss, so these two pairs could be related.
Risk Warning:
This is a forex trading strategy that involves high risk of equity loss, and backtest performance will not equal future results. You agree to use this script at your own risk.
Momentum Acceleration by DGT
Italian physicist Galileo Galilei is usually credited with being the first to measure speed by considering the distance covered and the time it takes. Galileo defined speed as the distance covered during a period of time. In equation form, that is v = Δd / Δt where v is speed, Δd is change in distance, and Δt is change in time. The Greek symbol for delta, a triangle (Δ), means change.
Is the speed getting faster or slower?
Acceleration will be the answer, acceleration is defined as the rate of change of speed over a set period of time, meaning something is getting faster or slower. Mathematically expressed, acceleration denoted as a is a = Δv / Δt , where Δv is the change in speed and Δt is the change in time.
How to apply in trading
Let's think about Momentum, Rate of Return and Rate of Change: all are calculated in almost the same way as Speed.
Momentum measures change in price over a specified time period,
Rate of Change measures percent change in price over a specified time period,
Rate of Return measures the net gain or loss over a specified time period,
And Speed measures change in distance over a specified time period
So we may state that measuring the change in distance is also measuring the change in price over a specified time period, the length, hence
speed can be calculated as (source - source[length]) / length and acceleration becomes (speed - speed[length]) / length
In this study, acceleration is used as the signal line and the result is plotted as arrows indicating bull or bear direction, where direction changes can be considered trading setups.
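A minimal Pine v4 sketch of those two formulas; the length, source and arrow plotting below are placeholders, not the study's actual inputs.
//@version=4
study("Speed and acceleration (sketch)", overlay=false)
src    = input(close, title="Source")
length = input(10,    title="Length")
// Speed: change in price over the period; acceleration: change in speed over the period.
speed = (src - src[length]) / length
accel = (speed - speed[length]) / length
plot(speed, color=color.blue, title="Speed")
// Direction arrows where acceleration flips sign.
plotshape(accel > 0 and accel[1] <= 0, style=shape.triangleup,   location=location.bottom, color=color.green)
plotshape(accel < 0 and accel[1] >= 0, style=shape.triangledown, location=location.top,    color=color.red)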
Just for a little fun: since we deal with speed, the short name of the study is named after the famous cartoon character Speedy Gonzales.
Trading success is all about following your trading strategy; the indicators should fit within your trading strategy and not be traded upon solely.
Disclaimer: The script is for informational and educational purposes only. Use of the script does not constitute professional and/or financial advice. You alone assume the sole responsibility of evaluating the script output and the risks associated with its use. In exchange for using the script, you agree not to hold dgtrd (TradingView user) liable for any possible claim for damages arising from any decision you make based on use of the script.
MAFIA CANDLES
Mafia Candles is an exhaustion bar count and candle count indicator. Using the Leledc candles and the 1-3 candle count play gives you a pretty good idea where a so-called "top" or a so-called "bottom" will be!
In this example, getting the transparent round circles (either lime or red) would mean that the move will be a good-sized move!
EXAMPLE=1 You see a down trend and then the Mafia Candles Flashes a Green Dot on the forming new red candle. This is where in theory you might want to consider going long on the market!
EXAMPLE=2 If you see a RED $ symbol, after a uptrend, this means in theory, there might be room for a short play or room for a small pullback in the price!
THE CIRCLES(RED OR LIME COLORED) ARE INDICATING BIGGER MOVES!
THE $ SYMBOLS (RED OR LIME COLORED) ARE INDICATING SMALLER PULLBACKS OR SMALLER PUMPS IN PRICE!
RED IS CONSIDERED TO BE A SELL!
LIME COLOR IS CONSIDERED TO BE A BUY!
AS MUCH IS BASED OFF THE 1-3 CANDLE COUNT AND THE LELEDC CANDLE DEVIATION STRATEGY, LET ME EXPLAIN THE THEORY ON BOTH THE 1-3 CANDLE COUNT AND THE LELEDC STRATEGY I COMBINE TO BRING YOU THIS EDITION OF THE INDICATOR....
LELEDC THEORY USAGE...
An Exhaustion Bar is a bar which signals the exhaustion of the trend in the current direction. In other words, an exhaustion bar is "a bar of the last seller" in case of a downtrend and "a bar of the last buyer" in case of an uptrend.
Having said that, when a party cannot take the price further in their direction, naturally the other party comes in, takes charge and reverses the direction of the trend.
TO MAKE IT EASIER TO UNDERSTAND, I GIVE YOU AN EASY EXAMPLE OF WHAT A LELEDC EXHAUSTION BAR IS...
1. A wide range bar (a bar with a long body!!!).
2. A long wick at the bottom of the bar and no or negligible wick at the top of the bar in case of a "Bear exhaustion bar", and a long wick at the top and no or negligible wick at the bottom of the bar in case of a "Bull exhaustion bar"!!!
3. Extreme volume and.....
4. A bar forming at a key support or resistance area, including a Round Number (RN) and Big Round Number (BRN).
THE PSYCHOLOGY BEHIND THIS!!!
Now let's assume that we have a group of people, say 100 people, who decide to go for a casual run. After running for a few kilometers, a few of them will say "I am exhausted. I cannot run further." They will quit running.
After running further, another bunch of runners will say "I am exhausted. I can't run further" and they also will quit running.
This goes on and on, and then there will be a stage where only a few are left in the run. Now a stage will come where the last person left in the run will say "I am exhausted" and he stops running. That means no one is left now in the run. This means everyone is exhausted.
The same way an exhaustion bar works, and if we can figure out that exhaustion bar with all the tools available on hand, we will be in a big trade for sure!! The reason is that an exhaustion bar forms at exact tops and bottoms most of the time. In forex, with the wide variety of pairs available at the counter, one can trade this technique to make lifetime gains.
NOW LET ME EXPLAIN THE 1-3 CANDLE CORRECTION COUNT THEORY, WHICH IS USED TO GET THE SUMMED-UP SIGNALS FROM THIS INDICATOR FROM ITS INPUT LEVELS!!!
1-3 CANDLES....
The 1-3 Candlestick pattern is basically like sequential, aka a candle counting system!
1-3 CANDLE COUNT means you count the number of bullish=green candles or the bearish=red candles!
3 BULL/GREEN CANDLES in a row, each closing higher than the previous one, is the 1-3 candle top-count idea!
Let's say you get 3 red bear candles, each candle after the first closing its body below the previous red candle; when you see 3 red candles, each closing lower than the previous one, THAT'S A POSSIBLE SIGN OF BEARISH EXHAUSTION, AND YOU MIGHT HAVE SOME BULLS STEP IN TO TAKE THE PRICE UP AFTER THE IMMEDIATE DOWNFALL OF THOSE 3 RED CANDLES!!
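For reference, a bare-bones Pine v4 sketch of that 3-candle count (not the indicator's actual code, which also combines the Leledc exhaustion logic):
//@version=4
study("1-3 candle count (sketch)", overlay=true)
// Three green candles in a row, each closing higher than the one before it.
bullCount3 = close > open and close[1] > open[1] and close[2] > open[2] and close > close[1] and close[1] > close[2]
// Three red candles in a row, each closing lower than the one before it.
bearCount3 = close < open and close[1] < open[1] and close[2] < open[2] and close < close[1] and close[1] < close[2]
// Possible exhaustion: a top after three bull candles, a bottom after three bear candles.
plotshape(bullCount3, style=shape.triangledown, location=location.abovebar, color=color.red)
plotshape(bearCount3, style=shape.triangleup,   location=location.belowbar, color=color.lime)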
PLEASE, IF ANYONE HAS QUESTIONS OR NEEDS ANY FURTHER EXPLANATION, DON'T HESITATE TO MESSAGE ME! CHARLES KNIGHT IS THE ORIGINAL AUTHOR OF THE 1-3 CANDLE COUNT AND THE LELEDC EXHAUSTION BAR INDICATOR ON METATRADER! R.I.P. CHARLES F KNIGHT!!! WE LOVE YOU AND MISS YOU BROTHER!
CHARLES KNIGHT PASSED DOWN ALL OF HIS INDICATORS AND SCRIPTS IN ORIGINAL CODE TO MYSELF WHEN HE PASSED AWAY AND I WILL CONTINUE TO HONOR HIS MEMORY BY ENHANCING HIS ORIGINAL SOURCE CODED SCRIPTS TO ENHANCE THE LIFE FOR ALL TRADERS!
CHARLIE LOVED WHEN I WOULD PUT MY OWN SWING ON HIS INDICATORS! HE TAUGHT ME EVERYTHING I KNOW AND I KNOW ONE DAY I WILL SEE HIM AGAIN!
TRADE IN PARADISE CHARLIE!!!
THE BEST TRADER IN THE WORLD!!!
EMA Slope Trend Follower Strategy
This strategy is based on the slope of the EMA130.
Over that slope, the script calculates two EMAs (9,21) which are used to generate the main entry and exit signal.
In particular, the strategy enters a LONG position when EMA9 > EMA21. On the contrary, it closes the LONG and opens a SHORT when EMA9 < EMA21.
When the slope of the EMA130 is rising, it means that the price is accelerating upwards, fueling an uptrend. Conversely, when the slope is falling, it means that the price is slowing down, falling into a possible downtrend.
Calculating and analyzing two EMAs (fast and slow) over the slope of a medium-length EMA, instead of over the price itself, anticipates the signal considerably. In this way, the strategy never misses a trend.
In order to minimize false positives (entering useless positions), I included two filters, which can be optionally turned on:
- Trend Filter: When the price is above EMA200, the strategy opens ONLY LONG positions. If price < EMA200, only shorts allowed. If the slope gives a long signal and price is below EMA200, for example, the eventual SHORT position is closed, but the LONG entry is postponed to the moment when both conditions (slope uptrending and price > ema200) are met.
I recommend always turning on this filter, as it dramatically decreases drawdown.
- Volatility Filter: When the standard deviation of the last 20 candles is below its 50-sample moving average, no positions are opened, as the market is going sideways. The purpose of this filter is to prevent false positives (positions which open and close in a matter of candles due to false signals in a sideways market).
I recommend turning on this filter only on low time frames.
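A simplified Pine v4 sketch of the logic described above; the EMA lengths and filter settings are taken from the description, but everything else, including the entry/exit handling, is an assumption rather than the strategy's actual code.
//@version=4
strategy("EMA slope follower (sketch)", overlay=true, commission_type=strategy.commission.percent, commission_value=0.06)
// Slope of the EMA130, then a fast/slow EMA pair calculated on that slope.
ema130 = ema(close, 130)
slope  = ema130 - ema130[1]
fast   = ema(slope, 9)
slow   = ema(slope, 21)
// Optional filters.
useTrendFilter = input(true,  title="Trend filter (EMA200)")
useVolFilter   = input(false, title="Volatility filter")
ema200  = ema(close, 200)
volOk   = not useVolFilter or stdev(close, 20) > sma(stdev(close, 20), 50)
longOk  = not useTrendFilter or close > ema200
shortOk = not useTrendFilter or close < ema200
longSig  = fast > slow
shortSig = fast < slow
// A long signal always closes shorts; the long entry itself waits for the filters (and vice versa).
if longSig
    strategy.close("Short")
if longSig and volOk and longOk
    strategy.entry("Long", strategy.long)
if shortSig
    strategy.close("Long")
if shortSig and volOk and shortOk
    strategy.entry("Short", strategy.short)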
This strategy works great on medium time frames (like 4h, 6h, daily), since it spends far less on fees by opening fewer positions.
It works well on low TFs too (up to 1h, didn't test lower ones), provided the Volatility filter is turned on and the parameters are set according to the asset.
Commission included in calculations: 0.06% (it's the taker commission on BitMEX with the 10% discount obtainable with any referral link)
Slippage included in calculations: 2 ticks (BitMEX has very liquid order books, and slippage doesn't happen very often unless a huge position size is used).