
Tuesday, July 4, 2017

An attempt at an anti-rocketsurgery explanation of Kalman Filter

This post aims to provide some intuition for people who are interested in Kalman filters, tried to learn what they are, but got lost either in the equations or in for-dummies explanations full of matrix notation!

Wikipedia definition:

Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by using Bayesian inference and estimating a joint probability distribution over the variables for each time-frame. The filter is named after Rudolf E. Kálmán, one of the primary developers of its theory.

Quantopian Definition:

The Kalman filter is an algorithm that uses noisy observations of a system over time to estimate the parameters of the system (some of which are unobservable) and predict future observations. At each time step, it makes a prediction, takes in a measurement, and updates itself based on how the prediction and measurement compare.

Use in trading:

The options are endless, as you may have several measurements and predictions involving uncertainty. These days, for retail quants, a common use is updating the hedge ratio in mean reversion strategies.

My explanation:

There is a guy called Rich who owns 5,000 cows. Rich is very detail oriented and he always wants to know the most accurate location of his cows. He knows that he can never be exact about a location unless he sees it with his own eyes, so he wants to hear two things: where the cow is, and by how many meters this estimate can be off. For this purpose he has hired three cowboys: the old cowboy, the spotter and Mr. Kalman.

The old cowboy knows everything about the cows but always sits in the house and says things like: if a cow left the farm two hours ago and it is raining, she should be near the big rock by the river by now. The spotter, instead of sitting in the house, climbs a hill next to the farm and reports where the cow is. Mr. Kalman listens to these two and makes the final decision on the cow's location.

The old cowboy knows the past very well, so he has a general understanding of the behavior and the daily cycle of a cow. The spotter only counts on what he sees. However, Mr. Kalman knows that neither is exact in his assessment, and he needs a method for balancing their information to come up with a good estimate of the cow's location.

On the first day of the job, Mr. Kalman knows that the cow was next to the big rock by the river one hour ago, but he also knows that he is not certain: she may as well be 300 meters to the north of the big rock.

Mr. Kalman: OK old cowboy, we know that the cow was around the big rock; tell me where she is now.
Old Cowboy: She is 100 meters to the north of the big rock.
Mr. Kalman: Are you certain?
Old Cowboy: I may be off by 50 meters; she may be only 50 meters north of the rock.
Mr. Kalman: Thank you, old cowboy.
Mr. Kalman: OK spotter, tell me where the cow is.
Spotter: She is 500 meters to the north of the big rock.
Mr. Kalman: Are you certain?
Spotter: I may be off by 100 meters; she may be only 400 meters north of the rock.
Mr. Kalman: Thank you, spotter.

Mr. Kalman decides to assume that the spotter generally corrects the old cowboy, since the spotter actually sees where the cow is. He may have a bad eye, there may be fog and so on, but his report is still a reflection of reality. Then he thinks: “If the spotter were 100% sure of the cow's position, I should take it as a fact and ignore the old cowboy. However, as the spotter is not sure, I should find a way to incorporate the old cowboy's estimate into my final decision.”

He comes up with a way of calculating the old cowboy's share of the total uncertainty:

Old cowboy's uncertainty / (old cowboy's uncertainty + spotter's uncertainty) = 50 / (50 + 100) = 0.33

He says, “Hmm, the old cowboy is more certain than the spotter. I can use this ratio as my trust in the spotter's estimate: if the spotter were 100% certain, and hence the spotter's uncertainty were zero, then my trust in the spotter's estimate would be 1, which is not the case here.” Then he looks for a way to adjust the spotter's estimate, taking into consideration his trust in it.

He thinks again: “If the position of the cow was next to the big rock one hour ago, and the spotter says that she is now 500 meters to the north, and I do not trust him 100%, I can multiply 500 by my trust in the spotter's estimate, which is 0.33.” So he comes to the conclusion that the cow is 165 meters to the north of the big rock.

Then he says: “Wait a minute, what about my uncertainty about the cow's position one hour ago? It was 300 meters; I need to update this as well, taking into consideration the new information from the old man and the spotter. If the spotter were 100% certain, I might consider myself certain and remove all uncertainty, but this is not the case; the spotter brings in his own uncertainty. As my trust in the spotter's estimate is 0.33, that is the share I should take out of the previous uncertainty, so 0.67 of it should stay, which is 200 meters.”

Mr. Kalman then goes and reports to his boss: “The cow is 165 meters north of the big rock by the river, with an uncertainty of 200 meters.”


- The big rock is the estimate of the previous state (the location of the cow)
- 300 is the error of the previous state estimate
- The old cowboy is the model of the cow's motion; his errors are normally distributed and he is mostly consistent in his uncertainty in the long run
- The old cowboy is the transition model, or you can call it the prediction
- The spotter is the noisy sensor doing the measurement
- The spotter is the observation model, or you can call it the correction
- Mr. Kalman's trust in the spotter's estimate is the Kalman gain (KG), 0.33 in our case
- 165 is the estimate of the current state
    - New position = old position + KG x (spotter's estimate - old position)
    - 0 + 0.33 x (500 - 0) = 165
- 200 is the error of the current state
    - Uncertainty now = (1 - trust in spotter's estimate) x uncertainty one hour ago
    - (1 - 0.33) x 300 = 200
- The farm owner is the rich guy, nothing to do with math
- The cow has left the farm and is traveling the world now in the form of processed meat
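The story's arithmetic can be sketched as a one-dimensional, Kalman-style update. This is a minimal illustration using the story's numbers (variable names are mine); note the post rounds the gain to 0.33, which gives 165 instead of the exact 166.7:

```python
# Scalar Kalman-style update with the numbers from the cow story.
prediction_uncertainty = 50.0    # old cowboy: "I may be off by 50 meters"
measurement_uncertainty = 100.0  # spotter: "I may be off by 100 meters"
prior_uncertainty = 300.0        # Mr. Kalman's uncertainty one hour ago
prior_position = 0.0             # "next to the big rock" taken as 0 meters
measurement = 500.0              # spotter: 500 meters north of the rock

# Kalman gain: Mr. Kalman's trust in the spotter's estimate
kg = prediction_uncertainty / (prediction_uncertainty + measurement_uncertainty)

# Updated state estimate and updated uncertainty
new_position = prior_position + kg * (measurement - prior_position)
new_uncertainty = (1 - kg) * prior_uncertainty

print(round(kg, 2), round(new_position, 1), round(new_uncertainty, 1))
```

The exact gain is 1/3, so the exact updated position is 166.7 meters; the post's 165 comes from carrying the rounded 0.33 through the multiplication.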

If you want to see the same characters, including the cow, in a trading setting such as updating the hedge ratio in mean reversion strategies, let me know for a future post.

If you find this post useful, please like it on Quantocracy!

Monday, June 26, 2017

Trading Decisions of Your Stone Age Grandpa can Make You Money in FOREX

Why do Ferrari or Rolex not price their products at 149,999 or 12,999, while most of the items you see in your supermarket are priced like 4.99? Because they never want to be positioned as a bargain. Did you know that we tend to choose the price with fewer syllables, even if the two prices have the same written length? These are some of the pricing strategies used by marketers.

This is a very interesting topic and you can even find yourself deep in neuroscience while reading about it. Check this site, it gives 42 pricing methods to influence your brain, crazy, https://www.nickkolenda.com/psychological-pricing-strategies/

We also deal with prices when trading, and I believe there are some subconscious forces in play. Knowing of some articles on things like the effect of round numbers in trading, I see some more potential here, and I find it worth digging into further, not as standalone trading strategies but more as filters. At the end of the day, no one knows how the price feed affects the primitive human sitting inside you.

Among the ones I have tested out of the 42 methods outlined at the above link, I want to share one that can be applied as a filter.

In [30]:
#do the imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import bulk_tester as bt
%matplotlib inline
import datetime

Get price data for the backtest period (BT), which is between 01.01.2009 and 01.01.2014 (M15 bars).

In [31]:
dftest = bt.get_data('EURUSD', 'BT', 'original')

Now, I want to short EURUSD for this period, and I want to do it when the market is calm; let's say I believe that the signal to noise ratio is better when the market is calm. I want to short when the spread is smaller than 1.1 and volume is smaller than 300. Here 300 is the volume provided by OANDA; it shows how many ticks make up a bar, an indication of fragmented activity.

Before coding the trading logic, I want to get a list of bars and use it as a control group before applying any trading logic. I take the indexes of every 10th bar. For the backtesting period, we have 12,807 such bars in hand.

In [32]:
index_of_entry = []
b = 0#this is the loop counter

#loop the frame
for index, row in dftest.iterrows():
    #mostly randomly selected entry points at every 10th bar, the control group
    if b % 10 == 0:
        index_of_entry.append(b)
    b += 1

Then I run a quick and dirty backtest to see the performance of these selected entry points. The trading rule here is this:

  • short at the index if spread < 1.1 and volume < 300 (these are optimized by hand for the BT period)
  • close after 50 bars (optimized by hand for BT period)
In [34]:
a = index_of_entry
pnl = []
lag = 50#periods to close the position

for b in range(0,len(a)):
    if a[b] + lag < len(dftest) and 10000*(dftest.Ask[a[b]] - dftest.Bid[a[b]]) < 1.1 and dftest.volume[a[b]] < 300:
        pips = 10000*(dftest.Bid[a[b]] - dftest.Ask[a[b]+lag])
        pnl.append(pips)

plt.plot(np.cumsum(pnl))
print("Profit in Pips: ", np.sum(pnl))
print("Number of Trades: ", len(pnl))
Profit in Pips:  861.9
Number of Trades:  710
(plot: cumulative pips over the number of trades)

Not good, right? Now I apply my Stone Age grandpa filter and re-select the entry points: I only keep bars where the Bid price has a 5 at the end (position [6] of the price string).

In [35]:
index_of_entry = []
b = 0#this is the loop counter

#loop the frame
for index, row in dftest.iterrows():
    #get bid
    bid = row["Bid"]
    #keep the bar if there is the digit 5 at position [6] of the bid
    if len(str(bid)) > 6 and int(str(bid)[6]) == 5:
        index_of_entry.append(b)
    b += 1

And apply the same trading logic. What an improvement.

In [37]:
a = index_of_entry
pnl = []
lag = 50#periods to close the position

for b in range(0,len(a)):
    if a[b] + lag < len(dftest) and 10000*(dftest.Ask[a[b]] - dftest.Bid[a[b]]) < 1.1 and dftest.volume[a[b]] < 300:
        pips = 10000*(dftest.Bid[a[b]] - dftest.Ask[a[b]+lag])
        pnl.append(pips)

plt.plot(np.cumsum(pnl))
print("Profit in Pips: ", np.sum(pnl))
print("Number of Trades: ", len(pnl))
Profit in Pips:  4920.6
Number of Trades:  689
(plot: cumulative pips over the number of trades)

And let's see what we have for all data, covering unseen out-of-sample data (01.01.2009 to 05.05.2017). Here I simply copy-pasted the same code as above.

In [38]:
#get price data for the entire set: between 01.01.2009 and 05.05.2017, M15 bars
dftest = bt.get_data('EURUSD', 'ALL', 'original')
In [39]:
index_of_entry = []
b = 0#this is the loop counter

#loop the frame
for index, row in dftest.iterrows():
    #get bid
    bid = row["Bid"]
    #keep the bar if there is the digit 5 at position [6] of the bid
    if len(str(bid)) > 6 and int(str(bid)[6]) == 5:
        index_of_entry.append(b)
    b += 1
In [40]:
a = index_of_entry
pnl = []
lag = 50#periods to close the position

for b in range(0,len(a)):
    if a[b] + lag < len(dftest) and 10000*(dftest.Ask[a[b]] - dftest.Bid[a[b]]) < 1.1 and dftest.volume[a[b]] < 300:
        pips = 10000*(dftest.Bid[a[b]] - dftest.Ask[a[b]+lag])
        pnl.append(pips)

plt.plot(np.cumsum(pnl))
print("Profit in Pips: ", np.sum(pnl))
print("Number of Trades: ", len(pnl))
Profit in Pips:  7117.9
Number of Trades:  1005
(plot: cumulative pips over the number of trades)

I have to make a note here. When you backtest this properly, you will not see an equity curve like this; it will still be up, but with flat periods of no activity. The reason is that we are looking at the cumulative pips return graph with the number of trades on the x axis. In an equity curve, you see how your capital changes over equally spaced time.
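The difference can be sketched with a few hypothetical trades (dates and pip values invented purely for illustration):

```python
import numpy as np
import pandas as pd

# Four hypothetical trades with their dates and pip results
trade_times = pd.to_datetime(['2013-01-02', '2013-01-03', '2013-02-20', '2013-02-21'])
trade_pips = np.array([5.0, -2.0, 7.0, 3.0])

# 1) cumulative pips with the trade number on the x axis (what the plots above show)
cum_by_trade = np.cumsum(trade_pips)

# 2) equity-style series on equally spaced days: flat where no trades occur
daily = pd.Series(trade_pips, index=trade_times).resample('D').sum().cumsum()

print(cum_by_trade)    # [ 5.  3. 10. 13.]
print(daily.iloc[-1])  # 13.0 -- same ending value, but flat from Jan 4 to Feb 19
```

Both views end at the same cumulative pips, but only the time-indexed series shows the long idle stretch between the two trade clusters.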

If you find this post useful, please like it on Quantocracy!

Friday, June 23, 2017

Struggling Quant Episode 1: How I lost USD 500,000 while figuring out the link between questions, math, stats, coding and trading

Say that you are 30 years old and you have a good 25 years to work hard. Instead of going down the easy path of working for someone else during the day and killing time in the evenings and on weekends, you have chosen the hard path of quantitative trading and started the heavy work. How many profitable trading ideas can you find in 25 years? Let's say you can figure out and test an idea in 10 days: 36 ideas a year and about 900 ideas in 25 years. Since this is my post, I am rounding 900 up to 1,000. Assuming a hit rate of 5%, that is 50 profitable ideas in 25 years, 2 per year, and let's also say a profitable idea makes 20% per year. Depending on your initial capital, you will make X USD in 25 years. Let's say this X is USD 12,500,000 for me: USD 500,000 per year on average. This is my potential if I start doing things right now, right? So if I do things wrong for a year, I lose USD 500,000, and this is what I did last year. I could not figure out how to operationalize all the scientific literature, the huge quant trading content on the internet and coding, and come up with a machine for trading research. We can argue that the time I invested will pay back, but at the end of the day, I have 24 years to go and no profits in my trading account; this is my reality. If you take quant trading seriously and you believe in your potential, I think you are losing some money as well; you had better put in your own numbers and do your own calculation.

I have spent a whole year trying to put the pieces together: learning math and stats, coding for strategy development, backtesting and live trading, understanding market structure, and trying to catch up with the huge information flow from blogs, websites, experts, sellers, marketers, academic research, and so on. This is my second year in my journey to become a quantitative trader. I am still struggling a lot. I cannot say that the picture is clear, but I think I am taking some steps forward. I have started to operationalize all this mass and to do things in a more time-efficient and repeatable manner.

There are two reasons that I am posting this. The first one is that I realized that publishing something like this forces me to a structure which I need for success in trading. Second one is that I want to help you if you are going through a similar journey, I would be happy if I can accelerate things for you.

I do not have an institutional background in quant trading. I also do not have a degree in math, stats or computer science; I have a bachelor's degree in economics, so I am trying to figure things out mostly from point zero. In the last one and a half years, I have learned to code in Python, I have developed my own event-driven backtester and live trading platform, and I have started doing trading research using Python libraries. I have also invested a lot of time learning math and stats, and I can say that I have an overall understanding of the methods that are applicable to quantitative trading. I only use price data and I focus on intraday strategies for major currency pairs. I now have three FX strategies that I have been paper trading with a demo account for four months. These strategies have generated a Sharpe ratio of 1.4 out of sample, with a maximum drawdown of 18% and an average leverage of 5. They place an average of 1.5 trades a day. Paper trading performance is not as good as the out-of-sample test, but not that far off either; I take this as a good start.

Among the big list of my struggles, linking coin flips (math and stats) to actual trading ideas was one of the hardest things to figure out. I mean, being able to operationalize all these methods and this information flow and come up with a working machine for trading research was one of my biggest struggles. It may be easier for a person with a relevant background, but for me this is like trying to fix a broken car with a tool set I do not know, and without any background with cars. Here you either need an understanding of cars or an understanding of the tools to start doing something.

Going back to my challenge of putting the pieces together: the information available to people starting this journey is either from car experts with an overall knowledge of the repair tools, or from tool experts with an overall understanding of cars. It is very hard to find resources with a good balance between the car and the tools that also touch on real-life problems. Maybe these people, unlike me, all have the picture clear in their minds but are not willing to give their secrets away; I do not know.

My interpretation is that these two groups have a common claim: "here are the tools and methods that will let you find the answers". But what is an answer without a question, or without the right question? Albert Einstein once said, "If I had an hour to solve a problem and my life depended on the solution, I would spend the first 55 minutes determining the proper question to ask, for once I know the proper question, I could solve the problem in less than five minutes."

Long story short, my belief now is that trading research should start with a good question (art), and this question should be answered with the scientific method (science). So the link between all the relevant science literature and trading is good-quality questions answered with the scientific method. This has been the glue for me, putting all the pieces together; very simple, right? I can hear people saying that I am reinventing the wheel, that quant trading is all about the scientific method. I respect you if you were able to operationalize all of this and came up with a working machine for trading research. This has been very hard for me to realize, since it is very easy to lose sight of the big picture when going through such a complex adventure. Some can easily spend years reading and thinking about stats and math, coding and reading articles without a clear picture and plan of what needs to be done. My personal experience was mostly around trying to apply standalone statistical or econometric models to price data for finding edges, or researching known trading ideas with my own not-so-structured methods; both ended in failure.

I will try to walk you through my way of developing trading strategies using the scientific method with the help of a simple example. Along the way, I will try to provide formula-free explanations of the basic math and stats concepts applicable to this example. The focus here will be on the method and the proper usage of the tools rather than the trading idea itself; please do not take the trading idea seriously.


Everything starts with an observation and a question; these two can sometimes switch places. You can observe market activity, macroeconomic factors and other things, and come up with a question. You can do some data digging, notice something that you can relate to a real-life phenomenon, and ask a question. Or you can simply read an academic article where the whole cycle of the scientific method is given with a conclusion, and you would like to apply your own thinking to the observation at hand. I can also argue that if we assume a given data set is able to answer a limited number of questions, then we can claim that data science is for finding the relevant questions, not the answers. Anyway.

I have this simple observation.

Observation: The Japanese session is low activity for European currencies; when the European session starts, there is an increase in activity: volatility kicks in, volume goes up, and the game of the day starts.

Then I ask a question. This is only one way of asking a question in relation to this observation. The quality of the observation and the question is the most important part of the entire picture.

Question: Can the initial direction of European currencies at the European session opening predict price movements throughout the session for the same currency set?

Now I need to create a hypothesis, which is actually my answer to the question that I have just asked: my educated guess. All that math and stats you are trying to digest is mostly used here, while experimenting to see whether this hypothesis is true or not. The hypothesis should be a testable answer to the question.

Hypothesis: The direction of the European session opening for European currencies is an indication of the direction at the session closing. Or, I can say, the opening direction tends to persist throughout the session.

The critical point here is that a hypothesis needs two things: a dependent variable, which is the thing you are trying to predict, and an independent variable, which is the information you plan to use to make this prediction.

dependent variable: direction of the European currency at session closing (session direction)

independent variable: direction of the European currency at session opening (opening direction)

So let's prepare some data and start. You do not need to know how to code, or to do the coding, to follow; just read. I have chosen to publish this with the code to give new starters a feel for how it looks.

In [316]:
#do necessary imports

#this is my vectorized backtester for quick and dirty backtesting of research findings
import bulk_tester as bt

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

Get EURUSD M15 bars for the backtesting period (BT, 01.01.2009 to 01.01.2014) with no resampling ('original').

get_data is a function that I created for loading price data into a Pandas dataframe as it is or with resampling to higher time frames

In [323]:
dftest = bt.get_data('EURUSD', 'BT', 'original')

Add some calculated columns to the dataframe that will be necessary. I need changes in volume and returns to be able to measure increasing market activity and to calculate the values for my dependent and independent variables.

In [324]:
#absolute percentage price change
dftest['abs_pct_return'] = abs(dftest.Bid.pct_change())
#percentage volume change
dftest['vol_change'] = dftest.volume.pct_change()
#percentage return
dftest['pct_return'] = dftest.Bid.pct_change()
#pips change
dftest['pips_change'] = 10000*(dftest.Bid - dftest.Bid.shift(1))

dftest = dftest.dropna()

I first want to validate my observation that the Japanese session is low activity and that the action starts with the European session for EURUSD. An easy way of seeing this is to create a graph where average absolute return and average volume are plotted against 15-minute intervals. To get such a graph, I need to create a new column combining hour and minute, so that I can group absolute returns and volume by this new column and take the average.

In [326]:
#this is a function taking a string as an input and returning the string with a zero prepended
#if the length of the input string is one.
def fix(string):
    if len(string) == 1:
        string = '0' + string
    return string
#add a new column in HH:MM format
dftest['hour_min'] = dftest.index.hour.map(str).map(fix) + ":" + dftest.index.minute.map(str).map(fix)
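As a self-contained illustration of this grouping (synthetic M15 data rather than the real feed; `strftime` is an alternative to the `fix` helper above, since it zero-pads for you):

```python
import numpy as np
import pandas as pd

# Two days of synthetic 15-minute bars (192 rows), just to show the grouping
idx = pd.date_range('2013-01-01', periods=2 * 96, freq='15min')
rng = np.random.default_rng(0)
dftest = pd.DataFrame({
    'Bid': 1.30 + rng.normal(0, 1e-4, len(idx)).cumsum(),
    'volume': rng.integers(50, 500, len(idx)),
}, index=idx)
dftest['abs_pct_return'] = dftest.Bid.pct_change().abs()

# HH:MM label per bar; strftime zero-pads, doing the job of the fix() helper
dftest['hour_min'] = dftest.index.strftime('%H:%M')

# Average volume and absolute return per 15-minute slot of the day
avg = dftest.groupby('hour_min')[['volume', 'abs_pct_return']].mean()
print(avg.head())
# avg['volume'].plot(kind='bar', figsize=(14, 4))  # the visual check
```

With real data, the bar chart of `avg['volume']` is where the session shift described below becomes visible.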

Visual check shows that there is a shift in volume around 06:00 - 08:00 GMT.

(bar chart: average volume per 15-minute interval)

Same shift is also the case for absolute returns.

(bar chart: average absolute return per 15-minute interval)

Also, checking some randomly picked days, I can say that the increase in activity starts at 07:00 GMT, so let's take the 07:00 return as the independent variable. The dependent variable is the session return; this can be calculated at 12:45, the last bar before the NY session starts, to isolate the European session, or at 16:45, where the European session ends but overlaps with the US session starting from 13:00. So let's take 12:45 as the session ending and isolate our analysis to the European session.

At this point, I have an observation, I have a question, and I have defined my testable hypothesis along with my dependent and independent variables. I have also validated my observation with data. So what is next? I now need to set up an experiment to test my hypothesis.

To make the picture clear, I also need to point out that the hypothesis we have created in the general flow of the scientific method is not the same as the hypothesis we use in the context of statistical tests. To link our hypothesis with hypothesis testing in the statistical sense, we need an additional layer of refinement. Before doing this, however, I will try to provide a high-level view of probability theory in relation to our example.

Why do we need probability theory? Because it defines the mathematical model for handling uncertain situations like the one we have here. If we are going to answer our question by testing our hypothesis with data, we need to set up an experiment and run it with a probabilistic model in hand. To create a probabilistic model, we need to understand its rules.


A probabilistic model requires a properly defined sample space and a probability law, and these two should be in line with some rules and axioms. This means that we need to make clear all possible outcomes (the sample space) and the probabilities of these outcomes (the probability law), following some rules while doing so.

The sample space: In probability language, calculating the opening and closing returns for a day is called an observation. Here the experiment is a sequential observation: looking at the returns once at the session opening and once at the session closing, for a specific number of days. I can define my experiment as follows: "observe the opening and closing directions of EURUSD for 1,000 days."

The number of possible outcomes of these observations for a given day is 4: open up - close up, open up - close down, open down - close up, and open down - close down. The set of all possible outcomes is called the sample space; in our case it has 4,000 elements, 4 per day for 1,000 days in total. The important thing here is that the sample space is not a set of actual observations; it is the set of all possible observations. We are not running the experiment yet; we are just defining the space in which we will be experimenting. A subset of the sample space, a collection of outcomes, is called an event. So getting an open up and a close down on a given day is an event for which we can calculate probabilities. In summary, we have an experiment with 4,000 possible outcomes that make up the sample space.

A sample space should be mutually exclusive and collectively exhaustive. Mutual exclusivity in our case means that, on a given day, open up and open down cannot happen at the same time. Collectively exhaustive means that all possible outcomes are defined in the sample space: on a given day, we cannot see any outcome other than open up, open down, close up and close down. For example, if a day comes up with a session opening where the 07:00 close is the same as the 06:45 close, meaning the opening was flat, this outcome is not included in the sample space; hence our sample space is not collectively exhaustive. We need to define up or down with a <= or >= sign to cover all possibilities.
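The fix described above can be sketched as a tiny helper (a hypothetical function of mine, not from the post's codebase): using `>=` puts a flat open into the "up" bucket, so every possible outcome is labeled.

```python
# Collectively exhaustive direction label: '>=' counts a flat open as 'up',
# so no possible outcome falls outside the sample space.
def direction(close_0700: float, close_0645: float) -> str:
    return 'up' if close_0700 >= close_0645 else 'down'

print(direction(1.3001, 1.3000))  # up
print(direction(1.3000, 1.3000))  # up (flat open counted as up)
print(direction(1.2999, 1.3000))  # down
```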

Once we have properly defined our sample space, we need to start assigning probabilities to different events to be able to do some calculations. Remember, events are a grouping of outcomes, a subset of the sample space.

The probability law: The probability law is a definition of the likelihood of outcomes or events; here we assign probabilities to outcomes and events. Let's look at our case. What we are interested in here are the probabilities of four events: open up, open down, close up, close down. Remember, events are groupings of outcomes, so when we say open up, it is the set of all possible outcomes with an open up.

  • the event here is open up
  • the number of possible outcomes that make this event happen is, say, 2,000 (just assume all outcomes are equally likely)
  • the total number of possible outcomes in the sample space is 4,000
  • the probability of this event (open up) is 2000/4000 = 50%

This is called the Discrete Uniform Law: with the assumption that all outcomes are equally likely, the probability of an event A is P(A) = number of elements of A / number of elements in the sample space.
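The Discrete Uniform Law can be sketched on the per-day sample space (a toy illustration of the counting, using the four daily outcomes rather than the full 4,000-element space):

```python
# Per-day sample space: (opening direction, closing direction)
outcomes = [(o, c) for o in ('up', 'down') for c in ('up', 'down')]  # 4 outcomes

# The event "open up" is the subset of outcomes whose opening is up
event_open_up = [x for x in outcomes if x[0] == 'up']

# Discrete Uniform Law: P(A) = |A| / |sample space|, assuming equally likely outcomes
p_open_up = len(event_open_up) / len(outcomes)
print(p_open_up)  # 0.5
```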

A probability law should obey the following rules:

  • the probability of an event should be non-negative. This makes sense, as an event is a collection of possible outcomes, and if an outcome is possible it should have positive probability. In our example, there should be a positive probability for each outcome on a given day, as there will inevitably be a direction at the session opening and closing
  • the probability of the union of disjoint events is the sum of the probabilities of those events. Disjoint events do not share common outcomes. In our experiment, if we define open up as an event and open down as an event, these are disjoint
  • the total probability of the entire sample space should add up to 1. In our experiment, the probabilities of all 4,000 possible outcomes should sum to 1

Remember, we are just setting up the theory here; we have not made any observations yet, we are just making some assumptions and defining some rules. If you are wondering why we do not simply go and count the days with an open up and divide by 1,000 to come up with a probability: that is the actual experiment, and it is not something you can use for generalization on its own. This is why we make all these equally-likely kinds of assumptions. The rules governing the sample space and the probability law are the same for all experiments. We are not doing any counting yet.

So, what is the link between probability theory and the experiment at hand; how do we do the experiment? The link is the concept of random variables. By definition, a random variable is a real-valued function of the outcome of an experiment. I hate such definitions. What does this mean? In our case, let's define a random variable which takes the value one if we observe an open up, and the value zero if we observe an open down. Here the random variable takes the direction of the open (the observation) as an input and gives us a real value, one or zero. Give a random variable an observation, and it gives you a number. Why do we need a number? Because we know how to deal with numbers. A random variable maps the sample space to real numbers, given an observation and a definition of how this mapping should be done. By the way, I have just invented the Bernoulli random variable; it is this easy to become famous.
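That mapping can be written down directly (a hypothetical helper of mine, just to make the "function of the outcome" idea concrete):

```python
# A random variable maps an observed outcome to a real number.
# Here: 1 for "open up", 0 for "open down" -- a Bernoulli random variable.
def opening_rv(direction: str) -> int:
    if direction == 'up':
        return 1
    if direction == 'down':
        return 0
    raise ValueError("direction must be 'up' or 'down'")

print(opening_rv('up'))    # 1
print(opening_rv('down'))  # 0
```

Once the observations are numbers, all the usual machinery (sums, averages, distributions) applies to them.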


Ok, I am out of ammo. This post will be followed by posts on the following:

  • go through joint/marginal/conditional probabilities and independence and calculate these for our example
  • build up on random variables and describe where probability distributions sit in the picture using our example
  • link these back to our hypothesis and define the actual statistical hypothesis test
  • talk about logistic regression as one of the tools for modelling binary outcomes
  • revisit all this discussion with Bayesian thinking
  • finalize the scientific method cycle and come up with a trading rule (maybe, if we reject the null hypothesis)
  • backtest the trading rule and close the discussion

If you find this post valuable, please also like this on Quantocracy, just click the arrow left to the post!