Neural Networks – Part One (by nummy)

In light of Tim’s recent post on old-school video games, the advancement of computing continues to amaze me, and it will probably keep amazing at the same rate for many decades. Or not … because although Moore’s “Law” has been a good approximation so far, it has to hit a threshold eventually; no system stays in perpetual motion, because eventually you succumb to the laws of entropy. Anyway, I’d like to do a series of posts on neural networks. In this first post I’ll discuss a bit of history and how we can apply neural networks to a financial model that can efficiently estimate market movements.

In the late 1800s and early 1900s, neural networks emerged as a concept that tried to explain the functional behavior of the human mind. In the early 1950s, Friedrich Hayek proposed the idea of spontaneous order, suggesting that the brain’s behavior arises from decentralized networks of simple building blocks we can call neurons. Then in the 1980s, the implementation of neural networks in computation experienced a boom with the rediscovery of a computable backpropagation algorithm.

[Figure: a biological neuron]

The mind can essentially be thought of as a network that processes multiple input variables in parallel, and a neural network modeled in a computer does the same. Imagine a model of the market that can take multiple input variables into account and learn on its own. One of the strengths of NNs is their ability to find patterns and irregularities, and to learn from the interdependence and correlation of all the inputs. By analogy with a biological neuron, we can create an artificial neuron with a simple mathematical model.

[Figure: the artificial neuron model]

This is the basic building block of a neural network: the single neuron. As with a biological neuron, you have n inputs (think of them as the axon terminals of upstream neurons). They transfer signals (or stimuli) u_1 through u_n into the neuron, each scaled by a weight w_1 through w_n; the weights can be thought of as the “importance” assigned to each input. The neuron takes the sum of the weighted inputs, w_1·u_1 + … + w_n·u_n, and passes it through an activation function (in this case a step function), which triggers an output z.
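
To make this concrete, here is a minimal sketch of such a neuron in Python (the specific weights and input values are made-up numbers of my own, just for illustration):

```python
import numpy as np

def step(x):
    """Step activation: fire (output 1) if the weighted sum is positive."""
    return 1 if x > 0 else 0

def neuron(u, w):
    """A single artificial neuron: inputs u_1..u_n are scaled by
    weights w_1..w_n, summed, and passed through the activation."""
    return step(np.dot(w, u))

u = np.array([0.5, -1.2, 0.8])  # incoming signals (stimuli)
w = np.array([0.9, 0.1, 0.4])   # "importance" weight of each input
z = neuron(u, w)                # output z is 0 or 1
print(z)                        # -> 1 (weighted sum is 0.65 > 0)
```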

Neurons can be organized into various network structures, but for our purposes of financial modeling, I will only consider feed-forward networks and recurrent networks. The name “feed-forward” just describes the flow of information: it is fed forward from the input(s) to the output(s), with no loops. Recurrent networks are networks that allow for feedback loops, so outputs can be fed back in as inputs.
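
In code, the distinction is small. Here’s a rough sketch (the random weights and layer sizes are arbitrary choices of mine, not anyone’s real model):

```python
import numpy as np

def feed_forward(x, W_hidden, W_out):
    """Forward pass only: inputs -> hidden layer -> output, no loops."""
    h = np.tanh(W_hidden @ x)  # hidden layer activations
    return W_out @ h           # output layer

def recurrent_step(x, prev_out, W_hidden, W_out):
    """Recurrent variant: the previous output is fed back in
    alongside today's inputs, forming a feedback loop."""
    return feed_forward(np.append(x, prev_out), W_hidden, W_out)

rng = np.random.default_rng(0)
x = np.array([0.2, -0.1, 0.5])  # three inputs
y = feed_forward(x, rng.normal(size=(4, 3)), rng.normal(size=(1, 4)))
# the recurrent step needs one extra weight column for the feedback
y2 = recurrent_step(x, y, rng.normal(size=(4, 4)), rng.normal(size=(1, 4)))
```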

[Figure: a simple network with a hidden layer]

Let’s consider a simple NN model where we would like the output to be the SPX daily close. We consider the following three inputs to the model:

-Previous day’s EUR/USD exchange rate

-Previous day’s LIBOR rate

-Previous day’s SPX close

The hidden layer of the network allows it to learn not only how the inputs affect the output, but how each input affects the other inputs. This model would be recurrent, since the previous SPX close is fed back in as an input. Using MATLAB, you can set up models like this and run NN training, where the network analyzes past data and tests its forecast performance. Suppose we have 1000 data points of each input; that means 1000 daily closes of the EUR/USD, LIBOR, and SPX. We can set aside the first 70% of the data points for the network to “train” itself and compute the weights (or importances) of each input. The remaining 30% of the data points can be used to backtest predictions against actual outcomes. The network can analyze its error rate and retrain itself if the computed weights are not optimal. A sketch of this workflow follows below.
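
Here is a rough sketch of that train/test workflow in Python with scikit-learn rather than MATLAB. The random series below are only placeholders for the real EUR/USD, LIBOR, and SPX data (which you would load yourself), and the network size is an arbitrary choice of mine:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder data: 1000 "days" of EUR/USD, LIBOR, and SPX closes.
rng = np.random.default_rng(42)
eurusd = rng.normal(1.45, 0.02, 1000)
libor = rng.normal(0.25, 0.05, 1000)
spx = np.cumsum(rng.normal(0, 10, 1000)) + 1000

# Inputs are the previous day's values (note the previous SPX close
# fed back in); the target is today's SPX close.
X = np.column_stack([eurusd[:-1], libor[:-1], spx[:-1]])
y = spx[1:]

# First 70% trains the weights; the last 30% backtests the forecast.
split = int(0.7 * len(X))
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(X[:split], y[:split])

forecast = net.predict(X[split:])
rmse = np.sqrt(np.mean((forecast - y[split:]) ** 2))
print(f"backtest RMSE: {rmse:.2f}")
```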

[Figure: NN model results, 11/13/2009 — forecast vs. actual (upper left), 2-day forecast (upper right), forecast error (lower left), SPX dataset (lower right)]

Here is a model I have run so far; it takes about 2-3 minutes to compute on my iMac (Intel Core 2 @ 2.4GHz, 2GB RAM). The model is a simple one with past SPX data as the only input. The dataset used is on the lower-right chart: SPX daily close data since the 1970s (maybe 1979; I forget exactly what year the data starts). The first 80% of the data was used for the NN training, and the remaining 20% was used to forecast (upper-left chart). The forecast error is shown on the lower-left chart. The 2-day forecast made by the NN on 11/13/2009 is on the upper-right chart: the model suggested a slight drop the next day, with the trend turning upward soon after. I chose to plot only two days of output because the further out in time you go, the exponentially worse your error becomes. Ideally, you would retrain the network after each day so it stays up to date with the latest data.
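
That daily retraining loop might look roughly like this, continuing the scikit-learn sketch from above (the window length, network size, and horizon are all placeholder choices of mine):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def retrain_and_forecast(spx, horizon=2, window=500):
    """Retrain on the most recent `window` closes, then forecast
    `horizon` days ahead by feeding each prediction back in."""
    recent = np.asarray(spx[-window:])
    X = recent[:-1].reshape(-1, 1)  # previous close is the only input
    y = recent[1:]                  # next-day close is the target

    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                       random_state=0)
    net.fit(X, y)

    forecasts, last = [], recent[-1]
    for _ in range(horizon):
        last = net.predict([[last]])[0]  # feed the forecast back in
        forecasts.append(last)
    return forecasts  # error compounds quickly as the horizon grows
```

You would call this once after each close, so every forecast is made by a network trained on the latest data.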

You can imagine a much more complex model with multiple outputs (SPX, Dow, Nasdaq, FTSE) and many more inputs (exchange rates, earnings, emerging market indices, volume, etc.). Other researchers have even used common technical indicators as inputs to the NN model (MACD, RSI, moving averages, etc.). Adding inputs and hidden layers can improve the model’s forecasting accuracy, at least until it starts overfitting, but the computation required grows rapidly, and you can start to see why the “little guy” is screwed from the start. The only way to profit on 95%+ of trading days with this type of computational edge is to be an institution with large amounts of capital, a sick artificial NN model running on huge server farms, and direct fiber-optic hookups into the NYSE for ultra-low latency. I am quite sure I don’t need to name names on this one.

In a David vs. Goliath attempt, I want to create a NN model with help from fellow Slopers. Maybe we can get some suggestions and discussion going on which inputs ultimately affect the output the most. I don’t want to model something that would take days to compute on my iMac, so maybe you can list the top 5 things you think affect SPX daily performance the most. Excluding the LDI (we love you Lester), here is my guess as to which inputs could be most influential on the outcome of SPX each day:

-LIBOR rate

-U.S. 20+ Year Treasury Bond Index (TLT)

-TED Spread

-Price of Gold

-Price of Oil