

Ph.D. Theses

Machine Learning in Computational Finance

By Victor Boyarshinov
Advisor: Malik Magdon-Ismail
April 20, 2005

We focus on constructing efficient algorithms for problems arising in applications of machine learning to computational finance, in particular pricing and trading. Designing an algorithm for an artificial intelligence application generally involves three facets: (i) constructing a training dataset; (ii) developing a training algorithm for discovering patterns; (iii) using side information to avoid overfitting the training data. The first part of this thesis addresses (i). Specifically, we consider the problem of finding trading strategies that are optimal with respect to the total cumulative return, the Sterling ratio, and two variants of the Sharpe ratio. Knowing the optimal trades is necessary for constructing the training dataset for a supervised learning algorithm. The contribution of this work is efficient (low-order polynomial time) algorithms that compute the optimal trading strategy for various profit objectives, under constraints on the number of trades that can be made. The heart of our algorithms is a new general algorithm for maximizing quotients over intervals, which we solve by relating this one-dimensional optimization problem to convex hull operations on the plane.
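To fix intuition (this sketch is our illustration, not an algorithm from the thesis), the simplest instance of the problem - a single trade maximizing total cumulative return, with no constraint on the number of trades - reduces to finding the maximum-sum subarray of the log returns, which a linear scan solves:

```python
import math

def best_single_trade(prices):
    """Buy/sell interval maximizing cumulative return for a single trade,
    found as the maximum-sum subarray of log returns (Kadane's scan, O(n)).
    A toy baseline; the k-trade and ratio objectives need more machinery."""
    if len(prices) < 2:
        raise ValueError("need at least two prices")
    logret = [math.log(prices[i + 1] / prices[i]) for i in range(len(prices) - 1)]
    best_sum, buy, sell = float("-inf"), 0, 0
    run_sum, run_start = 0.0, 0
    for i, r in enumerate(logret):
        if run_sum <= 0.0:              # restarting here beats extending the run
            run_sum, run_start = r, i
        else:
            run_sum += r
        if run_sum > best_sum:
            best_sum, buy, sell = run_sum, run_start, i + 1
    return buy, sell, math.exp(best_sum)  # indices into prices; gross return of the trade
```

The Sterling and Sharpe objectives are quotients rather than sums, which is exactly why the interval quotient-maximization machinery described above is needed.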

The second part of this thesis addresses (ii). The problem of learning - discovering hidden patterns in the collected data - can often be cast as the problem of finding a linear separator for two sets of points in a multidimensional space. When the sets are not separable, we are interested in finding a subset of points such that, after these points are removed, the remaining points are separable. Such a subset is called a separator set. If we assign a positive weight to every point (its fidelity), then we wish to find a separator set of minimum possible weight. We give a combinatorial deterministic algorithm for computing such a minimum separator set in two dimensions. The algorithm also addresses the statistical properties of the resulting separator by simultaneously providing the leave-one-out error.
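For illustration only (the formulation below is our sketch, not the thesis's algorithm), the feasibility subproblem - deciding whether two planar point sets are strictly linearly separable at all - can be posed as a small linear program:

```python
import numpy as np
from scipy.optimize import linprog

def strictly_separable(pos, neg):
    """Decide strict linear separability of two 2-d point sets by checking
    feasibility of  w . x + b >= 1  for positives and  <= -1  for negatives.
    Only a feasibility check; it does not find a minimum separator set."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    # Stack both constraint families as A @ [w1, w2, b] <= -1.
    A = np.vstack([np.hstack([-pos, -np.ones((len(pos), 1))]),
                   np.hstack([ neg,  np.ones((len(neg), 1))])])
    rhs = -np.ones(len(pos) + len(neg))
    res = linprog(np.zeros(3), A_ub=A, b_ub=rhs, bounds=[(None, None)] * 3)
    return res.status == 0  # 0 = feasible (separable), 2 = infeasible
```

A minimum-weight separator set is then a cheapest subset of points whose removal makes this check succeed; the point of the thesis's combinatorial algorithm is to compute it directly rather than by searching over removal subsets.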

The problem of overfitting the training data is well recognized in the machine learning community. The standard way to address this threat is to stop the iterations of the training algorithm early. An important question is when the iterations should be stopped. Usually one monitors an associated error measure and stops the iterations when that error starts to grow. The associated error can be the same error function measured on a separate set of data points (a validation set), or even a completely different error function. Since the associated error measure differs somewhat from the primary one, it does not necessarily behave monotonically; rather, it often appears as random fluctuations around an unknown function. To pinpoint the exact location of the critical point, one can perform shape-constrained optimization to fit a function to the observed values of the associated error. Another application of shape-constrained regression arises in the pricing of financial instruments, for example the American put option. Isotonic and unimodal regression are both examples of nonparametric shape-constrained regression. In the third part of this work we present efficient algorithms for computing L1 and L∞ isotonic and unimodal regressions.
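As a point of reference (a minimal sketch, not the efficient algorithms of the thesis), L1 isotonic regression - the best nondecreasing fit under absolute error - can be computed by the classic pool-adjacent-violators scheme, using a median as each block's fitted value:

```python
from statistics import median

def isotonic_l1(y):
    """Nondecreasing fit minimizing sum(|y_i - f_i|) by pool-adjacent-violators
    with block medians. A simple quadratic-time reference implementation."""
    blocks = []  # each block: (pooled y-values, their median)
    for v in y:
        blocks.append(([v], v))
        # pool while the fitted block values violate monotonicity
        while len(blocks) > 1 and blocks[-1][1] < blocks[-2][1]:
            vals = blocks[-2][0] + blocks[-1][0]
            blocks[-2:] = [(vals, median(vals))]
    fit = []
    for vals, m in blocks:
        fit.extend([m] * len(vals))
    return fit
```

Unimodal regression then follows by combining, for each candidate mode position, an isotonic fit of the prefix with an antitonic fit of the suffix and keeping the best; the thesis gives more efficient algorithms for both problems.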
