Approximation par réseaux à fonctions radiales de base - Application à la détermination du prix d'achat d'une option (Approximation Using Radial Basis Function Networks: Application to Pricing Derivative Securities)
| Field | Value |
|---|---|
| Content Provider | Semantic Scholar |
| Author | Lendasse, Amaury; Lee, John Aldo; Bodt, Eric De; Wertz, Vincent; Verleysen, Michel; Lemaître, Guillaume |
| Copyright Year | 2001 |
| Abstract | APPROXIMATION USING RADIAL BASIS FUNCTION NETWORKS: APPLICATION TO PRICING DERIVATIVE SECURITIES. The general scheme of approximation supposes the existence of a relationship between several variables (the inputs) and a dependent variable (the output). This relationship is unknown; we try to build an approximator between the inputs and the output. For that purpose, we need a set of input-output pairs that form the learning data of the approximator. In this paper, we use a neural network approximator: the radial basis function network (or RBF network). Many techniques have been developed for the learning phase of RBF networks; we use the technique described in (5). We will show that the results obtained with these RBF networks can be improved by a pre-processing technique. This pre-processing is based on linear models. It does not increase the complexity of the RBF learning phase but improves the accuracy of the approximator. These techniques are applied to pricing derivative securities. Hutchinson, Lo and Poggio (6) have treated this problem. They used simulated data and showed that RBF networks are adequate models for pricing derivative securities and also for hedging them. The results we obtained are comparable to Hutchinson's results, but the learning scheme is simplified in our case. We use this example to illustrate the advantages of our pre-processing technique for RBF networks. RBF networks are non-linear parametric approximation models based on combinations of Gaussian functions. In most cases, these Gaussian functions are radial (their output value depends only on the Euclidean distance between the input vector and a centre). As a consequence, input variables are not scaled in RBF networks, while a proper scaling could be more adequate. Our purpose is to eliminate this limitation without penalising the simplicity of the RBF learning phase. If we build a linear model between the inputs and the output, the output is approximated by a weighted sum of the inputs. The weight associated with each input determines its importance in the approximation of the output. This gives a very simple technique to determine the importance of each input on the output. We therefore scale the inputs by the corresponding weights. A new so-called "Weighted RBF" is built, giving adequate importance to each of the inputs (a minimal sketch of this weighting scheme follows the table below). |
| File Format | PDF, HTM/HTML |
| Alternate Webpage(s) | http://research.ics.aalto.fi/eiml/Publications/Publication45.pdf |
| Alternate Webpage(s) | https://research.cs.aalto.fi/aml/Publications/Publication45.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
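
The pre-processing described in the abstract reduces to two steps: fit a linear model from the inputs to the output, then rescale each input by the magnitude of its linear coefficient before the usual RBF learning phase, so that the Euclidean distance inside the Gaussian units reflects input importance. The sketch below illustrates that idea in Python with NumPy only. The centre selection (a random subset of training points), the shared Gaussian width, and the use of absolute linear coefficients as weights are illustrative assumptions, not the specific RBF learning technique of reference (5) or the authors' implementation.

```python
import numpy as np


def linear_input_weights(X, y):
    """Fit a linear model y ~ X (with intercept) and return the absolute
    input coefficients, used as per-input importance weights."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.abs(coef[:-1])


def fit_rbf(X, y, n_centres=20, width=1.0, seed=0):
    """Minimal Gaussian RBF approximator: centres are a random subset of the
    training inputs; the output layer is solved by linear least squares."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_centres, len(X)), replace=False)
    centres = X[idx]

    def design(Z):
        # Squared Euclidean distances between rows of Z and the centres.
        d2 = ((Z[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))

    out_w, *_ = np.linalg.lstsq(design(X), y, rcond=None)
    return lambda Z: design(Z) @ out_w


def fit_weighted_rbf(X, y, **rbf_kwargs):
    """"Weighted RBF": scale each input by its linear-model weight before the
    standard RBF learning phase, then apply the same scaling at prediction time."""
    w_in = linear_input_weights(X, y)
    model = fit_rbf(X * w_in, y, **rbf_kwargs)
    return lambda Z: model(Z * w_in)


if __name__ == "__main__":
    # Toy example: the output depends strongly on x1 and only weakly on x2,
    # so the linear weights shrink x2 before the RBF distances are computed.
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(300, 2))
    y = np.sin(3 * X[:, 0]) + 0.05 * X[:, 1]
    model = fit_weighted_rbf(X, y, n_centres=30, width=0.5)
    print("training MSE:", np.mean((model(X) - y) ** 2))
```

Because the scaling is computed once from an ordinary least-squares fit, it adds essentially no cost to the RBF learning phase, which matches the claim in the abstract that the pre-processing does not increase the complexity of training.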