Schedule Day 0: Tuesday 13 June, 10:00–12:00, Tutorial
| Field | Value |
|---|---|
| Content Provider | Semantic Scholar |
| Author | Volkhonskiy, Denis; Burnaev, Evgeny; Nouretdinov, Ilia; Gammerman, Alexander; Ishimtsev, Vladislav; Nazarov, Ivan; Bernstein, Alexander |
| Copyright Year | 2017 |
| Abstract | Abstracts of talks. Online Aggregation of Unbounded Signed Losses Using Shifting Experts, by Vladimir V. V'yugin: For the decision-theoretic online learning (DTOL) setting, we consider methods to construct algorithms that suffer loss not much more than that of any sequence of experts distributed along a time interval (the shifting-experts setting). We present a modified version of the method of Mixing Past Posteriors which uses AdaHedge, with its adaptive learning rate, as the basic algorithm. Due to this, we combine the advantages of both algorithms: the regret bounds remain valid for signed, unbounded expert losses, and we use the shifting regret, a more appropriate performance characteristic for this setting. All results are obtained in the adversarial setting: no assumptions are made about the nature of the data source. We present results of numerical experiments for the case where the losses of the experts cannot be bounded in advance. PMLR 60:3–17. Asymptotic Properties of Nonparametric Estimation on Manifold, by Yury Yanovich: In many applications, real high-dimensional data occupy only a very small part of the high-dimensional "observation space", and their intrinsic dimension is small. The most popular model of such data is the manifold model, which assumes that the data lie on or near an unknown manifold (the Data Manifold, DM) of lower dimensionality embedded in an ambient high-dimensional input space (the Manifold Assumption about high-dimensional data). Manifold Learning is a dimensionality-reduction problem under the Manifold Assumption about the processed data, and its goal is to construct a low-dimensional parameterization of the DM (global low-dimensional coordinates on the DM) from a finite dataset sampled from the DM. The Manifold Assumption means that a local neighborhood of each manifold point is equivalent to a region of low-dimensional Euclidean space. Because of this, most Manifold Learning algorithms include two parts: a "local part", in which characteristics reflecting the low-dimensional local structure of the neighborhoods of all sample points are constructed via nonparametric estimation, and a "global part", in which global low-dimensional coordinates on the DM are constructed by solving a convex optimization problem for a specific cost function depending on the local characteristics. Both the statistical properties of the "local part" and its average over the manifold are considered in the paper. The article extends Yanovich (2016) to the case of nonparametric estimation. PMLR 60:18–38. (Illustrative code sketches of both ideas follow this record.) |
| File Format | PDF, HTM/HTML |
| Alternate Webpage(s) | http://clrc.rhul.ac.uk/copa2017/programme.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
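
The first abstract combines two known aggregation algorithms. As a rough illustration of the shifting-experts idea, here is a minimal sketch in Python, assuming plain exponential weights with a fixed learning rate plus a Fixed Share mixing step (a special case of Mixing Past Posteriors); the paper itself uses AdaHedge's adaptive learning rate, which is what makes the bounds valid for signed, unbounded losses. The function name `fixed_share` and the parameters `eta` and `alpha` are illustrative, not from the paper.

```python
import numpy as np

def fixed_share(expert_losses, eta=0.5, alpha=0.01):
    """Track the best shifting expert with exponential weights plus mixing.

    expert_losses: (T, K) array, loss of each of K experts at each round
    (losses may be signed). Returns the learner's loss at each round.
    """
    T, K = expert_losses.shape
    w = np.full(K, 1.0 / K)           # posterior weights over experts
    learner = np.empty(T)
    for t in range(T):
        losses = expert_losses[t]
        learner[t] = w @ losses       # learner plays the weighted mixture
        # exponential-weights update; shifting losses by their minimum is
        # only for numerical stability and cancels in the normalization
        w = w * np.exp(-eta * (losses - losses.min()))
        w /= w.sum()
        # Fixed Share mixing: return a little uniform mass to every expert
        # so one that was bad in the past can be picked up again quickly
        w = (1 - alpha) * w + alpha / K
    return learner

# toy check: experts that are alternately good; a shifting learner
# should come close to the best *sequence* of experts, not the best one
T = 200
L = np.zeros((T, 2))
L[:100, 0], L[:100, 1] = -1.0, 1.0
L[100:, 0], L[100:, 1] = 1.0, -1.0
print(fixed_share(L).sum())
```

The mixing step is the design point: without it, an expert whose weight has collapsed can never recover, so the learner cannot track a shifting sequence of experts.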
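The second abstract describes the two-part structure of most Manifold Learning algorithms. Below is a minimal sketch of a typical "local part", assuming local PCA over k-nearest-neighbor patches as the nonparametric estimator; the function name `local_tangent_bases` and the parameters `k` and `d` are hypothetical, since the paper analyzes asymptotic properties of such estimators rather than prescribing one.

```python
import numpy as np

def local_tangent_bases(X, k=10, d=2):
    """Estimate local low-dimensional structure at every sample point.

    For each point, take its k nearest neighbours and use the top-d
    principal directions of the centred neighbourhood as an estimate of
    the d-dimensional tangent space of the Data Manifold at that point.

    X: (n, D) data sampled from a d-dimensional manifold in R^D.
    Returns an (n, D, d) array of orthonormal tangent-space bases.
    """
    n, D = X.shape
    # pairwise squared distances -> indices of the k nearest neighbours
    dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(dists, axis=1)[:, 1:k + 1]   # skip the point itself
    bases = np.empty((n, D, d))
    for i in range(n):
        nb = X[nbrs[i]] - X[nbrs[i]].mean(0)       # centre the neighbourhood
        # the top-d right singular vectors span the estimated tangent plane
        _, _, Vt = np.linalg.svd(nb, full_matrices=False)
        bases[i] = Vt[:d].T
    return bases
```

In the "global part", such local characteristics would then be aligned into global low-dimensional coordinates by the convex optimization the abstract mentions (as in Local Tangent Space Alignment, for example).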