Andras, PE (2018) Random Projection Neural Network Approximation. 2018 International Joint Conference on Neural Networks (IJCNN 2018).

Andras - IJCNN 2018.pdf - Accepted Version


Neural networks are often used to approximate functions defined over high-dimensional data spaces (e.g. text data, genomic data, multi-sensor data). Such approximation tasks are usually difficult due to the curse of dimensionality, and improved methods are needed to deal with them effectively and efficiently. Since the data generally resides on a lower-dimensional manifold, various methods have been proposed to first project the data into a lower dimension and then build the neural network approximation over this lower-dimensional projection data space. Here we follow this approach and combine it with the idea of weak learning through the use of random projections of the data. We show that random projection of the data works well and that the approximation errors are smaller than in the case of approximating the functions in the original data space. We explore the random projections with the aim of optimizing this approach.
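The approach described in the abstract can be sketched in a few lines: project high-dimensional data through a random matrix and then fit a neural approximator over the projected space. The sketch below is illustrative only, not the paper's experimental setup; the dimensions, the Gaussian projection, and the random-feature "network" used as the approximator are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# High-dimensional inputs that actually lie near a low-dimensional manifold:
# 3 latent factors embedded linearly into 100 dimensions, plus small noise.
n, d, latent = 500, 100, 3
Z = rng.normal(size=(n, latent))
A = rng.normal(size=(latent, d))
X = Z @ A + 0.01 * rng.normal(size=(n, d))

# A target function over the data space (a smooth function of the latents).
y = np.sin(Z[:, 0]) + Z[:, 1] * Z[:, 2]

# Gaussian random projection to k dimensions (entries scaled by 1/sqrt(k)).
k = 10
R = rng.normal(size=(d, k)) / np.sqrt(k)
Xp = X @ R  # projected data, shape (n, k)

def fit_predict(features, targets, hidden=200):
    """Stand-in approximator: fixed random hidden layer with tanh units,
    output weights fitted by least squares (not the paper's network)."""
    W = rng.normal(size=(features.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(features @ W + b)              # hidden-layer activations
    w, *_ = np.linalg.lstsq(H, targets, rcond=None)  # output weights
    return H @ w

# Compare approximation error over the original vs. projected data space.
err_orig = np.mean((fit_predict(X, y) - y) ** 2)
err_proj = np.mean((fit_predict(Xp, y) - y) ** 2)
print(f"MSE original space: {err_orig:.4f}, projected space: {err_proj:.4f}")
```

Because the projection is random, a practical implementation would typically average the errors over several independent projection matrices before comparing the two settings.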

Item Type: Article
Additional Information: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Uncontrolled Keywords: function approximation, high-dimensional, neural network, random projection, weak learning
Divisions: Faculty of Natural Sciences > School of Computing and Mathematics
Depositing User: Symplectic
Date Deposited: 20 Mar 2018 16:56
Last Modified: 31 Mar 2021 09:34
