AUTOENCODER NEURAL NETWORKS
Chun Chet Tan and Chikkannan Eswaran

96 pages. 2010.
LAP Lambert Academic Publishing
Autoencoders are feedforward neural networks that can have more than one hidden layer. These networks attempt to reconstruct their input at the output layer. Since the hidden layer of an autoencoder is smaller than the input layer, the input data is reduced to a lower-dimensional code space at the hidden layer. Training a multilayer autoencoder is difficult, however, because the weights in the deep hidden layers receive only very small updates and are therefore hardly optimized. This work focuses on the characteristics, training, and performance evaluation of autoencoders. The concepts of stacking and the Restricted Boltzmann Machine are also discussed in detail. Two datasets, the ORL face dataset and the MNIST handwritten digit dataset, are employed in the experiments. The performance of the autoencoders is also compared with that of PCA, and it is shown that autoencoders can be used for image compression. The compression...
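The idea described above — a hidden layer smaller than the input, trained to reconstruct that input — can be sketched in a few lines of NumPy. This is a minimal illustrative single-hidden-layer autoencoder, not the authors' implementation; the dataset, dimensions, and learning rate are all assumptions standing in for the ORL/MNIST experiments in the book.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data lying on a 3-dimensional manifold, standing in for the
# ORL / MNIST images used in the book (all dimensions are illustrative).
n_in, n_code = 8, 3
Z = rng.random((100, n_code))
X = sigmoid(Z @ rng.normal(0, 2, (n_code, n_in)))

# Encoder and decoder weights; the hidden (code) layer is
# deliberately smaller than the input layer.
W1 = rng.normal(0, 0.1, (n_in, n_code)); b1 = np.zeros(n_code)
W2 = rng.normal(0, 0.1, (n_code, n_in)); b2 = np.zeros(n_in)

def forward(X):
    H = sigmoid(X @ W1 + b1)            # compress to the code space
    return H, sigmoid(H @ W2 + b2)      # reconstruct the input

_, Y0 = forward(X)
mse0 = np.mean((Y0 - X) ** 2)           # reconstruction error before training

# Plain full-batch gradient descent on the mean squared
# reconstruction error.
lr = 1.0
for _ in range(3000):
    H, Y = forward(X)
    dY = (Y - X) * Y * (1 - Y)          # MSE gradient through the sigmoid
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY / len(X); b2 -= lr * dY.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

code, recon = forward(X)                # code: the compressed representation
mse = np.mean((recon - X) ** 2)
```

After training, `code` holds the 3-dimensional compressed representation of each 8-dimensional input, and the reconstruction error is lower than it was at initialization. The vanishing-update problem the blurb mentions appears when more hidden layers are stacked, which is what motivates RBM-based layer-wise pretraining.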