BOOK WORLD

Efficient Data Deduplication in Hadoop
Priteshkumar Prajapati and Parth Shah

84 pages. 2015.
LAP Lambert Academic Publishing
Hadoop is widely used for massively distributed data storage. Although it is highly fault tolerant, scalable, and runs on commodity hardware, it does not provide an efficient or optimized data storage solution. When a user uploads files with identical contents, Hadoop stores them all in HDFS (Hadoop Distributed File System), duplicating the contents and wasting storage space. Data deduplication is a process that reduces the required storage capacity by storing only unique instances of data. Deduplication is widely used in file servers, database management systems, backup storage, and many other storage solutions. A proper deduplication strategy makes efficient use of the available space on limited storage devices. Since Hadoop does not provide a data deduplication solution of its own, this work integrates a deduplication module into the Hadoop framework to achieve optimized data storage.
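The approach the abstract describes can be illustrated with a minimal sketch of file-level, content-hash-based deduplication in front of HDFS. This is not the module from the book, only an assumption of how such a check might look: the class name DedupUploader, the /dedup/index/ marker directory, and the choice of SHA-256 are all hypothetical; only the Hadoop FileSystem client calls (get, exists, copyFromLocalFile, create) are the real API.

// Minimal illustrative sketch: skip an HDFS upload when a file with the
// same content hash has already been stored. Not the book's actual module.
import java.io.FileInputStream;
import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DedupUploader {
    // Hypothetical HDFS directory that records the hashes of stored content.
    private static final String DEDUP_INDEX = "/dedup/index/";

    /** Computes the SHA-256 digest of a local file as a hex string. */
    static String contentHash(String localPath)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (FileInputStream in = new FileInputStream(localPath)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    /** Uploads the file only if its content hash has not been seen before. */
    static void uploadIfUnique(String localPath, String hdfsDir) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path marker = new Path(DEDUP_INDEX + contentHash(localPath));
        if (fs.exists(marker)) {
            System.out.println("Duplicate content, skipping upload: " + localPath);
            return;
        }
        fs.copyFromLocalFile(new Path(localPath), new Path(hdfsDir));
        fs.create(marker).close(); // record the hash so future duplicates are skipped
    }

    public static void main(String[] args) throws Exception {
        uploadIfUnique(args[0], args[1]);
    }
}

A file-level scheme like this only catches whole-file duplicates; block- or chunk-level deduplication splits files before hashing and can also reclaim space shared between files that are merely similar.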
 