Evaluating parallel minibatch training for machine learning applications

Research output: Chapter in Book/Report/Conference proceedings › Conference contribution › peer-review

Abstract

The amount of data available for analytics applications continues to rise. At the same time, there are application areas where security and privacy concerns prevent liberal dissemination of data. Both factors motivate the hypothesis that machine learning algorithms may benefit from parallelizing the training process (for large amounts of data) and/or distributing the training process (for sensitive data that cannot be shared). We investigate this hypothesis by considering two real-world machine learning tasks (logistic regression and sparse autoencoder), and empirically test how a model's performance changes when its parameters are set to the arithmetic means of the parameters of models trained on minibatches, i.e., horizontally split portions of the data set. We observe that iterating the minibatch training and parameter averaging process a small number of times yields models whose performance is only slightly worse than that of models trained on the full data sets.
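The averaging scheme described in the abstract is straightforward to prototype. Below is a minimal sketch in Python/NumPy of iterated minibatch training with parameter averaging for logistic regression; the function names, shard count, learning rate, round count, and synthetic data are all illustrative assumptions, not details taken from the paper.

```python
# Sketch of parallel minibatch training with parameter averaging:
# train logistic regression independently on horizontal splits of the
# data, set the parameters to the arithmetic mean of the shard-local
# parameters, and iterate. Names and hyperparameters are hypothetical.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_on_shard(w, X, y, lr=0.1, epochs=5):
    """Run a few epochs of gradient descent on one shard (minibatch)."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

def parallel_minibatch_train(X, y, n_shards=4, rounds=3):
    """Iterate shard-local training and arithmetic parameter averaging."""
    w = np.zeros(X.shape[1])
    shards = list(zip(np.array_split(X, n_shards),
                      np.array_split(y, n_shards)))
    for _ in range(rounds):
        # In a real deployment each shard would be trained in parallel,
        # possibly on a separate machine holding its private data.
        local = [train_on_shard(w, Xs, ys) for Xs, ys in shards]
        w = np.mean(local, axis=0)  # arithmetic mean of shard parameters
    return w

# Toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
w = parallel_minibatch_train(X, y)
```

Only the averaged parameter vector crosses shard boundaries in each round, which is what makes the scheme relevant to the privacy-sensitive setting the paper considers.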

Original language: English
Title of host publication: Computer Aided Systems Theory – EUROCAST 2017 - 16th International Conference, Revised Selected Papers
Editors: Roberto Moreno-Diaz, Alexis Quesada-Arencibia, Franz Pichler
Publisher: Springer
Pages: 400-407
Number of pages: 8
ISBN (Print): 9783319747170
DOIs
Publication status: Published - 2018
Event: 16th International Conference on Computer Aided Systems Theory, EUROCAST 2017 - Las Palmas de Gran Canaria, Spain
Duration: 19 Feb 2017 – 24 Feb 2017

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 10671 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 16th International Conference on Computer Aided Systems Theory, EUROCAST 2017
Country/Territory: Spain
City: Las Palmas de Gran Canaria
Period: 19.02.2017 – 24.02.2017

Keywords

  • Distributed machine learning
  • Logistic regression
  • Minibatch training
  • Sparse autoencoders
