
Datasets

Load datasets


Description

These functions each return a named list with elements X_train, X_test, Y_train, and Y_test. The first call to a given function downloads the dataset locally; thereafter it is loaded from the keras cache directory.
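As a minimal sketch of the return value, assuming kerasR is installed with a working keras backend, the list can be inspected and unpacked directly:

```r
library(kerasR)

if (keras_available()) {
  # the first call downloads MNIST; later calls read the keras cache
  mnist <- load_mnist()

  # the list contains X_train, X_test, Y_train, and Y_test
  names(mnist)

  # inspect the shape of the training inputs
  dim(mnist$X_train)
}
```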

Usage

load_cifar10()

load_cifar100(label_mode = "fine")

load_imdb(num_words = NULL, skip_top = 0, maxlen = NULL, seed = 113,
  start_char = 1, oov_char = 2, index_from = 3)

load_reuters(num_words = NULL, skip_top = 0, maxlen = 1000,
  test_split = 0.2, seed = 113, start_char = 1, oov_char = 2,
  index_from = 3)

load_mnist()

load_boston_housing()

Arguments

label_mode

either "fine" or "coarse"; how to construct labels for load_cifar100.

num_words

integer or NULL. Number of top most frequent words to consider. Any less frequent word will appear as oov_char in the sequence data.

skip_top

integer. Number of top most frequent words to ignore (they will appear as oov_char in the sequence data).

maxlen

integer. Maximum sequence length. Any longer sequence will be truncated.

seed

integer. Seed for reproducible data shuffling.

start_char

integer. The start of a sequence is marked with this character. Defaults to 1 because 0 is usually the padding character.

oov_char

integer. Words that were cut out because of the num_words or skip_top limits will be replaced with this character.

index_from

integer. Actual words are indexed starting from this value; lower indices are reserved for special characters such as the padding, start, and out-of-vocabulary markers.

test_split

float. Fraction of the dataset to use for testing.
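To illustrate how these arguments interact, the following sketch (assuming kerasR is installed with a working keras backend) keeps only the 5000 most frequent words of the IMDB data while skipping the 10 most common, so that skipped and out-of-range words both appear as the default oov_char value of 2:

```r
library(kerasR)

if (keras_available()) {
  # keep the 5000 most frequent words, drop the 10 most common,
  # and truncate reviews longer than 250 tokens
  imdb <- load_imdb(num_words = 5000, skip_top = 10, maxlen = 250)

  # each element of X_train is an integer sequence; indices for
  # actual words start at index_from (3 by default)
  length(imdb$X_train)
}
```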

Author(s)

Taylor B. Arnold, taylor.arnold@acm.org

Examples

if (keras_available()) {
  # load the Boston housing data; returns a named list with
  # X_train, X_test, Y_train, and Y_test
  boston <- load_boston_housing()
  X_train <- normalize(boston$X_train, 0)
  Y_train <- boston$Y_train
  X_test <- normalize(boston$X_test, 0)
  Y_test <- boston$Y_test

  # a small fully connected network for regression on the
  # 13 input features
  mod <- Sequential()
  mod$add(Dense(units = 200, input_shape = 13))
  mod$add(Activation("relu"))
  mod$add(Dense(units = 200))
  mod$add(Activation("relu"))
  mod$add(Dense(units = 1))
  keras_compile(mod, loss = 'mse', optimizer = SGD())

  # standardize the inputs and fit, holding out 10% for validation
  keras_fit(mod, scale(X_train), Y_train,
            batch_size = 32, epochs = 20,
            verbose = 1, validation_split = 0.1)
}

kerasR: R Interface to the Keras Deep Learning Library

Version 0.6.1 (initial release), license LGPL-2

Authors: Taylor Arnold [aut, cre]
