multiml.task.keras package
Subpackages
- multiml.task.keras.modules package
- Submodules
- multiml.task.keras.modules.base_model module
- multiml.task.keras.modules.connection_model module
- multiml.task.keras.modules.conv2d module
- multiml.task.keras.modules.darts_model module
- multiml.task.keras.modules.ensemble module
- multiml.task.keras.modules.functional_model module
- multiml.task.keras.modules.mlp module
- multiml.task.keras.modules.softmax_dense_layer module
- Module contents
Submodules
Module contents
- class multiml.task.keras.KerasBaseTask(run_eagerly=None, callbacks=['EarlyStopping', 'ModelCheckpoint'], save_tensorboard=False, **kwargs)
Bases:
MLBaseTask
Base task for Keras model.
Examples
>>> # your keras model
>>> class MyKerasModel(Model):
>>>     def __init__(self, units=1):
>>>         super(MyKerasModel, self).__init__()
>>>
>>>         self.dense = Dense(units)
>>>         self.relu = ReLU()
>>>
>>>     def call(self, x):
>>>         return self.relu(self.dense(x))
>>>
>>> # create task instance
>>> task = KerasBaseTask(storegate=storegate,
>>>                      model=MyKerasModel,
>>>                      input_var_names=('x0', 'x1'),
>>>                      output_var_names='outputs-keras',
>>>                      true_var_names='labels',
>>>                      optimizer='adam',
>>>                      optimizer_args=dict(lr=0.1),
>>>                      loss='binary_crossentropy')
>>> task.set_hps({'num_epochs': 5})
>>> task.execute()
>>> task.finalize()
- __init__(run_eagerly=None, callbacks=['EarlyStopping', 'ModelCheckpoint'], save_tensorboard=False, **kwargs)
- Parameters:
run_eagerly (bool) – Run on eager execution mode (not graph mode).
callbacks (list(str or keras.Callback)) – callbacks for Keras model training. The predefined callbacks (EarlyStopping, ModelCheckpoint and TensorBoard) can be selected by str; other user-defined callbacks should be given as keras.Callback objects.
save_tensorboard (bool) – use the TensorBoard callback in training.
**kwargs – Arbitrary keyword arguments.
- compile_model()
Compile keras model.
- compile_loss()
Compile keras loss.
- load_model()
Load pre-trained keras model weights.
- dump_model(extra_args=None)
Dump current keras model.
- fit(train_data=None, valid_data=None)
Train the model.
- Returns:
training results.
- Return type:
dict
- predict(data=None, phase=None)
Evaluate model prediction.
- Parameters:
phase (str) – data type (train, valid, test or None)
- Returns:
prediction by the model
- Return type:
ndarray
- get_inputs()
Returns keras Input from input_var_names.
- class multiml.task.keras.MLPTask(input_shapes=None, layers=None, activation=None, activation_last=None, kernel_regularizer=None, bias_regularizer=None, batch_norm=False, **kwargs)
Bases:
KerasBaseTask
Keras MLP task.
- __init__(input_shapes=None, layers=None, activation=None, activation_last=None, kernel_regularizer=None, bias_regularizer=None, batch_norm=False, **kwargs)
- Parameters:
input_shapes (tuple) – shape for Keras.Inputs
layers (list) – list of hidden layers
activation (str) – activation function for MLP
activation_last (str) – activation function in last layer
kernel_regularizer (str) – kernel regularizer
bias_regularizer (str) – bias regularizer
batch_norm (bool) – use batch normalization
**kwargs – Arbitrary keyword arguments
- set_hps(hps)
Set hyperparameters. Available hyperparameter names are given by the get_hyperparameters method.
- Parameters:
hps (dict) – (hyperparameter name => hyperparameter value)
- build_model()
Build a Keras MLP model.
- get_inputs()
Returns keras Input from input_var_names.
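As an illustration of the constructor arguments above, a minimal sketch in plain Python (the values are hypothetical; actually building the task requires a configured multiml storegate):

```python
# Hypothetical MLPTask arguments; names follow the signature above,
# values are illustrative only.
mlp_args = dict(
    input_shapes=(4,),          # shape for keras Input
    layers=[64, 64, 1],         # hidden layer widths, last entry = output units
    activation="relu",          # activation for hidden layers
    activation_last="sigmoid",  # activation for the last layer
    batch_norm=True,            # insert batch normalization between layers
)
# task = MLPTask(**mlp_args, storegate=storegate)  # requires multiml
```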
- class multiml.task.keras.Conv2DTask(conv2d_layers=None, **kwargs)
Bases:
MLPTask
Keras Conv2D task.
- __init__(conv2d_layers=None, **kwargs)
- Parameters:
conv2d_layers (list) – list of conv2d layer configs (op_name, op_args).
**kwargs – Arbitrary keyword arguments
- build_model()
Build a Keras Conv2D model.
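The expected shape of conv2d_layers can be sketched as a list of (op_name, op_args) pairs; the operation names below are illustrative assumptions, not a confirmed registry of supported ops:

```python
# Hypothetical conv2d layer configs as (op_name, op_args) pairs.
conv2d_layers = [
    ("conv2d", dict(filters=16, kernel_size=(3, 3), activation="relu")),
    ("maxpooling2d", dict(pool_size=(2, 2))),
    ("conv2d", dict(filters=32, kernel_size=(3, 3), activation="relu")),
    ("flatten", {}),
]
# task = Conv2DTask(conv2d_layers=conv2d_layers, storegate=storegate)  # requires multiml
```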
- class multiml.task.keras.EnsembleTask(subtasks, dropout_rate=None, individual_loss=False, individual_loss_weights=0.0, **kwargs)
Bases:
KerasBaseTask
- __init__(subtasks, dropout_rate=None, individual_loss=False, individual_loss_weights=0.0, **kwargs)
- Parameters:
subtasks (list) – list of task instances.
dropout_rate (float) – dropout_rate for ensemble weights. If None, no dropout.
individual_loss (bool) – use multiple outputs
individual_loss_weights (float) – coefficient for multiple outputs
**kwargs – Arbitrary keyword arguments
- compile()
Compile model, optimizer and loss.
Compiled objects will be available via self.ml.model, self.ml.optimizer and self.ml.loss.
Examples
>>> # compile all together,
>>> self.compile()
>>> # which is equivalent to:
>>> self.build_model()       # set self._model
>>> self.compile_model()     # set self.ml.model
>>> self.compile_optimizer() # set self.ml.optimizer
>>> self.compile_loss()      # set self.ml.loss
- compile_loss()
Compile keras loss.
- build_model()
Build model.
- get_input_true_data(phase)
Get input and true data.
- Parameters:
phase (str) – data type (train, valid, test or None).
- Returns:
(input, true) data for model.
- Return type:
tuple
- get_inputs()
Returns keras Input from input_var_names.
- get_submodel_names()
Returns subtask_id used in ensembling.
- Returns:
list of subtask_id
- Return type:
list (str)
- get_submodel(i_models)
Get a submodel by model index.
- Parameters:
i_models (int) – submodel index
- Returns:
submodel for the input index
- Return type:
- static get_ensemble_weights(model)
Collect ensemble_weights in the keras model.
- Parameters:
model (keras.Model) – model to collect ensemble weights from.
- Returns:
list of ensemble weights
- Return type:
list (tf.Variable)
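The ensemble weights collected above enter a weighted combination of the submodel outputs. A numpy sketch of that idea, assuming a softmax-normalized weighted sum (the actual in-graph implementation may differ):

```python
import numpy as np

def ensemble_combine(outputs, weights):
    """Combine submodel outputs with softmax-normalized ensemble weights.

    outputs: list of np.ndarray, one per submodel (same shape).
    weights: 1-D array of raw (trainable) ensemble weights.
    """
    w = np.exp(weights - np.max(weights))  # numerically stable softmax
    w = w / w.sum()
    return sum(wi * out for wi, out in zip(w, outputs))

# Equal raw weights reduce to a plain average of the submodel outputs.
combined = ensemble_combine([np.array([1.0]), np.array([3.0])],
                            np.array([0.0, 0.0]))
```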
- class multiml.task.keras.ModelConnectionTask(subtasks, loss_weights=None, variable_mapping=None, **kwargs)
Bases:
ModelConnectionTask, KerasBaseTask
Keras implementation of ModelConnectionTask.
- build_model()
Build model.
- class multiml.task.keras.DARTSTask(optimizer_alpha, optimizer_weight, learning_rate_alpha, learning_rate_weight, zeta=0.01, **kwargs)
Bases:
ModelConnectionTask
- __init__(optimizer_alpha, optimizer_weight, learning_rate_alpha, learning_rate_weight, zeta=0.01, **kwargs)
- Parameters:
optimizer_alpha (str) – optimizer for alpha in the DARTS optimization
optimizer_weight (str) – optimizer for weights in the DARTS optimization
learning_rate_alpha (float) – learning rate (epsilon) for alpha in the DARTS optimization
learning_rate_weight (float) – learning rate (epsilon) for weights in the DARTS optimization
zeta (float) – zeta parameter in DARTS optimization
**kwargs – Arbitrary keyword arguments
- fit(train_data=None, valid_data=None)
Train the model.
- Returns:
training results.
- Return type:
dict
- load_metadata()
Load metadata.
- get_best_submodels()
Returns indices of the best submodels determined by the alpha.
- Returns:
list of index of the selected submodels
- Return type:
list (int)
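The selection rule can be sketched as an argmax over each alpha vector, following the DARTS discretization step (variable names are illustrative, not the library's internals):

```python
import numpy as np

def best_submodels(alphas):
    """Pick the submodel index with the largest architecture weight per task.

    alphas: list of 1-D arrays, one per connection point, each holding the
    architecture weights (alpha) over that point's candidate submodels.
    """
    return [int(np.argmax(alpha)) for alpha in alphas]

indices = best_submodels([np.array([0.1, 0.9]),
                          np.array([0.7, 0.2, 0.1])])
```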
- build_model()
Build model.
- dump_model(extra_args=None)
Dump current DARTS model.