multiml.task.pytorch.modules package

Submodules

Module contents

class multiml.task.pytorch.modules.AdaptiveSNG(categories=None, integers=None, alpha=1.5, delta_init=1.0, lam=2, delta_max=inf, init_theta_cat=None, init_theta_int=None, threshold=0.1, patience=-1, range_restriction=True)

Bases: object

Adaptive Stochastic Natural Gradient for Categorical Distribution.

__init__(categories=None, integers=None, alpha=1.5, delta_init=1.0, lam=2, delta_max=inf, init_theta_cat=None, init_theta_int=None, threshold=0.1, patience=-1, range_restriction=True)

Initialize AdaptiveSNG.

get_lambda()
check_converge()
converge_counter()
update_parameters(fnorm_cat, fnorm_int, hstack)
most_likely_value()
get_thetas()
set_thetas(theta_cat, theta_int)
sampling()
update_theta(c_cat, c_int, losses)
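The update cycle (sampling(), evaluate losses, update_theta()) follows the adaptive stochastic natural gradient algorithm. As a standalone sketch of the underlying rank-based natural-gradient step on a categorical distribution (plain NumPy, not the multiml API; `sng_update_categorical` and all constants are illustrative):

```python
import numpy as np

def sng_update_categorical(theta, samples, losses, delta=0.1):
    """One rank-based stochastic natural-gradient step on a categorical
    distribution. theta: (n_categories,) probabilities; samples: (lam,)
    sampled category indices; losses: (lam,) loss per sample (lower is better)."""
    lam = len(samples)
    # Rank-based utilities: the best sample gets +1, the worst gets -1.
    order = np.argsort(losses)
    util = np.zeros(lam)
    util[order[0]] = 1.0
    util[order[-1]] = -1.0
    # Natural gradient of log-likelihood for a categorical: one_hot(sample) - theta.
    grad = np.zeros_like(theta)
    for u, s in zip(util, samples):
        one_hot = np.zeros_like(theta)
        one_hot[s] = 1.0
        grad += u * (one_hot - theta)
    theta = theta + delta * grad / lam
    # Project back onto the simplex (clip, then renormalize).
    theta = np.clip(theta, 1e-6, None)
    return theta / theta.sum()

rng = np.random.default_rng(0)
theta = np.full(3, 1.0 / 3.0)          # three candidate operations
true_loss = np.array([0.9, 0.1, 0.5])  # category 1 is best

for _ in range(200):
    samples = rng.choice(3, size=2, p=theta)  # lam=2, as in the default above
    losses = true_loss[samples] + 0.01 * rng.standard_normal(2)
    theta = sng_update_categorical(theta, samples, losses)

print(int(np.argmax(theta)))  # 1: the distribution concentrates on the best category
```

When both samples fall in the same category the utilities cancel and the step is zero, which is how the distribution stabilizes once it has converged.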
class multiml.task.pytorch.modules.AdaptiveSNG_cat(categories=None, integers=None, alpha=1.5, delta_init=1.0, lam=2, delta_max=inf, init_theta_cat=None, init_theta_int=None, threshold=0.1, patience=-1, range_restriction=True)

Bases: AdaptiveSNG

check_converge()
most_likely_value()
get_thetas()
set_thetas(theta_cat, theta_int)
sampling()
update_theta(c_cat, c_int, losses)
class multiml.task.pytorch.modules.AdaptiveSNG_int(categories=None, integers=None, alpha=1.5, delta_init=1.0, lam=2, delta_max=inf, init_theta_cat=None, init_theta_int=None, threshold=0.1, patience=-1, range_restriction=True)

Bases: AdaptiveSNG

check_converge()
most_likely_value()
get_thetas()
set_thetas(theta_cat, theta_int)
sampling()
update_theta(c_cat, c_int, losses)
class multiml.task.pytorch.modules.ASNGModel(lam, delta_init_factor, alpha=1.5, range_restriction=True, *args, **kwargs)

Bases: ConnectionModel, Module

__init__(lam, delta_init_factor, alpha=1.5, range_restriction=True, *args, **kwargs)
Parameters:
  • *args – Variable length argument list

  • **kwargs – Arbitrary keyword arguments

set_most_likely()
set_fix(fix)
get_most_likely()
update_theta(losses)
get_thetas()
set_thetas(theta_cat, theta_int)
best_models()
forward_fix(inputs)
forward_sampling(inputs)
training: bool
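forward_sampling() draws a candidate architecture from the current categorical parameters, while forward_fix() runs the architecture frozen via set_most_likely(). A minimal standalone sketch of that workflow (`TinyASNGBlock` is a hypothetical stand-in, not the multiml class):

```python
import torch
import torch.nn as nn

class TinyASNGBlock(nn.Module):
    """Sketch of the sampling/fix workflow: forward_sampling draws one
    candidate per the categorical parameters theta, forward_fix always
    runs the most likely candidate."""
    def __init__(self, models):
        super().__init__()
        self.models = nn.ModuleList(models)
        # theta: categorical distribution over candidate sub-models
        self.register_buffer("theta", torch.full((len(models),), 1.0 / len(models)))

    def forward_sampling(self, x):
        idx = int(torch.multinomial(self.theta, 1).item())
        return self.models[idx](x)

    def forward_fix(self, x):
        return self.models[int(torch.argmax(self.theta))](x)

block = TinyASNGBlock([nn.Linear(4, 2), nn.Linear(4, 2)])
x = torch.randn(3, 4)
y_fix = block.forward_fix(x)       # always the argmax(theta) candidate
y_smp = block.forward_sampling(x)  # a candidate drawn from theta
print(y_fix.shape, y_smp.shape)
```

During the search phase forward_sampling is used while theta is updated from the sampled losses; once theta has converged, forward_fix freezes the chosen architecture for final training.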
class multiml.task.pytorch.modules.ChoiceBlockModel(models, *args, **kwargs)

Bases: Module

__init__(models, *args, **kwargs)
Parameters:

models (list(torch.nn.Module)) – list of PyTorch models for the choice block

property choice
forward(x)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
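A choice block holds several candidate sub-models and forwards through the one selected by choice. A minimal standalone sketch (`ChoiceBlock` here is illustrative, not the multiml implementation):

```python
import torch
import torch.nn as nn

class ChoiceBlock(nn.Module):
    """Holds candidate sub-models; forward runs only the selected one."""
    def __init__(self, models):
        super().__init__()
        self.models = nn.ModuleList(models)
        self._choice = 0

    @property
    def choice(self):
        return self._choice

    @choice.setter
    def choice(self, index):
        self._choice = index

    def forward(self, x):
        return self.models[self._choice](x)

block = ChoiceBlock([nn.Linear(4, 2),
                     nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 2))])
x = torch.randn(3, 4)
y0 = block(x)      # runs the first candidate
block.choice = 1
y1 = block(x)      # runs the second candidate
print(y0.shape, y1.shape)
```

Registering the candidates in an nn.ModuleList keeps all of their parameters visible to the optimizer, even though only one candidate runs per forward pass.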
class multiml.task.pytorch.modules.ConnectionModel(*args, **kwargs)

Bases: ConnectionModel, Module

__init__(*args, **kwargs)
Parameters:
  • *args – Variable length argument list

  • **kwargs – Arbitrary keyword arguments

forward(inputs)


training: bool
class multiml.task.pytorch.modules.Conv2DBlock(layers_conv2d=None, initialize=True, *args, **kwargs)

Bases: Module

__init__(layers_conv2d=None, initialize=True, *args, **kwargs)
Parameters:
  • layers_conv2d (list(tuple(str, dict))) – configs of the Conv2d layers, given as a list of tuple(op_name, op_args).

  • *args – Variable length argument list

  • **kwargs – Arbitrary keyword arguments

forward(x)


training: bool
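The layers_conv2d config names ops by string together with their keyword arguments. A minimal sketch of how such a (op_name, op_args) list can be resolved (assuming the names are looked up on torch.nn; this is not necessarily multiml's exact resolution logic):

```python
import torch
import torch.nn as nn

# layers_conv2d-style config: (op_name, op_args) pairs resolved against torch.nn
layers_conv2d = [
    ("Conv2d", dict(in_channels=1, out_channels=4, kernel_size=3, padding=1)),
    ("ReLU", {}),
    ("MaxPool2d", dict(kernel_size=2)),
]

# Instantiate each op by name and chain them into one block
block = nn.Sequential(*[getattr(nn, name)(**args) for name, args in layers_conv2d])

out = block(torch.randn(2, 1, 8, 8))
print(out.shape)  # torch.Size([2, 4, 4, 4])
```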
class multiml.task.pytorch.modules.LSTMBlock(layers, activation=None, batch_norm=False, initialize=True, *args, **kwargs)

Bases: Module

__init__(layers, activation=None, batch_norm=False, initialize=True, *args, **kwargs)
Parameters:
  • layers (list) – list of hidden layers

  • activation (str) – activation function

  • batch_norm (bool) – use batch normalization

  • *args – Variable length argument list

  • **kwargs – Arbitrary keyword arguments

forward(x)


training: bool
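Assuming layers lists the feature widths of stacked LSTM layers (an assumption; the source does not spell this out), a minimal standalone sketch:

```python
import torch
import torch.nn as nn

# Hypothetical width list: 8 input features -> 16 hidden -> 4 output features
layers = [8, 16, 4]
lstms = nn.ModuleList(
    nn.LSTM(layers[i], layers[i + 1], batch_first=True)
    for i in range(len(layers) - 1)
)

x = torch.randn(2, 5, 8)  # (batch, seq_len, features)
for lstm in lstms:
    x, _ = lstm(x)        # keep the output sequence, drop (h_n, c_n)
print(x.shape)            # torch.Size([2, 5, 4])
```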
class multiml.task.pytorch.modules.MLPBlock(layers, activation, activation_last=None, batch_norm=False, initialize=True, input_shape=None, output_shape=None, *args, **kwargs)

Bases: Module

__init__(layers, activation, activation_last=None, batch_norm=False, initialize=True, input_shape=None, output_shape=None, *args, **kwargs)
Parameters:
  • layers (list) – list of hidden layers

  • activation (str) – activation function for MLP

  • activation_last (str) – activation function for the MLP last layer

  • batch_norm (bool) – use batch normalization

  • *args – Variable length argument list

  • **kwargs – Arbitrary keyword arguments

forward(x)


training: bool
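A minimal sketch of an MLPBlock-style builder, assuming layers lists the layer widths and activation/activation_last name torch.nn classes (`build_mlp` is illustrative, not the multiml implementation):

```python
import torch
import torch.nn as nn

def build_mlp(layers, activation="ReLU", activation_last=None, batch_norm=False):
    """Build Linear layers from a width list; `activation` names a torch.nn
    class applied between hidden layers, `activation_last` after the last."""
    mods = []
    for i in range(len(layers) - 1):
        mods.append(nn.Linear(layers[i], layers[i + 1]))
        last = i == len(layers) - 2
        if batch_norm and not last:
            mods.append(nn.BatchNorm1d(layers[i + 1]))
        act = activation_last if last else activation
        if act is not None:
            mods.append(getattr(nn, act)())
    return nn.Sequential(*mods)

mlp = build_mlp([4, 16, 2], activation="ReLU")
out = mlp(torch.randn(8, 4))
print(out.shape)  # torch.Size([8, 2])
```

Resolving activation names via getattr(nn, act) matches the string-based style of the parameter docs above; by default no activation follows the final layer.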
class multiml.task.pytorch.modules.ASNGTaskBlockModel(name, models, *args, **kwargs)

Bases: Module

__init__(name, models, *args, **kwargs)
Parameters:

models (list(torch.nn.Module)) – list of PyTorch models for the choice block

n_subtask()
set_prob(c_cat, c_int)
forward(x)


training: bool