scgpt.model
Submodules
scgpt.model.dsbn
- class scgpt.model.dsbn.DomainSpecificBatchNorm1d(num_features: int, num_domains: int, eps: float = 1e-05, momentum: float = 0.1, affine: bool = True, track_running_stats: bool = True)[source]
Bases: _DomainSpecificBatchNorm
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- property bn_handle: Module
- class scgpt.model.dsbn.DomainSpecificBatchNorm2d(num_features: int, num_domains: int, eps: float = 1e-05, momentum: float = 0.1, affine: bool = True, track_running_stats: bool = True)[source]
Bases: _DomainSpecificBatchNorm
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- property bn_handle: Module
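- Example (for DomainSpecificBatchNorm1d): a minimal sketch; the integer domain-index argument to the forward call is an assumption based on typical domain-specific batch norm implementations and should be checked against the source.
>>> import torch
>>> from scgpt.model.dsbn import DomainSpecificBatchNorm1d
>>> dsbn = DomainSpecificBatchNorm1d(num_features=512, num_domains=3)
>>> x = torch.randn(16, 512)  # (batch, num_features)
>>> out = dsbn(x, 1)  # assumed: second argument selects the BatchNorm1d of domain 1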
scgpt.model.flash_layers
- class scgpt.model.flash_layers.FlashscGPTGenerator(encoder_layer, num_layers, norm=None, mask_check=True)[source]
Bases: Module
TransformerEncoder is a stack of N encoder layers. Users can build the BERT (https://arxiv.org/abs/1810.04805) model with corresponding parameters.
- Parameters:
encoder_layer – an instance of the TransformerEncoderLayer() class (required).
num_layers – the number of sub-encoder-layers in the encoder (required).
norm – the layer normalization component (optional).
enable_nested_tensor – if True, input will automatically convert to nested tensor (and convert back on output). This will improve the overall performance of TransformerEncoder when padding rate is high. Default: True (enabled).
- Examples::
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
>>> transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)
>>> src = torch.rand(10, 32, 512)
>>> out = transformer_encoder(src)
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(pcpt_total_embs: Tensor, gen_total_embs: Tensor, pcpt_key_padding_mask: Tensor | None = None, gen_key_padding_mask: Tensor | None = None) → Tensor [source]
Pass the input through the encoder layers in turn.
- Parameters:
src – the sequence to the encoder (required).
mask – the mask for the src sequence (optional).
src_key_padding_mask – the mask for the src keys per batch (optional).
- Shape:
see the docs in Transformer class.
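- Example (a hedged sketch: FlashAttention kernels generally require the flash-attn package and a CUDA device in half precision, and the exact return structure should be checked against the forward implementation):
>>> import torch
>>> from scgpt.model.flash_layers import FlashscGPTLayer, FlashscGPTGenerator
>>> layer = FlashscGPTLayer(d_model=512, nhead=8, batch_first=True)
>>> encoder = FlashscGPTGenerator(layer, num_layers=6).cuda().half()
>>> pcpt_embs = torch.rand(4, 100, 512, device="cuda", dtype=torch.float16)  # perception stream
>>> gen_embs = torch.rand(4, 20, 512, device="cuda", dtype=torch.float16)    # generation stream
>>> out = encoder(pcpt_embs, gen_embs)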
- class scgpt.model.flash_layers.FlashscGPTLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation='relu', layer_norm_eps=1e-05, batch_first=True, device=None, dtype=None, norm_scheme='post')[source]
Bases: Module
TransformerEncoderLayer is made up of self-attn and feedforward network. The class is modified from torch.nn.TransformerEncoderLayer to support the FlashAttention.
- Parameters:
d_model – the number of expected features in the input (required).
nhead – the number of heads in the multiheadattention models (required).
dim_feedforward – the dimension of the feedforward network model (default=2048).
dropout – the dropout value (default=0.1).
activation – the activation function of intermediate layer, relu or gelu (default=relu).
layer_norm_eps – the eps value in layer normalization components (default=1e-5).
batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: True.
- Examples::
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
>>> src = torch.rand(10, 32, 512)
>>> out = encoder_layer(src)
- Alternatively, when batch_first is True:
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
>>> src = torch.rand(32, 10, 512)
>>> out = encoder_layer(src)
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(pcpt_total_embs: Tensor, gen_total_embs: Tensor, pcpt_key_padding_mask: Tensor | None = None, gen_key_padding_mask: Tensor | None = None) → Tensor [source]
Pass the input through the encoder layer.
- Parameters:
src – the sequence to the encoder layer (required).
src_mask – the mask for the src sequence (optional).
src_key_padding_mask – the mask for the src keys per batch (optional).
- Shape:
see the docs in Transformer class.
- class scgpt.model.flash_layers.FlashscGPTMHA(embed_dim, num_heads, bias=True, batch_first=True, attention_dropout=0.0, causal=False, device=None, dtype=None)[source]
Bases: Module
Custom MHA layer for scGPT. This takes two separate forward passes: one on the pcpt (perception) genes and one on the gen (generation) genes.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(pcpt_total_embs: Tensor, gen_total_embs: Tensor, pcpt_key_padding_mask: Tensor | None = None, gen_key_padding_mask: Tensor | None = None, need_weights=False)[source]
pcpt_total_embs: (batch, pcpt_len, hidden_dim) (where hidden_dim = num heads * head dim)
gen_total_embs: (batch, gen_len, hidden_dim)
pcpt_key_padding_mask: bool tensor of shape (batch, pcpt_len), 1 means valid and 0 means not valid.
gen_key_padding_mask: bool tensor of shape (batch, gen_len), 1 means valid and 0 means not valid.
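- Example (a sketch following the shapes above; assumes the flash-attn package and a CUDA device in half precision):
>>> import torch
>>> from scgpt.model.flash_layers import FlashscGPTMHA
>>> mha = FlashscGPTMHA(embed_dim=512, num_heads=8).cuda().half()
>>> pcpt = torch.rand(2, 100, 512, device="cuda", dtype=torch.float16)
>>> gen = torch.rand(2, 20, 512, device="cuda", dtype=torch.float16)
>>> pcpt_mask = torch.ones(2, 100, dtype=torch.bool, device="cuda")  # 1 = valid
>>> gen_mask = torch.ones(2, 20, dtype=torch.bool, device="cuda")
>>> out = mha(pcpt, gen, pcpt_mask, gen_mask)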
scgpt.model.generation_model
- class scgpt.model.generation_model.ClsDecoder(d_model: int, n_cls: int, nlayers: int = 3, activation: callable = <class 'torch.nn.modules.activation.ReLU'>)[source]
Bases: Module
Decoder for classification task.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- class scgpt.model.generation_model.GeneEncoder(num_embeddings: int, embedding_dim: int, padding_idx: int | None = None)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x: Tensor) → Tensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class scgpt.model.generation_model.PositionalEncoding(d_model: int, dropout: float = 0.1, max_len: int = 5000)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- class scgpt.model.generation_model.Similarity(temp)[source]
Bases: Module
Dot product or cosine similarity
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, y)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
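- Example (a minimal sketch, assuming the SimCSE-style broadcasting usage in which x and y expand to a pairwise similarity matrix scaled by the temperature):
>>> import torch
>>> from scgpt.model.generation_model import Similarity
>>> sim = Similarity(temp=0.5)
>>> x = torch.randn(8, 1, 128)  # e.g. cell embeddings
>>> y = torch.randn(1, 8, 128)
>>> scores = sim(x, y)          # pairwise similarities, shape (8, 8)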
- class scgpt.model.generation_model.TransformerGenerator(ntoken: int, d_model: int, nhead: int, d_hid: int, nlayers: int, nlayers_cls: int, n_cls: int, vocab: Any, dropout: float = 0.5, pad_token: str = '<pad>', pad_value: int = 0, pert_pad_id: int = 2, do_mvc: bool = False, domain_spec_batchnorm: bool | str = False, cell_emb_style: str = 'cls', mvc_decoder_style: str = 'inner product', ecs_threshold: float = 0.3, explicit_zero_prob: bool = False, use_fast_transformer: bool = False, fast_transformer_backend: str = 'flash', pre_norm: bool = False)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- encode_batch(src: Tensor, values: Tensor, src_key_padding_mask: Tensor, batch_size: int, output_to_cpu: bool = True) → Tensor [source]
- Parameters:
src – Tensor, shape [N, seq_len]
values – Tensor, shape [N, seq_len]
src_key_padding_mask – Tensor, shape [N, seq_len]
- Returns:
output Tensor of shape [N, seq_len, embsize]
- forward(src: Tensor, values: Tensor, input_pert_flags: Tensor, src_key_padding_mask: Tensor, CLS: bool = False, CCE: bool = False, MVC: bool = False, ECS: bool = False, do_sample: bool = False) → Mapping[str, Tensor] [source]
- Parameters:
src (Tensor) – token ids, shape [batch_size, seq_len]
values (Tensor) – token values, shape [batch_size, seq_len]
src_key_padding_mask (Tensor) – mask for src, shape [batch_size, seq_len]
CLS (bool) – if True, return the celltype classification objective (CLS) output
CCE (bool) – if True, return the contrastive cell embedding objective (CCE) output
MVC (bool) – if True, return the masked value prediction for cell embedding MVC output
ECS (bool) – if True, return the elastic cell similarity objective (ECS) output.
- Returns:
dict of output Tensors.
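- Example (a hedged sketch of a forward call on an already constructed model; the exact padding-mask convention and the keys of the returned dict should be checked against the implementation):
>>> import torch
>>> # `model` is a constructed scgpt.model.generation_model.TransformerGenerator
>>> src = torch.randint(0, 1000, (4, 1200))              # gene token ids
>>> values = torch.rand(4, 1200)                         # expression values
>>> pert_flags = torch.zeros(4, 1200, dtype=torch.long)  # perturbation flags
>>> pad_mask = torch.zeros(4, 1200, dtype=torch.bool)    # no padding in this toy batch
>>> out = model(src, values, pert_flags, pad_mask, CLS=False, MVC=False)
>>> # `out` is a dict of output Tensors as described above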
scgpt.model.grad_reverse
- class scgpt.model.grad_reverse.GradReverse(*args, **kwargs)[source]
Bases: Function
- static backward(ctx, grad_output: Tensor) → Tensor [source]
Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
- static forward(ctx, x: Tensor, lambd: float) → Tensor [source]
Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store arbitrary data that can be then retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.
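- Example (a minimal sketch, assuming the standard gradient-reversal behaviour: identity in the forward pass, gradient scaled by -lambd in the backward pass):
>>> import torch
>>> from scgpt.model.grad_reverse import GradReverse
>>> x = torch.randn(4, 16, requires_grad=True)
>>> y = GradReverse.apply(x, 1.0)  # forward pass leaves x unchanged
>>> y.sum().backward()             # x.grad now carries the reversed (negated, scaled) gradient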
scgpt.model.layers
- class scgpt.model.layers.MultiheadAttention(embed_dim, num_heads, dropout=0.0, batch_first=True, device=None, dtype=None)[source]
Bases: Module
Allows the model to jointly attend to information from different representation subspaces as described in the paper: Attention Is All You Need.
This module is modified from the original torch.nn.MultiheadAttention
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(query: Tensor, key: Tensor, value: Tensor, key_padding_mask: Tensor | None = None, need_weights: bool = True, attn_mask: Tensor | None = None, average_attn_weights: bool = True) → Tuple[Tensor, Tensor | None] [source]
- Parameters:
query – Query embeddings of shape \((L, E_q)\) for unbatched input, \((L, N, E_q)\) when batch_first=False or \((N, L, E_q)\) when batch_first=True, where \(L\) is the target sequence length, \(N\) is the batch size, and \(E_q\) is the query embedding dimension embed_dim. Queries are compared against key-value pairs to produce the output. See “Attention Is All You Need” for more details.
key – Key embeddings of shape \((S, E_k)\) for unbatched input, \((S, N, E_k)\) when batch_first=False or \((N, S, E_k)\) when batch_first=True, where \(S\) is the source sequence length, \(N\) is the batch size, and \(E_k\) is the key embedding dimension kdim. See “Attention Is All You Need” for more details.
value – Value embeddings of shape \((S, E_v)\) for unbatched input, \((S, N, E_v)\) when batch_first=False or \((N, S, E_v)\) when batch_first=True, where \(S\) is the source sequence length, \(N\) is the batch size, and \(E_v\) is the value embedding dimension vdim. See “Attention Is All You Need” for more details.
key_padding_mask – If specified, a mask of shape \((N, S)\) indicating which elements within key to ignore for the purpose of attention (i.e. treat as “padding”). For unbatched query, shape should be \((S)\). Binary and byte masks are supported. For a binary mask, a True value indicates that the corresponding key value will be ignored for the purpose of attention. For a float mask, it will be directly added to the corresponding key value.
need_weights – If specified, returns attn_output_weights in addition to attn_outputs. Default: True.
attn_mask – If specified, a 2D or 3D mask preventing attention to certain positions. Must be of shape \((L, S)\) or \((N\cdot\text{num\_heads}, L, S)\), where \(N\) is the batch size, \(L\) is the target sequence length, and \(S\) is the source sequence length. A 2D mask will be broadcasted across the batch while a 3D mask allows for a different mask for each entry in the batch. Binary, byte, and float masks are supported. For a binary mask, a True value indicates that the corresponding position is not allowed to attend. For a byte mask, a non-zero value indicates that the corresponding position is not allowed to attend. For a float mask, the mask values will be added to the attention weight.
average_attn_weights – If true, indicates that the returned attn_weights should be averaged across heads. Otherwise, attn_weights are provided separately per head. Note that this flag only has an effect when need_weights=True. Default: True (i.e. average weights across heads)
- Outputs:
attn_output - Attention outputs of shape \((L, E)\) when input is unbatched, \((L, N, E)\) when batch_first=False or \((N, L, E)\) when batch_first=True, where \(L\) is the target sequence length, \(N\) is the batch size, and \(E\) is the embedding dimension embed_dim.
attn_output_weights - Only returned when need_weights=True. If average_attn_weights=True, returns attention weights averaged across heads of shape \((L, S)\) when input is unbatched or \((N, L, S)\), where \(N\) is the batch size, \(L\) is the target sequence length, and \(S\) is the source sequence length. If average_attn_weights=False, returns attention weights per head of shape \((\text{num\_heads}, L, S)\) when input is unbatched or \((N, \text{num\_heads}, L, S)\).
Note
batch_first argument is ignored for unbatched inputs.
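- Example (a minimal sketch following the shapes above, using the batch_first=True default of this class):
>>> import torch
>>> from scgpt.model.layers import MultiheadAttention
>>> mha = MultiheadAttention(embed_dim=512, num_heads=8)
>>> query = torch.rand(4, 10, 512)        # (N, L, E)
>>> key = value = torch.rand(4, 20, 512)  # (N, S, E)
>>> attn_out, attn_weights = mha(query, key, value, need_weights=True)
>>> # attn_out: (4, 10, 512); attn_weights: (4, 10, 20), averaged across heads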
scgpt.model.model
- class scgpt.model.model.AdversarialDiscriminator(d_model: int, n_cls: int, nlayers: int = 3, activation: callable = <class 'torch.nn.modules.activation.LeakyReLU'>, reverse_grad: bool = False)[source]
Bases: Module
Discriminator for the adversarial training for batch correction.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- class scgpt.model.model.BatchLabelEncoder(num_embeddings: int, embedding_dim: int, padding_idx: int | None = None)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x: Tensor) → Tensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class scgpt.model.model.CategoryValueEncoder(num_embeddings: int, embedding_dim: int, padding_idx: int | None = None)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x: Tensor) → Tensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class scgpt.model.model.ClsDecoder(d_model: int, n_cls: int, nlayers: int = 3, activation: callable = <class 'torch.nn.modules.activation.ReLU'>)[source]
Bases: Module
Decoder for classification task.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- class scgpt.model.model.ContinuousValueEncoder(d_model: int, dropout: float = 0.1, max_value: int = 512)[source]
Bases: Module
Encode real number values to a vector using a neural network projection.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
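- Example (hypothetical: the forward pass taking a (batch, seq_len) tensor of expression values and returning (batch, seq_len, d_model) embeddings is an assumption to be checked against the source):
>>> import torch
>>> from scgpt.model.model import ContinuousValueEncoder
>>> encoder = ContinuousValueEncoder(d_model=512)
>>> values = torch.rand(4, 1200)  # continuous expression values
>>> emb = encoder(values)         # assumed output shape: (4, 1200, 512)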
- class scgpt.model.model.ExprDecoder(d_model: int, explicit_zero_prob: bool = False, use_batch_labels: bool = False)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- class scgpt.model.model.FastTransformerEncoderWrapper(d_model: int, nhead: int, d_hid: int, nlayers: int, dropout: float = 0.5)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- class scgpt.model.model.FlashTransformerEncoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation='relu', layer_norm_eps=1e-05, batch_first=True, device=None, dtype=None, norm_scheme='post')[source]
Bases: Module
TransformerEncoderLayer is made up of self-attn and feedforward network. The class is modified from torch.nn.TransformerEncoderLayer to support the FlashAttention.
- Parameters:
d_model – the number of expected features in the input (required).
nhead – the number of heads in the multiheadattention models (required).
dim_feedforward – the dimension of the feedforward network model (default=2048).
dropout – the dropout value (default=0.1).
activation – the activation function of intermediate layer, relu or gelu (default=relu).
layer_norm_eps – the eps value in layer normalization components (default=1e-5).
batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: True.
- Examples::
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
>>> src = torch.rand(10, 32, 512)
>>> out = encoder_layer(src)
- Alternatively, when batch_first is True:
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
>>> src = torch.rand(32, 10, 512)
>>> out = encoder_layer(src)
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(src: Tensor, src_mask: Tensor | None = None, src_key_padding_mask: Tensor | None = None, **kwargs) → Tensor [source]
Pass the input through the encoder layer.
- Parameters:
src – the sequence to the encoder layer (required).
src_mask – the mask for the src sequence (optional).
src_key_padding_mask – the mask for the src keys per batch (optional).
- Shape:
see the docs in Transformer class.
- class scgpt.model.model.GeneEncoder(num_embeddings: int, embedding_dim: int, padding_idx: int | None = None)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x: Tensor) → Tensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class scgpt.model.model.MVCDecoder(d_model: int, arch_style: str = 'inner product', query_activation: ~torch.nn.modules.module.Module = <class 'torch.nn.modules.activation.Sigmoid'>, hidden_activation: ~torch.nn.modules.module.Module = <class 'torch.nn.modules.activation.PReLU'>, explicit_zero_prob: bool = False, use_batch_labels: bool = False)[source]
Bases: Module
Decoder for the masked value prediction for cell embeddings.
- Parameters:
d_model (int) – dimension of the gene embedding.
arch_style (str) – architecture style of the decoder, choice from 1. “inner product” or 2. “concat query” or 3. “sum query”.
query_activation (nn.Module) – activation function for the query vectors.
hidden_activation (nn.Module) – activation function for the hidden layers.
- class scgpt.model.model.PositionalEncoding(d_model: int, dropout: float = 0.1, max_len: int = 5000)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- class scgpt.model.model.SimDecoder(d_model: int, n_cls: int, nlayers: int = 3, activation: callable = <class 'torch.nn.modules.activation.ReLU'>, projection_dim: int = 2048)[source]
Bases: Module
Decoder for classification task with similarity matrix.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- class scgpt.model.model.Similarity(temp)[source]
Bases: Module
Dot product or cosine similarity
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, y)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class scgpt.model.model.TransformerModel(ntoken: int, d_model: int, nhead: int, d_hid: int, nlayers: int, nlayers_cls: int = 3, n_cls: int = 1, vocab: Any | None = None, dropout: float = 0.5, pad_token: str = '<pad>', pad_value: int = 0, do_mvc: bool = False, do_dab: bool = False, use_batch_labels: bool = False, num_batch_labels: int | None = None, domain_spec_batchnorm: bool | str = False, input_emb_style: str = 'continuous', n_input_bins: int | None = None, cell_emb_style: str = 'cls', mvc_decoder_style: str = 'inner product', ecs_threshold: float = 0.3, explicit_zero_prob: bool = False, use_generative_training=False, use_fast_transformer: bool = False, fast_transformer_backend: str = 'flash', pre_norm: bool = False, use_sim_decoder: bool = False)[source]
Bases: Module
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- encode_batch(src: Tensor, values: Tensor, src_key_padding_mask: Tensor, batch_size: int, batch_labels: Tensor | None = None, output_to_cpu: bool = True, time_step: int | None = None, return_np: bool = False) → Tensor [source]
- Parameters:
src (Tensor) – shape [N, seq_len]
values (Tensor) – shape [N, seq_len]
src_key_padding_mask (Tensor) – shape [N, seq_len]
batch_size (int) – batch size for encoding
batch_labels (Tensor) – shape [N, n_batch_labels]
output_to_cpu (bool) – whether to move the output to cpu
time_step (int) – the time step index in the transformer output to return. The time step is along the second dimension. If None, return all.
return_np (bool) – whether to return numpy array
- Returns:
output Tensor of shape [N, seq_len, embsize]
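- Example (a hedged sketch on an already constructed model; shapes follow the parameter list above, and the padding-mask convention should be checked against the implementation):
>>> import torch
>>> # `model` is a constructed scgpt.model.model.TransformerModel
>>> src = torch.randint(0, 1000, (100, 1200))            # gene token ids
>>> values = torch.rand(100, 1200)                       # expression values
>>> pad_mask = torch.zeros(100, 1200, dtype=torch.bool)  # no padding in this toy batch
>>> embs = model.encode_batch(src, values, pad_mask, batch_size=16, output_to_cpu=True)
>>> # embs: Tensor of shape [100, 1200, embsize]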
- forward(*args, **kwargs) → Mapping[str, Tensor] [source]
Wrapper to call either generative_forward or perceptual_forward, depending on the value of the “generative_training” kwarg.
- generate(cell_emb: Tensor, src: Tensor, values: Tensor | None = None, src_key_padding_mask: Tensor | None = None, gen_iters: int = 1, batch_labels: Tensor | None = None) → Tensor [source]
- Parameters:
cell_emb (Tensor) – shape (batch, embsize)
src (Tensor) – shape (batch, seq_len)
values (Tensor) – shape (batch, seq_len), optional
src_key_padding_mask (Tensor) – shape (batch, seq_len), optional
gen_iters (int) – number of generation iterations
batch_labels (Tensor) – shape (batch,), optional
- generative_forward(pcpt_genes: Tensor, pcpt_values: Tensor, pcpt_key_padding_mask: Tensor, gen_genes: Tensor, gen_key_padding_mask: Tensor, batch_labels: Tensor | None = None, CLS: bool = False, CCE: bool = False, MVC: bool = False, ECS: bool = False, do_sample: bool = False, input_cell_emb: Tensor | None = None) → Mapping[str, Tensor] [source]
- Parameters:
pcpt_genes (Tensor) – token ids of the perceptual part, shape [batch_size, seq_len]
pcpt_values (Tensor) – token values of the perceptual part, shape [batch_size, seq_len]
pcpt_key_padding_mask (Tensor) – mask for pcpt_genes, shape [batch_size, seq_len]
gen_genes (Tensor) – token ids of the generative part, shape [batch_size, seq_len]
gen_key_padding_mask (Tensor) – mask for gen_genes, shape [batch_size, seq_len]
batch_labels (Tensor) – batch labels, shape [batch_size]
do_sample (bool) – whether to do sampling from bernoulli for generated zero predictions.
input_cell_emb (Tensor) – cell embeddings, shape [batch_size, embsize]
- Returns:
pred (Tensor): prediction, shape [batch_size, seq_len]
cell_emb (Tensor): cell embeddings, shape [batch_size, embsize]
- Return type:
Mapping[str, Tensor]
- perceptual_forward(src: Tensor, values: Tensor, src_key_padding_mask: Tensor, batch_labels: Tensor | None = None, CLS: bool = False, CCE: bool = False, MVC: bool = False, ECS: bool = False, do_sample: bool = False) → Mapping[str, Tensor] [source]
- Parameters:
src (Tensor) – token ids, shape [batch_size, seq_len]
values (Tensor) – token values, shape [batch_size, seq_len]
src_key_padding_mask (Tensor) – mask for src, shape [batch_size, seq_len]
batch_labels (Tensor) – batch labels, shape [batch_size]
CLS (bool) – if True, return the celltype classification objective (CLS) output
CCE (bool) – if True, return the contrastive cell embedding objective (CCE) output
MVC (bool) – if True, return the masked value prediction for cell embedding MVC output
ECS (bool) – if True, return the elastic cell similarity objective (ECS) output.
- Returns:
dict of output Tensors.
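- Example (a hedged sketch on an already constructed model, mirroring the parameter list above; the keys of the returned dict should be checked against the implementation):
>>> import torch
>>> # `model` is a constructed scgpt.model.model.TransformerModel
>>> src = torch.randint(0, 1000, (4, 1200))
>>> values = torch.rand(4, 1200)
>>> pad_mask = torch.zeros(4, 1200, dtype=torch.bool)  # no padding in this toy batch
>>> out = model.perceptual_forward(src, values, pad_mask, CLS=True, MVC=True)
>>> # `out` is a dict of output Tensors keyed by objective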