The "pretrained_word_embeddings" article in the Keras examples explains how to do this.
(This assumes you want to use keras to train a neural network that takes your embedding as an input layer.)
In a nutshell, you include the embedding as a frozen layer, i.e. you explicitly tell the network not to update the weights of the embedding layer during training.
The essential code snippet from that page is this; note the trainable = FALSE:
embedding_layer <- layer_embedding(
  input_dim = num_words,
  output_dim = EMBEDDING_DIM,
  weights = list(embedding_matrix),
  input_length = MAX_SEQUENCE_LENGTH,
  trainable = FALSE
)
Then you use this frozen layer in your model:
preds <- sequence_input %>%
  embedding_layer %>%
  layer_conv_1d(filters = 128, kernel_size = 5, activation = 'relu') %>%
  layer_max_pooling_1d(pool_size = 5) %>%
  layer_conv_1d(filters = 128, kernel_size = 5, activation = 'relu') %>%
  layer_max_pooling_1d(pool_size = 5) %>%
  layer_conv_1d(filters = 128, kernel_size = 5, activation = 'relu') %>%
  layer_max_pooling_1d(pool_size = 35) %>%
  layer_flatten() %>%
  layer_dense(units = 128, activation = 'relu') %>%
  layer_dense(units = length(labels_index), activation = 'softmax')
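To round this out, you still need to build, compile, and fit the model around those layers. A minimal sketch, assuming sequence_input was created with layer_input() and that x_train / y_train (and x_val / y_val) hold your padded index sequences and one-hot labels, along the lines of the same Keras example:

```r
library(keras)

# Sketch only: sequence_input, preds, x_train, y_train, x_val, y_val
# are assumed to exist as in the pretrained_word_embeddings example.
sequence_input <- layer_input(shape = list(MAX_SEQUENCE_LENGTH), dtype = "int32")

model <- keras_model(sequence_input, preds)

model %>% compile(
  loss = "categorical_crossentropy",
  optimizer = "rmsprop",
  metrics = c("acc")
)

model %>% fit(
  x_train, y_train,
  batch_size = 128,
  epochs = 10,
  validation_data = list(x_val, y_val)
)
```

Because the embedding layer was created with trainable = FALSE, its weights are excluded from the gradient updates during fit(); only the convolutional and dense layers learn.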