A Lambda layer in neural networks is a layer that allows you to write custom code or functions as part of the neural network architecture. It is called a "Lambda" layer because it is inspired by the concept of lambda functions in programming languages.
Here are some key points about Lambda layers:
- They wrap an arbitrary function or lambda expression so it can be used as a layer inside a model.
- They are intended for simple, stateless computations; operations that need trainable weights are better implemented as custom layers.
- They are applied to the data flowing through the model during the forward pass, just like any other layer.

Overall, Lambda layers are a powerful tool that lets you customize and extend the functionality of your neural network by incorporating your own code or functions.
import tensorflow as tf
from tensorflow.keras.layers import Lambda

# Define a custom function to be used in the Lambda layer
def custom_function(x):
    return tf.square(x) + 2 * x + 1

# Create a Lambda layer and pass the custom function
lambda_layer = Lambda(custom_function)

# Build the rest of your neural network architecture
model = tf.keras.Sequential([
    # Add other layers here
    lambda_layer,
    # Add more layers if needed
])

# Compile and train the model
model.compile(optimizer='adam', loss='mse')
model.fit(x_train, y_train, epochs=10, batch_size=32)
In this example, we define a custom function custom_function that takes an input x and performs a mathematical operation on it. We then create a Lambda layer lambda_layer and pass the custom function to it. This Lambda layer can now be added to the neural network architecture, and it will apply the custom function to the input data during the forward pass.
Note that the example above is a simplified version, and in practice, you would typically use Lambda layers for more complex operations or transformations.
Your model is composed mainly of SimpleRNN layers. As mentioned in the lectures, this type of RNN simply routes its output back to the input. You will stack two of these layers in your model, so the first one should have return_sequences set to True.
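To see what return_sequences does to the output shape, you can run a quick check like the one below on a dummy batch. The tensor here is only an illustration and is not part of the lab code.

import tensorflow as tf

# Dummy batch: 32 windows of 20 timesteps with 1 feature each
dummy = tf.zeros([32, 20, 1])

# With return_sequences=True, the layer returns an output for every timestep
print(tf.keras.layers.SimpleRNN(40, return_sequences=True)(dummy).shape)  # (32, 20, 40)

# Without it, only the output of the final timestep is returned
print(tf.keras.layers.SimpleRNN(40)(dummy).shape)  # (32, 40)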
As mentioned in the documentation, SimpleRNN layers expect a 3-dimensional tensor input with the shape [batch, timesteps, feature]. With that, you need to reshape your window from (32, 20) to (32, 20, 1). This means the 20 data points in the window will be mapped to 20 timesteps of the RNN. You can do this reshaping in a separate cell, but you can also do it within the model itself by using Lambda layers. Notice the first layer below: it defines a lambda function that adds a dimension at the last axis of the input. That's exactly the transformation you need. For the input_shape, you can specify None (like in the lecture video) if you want the model to be more flexible with the number of timesteps. Alternatively, you can set it to window_size as shown below if you want to set the timesteps dimension to the expected size of your data windows.
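If you want to verify this transformation before building the model, you can apply the same expand_dims operation to a dummy batch. The tensor below is only for illustration; the actual windows come from your dataset.

import tensorflow as tf

# Dummy batch with the same shape as a data window: [batch, timesteps]
dummy_window = tf.zeros([32, 20])

# Add a feature axis at the end, as the first Lambda layer in the model will do
expanded = tf.expand_dims(dummy_window, axis=-1)
print(expanded.shape)  # (32, 20, 1)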
Normally, you can just have a Dense layer output as shown in the previous labs. However, you can help the training by scaling up the output to around the same range of values as your labels. This will depend on the activation functions you used in your model. SimpleRNN uses tanh by default, and that has an output range of [-1, 1]. You will use another Lambda() layer to scale the output by 100 before the loss is computed and the layer weights are adjusted. Feel free to remove this layer later after this lab and see what results you get.
# Build the Model
model_tune = tf.keras.models.Sequential([
    tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
                           input_shape=[window_size]),
    tf.keras.layers.SimpleRNN(40, return_sequences=True),
    tf.keras.layers.SimpleRNN(40),
    tf.keras.layers.Dense(1),
    tf.keras.layers.Lambda(lambda x: x * 100.0)
])

# Print the model summary
model_tune.summary()
You will then tune the learning rate as before by defining a learning rate schedule that changes this hyperparameter dynamically during training. You will use the Huber loss as your loss function to minimize sensitivity to outliers.
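As a rough sketch, that tuning step might look like the cell below. The specific schedule values and the SGD optimizer with momentum follow the pattern of the earlier labs but are assumptions here, and dataset is assumed to be the windowed training set you prepared previously.

# Set a small initial learning rate and grow it gradually each epoch
# (the starting value and growth factor are illustrative choices)
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
    lambda epoch: 1e-8 * 10 ** (epoch / 20))

# Huber loss is less sensitive to outliers than mean squared error
optimizer = tf.keras.optimizers.SGD(momentum=0.9)
model_tune.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer)

# `dataset` is assumed to be the windowed tf.data.Dataset from the earlier labs
history = model_tune.fit(dataset, epochs=100, callbacks=[lr_schedule])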