uz_nn_layer#
The uz_nn_layer is used by the neural network software module (uz_nn) and builds on Matrix math and Activation function. A layer in a neural network consists of a configurable number of neurons and an activation function. A layer multiplies the input \(\boldsymbol{x}\) of the layer with the weight matrix \(\boldsymbol{w}\) of the layer and adds the bias \(\boldsymbol{b}\) to calculate the sum \(\boldsymbol{s} = \boldsymbol{x} \boldsymbol{w} + \boldsymbol{b}\). The output of the layer is \(\boldsymbol{y} = f(\boldsymbol{s})\), i.e., the result of feeding the sum \(\boldsymbol{s}\) into the activation function \(f\) of the layer.
The input \(\boldsymbol{x}\) is a vector with dimension \(1 \times n\) with the number of inputs \(n\).
The output \(\boldsymbol{y}\) is a vector with dimension \(1 \times p\) with the number of neurons \(p\).
The bias \(\boldsymbol{b}\) is of dimension \(1 \times p\) with the number of neurons \(p\).
The weight matrix \(\boldsymbol{w}\) is of dimension \(n \times p\) with the number of inputs \(n\) and the number of neurons \(p\).
All neurons in one layer have the same activation function.
Example#
The following example is calculated using Matlab and single precision. It is used in the unit tests of the module.
The example layer has three inputs, four neurons and ReLU activation function.
Four neurons result in four output values of the layer.
Input \(\boldsymbol{x}\) with three input values:

\[\boldsymbol{x} = \begin{bmatrix} 1 & 2 & 3 \end{bmatrix}\]

Input values are multiplied with the weight matrix \(\boldsymbol{w}\):

\[\boldsymbol{x} \boldsymbol{w} = \begin{bmatrix} 1 & 2 & 3 \end{bmatrix} \begin{bmatrix} 0.5377 & 1.8339 & -2.2588 & 0.8622 \\ 0.3188 & -1.3077 & -0.4336 & 0.3426 \\ 3.5784 & 2.7694 & -1.3499 & 3.0349 \end{bmatrix} = \begin{bmatrix} 11.9105 & 7.5267 & -7.1757 & 10.6521 \end{bmatrix}\]

Add bias to sum:

\[\boldsymbol{s} = \boldsymbol{x} \boldsymbol{w} + \boldsymbol{b} = \begin{bmatrix} 11.9105 & 7.5267 & -7.1757 & 10.6521 \end{bmatrix} + \begin{bmatrix} 1 & -2 & 3 & -4 \end{bmatrix} = \begin{bmatrix} 12.9105 & 5.5267 & -4.1757 & 6.6521 \end{bmatrix}\]

Feed output through activation function:

\[\boldsymbol{y} = f(\boldsymbol{s})\]

With ReLU activation function:

\[\boldsymbol{y} = \mathrm{ReLU}(\boldsymbol{s}) = \begin{bmatrix} 12.9105 & 5.5267 & 0 & 6.6521 \end{bmatrix}\]
Software and Example#
One uz_nn_layer_t instance uses three uz_matrix_t (Matrix math) instances! Take this into account in the Global configuration.
Using a layer requires the user to pass pointers to arrays for the weights, bias, and output of the layer to the init function. The init function initializes the required matrices and passes the array pointers on to the initialization function of Matrix math (uz_matrix_init).
The following shows an example initialization and feedforward calculation of one layer.
- length_of_weights, length_of_bias, and length_of_output have to be calculated with the UZ_MATRIX_SIZE macro
- three pointers to three arrays have to be provided in the config struct
- number_of_neurons and number_of_inputs are freely configurable but have to be consistent with the dimensions of the provided arrays
- activation_function determines the activation function of the layer

Note that uz_nn_layer operates directly on the data that the arrays hold. Therefore, never access or change the data in the arrays directly (as is the case with Matrix math)!
#define NUMBER_OF_INPUTS 3
#define NUMBER_OF_NEURONS_IN_LAYER 4

static float x[NUMBER_OF_INPUTS] = {1, 2, 3};
static float w[NUMBER_OF_INPUTS * NUMBER_OF_NEURONS_IN_LAYER] = {
    0.5377, 1.8339, -2.2588, 0.8622,
    0.3188, -1.3077, -0.4336, 0.3426,
    3.5784, 2.7694, -1.3499, 3.0349};
static float b[NUMBER_OF_NEURONS_IN_LAYER] = {1, -2, 3, -4};
static float out[NUMBER_OF_NEURONS_IN_LAYER] = {0};

void test_uz_nn_layer_ff_relu(void)
{
    struct uz_matrix_t input_matrix = {0};
    uz_matrix_t *input = uz_matrix_init(&input_matrix, x, UZ_MATRIX_SIZE(x), 1, NUMBER_OF_INPUTS);
    struct uz_nn_layer_config config = {
        .activation_function = activation_ReLU,
        .number_of_neurons = NUMBER_OF_NEURONS_IN_LAYER,
        .number_of_inputs = NUMBER_OF_INPUTS,
        .length_of_weights = UZ_MATRIX_SIZE(w),
        .length_of_bias = UZ_MATRIX_SIZE(b),
        .length_of_output = UZ_MATRIX_SIZE(out),
        .weights = w,
        .bias = b,
        .output = out};
    uz_nn_layer_t *layer = uz_nn_layer_init(config);
    float expected[NUMBER_OF_NEURONS_IN_LAYER] = {12.9105, 5.5267, 0.0, 6.6521};
    uz_nn_layer_ff(layer, input);
    uz_matrix_t *result = uz_nn_layer_get_output_data(layer);
    for (uint32_t i = 0; i < NUMBER_OF_NEURONS_IN_LAYER; i++)
    {
        TEST_ASSERT_EQUAL_FLOAT(expected[i], uz_matrix_get_element_zero_based(result, 0, i));
    }
}
Reference#
The enum activation_function
and the struct uz_nn_layer_config
are directly used by uz_nn and have to be passed to its initialization function.
-
enum activation_function#
Enum for passing the type of the activation function to the init function of the layer.
Values:
-
enumerator activation_ReLU#
-
enumerator activation_linear#
-
enumerator activation_sigmoid#
-
enumerator activation_sigmoid2#
-
enumerator activation_tanh#
-
struct uz_nn_layer_config#
Configuration struct for a layer of uz_nn.
Public Members
-
enum activation_function activation_function#
Activation function of all neurons in this layer
-
uint32_t number_of_neurons#
Number of neurons in the layer
-
uint32_t number_of_inputs#
Number of inputs to the layer. Is either the number of inputs to the network or the number of neurons of the previous layer
-
uint32_t length_of_weights#
Number of weights in the layer, has to be calculated by UZ_MATRIX_SIZE(weights)
-
uint32_t length_of_bias#
Number of bias values in the layer, has to be calculated by UZ_MATRIX_SIZE(bias)
-
uint32_t length_of_output#
Number of outputs in the layer, has to be calculated by UZ_MATRIX_SIZE(output) and is equal to the number of neurons
-
float *const weights#
Pointer to an array that holds the weights
-
float *const bias#
Pointer to an array that holds the bias
-
float *const output#
Pointer to an array that holds the output
Note
The following functions are not intended for direct use but are the basis of uz_nn.
-
typedef struct uz_nn_layer_t uz_nn_layer_t#
Object definition for a layer of a neural network.
-
uz_nn_layer_t *uz_nn_layer_init(struct uz_nn_layer_config layer_config)#
Initializes a layer of a neural network.
- Parameters:
layer_config – Configuration struct
- Returns:
uz_nn_layer_t*
-
void uz_nn_layer_ff(uz_nn_layer_t *const self, uz_matrix_t const *const input)#
Calculates one forward pass of the layer with the given input values (row vector)
- Parameters:
self –
input – Row vector of inputs (rows == 1!)
-
uz_matrix_t *uz_nn_layer_get_output_data(uz_nn_layer_t const *const self)#
Returns a pointer to the output data of the layer. Intended to be used by the following layer as input data.
- Parameters:
self –
- Returns:
uz_matrix*
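Since uz_nn_layer_get_output_data returns the layer's output matrix, the output of one layer can be passed directly as the input of the next. The following fragment is a usage sketch only, assuming two layers layer1 and layer2 have been initialized as in the example above (with consistent dimensions); it is not a complete program.

```c
/* Sketch: chain two layers. The output matrix of layer1 is
   used as the input of layer2. */
uz_nn_layer_ff(layer1, input);
uz_matrix_t *hidden = uz_nn_layer_get_output_data(layer1);
uz_nn_layer_ff(layer2, hidden);
uz_matrix_t *result = uz_nn_layer_get_output_data(layer2);
```

This is exactly the pattern uz_nn uses internally when it iterates over its layers.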