Softmax
- void riscv_nn_softmax_common_s8(const int8_t *input, const int32_t num_rows, const int32_t row_size, const int32_t mult, const int32_t shift, const int32_t diff_min, const bool int16_output, void *output)
- group supportSoftmax
Support functions for Softmax.
Functions
- void riscv_nn_softmax_common_s8(const int8_t *input, const int32_t num_rows, const int32_t row_size, const int32_t mult, const int32_t shift, const int32_t diff_min, const bool int16_output, void *output)
Common softmax function for s8 input and s8 or s16 output.
Note
Supported framework: TensorFlow Lite micro (bit-accurate)
- Parameters
input – [in] Pointer to the input tensor
num_rows – [in] Number of rows in the input tensor
row_size – [in] Number of elements in each input row
mult – [in] Input quantization multiplier
shift – [in] Input quantization shift within the range [0, 31]
diff_min – [in] Minimum allowed difference from the row maximum; used to check whether the quantized exponential operation can be performed
int16_output – [in] If false, the output is written as s8; if true, the output is written as s16
output – [out] Pointer to the output tensor
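A minimal usage sketch follows. It assumes the function is declared in riscv_nnsupportfunctions.h (header name not stated on this page), and the quantization parameters (mult, shift, diff_min) are placeholder values for illustration; in practice they come from the calling framework's softmax preparation step (e.g. TensorFlow Lite Micro).

    /* Illustrative sketch only: header name and quantization values are assumptions. */
    #include <stdint.h>
    #include <stdbool.h>
    #include "riscv_nnsupportfunctions.h"   /* assumed header for this support function */

    #define ROWS     2
    #define ROW_SIZE 4

    void softmax_example(void)
    {
        /* Two rows of four s8 values each. */
        const int8_t input[ROWS * ROW_SIZE] = {
            10, 20, 30, 40,
            -5,  0,  5, 10,
        };
        int8_t output[ROWS * ROW_SIZE];

        /* Placeholder quantization parameters; real values are produced by the
         * framework when the softmax operator is prepared. */
        const int32_t mult     = 1077952576;
        const int32_t shift    = 23;     /* must lie within [0, 31] */
        const int32_t diff_min = -248;

        /* int16_output == false selects the s8 output path; pass true and an
         * int16_t buffer to obtain s16 output instead. */
        riscv_nn_softmax_common_s8(input, ROWS, ROW_SIZE, mult, shift, diff_min,
                                   false, output);
    }

Because the output type is selected at run time by int16_output, the output argument is typed void * and must point to a buffer of the matching element type and size.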