


[Figure 4 flowchart, three sections. Data processing section: experimental data on driving behavior → abnormal data handling → operation behavior feature extraction → correlation analysis and selection of fatigue driving discriminators → determine the input variables for Attention-GRU → divide the training and test sets in a ratio of 8:2. Attention-GRU model section: decode the parameters passed in by WOA (number of iterations, batch size, number of neurons in each GRU layer, dropout rate) → establish the Attention-GRU fatigue driving detection model from the incoming parameters → train, test, and evaluate the model. WOA section: encode initial values → initialize the whale population and calculate fitness values → surround prey / bubble-net attack / search for prey → calculate fitness and update the global optimal solution → once the stopping criterion is met, output the optimal Attention-GRU hyperparameters.]



Figure 4. WOA-Attention-GRU algorithm flow (adapted from Li et al., 2023 [24]). WOA: whale optimization algorithm; GRU: gated recurrent unit.
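The three prey-update rules named in the WOA branch of Figure 4 are not restated in the text. For reference, in the standard whale optimization algorithm formulation (the notation below is the standard one, not the authors': $\vec{X}^{*}$ is the best solution found so far, $a$ decreases linearly from 2 to 0 over the iterations, $r$, $l$, $p$ are random numbers, and $b$ is the spiral-shape constant):

Surrounding prey ($p < 0.5$, $|\vec{A}| < 1$):
$$\vec{X}(t+1) = \vec{X}^{*}(t) - \vec{A}\cdot\vec{D}, \qquad \vec{D} = |\vec{C}\cdot\vec{X}^{*}(t) - \vec{X}(t)|, \qquad \vec{A} = 2\vec{a}\cdot\vec{r} - \vec{a}, \quad \vec{C} = 2\vec{r}$$

Bubble-net attack ($p \ge 0.5$):
$$\vec{X}(t+1) = \vec{D}'\, e^{bl} \cos(2\pi l) + \vec{X}^{*}(t), \qquad \vec{D}' = |\vec{X}^{*}(t) - \vec{X}(t)|$$

Search for prey ($p < 0.5$, $|\vec{A}| \ge 1$):
$$\vec{X}(t+1) = \vec{X}_{\text{rand}} - \vec{A}\cdot\vec{D}, \qquad \vec{D} = |\vec{C}\cdot\vec{X}_{\text{rand}} - \vec{X}(t)|$$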


In the WOA part, the main steps are as follows. Parameter decoding: the parameters passed in by WOA are decoded to obtain the number of iterations, batch size, number of neurons working in each GRU layer, and dropout rate. Fitness value calculation: the fitness values of the initialized whale population are calculated, and the global optimum is updated. Iterative optimization: the positions of the individual whales are updated according to their fitness values, gradually approaching the global optimum, and the optimal hyperparameters of the Attention-GRU model are finally output.
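As a concrete illustration of this loop, the following is a minimal sketch of such a WOA-driven hyperparameter search; it is not the paper's code. The fitness callable, the search bounds in LOWER/UPPER, and the population settings are illustrative assumptions: in the paper's setup, fitness would train an Attention-GRU with the decoded hyperparameters and return its MSE on held-out data.

import numpy as np

# Illustrative search bounds for [iterations, batch_size, gru_neurons, dropout];
# the actual ranges used in the paper are not stated here.
LOWER = np.array([10.0, 16.0, 16.0, 0.0])
UPPER = np.array([200.0, 256.0, 256.0, 0.5])

def decode(x):
    """Decode a whale position into Attention-GRU hyperparameters."""
    x = np.clip(x, LOWER, UPPER)
    return {"iterations": int(x[0]), "batch_size": int(x[1]),
            "gru_neurons": int(x[2]), "dropout": float(x[3])}

def woa_search(fitness, n_whales=20, max_iter=50, b=1.0, seed=0):
    """Standard WOA loop: surround prey, bubble-net spiral, or search for prey.
    Note each fitness call trains a model, so the search is expensive."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(LOWER, UPPER, size=(n_whales, len(LOWER)))  # init population
    scores = np.array([fitness(decode(x)) for x in X])          # initial fitness
    best, best_score = X[scores.argmin()].copy(), scores.min()  # global optimum
    for t in range(max_iter):
        a = 2.0 * (1.0 - t / max_iter)          # 'a' decreases linearly 2 -> 0
        for i in range(n_whales):
            A = 2.0 * a * rng.random() - a      # coefficient A in [-a, a]
            C = 2.0 * rng.random()              # coefficient C in [0, 2]
            if rng.random() < 0.5:
                if abs(A) < 1.0:                # surround (encircle) the prey
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                           # search for prey (exploration)
                    x_rand = X[rng.integers(n_whales)]
                    X[i] = x_rand - A * np.abs(C * x_rand - X[i])
            else:                               # bubble-net spiral attack
                l = rng.uniform(-1.0, 1.0)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], LOWER, UPPER)
            s = fitness(decode(X[i]))           # re-evaluate, update global optimum
            if s < best_score:
                best, best_score = X[i].copy(), s
    return decode(best), best_score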
In the Attention-GRU model part, the model is trained and tested using the hyperparameters optimized by WOA. The main steps include model training: the model is trained on the training set provided by the data processing part, with the attention mechanism focusing on important features in the driving behavior sequences, assigning greater weight to important information and reducing information loss. Model testing: the trained model is tested on the test set, and its predictive performance is assessed by calculating the MSE. Through these steps, the fatigue state of drivers can be detected effectively, providing more accurate and reliable results. "Optimal hyperparameters" refers to the set of parameters that minimizes the MSE between the predicted and actual fatigue levels; these hyperparameters comprise the number of iterations, batch size, number of neurons in each GRU layer, and dropout rate. The network structure of Attention-GRU is presented in Figure 5.
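For concreteness, the following is a minimal Keras sketch of an Attention-GRU of this shape, assuming preprocessed sequences of shape (timesteps, features) and a scalar fatigue-level target. The additive attention layer shown is one common formulation; the layer sizes, attention variant, and usage values are illustrative assumptions, not taken from the paper.

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_attention_gru(timesteps, n_features, gru_neurons, dropout):
    """Attention-GRU regressor: a GRU over the sequence, then a learned
    softmax weighting over time steps before the output layer."""
    inputs = layers.Input(shape=(timesteps, n_features))
    h = layers.GRU(gru_neurons, return_sequences=True)(inputs)  # per-step states
    h = layers.Dropout(dropout)(h)
    # Additive attention: score each time step, softmax to weights, then take
    # the weighted sum of GRU states (rather than keeping only the last state,
    # which would lose information from earlier steps).
    scores = layers.Dense(1, activation="tanh")(h)              # (batch, T, 1)
    weights = layers.Softmax(axis=1)(scores)
    context = layers.Lambda(
        lambda z: tf.reduce_sum(z[0] * z[1], axis=1))([h, weights])
    outputs = layers.Dense(1)(context)                          # fatigue level
    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")                 # MSE objective
    return model

# Hypothetical usage with WOA-optimized hyperparameters:
# model = build_attention_gru(timesteps=30, n_features=6,
#                             gru_neurons=params["gru_neurons"],
#                             dropout=params["dropout"])
# model.fit(X_train, y_train, epochs=params["iterations"],
#           batch_size=params["batch_size"])
# mse = model.evaluate(X_test, y_test)  # test-set MSE used for evaluation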


               2.2.5 Fatigue state recognition based on Transformer
Transformer excels at handling long-range dependencies in sequence data, which is particularly beneficial for time series analysis. We use the standard Transformer architecture, including the self-attention mechanism, and adjusted the number of layers, the number of attention heads, and the hidden layer dimensions to optimize performance on our dataset. The model is trained on the same driving behavior dataset, with the same preprocessing, as the WOA-Attention-GRU model. The results show that although the Transformer performs well in capturing long-range dependencies, in the context of driving behavior analysis, combining WOA and the attention mechanism with the GRU model provides more targeted feature extraction and optimization.
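The following is a minimal sketch of such a comparison model: a standard Transformer encoder over the same (timesteps, features) sequences with the same MSE criterion. The layer count, head count, and dimensions are illustrative placeholders for the values tuned in the paper, and the learned positional embedding is one common design choice.

import tensorflow as tf
from tensorflow.keras import layers, Model

class PositionalEmbedding(layers.Layer):
    """Adds a learned position embedding so self-attention sees step order."""
    def __init__(self, timesteps, d_model):
        super().__init__()
        self.pos_emb = layers.Embedding(timesteps, d_model)
        self.timesteps = timesteps

    def call(self, x):
        positions = tf.range(self.timesteps)
        return x + self.pos_emb(positions)

def encoder_block(x, d_model, n_heads, d_ff, dropout):
    """One standard encoder block: multi-head self-attention plus a
    feed-forward sublayer, each with residual connection and layer norm."""
    attn = layers.MultiHeadAttention(num_heads=n_heads,
                                     key_dim=d_model // n_heads)(x, x)
    x = layers.LayerNormalization()(x + layers.Dropout(dropout)(attn))
    ff = layers.Dense(d_ff, activation="relu")(x)
    ff = layers.Dense(d_model)(ff)
    return layers.LayerNormalization()(x + layers.Dropout(dropout)(ff))

def build_transformer(timesteps, n_features, d_model=64, n_heads=4,
                      n_layers=2, d_ff=128, dropout=0.1):
    """Transformer encoder for the same sequence-to-fatigue-level regression."""
    inputs = layers.Input(shape=(timesteps, n_features))
    x = layers.Dense(d_model)(inputs)              # project features to d_model
    x = PositionalEmbedding(timesteps, d_model)(x)
    for _ in range(n_layers):                      # stacked encoder blocks
        x = encoder_block(x, d_model, n_heads, d_ff, dropout)
    x = layers.GlobalAveragePooling1D()(x)         # pool over time steps
    outputs = layers.Dense(1)(x)
    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")    # same MSE criterion
    return model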