Figure 4. Simulation samples generated based on the generative adversarial network (time-phase images of the HTRU-Medlat dataset).



$$\text{Precision} = \frac{TP}{TP + FP} \tag{1}$$


(2) Recall: the proportion of all samples with positive true labels that are correctly predicted to be positive, i.e.,


$$\text{Recall} = \frac{TP}{TP + FN} \tag{2}$$


(3) F1-score: the harmonic mean of precision and recall, i.e.,



$$\text{F1-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{3}$$
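To make the three metrics concrete, the following Python sketch computes them directly from the confusion-matrix counts; the function name and the example counts are illustrative, not taken from the paper.

```python
# A minimal sketch computing Eqs. (1)-(3) from confusion-matrix counts.
# The function name and example counts are illustrative, not from the paper.

def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Return (precision, recall, F1) from TP, FP, and FN counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0       # Eq. (1)
    recall = tp / (tp + fn) if (tp + fn) else 0.0          # Eq. (2)
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0  # Eq. (3)
    return precision, recall, f1

# Example: 90 pulsars correctly found, 10 spurious detections, 5 pulsars missed.
p, r, f = precision_recall_f1(tp=90, fp=10, fn=5)
print(f"precision={p:.3f}  recall={r:.3f}  f1={f:.3f}")
```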

               3.2. Model comparison experiment
The proposed model is compared against a CNN model [7] whose structure is similar to the LeNet network [22], but with some adaptations for the pulsar candidate identification task. For the hyperparameter settings of the residual network model, this paper uses a mini-batch size of 128, a learning rate of 0.001, an L2 regularisation weight of 0.00001, and a standard Gaussian distribution to initialise the model parameters. In addition, the model employs the ReLU [23] activation function in all layers except the last, which uses a sigmoid activation function. The objective function for optimisation is cross-entropy, and the Adam [24] optimiser is used.
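These settings can be summarised in a short PyTorch sketch; the small stand-in network below is not the paper's residual architecture, only a minimal vehicle for the stated hyperparameters (mini-batch 128, learning rate 0.001, L2 weight 0.00001 via weight decay, standard Gaussian initialisation, ReLU hidden layers, a sigmoid output, cross-entropy loss, and Adam).

```python
import torch
import torch.nn as nn

# A small stand-in classifier, NOT the paper's residual network:
# ReLU hidden layers with a sigmoid activation on the final layer.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

# Standard Gaussian initialisation of the weight tensors, as stated above.
for p in model.parameters():
    if p.dim() > 1:
        nn.init.normal_(p, mean=0.0, std=1.0)

criterion = nn.BCELoss()  # cross-entropy objective on the sigmoid output
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)

# One optimisation step on a dummy mini-batch of 128 time-phase images.
images = torch.randn(128, 1, 32, 32)
labels = torch.randint(0, 2, (128,)).float()
loss = criterion(model(images).squeeze(1), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```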

For the generative adversarial network's hyperparameter settings, the learning rate is 0.001, the L2 regularization weight is 0.0005, the number of training rounds is 200, the optimiser is Adam, the size of the minibatch is 128, the parameters are initialized using Kaiming initialization [25], the discriminator is trained for 5 rounds on each batch with its weights clipped to the range [-0.005, 0.005], and LeakyReLU uses a slope of 0.1.
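A minimal sketch of the adversarial training loop implied by these settings follows. The weight clipping and five discriminator rounds per batch suggest a WGAN-style critic objective, which is assumed here; the tiny fully connected networks are placeholders rather than the paper's generator and discriminator, and the latent size is an assumption.

```python
import torch
import torch.nn as nn

LATENT, N_CRITIC, CLIP = 64, 5, 0.005  # latent size assumed; others from the text

# Placeholder generator and discriminator with LeakyReLU (slope 0.1).
G = nn.Sequential(nn.Linear(LATENT, 256), nn.LeakyReLU(0.1), nn.Linear(256, 1024))
D = nn.Sequential(nn.Linear(1024, 256), nn.LeakyReLU(0.1), nn.Linear(256, 1))

for net in (G, D):  # Kaiming initialisation of the weight matrices [25]
    for p in net.parameters():
        if p.dim() > 1:
            nn.init.kaiming_normal_(p)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3, weight_decay=5e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3, weight_decay=5e-4)

real = torch.randn(128, 1024)  # dummy mini-batch of flattened time-phase images

for _ in range(N_CRITIC):  # 5 discriminator rounds per batch
    fake = G(torch.randn(128, LATENT)).detach()
    loss_d = D(fake).mean() - D(real).mean()  # assumed Wasserstein critic loss
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    with torch.no_grad():  # clip discriminator weights to [-0.005, 0.005]
        for p in D.parameters():
            p.clamp_(-CLIP, CLIP)

loss_g = -D(G(torch.randn(128, LATENT))).mean()  # one generator step
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```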

The simulated positive samples generated by the trained generator are shown in Figure 4, where the ten images in the first row are the real pulsar samples and the ten images in the second row are the simulated samples produced by the generator. It can be seen that the simulated samples retain the features of the real pulsar samples to a certain extent. The training loss on the HTRU-Medlat dataset is shown in Figure 5, which reveals the optimised training performance of the proposed method in the experiment. The training loss curve declines sharply while the quantity of training samples is still relatively small, and the loss between the simulated and real pulsar images remains fairly low, at 4%, once the dataset is expanded. The training loss of the proposed method is significantly lower than that of other existing models, thus guaranteeing better performance in pulsar sample identification.