Ma et al. Intell Robot 2023;3(4):581-95 I http://dx.doi.org/10.20517/ir.2023.33 Page 9 of 15
critical heterogeneous information. For example, in social networks, nodes can be users, pages, or points of
interest, and edges can represent different types of interactions such as following and liking. The subnetwork
of a GHNN can handle user nodes, page nodes, and interest point nodes separately while considering different
types of edge information in order to better capture the characteristics of each node type and edge type.
In the graph learning phase, each GHNN subnetwork uses a common graph neural network structure (such
as GCN or GAT) for forward propagation and back propagation to learn representations of nodes and edges.
These representations are low-dimensional vector representations of nodes and edges, capturing the seman-
tic information of nodes and edges throughout the entire graph. Through collaborative learning of multiple
subnetworks, GHNNs can effectively fuse information of different types of nodes and edges in heterogeneous
graphs.
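The per-type subnetwork idea can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's architecture: the node types (`user`, `page`), feature sizes, toy edges, and the mean-aggregation rule are all hypothetical, chosen only to show how type-specific projections map heterogeneous nodes into a shared embedding space.

```python
import numpy as np

np.random.seed(0)

# Toy heterogeneous graph (hypothetical): two node types with different feature sizes.
feats = {
    "user": np.random.randn(2, 4),   # 2 user nodes, 4-dim features
    "page": np.random.randn(2, 3),   # 2 page nodes, 3-dim features
}
# Typed edges as (src_type, src_idx, dst_type, dst_idx) tuples.
edges = [("user", 0, "page", 0), ("user", 1, "page", 1), ("user", 0, "page", 1)]

d_out = 5  # shared embedding size so different types can be fused later
# One projection ("subnetwork") per node type -- the GHNN idea in miniature.
W = {t: np.random.randn(x.shape[1], d_out) * 0.1 for t, x in feats.items()}

def gcn_like_layer(feats, edges, W):
    """One message-passing step: project each type with its own weights,
    then mean-aggregate incoming messages into each destination node."""
    h = {t: x @ W[t] for t, x in feats.items()}      # type-specific projection
    out = {t: v.copy() for t, v in h.items()}        # self term
    counts = {t: np.ones(len(v)) for t, v in h.items()}
    for (st, si, dt, di) in edges:
        out[dt][di] += h[st][si]
        counts[dt][di] += 1
    # Mean aggregation followed by a ReLU nonlinearity.
    return {t: np.maximum(out[t] / counts[t][:, None], 0) for t in out}

emb = gcn_like_layer(feats, edges, W)
print(emb["page"].shape)  # (2, 5): every node type now lives in a shared space
```

A real implementation would stack several such layers and train the per-type weights by back propagation; the point here is only that each node type gets its own parameters while all types end up in one representation space.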
After graph feature learning is completed, the GHNN-ACO algorithm introduces ACO algorithms to
further optimize the classification and prediction of graph data. ACO algorithms are swarm intelligence algo-
rithms inspired by the behavior of ants searching for food. Ants guide other ants to choose a
path by releasing pheromones, thereby realizing a collaborative search among all ants. In GHNN-ACO, each ant repre-
sents a path search process. The pheromone concentration on a path increases or decreases according to
the quality of that path, and ants tend to choose the path with the higher pheromone concentration.
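The pheromone mechanism described above can be sketched as follows. All names and constants here (the two-edge graph, the evaporation and deposit rates) are illustrative assumptions, not values from the paper; the sketch only shows roulette-wheel path choice plus evaporate-then-deposit updating.

```python
import random

random.seed(1)

# Hypothetical pheromone table: edge -> concentration (graph is illustrative).
pheromone = {("A", "B"): 1.0, ("A", "C"): 1.0}
EVAPORATION = 0.5   # fraction of pheromone lost each iteration (assumed value)
DEPOSIT = 1.0       # deposit amount, scaled by path quality (assumed value)

def choose_next(current, neighbors):
    """Roulette-wheel selection: probability proportional to pheromone."""
    weights = [pheromone[(current, n)] for n in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]

def update(path_edges, quality):
    """Evaporate everywhere, then deposit along the traversed path."""
    for e in pheromone:
        pheromone[e] *= (1 - EVAPORATION)
    for e in path_edges:
        pheromone[e] += DEPOSIT * quality

# One ant walks A -> B and that path is judged good (quality 1.0):
nxt = choose_next("A", ["B", "C"])
update([("A", "B")], quality=1.0)
print(pheromone[("A", "B")] > pheromone[("A", "C")])  # True: good path reinforced
```

After one update the reinforced edge carries more pheromone than the untraversed one, so later ants are biased toward it, which is exactly the collaborative-search effect the text describes.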
The GHNN-ACO algorithm combines the node representation information obtained from graph feature learning
with path search in ACO algorithms. Each ant starts from a randomly chosen starting node and uses the
learned node representation information to predict the category of that node. Then, the ant selects the next
node based on the pheromone concentration and node characterization information, where the pheromone
concentration is determined based on the classification probability of the node. Ants perform path selec-
tion based on a specific strategy and update the pheromone concentration of the edges on the chosen path.
Throughout the search process of ACO algorithms, GHNN-ACO effectively utilizes the combination of node
representations and path search to optimize tasks, such as node classification and link prediction.
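One plausible way to realize this coupling is sketched below: node desirability comes from the confidence of the (GHNN-produced) class probabilities, and transition weights multiply pheromone by that desirability, in the style of a standard ACO transition rule. The graph, the alpha/beta/rho weights, and the quality measure are all assumptions for illustration, not the paper's exact formulas.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: 4 nodes with class probabilities as if output by the GHNN.
n_nodes = 4
class_probs = rng.dirichlet(np.ones(3), size=n_nodes)  # softmax-like rows
adj = {0: [1, 2], 1: [2, 3], 2: [3], 3: [0]}           # toy directed graph
pher = np.ones((n_nodes, n_nodes))                     # edge pheromone matrix
ALPHA, BETA, RHO = 1.0, 1.0, 0.5                       # assumed ACO weights

def heuristic(j):
    """Desirability of node j: confidence of its predicted class
    (one plausible way to couple GNN outputs to ACO, per the text)."""
    return class_probs[j].max()

def ant_walk(start, steps=3):
    """One ant: repeatedly pick a neighbor with probability proportional to
    pheromone^ALPHA * desirability^BETA."""
    path, cur = [start], start
    for _ in range(steps):
        nbrs = adj[cur]
        w = np.array([pher[cur, j] ** ALPHA * heuristic(j) ** BETA for j in nbrs])
        cur = nbrs[rng.choice(len(nbrs), p=w / w.sum())]
        path.append(cur)
    return path

def deposit(path, quality):
    """Evaporate all edges, then reinforce the edges the ant traversed."""
    pher[:] *= (1 - RHO)
    for a, b in zip(path, path[1:]):
        pher[a, b] += quality

path = ant_walk(start=0)
deposit(path, quality=np.mean([heuristic(j) for j in path]))
print(len(path))  # 4 nodes visited (start + 3 steps)
```

Edges along paths through confidently classified nodes accumulate pheromone, so subsequent ants concentrate on those regions of the graph, which is the optimization behavior described above.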
The structure of GHNN-ACO algorithms has many advantages but also some disadvantages. Let us analyze
them one by one:
1. Processing heterogeneous graph data: The GHNN-ACO algorithm can effectively process heterogeneous
graph data, where nodes and edges can have different types and attributes. Through the structure of GHNNs,
each subnetwork specifically processes one type of node or edge, which can better capture the complex infor-
mation of heterogeneous graphs.
2. Fusion of local and global information: GHNNs learn the representation of graphs through local subnet-
works and global information aggregation. ACO algorithms can be regarded as a global search method
through path search, which combines global and local information. This fusion can help the algorithm
better handle the relationships between structures and nodes in the graph.
3. Adaptive search strategy: path search in ACO algorithms is an adaptive strategy, and ants will select the next
node according to pheromone concentration and node characterization information. This strategy enables
the algorithm to concentrate on searching high-quality paths during the search process, which is beneficial
for improving classification accuracy and link prediction accuracy.
4. Collaborative optimization: the two parts of the GHNN-ACO algorithm, namely GHNNs and ACO algorithms,
optimize each other collaboratively. GHNNs provide node representation information, and ACO algorithms
use this information when searching paths. Ant path selection in turn affects the update of pheromone
concentration. This collaborative optimization can enable the algorithm to converge faster and find more
optimal solutions.
In the iterative update process of the GHNN-ACO algorithm, the parameters of the GHNN and the pheromone