Volume 4, Issue 3, 1 December 2022, Pages 393-404
Abstract. Unlike traditional pipeline methods, joint extraction approaches use a single model to extract entities and the semantic relations between them from unstructured text, and they achieve better performance. A pioneering work, HRL-RE, uses a hierarchical reinforcement learning model that decomposes the entire extraction process into high-level relation detection and low-level entity identification. HRL-RE makes the extraction of entities and relations more accurate and handles overlapping entities and relations to a certain extent. However, it still falls short on sentences with overlapping entities and relations, for two reasons: policy learning is usually inefficient, and the gradient estimators have high variance. In this paper, we propose a new method, Advantage Hierarchical Reinforcement Learning for Entity Relation Extraction (AHRL-ERE), which combines the HRL-RE model with a new advantage function to extract entities and relations from unstructured text. Specifically, we construct a new advantage function based on the reference value of the policy function in the high-level subtask. We then combine this advantage function with the value function of the policy in the low-level subtask to form a new value function. Because this new value function evaluates the current policy immediately, AHRL-ERE can correct the direction of the policy gradient update in time, making policy learning efficient. Moreover, the advantage function subtracts the reference value of the high-level policy value function from the low-level policy value function, so AHRL-ERE can reduce the variance of the gradient estimator. AHRL-ERE is therefore more effective at extracting overlapping entities and relations from unstructured text.
Experiments on widely used datasets demonstrate that the proposed algorithm outperforms existing approaches.
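The variance-reduction idea stated in the abstract can be illustrated with a minimal numerical sketch. The snippet below is a hypothetical construction, not the paper's actual model: it treats the high-level reference value as a state-dependent baseline and forms the advantage A = V_low − V_high_ref, then checks that subtracting a baseline correlated with the low-level return leaves an advantage with lower variance, which is the property AHRL-ERE exploits in its gradient estimator.

```python
import numpy as np

# Hypothetical illustration of the variance-reduction idea behind
# AHRL-ERE: subtract a state-dependent baseline (the high-level
# reference value) from the low-level return to form the advantage.
rng = np.random.default_rng(0)

# Simulated per-state high-level reference values (the baseline).
v_high_ref = rng.normal(size=10_000)

# Simulated low-level returns, correlated with the baseline plus noise.
returns_low = v_high_ref + 0.3 * rng.normal(size=10_000)

# Advantage as described in the abstract: A = V_low - V_high_ref.
advantages = returns_low - v_high_ref

# The advantage has markedly lower variance than the raw return,
# so a policy-gradient estimator weighted by it is less noisy.
print(np.var(returns_low), np.var(advantages))
```

Because the baseline depends only on the state (not the action), subtracting it leaves the expected policy gradient unchanged while shrinking its variance; the noisy raw return here has variance near 1.1, while the advantage retains only the 0.3-scaled noise term.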
How to Cite this Article:
X. Zhu, W. Zhu, Hierarchical reinforcement learning with advantage function for entity relation extraction, J. Appl. Numer. Optim. 4 (2022), 393-404.