BioKnowPrompt: Incorporating imprecise knowledge into prompt-tuning verbalizer with biomedical text for relation extraction

2022 
Tuning pre-trained language models (PLMs) with task-specific prompts has achieved great success in different domains. Cloze-style language prompts stimulate the versatile knowledge of PLMs, directly bridging the gap between pre-training tasks and various downstream tasks. Large unlabelled corpora in the biomedical domain have been created in the last decade (e.g., PubMed, PMC, MIMIC, and ScienceDirect). In this paper, we introduce BioKnowPrompt, a prompt-tuning PLM model that incorporates imprecise knowledge into the verbalizer for relation extraction from biomedical text. In particular, we infuse entity and relation knowledge into prompt construction, and we use biomedical domain knowledge constraints to synergistically optimize their representation. By fine-tuning PLMs with these additional prompts, we can further stimulate the rich knowledge distributed in PLMs to better serve downstream tasks such as relation extraction. BioKnowPrompt shows significant potential in few-shot learning, outperforming previous models and achieving state-of-the-art results on the 5 datasets.
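To make the mechanism concrete, the sketch below shows how a cloze-style prompt plus a verbalizer turns relation extraction into masked-word prediction. The model name (dmis-lab/biobert-v1.1), the template, and the label-to-answer-word mapping are all illustrative assumptions, not the paper's configuration; BioKnowPrompt further enriches the verbalizer with imprecise biomedical knowledge rather than fixed hand-picked words.

```python
# Minimal sketch of cloze-style prompt inference with a verbalizer, in the
# spirit of BioKnowPrompt. Model, template, and answer words are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "dmis-lab/biobert-v1.1"  # assumed biomedical PLM; any masked LM works
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL).eval()

# Verbalizer: maps each relation label to an answer word scored at [MASK].
# Hand-picked single-token words are used here purely for illustration.
VERBALIZER = {"treats": "treats", "causes": "causes", "no_relation": "and"}

def classify_relation(sentence: str, head: str, tail: str) -> str:
    """Score each label's answer word at the masked slot; return the argmax."""
    # Cloze-style template: the PLM fills the [MASK] between the two entities,
    # directly reusing its masked-language-modeling pre-training objective.
    prompt = f"{sentence} In this sentence, {head} {tokenizer.mask_token} {tail}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero()[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos].squeeze(0)
    scores = {
        label: logits[tokenizer.convert_tokens_to_ids(word)].item()
        for label, word in VERBALIZER.items()
    }
    return max(scores, key=scores.get)

print(classify_relation("Aspirin relieved the patient's headache.",
                        "aspirin", "headache"))
```

In prompt-tuning proper, the template tokens and answer-word embeddings above would be learnable parameters optimized on a few labelled examples, rather than fixed strings as in this inference-only sketch.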