
Job Skill Extraction via LLM-Centric Multi-Module Framework

cs.CL updates on arXiv.org
Guojing Li (City University of Hong Kong, Renmin University of China), Zichuan Fu (City University of Hong Kong), Junyi Li (City University of Hong Kong), Faxue Liu (City University of Hong Kong), Wenxia Zhou (Renmin University of China), Yejing Wang (City University of Hong Kong), Jingtong Gao (City University of Hong Kong), Maolin Wang (City University of Hong Kong), Rungen Liu (City University of Hong Kong), Wenlin Zhang (City University of Hong Kong), Xiangyu Zhao (City University of Hong Kong)

arXiv:2604.21525v1 Announce Type: new

Abstract: Span-level skill extraction from job advertisements underpins candidate-job matching and labor-market analytics, yet generative large language models (LLMs) often yield malformed spans, boundary drift, and hallucinations, especially for long-tail terms and under cross-domain shift. We present SRICL, an LLM-centric framework that combines semantic retrieval (SR), in-context learning (ICL), and supervised fine-tuning (SFT) with a deterministic verifier. SR pulls in-domain annotated sentences and definitions from ESCO to form format-constrained prompts that stabilize boundaries and handle coordination. SFT aligns output behavior, while the verifier enforces pairing, non-overlap, and BIO legality with minimal retries. On six public span-labeled corpora of job-ad sentences across sectors and languages, SRICL achieves substantial STRICT-F1 improvements over GPT-3.5 prompting baselines and sharply reduces invalid tags and hallucinated spans, enabling dependable sentence-level deployment in low-resource, multi-domain settings.
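The abstract's deterministic verifier checks structural constraints on the model's output (tag pairing, span non-overlap, BIO legality). A minimal sketch of what such checks could look like is below; the function names, the BIO tagging details, and the span representation are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of deterministic output checks of the kind the abstract's
# verifier enforces. Details here are assumptions for illustration.

def is_legal_bio(tags):
    """Return True iff a tag sequence obeys BIO constraints:
    every I-X must follow a B-X or I-X of the same entity type X."""
    prev = "O"
    for tag in tags:
        if tag.startswith("I-"):
            entity = tag[2:]
            if prev not in (f"B-{entity}", f"I-{entity}"):
                return False  # I- tag without a matching preceding B-/I-
        elif tag != "O" and not tag.startswith("B-"):
            return False  # malformed tag: not O, B-*, or I-*
        prev = tag
    return True


def spans_non_overlapping(spans):
    """Return True iff the (start, end) spans are pairwise disjoint."""
    ordered = sorted(spans)
    for (_, end1), (start2, _) in zip(ordered, ordered[1:]):
        if start2 < end1:  # next span begins before the previous one ends
            return False
    return True


if __name__ == "__main__":
    print(is_legal_bio(["B-SKILL", "I-SKILL", "O"]))  # True
    print(is_legal_bio(["O", "I-SKILL"]))             # False
    print(spans_non_overlapping([(0, 5), (5, 9)]))    # True
    print(spans_non_overlapping([(0, 5), (3, 9)]))    # False
```

An output failing either check would trigger one of the "minimal retries" the abstract mentions, rather than being passed downstream.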