
LT4HALA 2026



EvaLatin


INTRODUCTION

The LT4HALA 2026 workshop will also be the venue of the fourth edition of EvaLatin, the evaluation campaign entirely devoted to NLP tools for Latin. The campaign is designed to answer two questions:

The 2026 edition of EvaLatin will feature two tasks: Dependency Parsing and Named Entity Recognition.

Shared test data and an evaluation script will be provided to the participants, who may choose to take part in either one or both tasks.

IMPORTANT DATES

DATA

Dependency Parsing

The dependency parsing task is based on the Universal Dependencies framework.
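
By way of illustration, Universal Dependencies treebanks are conventionally distributed in the CoNLL-U format. The following is a minimal sketch of reading such data in Python with the third-party conllu package; the toy sentence and its annotation are invented for illustration and are not taken from the EvaLatin data.

    # Minimal sketch: reading UD-style CoNLL-U data with the third-party
    # `conllu` package (pip install conllu). The toy sentence below is
    # illustrative, not part of the official EvaLatin release.
    from conllu import parse

    data = (
        "# text = Puella cantat .\n"
        "1\tPuella\tpuella\tNOUN\t_\t_\t2\tnsubj\t_\t_\n"
        "2\tcantat\tcanto\tVERB\t_\t_\t0\troot\t_\t_\n"
        "3\t.\t.\tPUNCT\t_\t_\t2\tpunct\t_\t_\n"
    )

    for sentence in parse(data):
        for token in sentence:
            # Each token exposes the ten CoNLL-U fields by name.
            print(token["id"], token["form"], token["head"], token["deprel"])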

Named Entity Recognition

TBA

EVALUATION

TBA

HOW TO PARTICIPATE

Participants will be required to submit their runs and to provide a technical report including a brief description of their approach (focusing on the adopted algorithms, models and resources), a summary of their experiments, and an analysis of the obtained results. Technical reports will be included in the proceedings as short papers: the maximum length is 4 pages (excluding references) and they should follow the LREC 2026 official format. Reports will receive a light review: we will check the correctness of the format, the exactness of results and rankings, and the overall exposition. Reports should be submitted through the START submission page of the workshop (TBA).

Participants are allowed to use any approach (from traditional machine learning algorithms to Large Language Models) and any resource (annotated and non-annotated data, embeddings); all approaches and resources are expected to be described in the system reports.

