Chapter 6. Annotation and Adjudication
Now that you have a corpus and a model, it’s time to start looking at the actual annotation process—the “A” in the MATTER cycle. Here is where you define the method by which your model is applied to your texts, both in theory (how your task is described to annotators) and in practice (what software and other tools are used to create the annotations). A critical part of this stage is adjudication—where you take your annotators’ work and use it to create the gold standard corpus that you will use for machine learning. In this chapter we will answer the following questions:
What are the components of an annotation task?
What is the difference between a model specification and annotation guidelines?
How do you create guidelines that fit your task?
What annotation tool should you use for your annotation task?
What skills do your annotators need to create your annotations?
How can you tell (qualitatively) if your annotation guidelines are good for your task?
What is involved in adjudicating the annotations?
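Before getting to those questions, it helps to have a concrete picture of what an annotation actually is as data. The following is a minimal sketch of one common convention, standoff annotation, in which the source text is left untouched and each label points into it by character offsets. This is an illustration, not code from this book; the class name, fields, and labels are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class SpanAnnotation:
    """One annotator's label over a character span of a document (standoff style)."""
    doc_id: str     # which document the span lives in
    start: int      # character offset, inclusive
    end: int        # character offset, exclusive
    label: str      # a tag from the model spec, e.g., "PERSON" (illustrative)
    annotator: str  # who produced it; needed later for adjudication

text = "Ada Lovelace was born in London."
ann = SpanAnnotation(doc_id="doc1", start=0, end=12,
                     label="PERSON", annotator="annotator_a")
assert text[ann.start:ann.end] == "Ada Lovelace"
```

Keeping the annotator's ID on every record is what makes adjudication possible later: you can line up competing labels for the same span and resolve the disagreements.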
The Infrastructure of an Annotation Project
It’s much easier to write annotation guidelines when you understand how annotation projects are usually run, so before getting into the details of guideline writing, we’re going to go over a few different ways that you can structure your annotation effort.
Currently, what we would call the “traditional” approach goes like this. Once a schema is developed and a corpus is collected, an investigator writes a set of annotation guidelines, annotators apply the schema to the texts according to those guidelines, and the resulting annotations are adjudicated into the gold standard corpus.
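The adjudication step at the end of that pipeline is, at its simplest, a vote. As a minimal sketch, assuming each item has collected one label per annotator (the `adjudicate` function and the label names here are hypothetical, not this book's code), a strict-majority merge might look like this:

```python
from collections import Counter

def adjudicate(annotations):
    """Split items into gold labels (strict majority) and disputed items.

    `annotations` maps each item ID to the list of labels it received,
    one per annotator. Disputed items go back to a human adjudicator.
    """
    gold, disputed = {}, {}
    for item_id, labels in annotations.items():
        label, count = Counter(labels).most_common(1)[0]
        if count > len(labels) / 2:  # strict majority wins
            gold[item_id] = label
        else:                        # tie or three-way split
            disputed[item_id] = labels
    return gold, disputed

# Three annotators labeling sentences for sentiment:
votes = {
    "s1": ["pos", "pos", "neg"],  # two of three agree -> gold
    "s2": ["neg", "pos", "neu"],  # no majority -> human adjudicator
}
gold, disputed = adjudicate(votes)
print(gold)      # {'s1': 'pos'}
print(disputed)  # {'s2': ['neg', 'pos', 'neu']}
```

In practice an adjudicator does more than break ties (spans may partially overlap, and disagreements often point to gaps in the guidelines themselves), but the output is the same: a single, consistent set of labels that becomes the gold standard.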