
Improving the Annotation Efficiency and Effectiveness in the Text Domain

  • Conference paper
Advances in Information Retrieval (ECIR 2019)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11438)

Abstract

Annotated corpora are an important resource for evaluating methods, comparing competing methods, and training supervised learning methods. When creating a new corpus with the help of human annotators, annotation practitioners pursue two important goals: minimizing the required resources (efficiency) and maximizing the resulting annotation quality (effectiveness). Optimizing these two criteria is a challenging problem, especially in certain domains (e.g., medical, legal). In the scope of my PhD thesis, the aim is to create novel annotation methods for efficient and effective data acquisition. In this paper, methods and preliminary results are described for two ongoing annotation projects: medical information extraction and question answering.
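The abstract frames annotation quality (effectiveness) as one of the two optimization goals. As a rough illustration of how such quality is commonly quantified, the sketch below computes Cohen's kappa, a standard inter-annotator agreement measure, for two annotators who labelled the same items; the function, class names, and labels are invented for the example and are not taken from the paper.

```python
# Illustrative sketch (not from the paper): measuring annotation quality
# via inter-annotator agreement, using Cohen's kappa for two annotators.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators who labelled the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labelled identically.
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, derived from each annotator's label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if p_expected == 1.0:  # degenerate case: both annotators use a single identical label
        return 1.0
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical labels from two annotators for five sentences.
annotator_1 = ["Population", "Intervention", "Other", "Outcome", "Other"]
annotator_2 = ["Population", "Intervention", "Other", "Other", "Other"]
print(f"Cohen's kappa: {cohens_kappa(annotator_1, annotator_2):.2f}")  # ~0.71
```

A kappa close to 1 indicates near-perfect agreement, while values around 0 indicate chance-level agreement; in practice, such a score is one signal for deciding whether annotation guidelines or annotator training need improvement before scaling up the annotation effort.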

Notes

  1. https://gate.ac.uk/applications/bio-yodie.html.

Author information

Correspondence to Markus Zlabinger.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Zlabinger, M. (2019). Improving the Annotation Efficiency and Effectiveness in the Text Domain. In: Azzopardi, L., Stein, B., Fuhr, N., Mayr, P., Hauff, C., Hiemstra, D. (eds) Advances in Information Retrieval. ECIR 2019. Lecture Notes in Computer Science, vol 11438. Springer, Cham. https://doi.org/10.1007/978-3-030-15719-7_46

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-15719-7_46

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-15718-0

  • Online ISBN: 978-3-030-15719-7

  • eBook Packages: Computer Science, Computer Science (R0)
