Crowdsourcing for Humanities

Crowdsourcing techniques have recently been used to gather vast amounts of annotations from non-experts working online through specialized platforms such as Amazon Mechanical Turk (AMT) or CrowdFlower. In addition, other kinds of collaborative platforms for transcribing manuscripts, tagging digital content, or correcting OCR output have been successfully tested in specific domains, fostering online collaboration among experts in the Humanities.
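
As an illustration of how such platforms are driven programmatically, the sketch below publishes a single annotation micro-task on Amazon Mechanical Turk through the boto3 MTurk client. It is only a minimal sketch: the passage, reward, HTML form, and sandbox endpoint are placeholder assumptions, not part of this project's actual setup.

```python
# Minimal sketch (assumes AWS credentials are configured locally; the question
# form is a placeholder, not this project's real annotation interface).
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",  # sandbox for testing
)

# A simplified HTMLQuestion: a real HIT would also fill in the assignmentId
# field from the URL with a small script, omitted here for brevity.
question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <!DOCTYPE html>
    <html><body>
      <form name="mturk_form" method="post" action="https://workersandbox.mturk.com/mturk/externalSubmit">
        <input type="hidden" name="assignmentId" value="">
        <p>Passage: "Alcide De Gasperi addressed the assembly in 1946."</p>
        <p>Which people, places and organisations are mentioned?
           <input type="text" name="entities"></p>
        <input type="submit">
      </form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

hit = mturk.create_hit(
    Title="Tag named entities in a short historical passage",
    Description="Read one passage and list the entities it mentions.",
    Reward="0.05",                     # USD, passed as a string
    MaxAssignments=5,                  # collect five independent judgements
    LifetimeInSeconds=3 * 24 * 3600,   # task visible for three days
    AssignmentDurationInSeconds=600,   # ten minutes per worker
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
```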

Given these premises, we plan to perform annotation by exploiting crowdsourcing as much as possible and to combine it with manual annotation and quality assessment by experts with different backgrounds (linguistics, history, the arts, etc.). Such advancements will in turn find direct application in Digital Humanities scenarios. Although a number of annotation interfaces have been developed in the past, none of them meets all the expected requirements in terms of usability, flexibility, import/export functionalities, visualization, multi-layer annotations, and support for collaborative online annotation.
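
One simple way to combine crowd judgements with expert quality assessment is sketched below: worker labels are aggregated by majority vote and then checked against a small expert-annotated gold subset. The item identifiers, label set, and gold annotations are invented for illustration only.

```python
# Illustrative sketch: item IDs, labels and the expert "gold" subset are
# made-up examples, not data from the project.
from collections import Counter

def majority_vote(worker_labels):
    """Aggregate one item's crowd labels into a single label by majority vote."""
    return Counter(worker_labels).most_common(1)[0][0]

# Crowd judgements: item id -> labels given by independent workers.
crowd = {
    "doc1-sent3": ["PERSON", "PERSON", "ORG"],
    "doc1-sent7": ["PLACE", "PLACE", "PLACE"],
    "doc2-sent1": ["ORG", "PERSON", "ORG"],
}

# Expert quality-assessment subset: item id -> label assigned by a domain expert.
gold = {"doc1-sent3": "PERSON", "doc2-sent1": "ORG"}

aggregated = {item: majority_vote(labels) for item, labels in crowd.items()}

# Simple quality check: agreement between aggregated crowd labels and the experts.
checked = [item for item in aggregated if item in gold]
agreement = sum(aggregated[i] == gold[i] for i in checked) / len(checked)
print(f"Crowd/expert agreement on the gold subset: {agreement:.0%}")
```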

Application cases: 

1) Extend the online platform of De Gasperi’s digital archive with an “expert” view that allows semantic tagging, linguistic annotation, note-taking, and export functionalities (see the export sketch after this list).
2) Create a digital archive of artworks with restricted access that enables experts to take notes, enrich content with wiki-based functionalities, and build a network of experts.
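
As a sketch of what the export functionality in case 1 might produce (the layer names, field names, and URI are hypothetical, not the archive's actual schema), annotations could be stored stand-off, with each layer (linguistic annotation, semantic tags, editorial notes) pointing into the source text by character offsets and serialised to JSON. Keeping layers independent means new annotation layers can be added without touching the source text or existing layers.

```python
# Hypothetical stand-off, multi-layer annotation export; the layer names and
# JSON structure are assumptions made for illustration.
import json

document = {
    "doc_id": "degasperi-speech-1946-001",
    "text": "Alcide De Gasperi addressed the Constituent Assembly in 1946.",
    "layers": {
        "linguistic": [
            {"start": 0, "end": 17, "type": "NP", "annotator": "crowd"},
        ],
        "semantic": [
            {"start": 0, "end": 17, "tag": "PERSON", "uri": "http://example.org/degasperi"},
            {"start": 32, "end": 52, "tag": "ORG"},
        ],
        "notes": [
            {"start": 56, "end": 60, "note": "Year of the first speech in the corpus.",
             "author": "expert-07"},
        ],
    },
}

# Each layer references the text by character offsets, so layers can be
# imported and exported independently of one another.
with open("degasperi-speech-1946-001.json", "w", encoding="utf-8") as f:
    json.dump(document, f, ensure_ascii=False, indent=2)
```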

Objectives: 
  • Investigate different crowdsourcing scenarios for gathering manually annotated data from both experts and naive users.
  • Explore usability, visualization and flexibility issues in crowdsourced annotation of Humanities material, both textual and visual.
  • Experiment with novel annotation practices such as social tagging, games with a purpose, etc.