| Content Provider | ACM Digital Library |
|---|---|
| Author | Kazai, Gabriella |
| Abstract | The evaluation and tuning of information retrieval (IR) systems based on the Cranfield paradigm requires purpose-built test collections, which include sets of human-contributed relevance labels indicating the relevance of search results to a set of user queries. Traditional methods of collecting relevance labels rely on a fixed group of hired expert judges, who are trained to interpret user queries as accurately as possible and label documents accordingly. Human judges and the obtained relevance labels thus provide a critical link within the Cranfield-style IR evaluation framework, where disagreement among judges and the impact of variable judgment sets on the final outcome of an evaluation are well-studied issues. There is also reported evidence that experiment outcomes can be affected by changes to the judging guidelines or changes in the judge population. Recently, the growing volume and diversity of the topics and documents to be judged have been driving the increased adoption of crowdsourcing methods in IR evaluation, offering a viable alternative that scales with modest costs. In this model, relevance judgments are distributed online over a large population of humans, a crowd, facilitated, for example, by a crowdsourcing platform such as Amazon's Mechanical Turk or Clickworker. Such platforms allow millions of anonymous crowd workers to be hired temporarily for micro-payments to complete so-called human intelligence tasks (HITs), such as labeling images or documents. Studies have shown that workers come from diverse backgrounds, work in a variety of different environments, and have different motivations. For example, workers may turn to crowdsourcing as a way to make a living, to serve an altruistic or social purpose, or simply to fill their time. They may become loyal crowd workers on one or more platforms, or they may leave after their first couple of encounters. Clearly, such a model is in stark contrast to the highly controlled methods that characterize the work of trained judges. For example, in a micro-task-based crowdsourcing setup, worker training is usually minimal or non-existent. Furthermore, it is widely reported that labels provided by crowd workers can vary in quality, leading to noisy labels. Crowdsourcing can also suffer from undesirable worker behaviour and practices, e.g., dishonest behaviour or lack of expertise, that result in low-quality contributions. While a range of quality assurance and control techniques have now been developed to reduce noise during or after task completion, little is known about the workers themselves and possible relationships between workers' characteristics, behaviour and the quality of their work. In this talk, I will review the findings of recent research that examines and compares trained judges and crowd workers hired to complete relevance assessment tasks of varying difficulty. The investigations cover a range of aspects, including how HIT design, judging instructions, and worker demographics and characteristics may impact work quality. The main focus of the talk will be on experiments aimed at uncovering characteristics of the crowd by monitoring their behaviour during different relevance assessment tasks, and comparing it to professional judges' behaviour on the same tasks. Throughout the talk, I will highlight challenges of quality assurance and control in crowdsourcing and propose a possible direction for solving the issue without relying on gold-standard data sets, which are expensive to create and have limited application. *(An illustrative sketch of these agreement and aggregation ideas follows the metadata table below.)* |
| Starting Page | 1 |
| Ending Page | 1 |
| Page Count | 1 |
| File Format | |
| ISBN | 9781450329767 |
| DOI | 10.1145/2637002.2637003 |
| Language | English |
| Publisher | Association for Computing Machinery (ACM) |
| Publisher Date | 2014-08-26 |
| Publisher Place | New York |
| Access Restriction | Subscribed |
| Subject Keyword | Crowdsourcing; Worker characteristics and behaviour; IR evaluation |
| Content Type | Text |
| Resource Type | Article |
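The abstract touches on two measurable ideas: disagreement among relevance judges, and quality control over noisy crowd labels. The snippet below is a minimal, purely illustrative sketch (not taken from the talk): it aggregates hypothetical crowd labels by simple majority vote and then measures agreement between the aggregate and a trained judge using Cohen's kappa, a standard chance-corrected agreement statistic. All workers, documents and labels are invented for the example.

```python
from collections import Counter

def majority_vote(worker_labels):
    """Aggregate each document's list of crowd labels by majority vote."""
    return [Counter(labels).most_common(1)[0][0] for labels in worker_labels]

def cohens_kappa(a, b):
    """Cohen's kappa between two label sequences: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n                    # observed agreement
    cats = set(a) | set(b)
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)   # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: three crowd workers each label five documents (1 = relevant).
crowd_labels  = [[1, 1, 0], [0, 0, 0], [1, 0, 1], [1, 1, 1], [0, 1, 0]]
trained_judge = [1, 0, 0, 1, 0]

aggregated = majority_vote(crowd_labels)
print("majority vote :", aggregated)                                         # [1, 0, 1, 1, 0]
print("kappa vs judge:", round(cohens_kappa(aggregated, trained_judge), 3))  # 0.615
```

Majority voting is only the most basic aggregation scheme, and calibrating workers usually relies on gold-standard labels; the abstract's closing point is precisely that such gold data is expensive and of limited application, motivating alternative quality-control directions.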
The National Digital Library of India (NDLI) is a virtual repository of learning resources which is not just a repository with search/browse facilities but provides a host of services for the learner community. It is sponsored and mentored by the Ministry of Education, Government of India, through its National Mission on Education through Information and Communication Technology (NMEICT). Filtered and federated searching is employed to facilitate focused searching, so that learners can find the right resource with the least effort and in minimum time. NDLI provides user-group-specific services such as Examination Preparatory for school and college students and job aspirants; services for researchers and general learners are also provided. NDLI is designed to hold content in any language and provides interface support for the 10 most widely used Indian languages. It is built to support all academic levels, including researchers and life-long learners, all disciplines, all popular forms of access devices, and differently-abled learners. It is designed to enable people to learn and prepare from best practices from all over the world and to facilitate researchers in performing inter-linked exploration across multiple sources. It is developed, operated and maintained by the Indian Institute of Technology Kharagpur.
NDLI is a conglomeration of freely available, institutionally contributed, donated, or publisher-managed content. Almost all of this content is hosted at and accessed from the respective sources. The responsibility for the authenticity, relevance, completeness, accuracy, reliability and suitability of this content rests with the respective organizations, and NDLI has no responsibility or liability for it. Every effort is made to keep the NDLI portal up and running smoothly, barring unavoidable technical issues.
The Ministry of Education, through its National Mission on Education through Information and Communication Technology (NMEICT), has sponsored and funded the National Digital Library of India (NDLI) project.
| Sl. | Authority | Responsibilities | Communication Details |
|---|---|---|---|
| 1 | Ministry of Education (GoI), Department of Higher Education | Sanctioning Authority | https://www.education.gov.in/ict-initiatives |
| 2 | Indian Institute of Technology Kharagpur | Host Institute of the Project: responsible for providing infrastructure support and hosting the project | https://www.iitkgp.ac.in |
| 3 | National Digital Library of India Office, Indian Institute of Technology Kharagpur | The administrative and infrastructural headquarters of the project | Dr. B. Sutradhar (bsutra@ndl.gov.in) |
| 4 | Project PI / Joint PI | Principal Investigator and Joint Principal Investigators of the project | Dr. B. Sutradhar (bsutra@ndl.gov.in); Prof. Saswat Chakrabarti (contact to be added soon) |
| 5 | Website/Portal (Helpdesk) | Queries regarding NDLI and its services | support@ndl.gov.in |
| 6 | Contents and Copyright Issues | Queries related to content curation and copyright issues | content@ndl.gov.in |
| 7 | National Digital Library of India Club (NDLI Club) | Queries related to NDLI Club formation, support, user awareness program, seminar/symposium, collaboration, social media, promotion, and outreach | clubsupport@ndl.gov.in |
| 8 | Digital Preservation Centre (DPC) | Assistance with digitizing and archiving copyright-free printed books | dpc@ndl.gov.in |
| 9 | IDR Setup or Support | Queries related to establishment and support of Institutional Digital Repository (IDR) and IDR workshops | idr@ndl.gov.in |