ARL Crowd Sourcing
Crowd Sourcing Data Collection through Amazon Mechanical Turk (2013/11/06)
IDENTIFICATION
- Infrastructure: Amazon Mechanical Turk
- Developed with:
- Beta start date: N/A
- End date: 2013/09/01 (project closed)
- Subject: Engineering and technology > (other)
- Other projects in the same subject area: CS4CS, ESP Game, GeoTag-X, reCAPTCHA
- Other related projects: CS4CS, ESP Game, reCAPTCHA, Transcribe Bentham
⇳ Description
Crowdsourcing is an increasingly popular technique for completing complex tasks or collecting large amounts of data. This report documents an effort to employ crowdsourcing through Amazon's Mechanical Turk service. The task was to collect labels on several thousand short video clips as those labels would be perceived by a human. The approach proved viable, collecting large amounts of data in a relatively short time frame, but required specific accommodations for the worker population and for the impersonal medium through which data were collected.
➠ Purpose
According to Pierce & Fung (2013), the goal of this project was to generate a large number of video vignettes meant to visually demonstrate specific verbs. The project called for 48 verbs, each demonstrated in 10 different exemplars. Each exemplar was filmed with 16 different setting variations (different backgrounds, camera angles, times of day, etc.), for a total of 48 × 10 × 16 = 7,680 vignettes. The vignettes serve as data points provided to system design teams as part of a larger research and development project. This project included 6 variants (sub-projects), described in the Pierce & Fung (2013) report.
Research question: unknown
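The factorial recording design described above (48 verbs × 10 exemplars × 16 setting variations) can be sketched as an enumeration; the identifiers below are illustrative, not taken from the report:

```python
from itertools import product

# Factorial design from Pierce & Fung (2013):
# 48 verbs x 10 exemplars per verb x 16 setting variations per exemplar.
verbs = range(48)        # verb identifiers (illustrative indices)
exemplars = range(10)    # distinct stagings of each verb
variations = range(16)   # background / camera angle / time-of-day combinations

# One vignette per (verb, exemplar, variation) combination.
vignettes = list(product(verbs, exemplars, variations))
print(len(vignettes))    # 48 * 10 * 16 = 7680
```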
TEAM
Project team page:
- Leader:
- Institution:
- Partner institutions:
- Contact:
USER TASKS
CONTRIBUTION TYPE: data collection
PARTICIPATION TYPOLOGY: crowdsourcing
GAMING GENRE: none
GAMING ELEMENTS: none
WHAT
◉ Tasks description
Recognition Task (Crowdsourced Study subproject): for each task, also known as a stimulus, a vignette was displayed along with a verb question ("Do you see [verb]?") and the verb definition. Workers responded to each verb question with a present/absent judgment.
⤯ Interaction with objects
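Per-stimulus present/absent judgments from multiple workers are typically combined into a single label. The report does not document the aggregation rule, so the majority-vote scheme below is only an illustrative assumption:

```python
from collections import Counter

def majority_label(judgments):
    """Aggregate per-worker present/absent judgments for one stimulus.

    judgments: list of strings, each "present" or "absent".
    Returns the majority label, or "tie" when the vote is split.
    (Majority voting is an assumed aggregation rule, not one
    documented in the source report.)
    """
    counts = Counter(judgments)
    present, absent = counts["present"], counts["absent"]
    if present == absent:
        return "tie"
    return "present" if present > absent else "absent"

# Three workers judged one vignette for a single verb question:
print(majority_label(["present", "present", "absent"]))  # -> present
```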
▣ Interface
- Data type to manipulate: pictures, text
- Interface enjoyment: somewhat cool/attractive
- Interface usability: rather easy to use
GUIDANCE
- Tutorial: Somewhat
- Peer to peer guidance: Somewhat
- Training sequence: Somewhat
- Individual performance: Somewhat
- Collective performance: Somewhat
- Research progress: Somewhat
❂ Feedback and guidance description
COMMUNITY
- Communication:
- Social Network: N/A
- Member profiles: N/A
- Member profile elements:
- Main news site:
- Frequency of project news updates: N/A
- Type of events:
- Frequency of events :
⏣ Community description
- Community size (volunteer-based):
- Role:
- Interaction form:
- Has official community manager(s): N/A
- Has team work: N/A
- Other:
- Community led additions: 2013/09/01
Other information
PROJECT
- Url: N/A
- Start date: 2011/09/01
- End date: 2013/09/01
- Infrastructure: Amazon Mechanical Turk
OTHER PROJECT INFORMATION
- Completion level: Low
- Country: USA
- Task types: thinking: yes; computing: no; sensing: somewhat; gaming: no