ARL crowd sourcing
Crowd Sourcing Data Collection through Amazon Mechanical Turk (2013/11/06)
IDENTIFICATION
⇳ Description
Crowdsourcing is an increasingly popular technique used to complete complex tasks or collect large amounts of data. This report documents the effort to employ crowdsourcing using the Mechanical Turk service hosted by Amazon. The task was to collect labeling data on several thousand short video clips, as such labels would be perceived by a human. The approach proved viable, collecting large amounts of data in a relatively short time frame, but required specific considerations for the population of workers and the impersonal medium through which data were collected.
➠ Purpose
According to Pierce & Fung (2013), the goal of this project is to generate a large number of video vignettes meant to visually demonstrate specific verbs. The project called for 48 verbs to be demonstrated, each in 10 different exemplars. Each exemplar was filmed with 16 different setting variations, consisting of different backgrounds, camera angles, time of day, etc. This produces a total of 7,680 vignettes. The vignettes are to be data points provided to system design teams as part of a larger research and development project.
This project included 6 variants (sub-projects), described in the Pierce & Fung (2013) report.
? Research question: unknown
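The vignette count stated in the purpose follows directly from the three design factors (verbs × exemplars × setting variations); a quick check:

```python
# Sanity check of the vignette count reported by Pierce & Fung (2013):
# 48 verbs, 10 exemplars per verb, 16 setting variations per exemplar.
verbs = 48
exemplars_per_verb = 10
setting_variations = 16

total_vignettes = verbs * exemplars_per_verb * setting_variations
print(total_vignettes)  # 7680
```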
TEAM
Project team page:
Leader:
Institution:
Partner institutions:
Contact:
USER TASKS
CONTRIBUTION TYPE: data collection
PARTICIPATION TYPOLOGY: crowdsourcing
GAMING GENRE: NONE
GAMING ELEMENTS: NONE
◉ Tasks description
Recognition Task (Crowdsourced Study subproject): for each task, also known as a stimulus, a vignette was displayed along with a verb question (“Do you see [verb]?”) and the verb definition. Workers responded to a single verb question with a present/absent judgment.
⤯ Interaction with objects
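An illustrative sketch (not the authors' code) of how such a stimulus set could be represented, pairing each vignette with its verb question and definition. The verb name, definition, and ID scheme below are hypothetical placeholders, assuming the 10-exemplar × 16-setting design described above:

```python
# Sketch: one stimulus per (verb, exemplar, setting) combination, each carrying
# the "Do you see [verb]?" question shown to a worker alongside the vignette.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Stimulus:
    vignette_id: str  # hypothetical ID scheme, e.g. "carry_ex01_set01"
    verb: str
    definition: str

    @property
    def question(self) -> str:
        # The verb question displayed with the vignette.
        return f"Do you see {self.verb}?"


def make_stimuli(verbs: Dict[str, str], exemplars: int = 10, settings: int = 16) -> List[Stimulus]:
    """Build one stimulus per (verb, exemplar, setting) combination."""
    return [
        Stimulus(f"{verb}_ex{e:02d}_set{s:02d}", verb, definition)
        for verb, definition in verbs.items()
        for e in range(1, exemplars + 1)
        for s in range(1, settings + 1)
    ]


stimuli = make_stimuli({"carry": "to hold and move something from one place to another"})
print(len(stimuli))         # 160 stimuli for a single verb (10 exemplars x 16 settings)
print(stimuli[0].question)  # Do you see carry?
```

Each worker response would then attach a single present/absent judgment to one such stimulus.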
▣ Interface
GUIDANCE
❂ Feedback and guidance description
COMMUNITY
⏣ Community description
Other information
URL: N/A
Start date: 2011/09/01
End date: 2013/09/01
Infrastructure: Amazon Mechanical Turk
Official team page:
Leader:
Domain: Engineering and technology > (other)
Interface
Data type to manipulate: pictures, text
Interface enjoyment: somewhat cool/attractive
Interface usability: rather easy to use
Member profiles: N/A
Member profile elements:

Guidance
Feedback on:

Tools
Communication:
Main news site:

Community
Community description:
Community size (volunteer-based):
Other information about community:
Community-led additions: 2013/09/01
Completion level: Low
Country: USA
Data types manipulated: pictures, text, other
Thinking: yes
Computing: no
Sensing: somewhat
Gaming: no
Bibliography