ARL crowd sourcing

The educational technology and digital learning wiki


IDENTIFICATION

Project name: Crowd Sourcing Data Collection through Amazon Mechanical Turk
Access page: unknown (URL: N/A)
Infrastructure: Amazon Mechanical Turk
Start date: 2011/09/01
  • Beta start date: N/A
  • End date: the project was closed on 2013/09/01
Location of activities: USA
Subject: Engineering and technology > other

Description: Crowdsourcing is an increasingly popular technique used to complete complex tasks or to collect large amounts of data. The report documents the effort to employ crowdsourcing using the Mechanical Turk service hosted by Amazon. The task was to collect labeling data on several thousand short video clips, as such labels would be perceived by a human. The approach proved viable, collecting large amounts of data in a relatively short time frame, but it required specific considerations for the population of workers and for the impersonal medium through which the data were collected.

Purpose: According to Pierce & Fung (2013), the goal of this project was to generate a large number of video vignettes meant to visually demonstrate specific verbs. The project called for 48 verbs to be demonstrated, each in 10 different exemplars, and each exemplar was filmed with 16 different setting variations (different backgrounds, camera angles, times of day, etc.). This produces a total of 48 × 10 × 16 = 7,680 vignettes, which are to be provided as data points to system design teams as part of a larger research and development project.
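The 48 × 10 × 16 design is easy to sanity-check programmatically. The short Python sketch below is purely illustrative (the report describes no such script, and the identifier scheme is invented here); it enumerates the full design matrix and confirms the vignette count.

    from itertools import product

    # Design space described above; the ID format is a made-up convention.
    VERBS = range(48)      # 48 target verbs
    EXEMPLARS = range(10)  # 10 exemplars per verb
    SETTINGS = range(16)   # 16 setting variations (background, camera angle, time of day, ...)

    vignettes = [
        f"verb{v:02d}_ex{e:02d}_set{s:02d}"
        for v, e, s in product(VERBS, EXEMPLARS, SETTINGS)
    ]
    assert len(vignettes) == 48 * 10 * 16 == 7680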

This project included 6 variants (sub-projects), described in the Pierce & Fung (2013) report.

Research question: unknown

TEAM

MAIN TEAM LOCATION: not specified

  • Project team page:
  • Leader:
  • Institution:
  • Partner institutions:
  • Contact:

USER TASKS

CONTRIBUTION TYPE: data collection
PARTICIPATION TYPOLOGY: crowdsourcing


GAMING GENRE: NONE
GAMING ELEMENTS: NONE

COMPUTING: NO
THINKING: YES
SENSING: SOMEWHAT
GAMING: NO
Tasks description: Recognition Task (Crowdsourced Study subproject)

For each task, also known as a stimulus, a vignette was displayed along with a verb question (“Do you see [verb]?”) and the verb definition. Workers responded to a single verb question with a present/absent judgment; illustrative sketches of posting and aggregating such judgments follow below.

Interaction with objects
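As a concrete illustration of this kind of task, the sketch below posts a single present/absent verb question as a Mechanical Turk HIT using today's boto3 SDK. This is a hedged reconstruction, not the project's actual setup: the reward, assignment count, timing values, example verb, and QuestionForm layout are all invented, and a real vignette task would also embed the video clip itself (e.g., via an HTMLQuestion) rather than describe it in text.

    import boto3

    # Hypothetical sketch of posting one verb-judgment HIT with boto3.
    # Assumes AWS credentials are configured in the environment.
    mturk = boto3.client("mturk", region_name="us-east-1")

    # Minimal QuestionForm XML: one question, two choices (present/absent).
    QUESTION_XML = """<?xml version="1.0" encoding="UTF-8"?>
    <QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
      <Question>
        <QuestionIdentifier>verb_judgment</QuestionIdentifier>
        <QuestionContent>
          <Text>Do you see "to carry"? Definition: to hold and move something from one place to another.</Text>
        </QuestionContent>
        <AnswerSpecification>
          <SelectionAnswer>
            <Selections>
              <Selection><SelectionIdentifier>present</SelectionIdentifier><Text>Present</Text></Selection>
              <Selection><SelectionIdentifier>absent</SelectionIdentifier><Text>Absent</Text></Selection>
            </Selections>
          </SelectionAnswer>
        </AnswerSpecification>
      </Question>
    </QuestionForm>"""

    hit = mturk.create_hit(
        Title="Do you see the named verb in this short clip?",
        Description="Watch a short video vignette and judge whether the named verb occurs.",
        Reward="0.05",                    # USD per assignment (illustrative)
        MaxAssignments=3,                 # judgments per vignette (assumption)
        LifetimeInSeconds=7 * 24 * 3600,  # HIT available for one week
        AssignmentDurationInSeconds=300,  # 5 minutes per judgment
        Question=QUESTION_XML,
    )
    print(hit["HIT"]["HITId"])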

Interface

  • Data type to manipulate: pictures, text
  • Interface enjoyment: somewhat cool/attractive
  • Interface usability: rather easy to use
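Once per-worker judgments are collected, each vignette still needs a single label. The report does not state its aggregation rule; the sketch below shows the simplest common choice, a per-vignette majority vote, purely as an illustration (vignette IDs reuse the invented convention from the earlier sketch).

    from collections import Counter

    def majority_label(judgments: list[str]) -> str:
        """Return the most frequent judgment, e.g. 'present' or 'absent'."""
        return Counter(judgments).most_common(1)[0][0]

    # Toy responses keyed by (invented) vignette ID.
    responses = {
        "verb03_ex01_set07": ["present", "present", "absent"],
        "verb03_ex01_set08": ["absent", "absent", "absent"],
    }
    labels = {vid: majority_label(js) for vid, js in responses.items()}
    assert labels["verb03_ex01_set07"] == "present"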

GUIDANCE

  • Tutorial: Somewhat
  • Peer-to-peer guidance: Somewhat
  • Training sequence: Somewhat
FEEDBACK ON
  • Individual performance: Somewhat
  • Collective performance: Somewhat
  • Research progress: Somewhat

Feedback and guidance description:

COMMUNITY

COMMUNITY TOOLS
  • Communication:
  • Social Network: N/A
  • Member profiles: N/A
  • Member profile elements:
NEWS & EVENTS
  • Main news site:
  • Frequency of project news updates: N/A
  • Type of events:
  • Frequency of events:

Community description

  • Community size (volunteer-based):
  • Role:
  • Interaction form:
  • Has official community manager(s): N/A
  • Has team work: N/A
  • Other:
  • Community-led additions: 2013/09/01


Other information about community:

OTHER PROJECT INFORMATION

Completion level: Low
Project open: No



BIBLIOGRAPHY

Pierce & Fung (2013). Crowd Sourcing Data Collection through Amazon Mechanical Turk (ARL-MR-0848). U.S. Army Research Laboratory. http://www.arl.army.mil/arlreports/2013/ARL-MR-0848.pdf

Publication type: other