Property:Has interaction with objects
From EduTech Wiki
This is a property of type Text.
Pages using the property "Has interaction with objects"
Showing 14 pages using this property.
Air Quality with Biomarkers: Participants look at a photo and decide whether it contains a lichen; if it does, they identify what kind of lichen it is and estimate its size using a euro coin for scale.
Andromeda Project: Users identify star clusters in photographs of the Andromeda galaxy.
Bat detective: Users categorise bat calls by listening to recordings and examining visualisations of them.
Celebrate Urban Birds: Participants can try to attract birds by creating feeders.
Citizen science project test: Stack virtual spaghetti.
ESP game: The authors define ESP as a GWAP (game with a purpose) and, more specifically, as an instance of the subgenre of "output-agreement games". Output-agreement games are a generalization of the ESP Game to its fundamental input-output behaviour:
* Initial setup: two strangers are randomly chosen by the game itself from among all potential players.
* Rules: in each round, both are given the same input and must produce outputs based on it. Game instructions indicate that players should try to produce the same output as their partners. Players cannot see one another's outputs or communicate with one another.
* Winning condition: both players must produce the same output; they do not have to produce it at the same time, but must produce it at some point while the input is displayed on screen.
When the input is an image and the outputs are keyword descriptions of the image, this template becomes the ESP Game.
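The output-agreement mechanic described above can be sketched in a few lines. This is a hypothetical illustration, not the actual ESP Game implementation: it assumes each player submits a set of keyword outputs while the input is on screen, and the round is won as soon as any output appears in both sets.

```python
# Hypothetical sketch of one output-agreement round: two players
# label the same input, and the round is won if any label matches.

def play_round(outputs_a, outputs_b):
    """Return an output both players agreed on, or None if no match."""
    agreed = set(outputs_a) & set(outputs_b)
    # Outputs need not arrive simultaneously; comparing the full sets
    # models "produced at some point while the input is displayed".
    return min(agreed) if agreed else None  # deterministic pick

# Example: both players describe the same image with keywords.
print(play_round(["dog", "grass", "park"], ["puppy", "dog"]))  # dog
print(play_round(["car"], ["tree"]))                           # None
```

In the real game, an agreed-upon keyword becomes a validated label for the image, which is what makes the game "purposeful".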
Galaxy Zoo: Volunteers do not manipulate images; they only answer questions about them. However, participants can add images to their favourites and present, comment on, and discuss their images with the community. Volunteers can thus develop a particular kind of relationship with the data.
GeoTag-X: Analysts interact with the interface in two ways:
* by answering short multiple-choice questions;
* by geolocating pictures on a map. To perform this task, volunteers draw a polygon around the area they want to select. Various tools have been designed to help volunteers with this difficult task, such as satellite and aerial maps and zoom options.
When they first start a project, volunteers are presented with a short tour, built into the project template, that explains what each section is and what the volunteer is expected to do. The tour also points out helpful tools, such as the zoom function on the photo and the link to the photo source. The tour is shown only on the volunteer's first visit to a project because the interface is standardized across projects: once they learn one, they can use them all. Each project also has a built-in tutorial presented to every Analyst on his or her first contribution to that project. The tutorial gives a condensed explanation of why each question is being asked, along with examples of what to look for in the photos. Volunteers also have the option to read a help box that gives more information on how to better answer the questions they are presented with.
Planet Hunters: Draw boxes around parts of the graph that represent "dips".
Solar Stormwatch: Play the video, mark the beginning and end of storms, and mark the extent of each storm across the camera.
Space Warps: Mark gravitational lenses on photos.
Test4Theory: The only interactions are installing the software and attaching the computer to the project.
Transcribe Bentham: Volunteers can view high-resolution images of the original manuscripts and zoom into them.
WhaleFM: Participants select spectrogram pictures, listen to the associated sounds, and tick whether they seem to match a given spectrogram. A selected item must then be compared again before the user can click "Match". It is possible to follow the same whale, and each call can be discussed in a contextualized forum.