Autodesk 123D Catch

Draft

Introduction

Autodesk 123D Catch is an application that can take a set of ordinary photos and turn them into 3D models.

Unfortunately:

  • you have to upload your photos for processing; only selecting photos, picking reference points for stitching, etc. is done locally. That means a lot of waiting if you did not take the photos the way you should have...
  • there is no official manual or tutorial; you have to watch several video tutorials in order to learn (I resent that).

How to make a model of your torso + head

Principles

  • The object cannot move, i.e. do not try to take delayed pictures of yourself rotating on a swivel chair!
  • The final object will be assembled from areas that are identified as identical across pictures. If you model an object with a uniform surface, paint some markers on it (the sketch after this list illustrates why).
  • Use consistent lighting and do not use a flash.
  • Having other objects in the picture is OK (even good), since the algorithm seems to rely on the camera moving around the object you wish to model.
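
To see why uniform surfaces are a problem, consider how feature-based matching behaves. The following Python sketch uses OpenCV's ORB detector (only a stand-in for illustration; 123D Catch's own matching algorithm is not public) on two synthetic patches: a flat grey patch, standing in for a uniform surface, and a noisy textured patch, standing in for a surface with markers or "dots".

  # Minimal sketch, assuming OpenCV (cv2) and NumPy are installed.
  import numpy as np
  import cv2

  flat = np.full((200, 200), 128, dtype=np.uint8)                    # uniform grey surface
  textured = np.random.randint(0, 256, (200, 200), dtype=np.uint8)   # surface with markers/texture

  orb = cv2.ORB_create()
  print("keypoints on the uniform surface: ", len(orb.detect(flat, None)))
  print("keypoints on the textured surface:", len(orb.detect(textured, None)))

On the flat patch the detector typically finds nothing to lock onto, which is exactly the situation the painted markers are meant to fix.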

Procedure

Taking pictures:

  • Add some dots to yourself (e.g. on shoulders, temples, cheekbones). This can help stitching. You can use a felt pen or little round stickers.
  • Sit still on a chair.
  • Have a friend take 20 to 30 pictures.
  • Start with the face seen from the front, then move the camera down 30 cm, then up at least 60 cm (three pictures). Make sure that you get views of the top of the head.
  • Move left (or right), take three pictures again, and repeat.

Alternatively, move around the object at the same height, then again from above and below. Either way, the sketch below shows one possible shot list.
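
For readers who like a checklist, this small Python sketch prints one possible shot plan following the procedure above. The 40-degree step and the three height labels are assumptions chosen to land in the 20-30 picture range, not values prescribed by 123D Catch.

  # Hypothetical shot planner: 9 stops around the chair x 3 heights = 27 pictures.
  ANGLE_STEP = 40                                        # degrees between viewpoints (assumed)
  HEIGHTS = ["eye level", "30 cm lower", "60 cm higher"]

  shot_list = [(angle, height)
               for angle in range(0, 360, ANGLE_STEP)
               for height in HEIGHTS]

  print(len(shot_list), "pictures planned")              # 27, within the 20-30 range
  for angle, height in shot_list:
      print("camera at", angle, "degrees,", height)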

Once you have 20-30 pictures, upload them for processing and wait.

First result:

  • After following the procedure and uploading the pictures you may see something like this:
123D Catch - first result
  • You can see the positions of the cameras around the object (notice that some top views are missing)
  • At the bottom you can see the pictures that are used in the model; those marked with an exclamation mark are not

Stitching:

  • If you took the pictures in portrait mode (camera turned by 90 degrees), it is imperative to rotate them first. Use, for example, IrfanView to do this efficiently (a small batch-rotation sketch follows this list).
  • If you are lucky, your model is almost OK, but it is likely that you will have to do some manual stitching. That involves identifying four common points on three pictures, which can be quite difficult. This is why we suggested using "dots".
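
IrfanView's batch conversion dialog can do the rotation in a few clicks; if you prefer a script, here is a minimal Python sketch using the Pillow library. The folder names and the rotation direction (-90 degrees) are assumptions; adjust them to how you held the camera.

  # Minimal sketch, assuming Pillow is installed and all shots share the same
  # portrait orientation; "photos" and "photos_rotated" are hypothetical folders.
  from pathlib import Path
  from PIL import Image

  source = Path("photos")
  target = Path("photos_rotated")
  target.mkdir(exist_ok=True)

  for picture in source.glob("*.jpg"):
      with Image.open(picture) as img:
          # expand=True keeps the whole frame instead of cropping after rotation
          img.rotate(-90, expand=True).save(target / picture.name, quality=95)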

Hardware

  • If you own a 3D printer, you could try building the camera rig for 123D Catch made by gpvillamil on Thingiverse: http://www.thingiverse.com/image:119282

Links

Official
Tutorials