\documentclass[10pt,notitlepage]{report}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{url}
\usepackage{graphicx}
\usepackage{calc}
\usepackage{fullpage}

\usepackage{color}
\newcommand{\dolmes}[1]{\textcolor[rgb]{1,0.0,0}{#1}}

\newlength\myheight
\newlength\mydepth
\settototalheight\myheight{Xygp}
\settodepth\mydepth{Xygp}
\setlength\fboxsep{0pt}

% \renewcommand{\chaptermark}[1]{\markboth{#1}{}}
% \renewcommand{\sectionmark}[1]{\markright{\thesection\ #1}}

\title{The SeeMa Lab Structured Light Scanner}
\author{Jakob Wilm and Eyþór Rúnar Eiríksson\\
		\url{{jakw,eruei}@dtu.dk}}
\date{\today}

\begin{document}

\maketitle

\begin{figure}[h]
	\centering
		\includegraphics[width=.9\textwidth]{mesh0.png}
	\label{fig:mesh0}
\end{figure}

\begin{abstract}
This is the manual for the Seeing Machines Lab Structured Light Scanner (SeeMa-Scanner). The scanner consists of hardware components (cameras, projector and rotation stage) and software for calibration, scanning and reconstruction. While most of the components should be self-explanatory, we describe the hardware and each software component, making it possible for students and staff to extend the scanner with new functionality. We also give a brief step-by-step guide on how to get from a physical object to a digital mesh model of it.
\end{abstract}

\chapter{The scanner}
\section{Getting started}
This section describes the main hardware and software parts of the system.

If your main objective is to digitise objects, you should be able to do so on your own by reading the chapter ``Practical Scanning'', which gives a step-by-step recipe for performing a complete object scan and reconstruction.

Technical projects and contributions are very welcome. Please get in touch with the authors if you plan any alterations to the hardware, or would like write access to the SVN repository containing the software. The public read-access URL of the SeeMaLab Scanner repository is \url{http://svn.compute.dtu.dk/svn/seema-scanner/}.

\section{Hardware parts}
\begin{table}
	\begin{tabular}{l l l p{0.3\textwidth}}
		\textbf{Part}              & \textbf{Manufacturer} & \textbf{Model} & \textbf{Specifications} \\
		\hline\\[0.2cm]
		Industrial Cameras & Point Grey Research & GS3-U3-91S6C-C & Colour, 9.1 MP, Sony ICX814 CCD, 1", 3.69 $\mu$m, global shutter, 3376 $\times$ 2704 at 9 FPS \\[0.5cm]
		Camera Lenses & Kowa & LM16SC & 1", 16 mm, 6 MPix \\[0.5cm]
		Projector & LG & PF80G & DLP, 1080p HD resolution (1920 $\times$ 1080), 1,000 ANSI lumen, LED light source \\[0.5cm]
		Rotation Stage & Newmark & RM-5-110 & 0.36 arc-sec resolution, 70 arc-sec accuracy, 5 arc-sec repeatability, stepper motor, 72:1 gear ratio, home switch, no optical encoder \\[0.5cm]
		Rotation Controller & Newmark & NSC-A1 & Single axis, serial over USB, C API \\[0.5cm]
		Breadboard & Thorlabs & PBG11111 & 4' x 2.5' x 1.0", 21 kg, 1/4"-20 holes on 1" centers\\[0.5cm]
		Computer & Dell & Precision T1700 & 32 GB RAM, 256 GB SSD drive, 2 TB data storage HDD, Ubuntu OS
	\end{tabular}
	\caption{Main hardware parts of the SeeMaLab 3D scanner.}
	\label{tbl:hardwareparts}
\end{table}

Table \ref{tbl:hardwareparts} lists the main hardware parts of the SeeMaLab 3D scanner with their specifications. The hardware consists of a set of industrial cameras and a projector mounted on a sturdy aluminium optical breadboard. A micro-rotation stage holds the circular object plate, which can accurately rotate the scan object in order to capture point clouds from different angles.

The cameras, projector and rotation stage are mounted rigidly with respect to each other, which is important for high quality results. See figure \ref{fig:hardware0} for an image of the inside of the main scanner assembly. A darkening curtain can be lowered to prevent ambient light from interfering with the measurement procedure.
\begin{figure}[h]
	\centering
		\includegraphics[width=.9\textwidth]{hardware0.jpg}
	\caption{The scanner hardware. Two industrial cameras and one projector constitute the optical parts. An angel figurine acts as the scan object, and is placed on top of the circular rotation plate. The calibration target is also seen on its holder.}
	\label{fig:hardware0}
\end{figure}

The geometry of the scanner is illustrated in figure \ref{fig:hardwaredimensions}, which also indicates the minimum focus range of the cameras and projector.
\begin{figure}[h]
	\centering	
		\includegraphics[width=.9\textwidth]{hardwaredimensions.pdf}
	\caption{The physical dimensions of the breadboard, and throw angles of the cameras and projector.}
	\label{fig:hardwaredimensions}
\end{figure}

\subsection{Projector}
The SeeMa-Scanner uses a standard commercial Full-HD projector. This is very cost-effective, but brings a few challenges. The projector is configured to perform minimal image processing, and the HDMI port is set to ``Notebook'' mode, which gives the lowest possible input lag (approx.\ 80 ms). The projector contains a DLP micromirror array, which produces binary patterns at high refresh rates (kHz range). Intermediate gray values are created by the projector by altering the relative on--off cycles of each micromirror. Truthful capture of gray values with the camera therefore requires an integration time that is a multiple of the 16.7 ms refresh period of the projector.

Commercial projectors do not have a linear response, which would be necessary for truthful capture of gray-value patterns. Gamma can be set to the lowest possible value of $1.6$, and if this is matched in the graphics card configuration of the scan computer, a close-to-linear response can be achieved. Using only binary patterns avoids this problem altogether.
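The inverse-gamma pre-compensation described above can be sketched in a few lines; the function name and the 8-bit value range are illustrative, not part of the scanner software:

```python
def precompensate(value, gamma=1.6):
    """Map a desired linear 8-bit intensity to the value that must be
    sent to a display with the given gamma, so that the projected
    output, proportional to (input/255)**gamma, is linear in value."""
    return round(255 * (value / 255) ** (1 / gamma))
```

For gamma $> 1$ this brightens mid-tones, while the endpoints 0 and 255 are unchanged.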

\subsection{Cameras}
These are high resolution 9 MPx industrial CCD colour cameras. While colour information is usually not necessary in structured light, it enables full-colour texturing of the scanned object. In the program code, a fixed white balance is used for the cameras, chosen ad hoc to approximately match the colour profile of the projector. To capture true colours, a colour calibration would have to be performed.

\subsection{Rotation stage}
This is a so-called micro-rotation stage, commonly used in high precision photonic research and production. A larger-diameter circular breadboard is fixed onto the rotation stage. The rotation stage has a stepper motor which drives a worm-gear, giving high precision and very high repeatability. Note that the rotation stage does not have an optical encoder; its position is reset to 0 degrees in software at each program start. The motor controller can be configured for different levels of microstepping and motor current. Higher motor current provides more torque and less risk of missing steps. The load on the plate must not exceed 20 kg, and the load's centre of mass should be close to the rotation axis. Objects can be stabilised on the plate using e.g. modelling clay.

\subsection{Calibration target}
A calibration target is also part of the scanner. It was produced by printing a checkerboard in vector format on a high quality laser printer, and gluing it onto a thick piece of float glass using spray adhesive. The target is asymmetrical, which is necessary to uniquely match checkerboard corners in both cameras. The calibration target was designed to fill the space of typical scan objects. If the scan area were to be made smaller, a smaller calibration target would need to be fabricated.

\section{Software components}
The SeeMaLab 3D scanner has a full graphical user interface for calibration and scanning. The output from this software is a number of colour point clouds in the PLY format, along with a Meshlab alignment project file (file suffix .aln). The .aln file contains orientation information as provided by the rotation stage parameters. This allows the user to import the point clouds for further processing in Meshlab, e.g. to produce a full mesh model of the surface. The rotation axis is determined during calibration, which means that usually no manual or algorithm-assisted alignment of partial surfaces is necessary.

To get fine-grained control over the scan procedure, the user can modify the source code of the GUI application, or use the supplied Matlab wrappers. These wrappers provide basic functionality to capture images with the cameras, project a specific pattern on the projector, or rotate the rotation stage to a specific position. Using these components, a full structured light scanner can be implemented in Matlab with full design freedom.

\section{Compiling and Installing}
A default user account is used on the SeeMaLab scanner computers, and here the software is installed in a stable (tested) version. The software repository is checked out and built in the home folder (\texttt{{\textasciitilde}/seema-scanner}). An icon in the launcher bar links to this executable.

The software was developed using Qt, OpenCV and the Point Cloud Library (PCL).

In order to make and test modifications (e.g. changing some parameters in the reconstruction process), the SVN repository should be checked out and compiled under a separate user account. The software is linked against the default versions of Qt, OpenCV and PCL in the current Ubuntu LTS release, which ensures easy compilation and installation.

\section{GUI}
The GUI enables the user to perform calibration of the scanner, and to acquire scan data. It is built in a modular fashion, to allow for new structured light strategies to be implemented. It is, however, supposed to be simple and stable, so please keep experimental builds in separate SVN branches.

GUI functionality heavily depends on Qt. Most other components, specifically those with Matlab wrappers, have minimal dependencies, and can be used outside of the GUI framework.

In the GUI program, the user can open a preference window to select a pattern sequence and configure the timing parameters. These preferences are stored in \texttt{{\textasciitilde}/.config/DTU/seema-scanner.conf}. Some preferences are not exposed in the GUI, but can be manually edited in the file before the program is started.

\section{\texttt{Projector} Class}
This class provides a fullscreen OpenGL context, and the ability to project any texture. The window/context creation is operating system dependent. It works very well on Linux with the proprietary NVIDIA drivers, as found on the scan computer. In order to get a completely independent screen output, which does not interfere with the window manager, the projector needs to be set up as a separate X screen in \texttt{xorg.conf}. The absolute position of this second X screen must leave a small gap to the primary screen. This gives a secondary screen which is not recognised by Compiz (Unity in Ubuntu), but which can be accessed through the \texttt{Projector} class.
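For illustration, a server layout of the kind described might look as follows in \texttt{xorg.conf}; the identifiers and the 2000-pixel offset (leaving a small gap after a 1920-pixel-wide primary screen) are assumptions, not the actual configuration of the scan computer:

```
Section "ServerLayout"
    Identifier "DualXScreens"
    Screen  0  "PrimaryScreen"    0    0
    Screen  1  "ProjectorScreen"  2000 0
EndSection
```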

\section{\texttt{Camera} Class}
An abstraction over the individual industrial camera APIs was created in order to ease replacement and enhance modularity. A concrete implementation for Point Grey cameras is provided. The program is currently designed for ``software triggering'' of the cameras. Due to substantial input lag in the projector and cameras, a pause must be made in program execution between projecting a pattern and capturing images. Close temporal synchronisation of both cameras is achieved by calling the trigger method on both cameras, and collecting the images subsequently.
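The capture sequence described above can be sketched as follows; the objects and method names (\texttt{display}, \texttt{trigger}, \texttt{get\_image}) are hypothetical stand-ins for the actual C++ interfaces:

```python
import time

PROJECTOR_LAG_S = 0.080  # approximate input lag of the projector

def capture_pattern(projector, cam_left, cam_right, pattern):
    """Project a pattern, wait out the input lag, then software-trigger
    both cameras back to back for close temporal synchronisation."""
    projector.display(pattern)
    time.sleep(PROJECTOR_LAG_S)
    cam_left.trigger()
    cam_right.trigger()
    # collect the images only after both triggers have been issued
    return cam_left.get_image(), cam_right.get_image()
```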

\section{\texttt{RotationStage} Class}
Here a C++ abstraction for the Newmark motion control API was implemented. The C API essentially sends serial commands over USB, and full documentation is provided on the Newmark website. An important thing to consider is the latency of many of these calls. Specifically, reading and writing ``hardware settings'' such as microstep levels and motor current takes a considerable amount of time. The motor controller's inherent positional unit is ``number of microsteps''. This can be converted to an angular position, $\alpha$, by means of the following formula:
\[
	\alpha = \frac{\textrm{XPOS} \cdot 1.8}{\textrm{MS} \cdot 72} \quad ,
\]
where XPOS is the rotation controller's position value, $1.8$ is the number of degrees per full step on the motor axis, MS is the current microstep setting, and $72$ is the worm-gear ratio. The \texttt{RotationStage} class interface abstracts from this and lets you rotate to a specific angle between $0$ and $360$ degrees using the shortest direction.
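The conversion can be written out directly; the function name is illustrative:

```python
def microsteps_to_degrees(xpos, ms):
    """Convert controller position XPOS (in microsteps) to a plate angle:
    1.8 degrees per full motor step, MS microsteps per full step,
    and a 72:1 worm-gear ratio."""
    return (xpos * 1.8) / (ms * 72)
```

With MS $= 1$, the motor's 200 full steps per revolution and the 72:1 gear give $200 \cdot 72 = 14400$ positions per plate revolution, and indeed \texttt{microsteps\_to\_degrees(14400, 1)} evaluates to $360$.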

In order for the SeeMaLab computer to communicate with the rotation stage controller, appropriate udev permissions must be configured.


\chapter{Practical scanning}
Please be very careful with this very expensive equipment, and be considerate: do not misplace or borrow any components of the scanner hardware.
The following guide explains the setup, and the steps involved in calibration and acquisition of a $360^\circ$ scan of an object.

\section{Setup}
In contrast to the SeeMa-Scanner located in the Image Lab, the Traveling SeeMa-Scanner first has to be assembled from the parts stored in a large box. Figures \ref{fig:setup0} and \ref{fig:setup1} illustrate the final setup of the Traveling SeeMa-Scanner. Take the following points into consideration:
\begin{itemize}
	\item Choose a black, non-shiny background (e.g. the provided black fabric). Cover (or even paint black) any shiny objects in the scan area such as screws, stands, etc. This ensures that only the object of interest is scanned.

	\item How should the distance between the projector and the circular rotation plate be chosen? This distance depends on the object size. Make sure the object is inside the field of view of both cameras and the projector for any rotation angle.

	\item How should the distance between the two cameras (the baseline) be chosen? The closer the cameras, the better concavities can be scanned (i.e. there is less occlusion). However, this comes with a larger error in the determination of point coordinates. As a rule of thumb, the baseline should lie in the interval $[\frac{1}{3}x, 3x]$, where $x$ denotes the distance between the projector and the object (working distance). Usually, we like to put the cameras rather close together.
\end{itemize}
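The baseline rule of thumb above amounts to a simple interval check (illustrative helper, not part of the scanner software):

```python
def baseline_ok(baseline, working_distance):
    """Check the rule of thumb: the camera baseline should lie in
    [x/3, 3x], where x is the projector-to-object working distance."""
    return working_distance / 3 <= baseline <= 3 * working_distance
```

For a working distance of 1 m, a baseline of 0.5 m passes the check, while 0.2 m is too short.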


\section{Calibration}\label{sec:calibration}
Calibration parameters consist of camera focal lengths, central points, lens distortion parameters, camera extrinsics (their relative position and angles), and the location and orientation of the rotation stage axis. These parameters are stored in the GUI, but in most cases it is recommended to perform a new calibration before acquiring new data. Also, the exact position of the cameras may be altered to better fit the object, in which case recalibration must be done. The calibration parameters can be exported into a \texttt{*.xml} file through the top bar menu. The global coordinate system, in which everything is expressed, coincides with that of the left camera.

\begin{enumerate}
	\item The projector is turned on using the remote control or the touch interface on its top. Make sure the proper HDMI input is chosen as source. Remember to turn on the projector before the scanner computer (otherwise the computer screen is projected)!

	\item The GUI application ``SeeMaLab 3D Scanner'' is started on the scanner computer by clicking on the icon \raisebox{-\mydepth}{\fbox{\includegraphics[height=\myheight]{images/icon1.png}}}. Some software settings can be altered through the ``SMScanner $\rightarrow$ Preferences'' menu, if necessary (see Figure \ref{fig:preferences_menu}). For the calibration part, choose the ``Calibration'' tab.

	\item Make sure the projector is focused on the plane going approximately through the rotation axis of the rotation stage. This can be checked by putting the object to be scanned on the rotation stage, then pressing ``Calibration $\rightarrow$ Project Focussing Pattern''. Look at the object directly, NOT at the GUI: if the projected pattern is not sharp on the object, focus the projector by turning the focus ring, which is located in front of the projector lens. When using phase-shifting patterns (see Section \ref{sec:scan}), the projector usually does not need to be extremely well focused in order to obtain a good scan -- however, it is good practice to do so.

	\item Additionally, ensure that both cameras are in focus at the middle of the rotation stage. To do so, look at the camera images in the GUI, where you can zoom in by turning the mouse wheel, and check whether the projected pattern is sharp on the object (see Figure \ref{fig:projected_pattern}). If needed, the cameras can be focused by turning the corresponding focus ring, which is located in front of the camera's lens.

	\item Make sure the size of the calibration target (checkerboard plate) fits the object. Then, press ``SMScanner $\rightarrow$ Preferences'' and adjust the parameters of the calibration target (Calibration pattern; and Size). In order to determine the calibration pattern of the checkerboard plate, do NOT count the number of squares in the horizontal and vertical direction, but the number of inner edges (saddle points). The parameter Size is given by the side length of a square. In Figure \ref{fig:calibration_target}, the calibration pattern is given by $13\times 22$ and the side length by $15$ mm.

	\item Position the calibration target on the circular rotation plate parallel to the projector -- make sure the rotation axis approximately intersects the center of the calibration target -- and inside the field of view of both cameras and the projector. You can for example use four screws to mount the target on the rotation plate. White light will be provided from the projector for guidance. The GUI should look similar to the one in Figure \ref{fig:calibration0}.

	\item\label{item:light} Optimally, the camera images in the GUI show a rather grayish calibration plate, and the background is totally black. If any pixels of the calibration plate are completely white, there is too much light. There are two options for adjusting the light:
	\begin{itemize}
		\item Adjust the lens aperture by turning the aperture ring on the camera. The narrower the aperture, the less light reaches the image plane, and vice versa.
		\item Adjust the shutter time in the ``SMScanner $\rightarrow$ Preferences'' menu. The shutter time has to be a multiple of 16.666 milliseconds: 16.666, 33.333, 50.000, 66.666, 83.333, 99.996, 116.666, 133.333, etc. (important: type 3 digits after the decimal point!). This is due to the fact that the projector has a frame rate of 60 images per second, which corresponds to a projection time of 1/60 seconds, i.e. 16.666 milliseconds per image. The longer the shutter time, the more light reaches the image plane, and vice versa. Note that changing the shutter time does not affect calibration, i.e. the shutter time can also be changed after having done a calibration, without having to do a recalibration!
	\end{itemize}

	\item SeeMaLab-Scanner: the darkening curtain is lowered to improve the signal-to-noise ratio, and to avoid artifacts arising from ambient lighting.\\ Traveling SeeMa-Scanner: the light is usually not a problem; otherwise darken the room.

	\item\label{item:batch1} A number of calibration sets needs to be acquired. The minimum is 3 sets, and more is beneficial. The calibration pattern needs to be fully visible and equally bright in both cameras. The viewing angle must not be too shallow. Press ``Batch acquisition'' in order to acquire a reasonable number of calibration sets using default parameters. Figure \ref{fig:calibration0} reveals that default acquisition is at angles $340^\circ, 338^\circ, \dots, 20^\circ$.

	\item\label{item:batch2} Depending on the object's poses to be scanned (see Section \ref{sec:scan}), you might consider turning the calibration target by 90 degrees and redoing calibration step \ref{item:batch1}.

	\item\label{item:batch3} In order to improve the calibration, additional single or batch acquisitions can be done as follows: try to cover regions in both cameras' field of view that are not covered by the acquisition described in calibration steps \ref{item:batch1} and \ref{item:batch2}, e.g. edges, upper/lower part, and different depths (check the camera images in the GUI!). You could for example try to place the calibration target at the very back or front of the rotation plate, put it on the table, lean it against the wall, or put it on something. You might also consider moving the rotation plate away, in which case this calibration step should be performed before steps \ref{item:batch1} and \ref{item:batch2}.\footnote{For inspiration, you might have a look at \url{http://www.vision.caltech.edu/bouguetj/calib_doc/}, where different calibration examples are shown.} Note that the calibration target has to be inside the field of view of at least one of the cameras!

	\item All the acquired images are listed above the ``Calibrate Camera'' and ``Calibrate Rotation Stage'' buttons. After acquisition, go through the steps described below in order to calibrate both the camera and the rotation stage:

\begin{enumerate}
\item Mark all images to be used for camera calibration (usually all images). By clicking the ``Calibrate Camera'' button, calibration parameters are automatically determined. The log message (see Figure \ref{fig:log_message1}) will show the calibration parameters, and different errors which measure the quality of calibration (e.g. reprojection errors; focal length and lens distortion uncertainties).

\item Mark all images to be used for determination of the rotation axis, i.e. (usually all) images acquired in calibration step \ref{item:batch1}. Alternatively, you can also choose images acquired in step \ref{item:batch2}, but stick to images acquired in only one of the two steps! By clicking the ``Calibrate Rotation Stage'' button, the rotation axis is automatically determined. The log message (see Figure \ref{fig:log_message2}) shows the calibration parameters and the error.
\end{enumerate}

	\item A successful calibration goes along with a colorful pattern on the calibration target, as shown in Figure \ref{fig:successful_calibration}. If the calibration fails for a calibration set, it is automatically ignored. Thus, it does not matter if the calibration is not successful for a few sets. In addition, the calibration result can be examined by changing to the ``Point Clouds'' tab in the GUI (see Figure \ref{fig:pointclouds0}). Left and right cameras are represented by coloured coordinate systems (the viewing direction is the positive z-axis, y points down, x to the right). The rotation axis, as determined by the calibration procedure, is shown as a white line.

	\item The calibration parameters can be saved by pressing ``Calibration $\rightarrow$ Export Parameters'', and they can be loaded by pressing ``Calibration $\rightarrow$ Import Parameters''.
\end{enumerate}
193
 
252 - 194
 
253 - 195
\section{Making a 360 degree scan}\label{sec:scan}
196
Image acquisition consists of projecting a sequence of patterns onto the object, which are then converted to depth values by means of the specific algorithm. You can choose among different pattern modes in the ''SMScanner $\rightarrow$ Preferences'' menu:
197
%
198
\begin{itemize}
199
\item Gray Coding \cite{aanaes}  %GrayCode
200
\item Gray Coding Horizontal+Vertical (experimental)\footnote{This implementation of Gray encoding uses horizontal and vertial stripes, which adds some encoding redundancy, but avoids interpolation effects from rectifying homographies.} \cite{aanaes} %GrayCodeHorzVert
201
\item Phase Shifting 2 frequency heterodyne\footnote{Different from the paper, it uses only two different frequencies.} \cite{reich} %PhaseShiftTwoFreq
202
\item Phase Shifting 3 frequency (experimental) \cite{reich} %PhaseShiftThreeFreq
203
\item Phase Shifting 2 frequency horz.+vert. (experimental)\footnote{Based on Phase Shifting 2 frequency heterodyne, but uses horizontal and vertial fringes, which adds some encoding redundancy, but avoids interpolation effects from rectifying homographies.} \cite{reich} %PhaseShiftTwoFreqHorzVert
204
\item Embedded Phase Shifting (experimental) \cite{moreno} %PhaseShiftEmbedded
205
\item Line Shifting \cite{guhring} %LineShift
206
\end{itemize}
207
%
208
From experience, we know that the phase shifting algorithm works well with many objects, so one might want to start using this algorithm.\\
209
Depending on the surface complexity of the scan object (blind spots, holes, details, etc.), multiple $360^\circ$ scans may be necessary. In that case, the following procedure is done multiple times with the object in different orientations (poses) in order to cover the whole surface, and capture all details. Consider to change the rotation angle for a better result. In order to obtain a good quality scan, the number of poses, as well as the poses and rotation angles used for the scanning have to be carefully investigated.
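For a given angle step, the rotation positions of one $360^\circ$ scan can be enumerated as follows (illustrative helper, not part of the GUI):

```python
def scan_angles(step_deg):
    """Rotation positions (in degrees) covering one full revolution."""
    assert 360 % step_deg == 0, "the step should divide 360 evenly"
    return list(range(0, 360, step_deg))
```

A step of 40 degrees yields nine positions ($0, 40, \dots, 320$), matching the default \texttt{sequence\_0} to \texttt{sequence\_8} naming.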
\begin{enumerate}
	\item Choose the ``Capture'' tab in the GUI.

	\item The scan object is now placed on the rotation plate such that it is visible in both cameras. SeeMaLab-Scanner: lower the darkening curtain.

	\item Check the light conditions: again, the object should appear grayish with a completely black background (see Figure \ref{fig:light}). If necessary, adjust the light conditions, preferably by changing the shutter time as described in the calibration part (see Section \ref{sec:calibration}, step \ref{item:light}), since recalibration is not needed.

	\item Press ``Single Capture'' or ``Batch Capture'' in the GUI in order to scan the object.

	\item Sequences of patterns are projected onto the object, and images are acquired. The captured images can be reviewed by clicking on the frames (see Figure \ref{fig:capture1}). Captured sequences are automatically reconstructed; the name of a reconstructed sequence appears black, otherwise it is grey (see Figure \ref{fig:capture0}).

	\item The results can be investigated by choosing the ``Point Clouds'' tab in the GUI (see Figure \ref{fig:pointclouds1}). In order to zoom in at a specific point, hover the mouse over this point and press F. Single point clouds can be shown (ticked), or hidden (unticked).

	\item All data can be exported from the GUI program by means of the top bar menus. By exporting the point clouds into a folder (``Point Clouds $\rightarrow$ Export Point Clouds''), a \texttt{*.aln} file is stored alongside these, which contains pose information in global coordinate space, aligning the point clouds correctly relative to each other. The captured images can be exported by either pressing ``Capture $\rightarrow$ Export Sequences'' (whole sequence) or ``Capture $\rightarrow$ Export White Frames'' (no images showing projected patterns on the object).

It is good practice to use the following structure:
\begin{itemize}
	\item Create a folder for each object to scan (e.g. \texttt{owl}).
	\item For an object, create a folder for each pose (e.g. \texttt{pose0} and \texttt{pose1}, using 2 poses).
	\item For each pose, save images (e.g. folders \texttt{sequence\_0}, ..., \texttt{sequence\_8}, using a step size of 40 degrees) and point clouds (e.g. \texttt{pointcloud\_0.ply}, ..., \texttt{pointcloud\_8.ply}). These names are default names.
	\item Save the calibration file for each object, or for each pose in case the scanner had to be recalibrated for different poses (e.g. \texttt{cal.xml}).
\end{itemize}

\end{enumerate}
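The exported \texttt{*.aln} file is a plain-text Meshlab alignment project. As an illustration, and assuming the common Meshlab layout (a mesh count, then per mesh the file name, a \texttt{\#} separator line and a row-major $4\times4$ pose matrix, terminated by a line containing \texttt{0}), such a file could be generated as follows:

```python
def write_aln(path, ply_names, pose_matrices):
    """Write a minimal MeshLab-style .aln alignment project file.
    pose_matrices holds one 4x4 nested list (global pose) per cloud."""
    with open(path, "w") as f:
        f.write("%d\n" % len(ply_names))
        for name, matrix in zip(ply_names, pose_matrices):
            f.write(name + "\n#\n")
            for row in matrix:
                f.write(" ".join("%g" % v for v in row) + "\n")
        f.write("0\n")  # end-of-file marker
```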


\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/setup0.jpg}
	\caption{Traveling SeeMa-Scanner: mounting the cameras and the projector.}
	\label{fig:setup0}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/setup1.jpg}
	\caption{Traveling SeeMa-Scanner: final setup.}
	\label{fig:setup1}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.4\textwidth]{images/preferences_menu.png}
	\caption{The ``SMScanner $\rightarrow$ Preferences'' menu.}
	\label{fig:preferences_menu}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/projected_pattern.png}
	\caption{Projected pattern on the object after having pressed ``Calibration $\rightarrow$ Project Focussing Pattern''.}
	\label{fig:projected_pattern}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/calibration_target.jpg}
	\caption{Calibration target.}
	\label{fig:calibration_target}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/calibration_tab.png}
	\caption{GUI showing the ``Calibration'' tab.}
	\label{fig:calibration0}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/log_message1_.png}
	\caption{Log message of the camera calibration.}
	\label{fig:log_message1}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/log_message2_.png}
	\caption{Log message of the rotation stage calibration.}
	\label{fig:log_message2}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/successful_calibration.png}
	\caption{GUI showing a successful calibration.}
	\label{fig:successful_calibration}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/pointclouds0.png}
	\caption{GUI showing the calibration result in the ``Point Clouds'' tab.}
	\label{fig:pointclouds0}
\end{figure}
168 jakw 295
\begin{figure}[h]
56 jakw 296
	\centering
253 - 297
		\includegraphics[width=\textwidth]{images/light.png}
298
	\caption{Light condition/Shutter time: Not enough light/$16.666$ ms (left), good/$33.333$ ms (middle), too much light/$50.000$ ms (right).}
299
	\label{fig:light}
300
\end{figure}
301
\begin{figure}[h]
302
	\centering
303
		\includegraphics[width=.7\textwidth]{images/capture1.png}
304
	\caption{GUI showing the ''Capture'' tab}
305
	\label{fig:capture1}
306
\end{figure}
307
\begin{figure}[h]
308
	\centering
309
		\includegraphics[width=.7\textwidth]{images/capture0.png}
310
	\caption{GUI showing the ''Capture'' tab}
56 jakw 311
	\label{fig:capture0}
61 jakw 312
\end{figure}
168 jakw 313
\begin{figure}[h]
61 jakw 314
	\centering
253 - 315
		\includegraphics[width=.7\textwidth]{images/pointclouds1.png}
316
	\caption{''Point Clouds'' tab with reconstructed point clouds}
61 jakw 317
	\label{fig:pointclouds1}
318
\end{figure}
319
\clearpage	
\chapter{Reconstructing a mesh surface}
Multiple point clouds can be merged into a single watertight mesh representation using Meshlab. Meshlab is available on the scanner computer, and is also freely available for download on multiple platforms. The basic steps involved in merging and reconstruction are outlined below. The input data consist of one or more sets of point clouds acquired with the SeeMaLab GUI. Note that if multiple object poses are needed (for complex geometries, blind spots, etc.), it is recommended to close and restart the GUI for each pose, to clear the captured sequences and free memory.
\begin{enumerate}
	\item Load a set of point clouds by opening the \texttt{*.aln} file in Meshlab (``File $\rightarrow$ Open Project...''). See figure \ref{fig:meshlab0} for an illustration of one full set of scans loaded into Meshlab.
	\item The PLY files contain XYZ and RGB values for all points, but no normals. You will need to compute normals in order for the surface reconstruction to succeed. These normals can be estimated and consistently oriented by considering the camera viewpoint. Select each point cloud in turn, and choose ``Filters $\rightarrow$ Point Sets $\rightarrow$ Compute Normals for Point Set''. Make sure the ``Flip normals...'' checkbox is ticked (see fig. \ref{fig:meshlab1}). Suitable neighbourhood values are on the order of $10$. You can visualise the estimated normals through the ``Render'' menu.
	\item After estimating normals for all point clouds in a set, choose ``Filters $\rightarrow$ Mesh Layer $\rightarrow$ Flatten Visible Layers''. Make sure to retain unreferenced vertices, because at this point none of the points are part of any triangles (see figure \ref{fig:meshlab2}). This process alters all coordinates by applying each pose transformation to its point cloud before merging.
	\item Save the resulting merged point cloud. In the save dialog, make sure to include the normals in the output file (see fig. \ref{fig:meshlab3}).
\end{enumerate}
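The normal estimation step above can be sketched in a few lines of Python. The following is an illustrative reimplementation, not the scanner's or Meshlab's actual code; it assumes \texttt{numpy} and \texttt{scipy} are available, and the function name is ours. Each normal is taken as the smallest principal direction of the local neighbourhood, then flipped towards the camera viewpoint, which is the effect of the ``Flip normals...'' checkbox:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, viewpoint, k=10):
    """Estimate a unit normal per point by PCA on its k nearest
    neighbours, then flip each normal so it points towards the
    camera viewpoint for a consistent orientation."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        # Normal = eigenvector of the neighbourhood covariance with
        # the smallest eigenvalue (eigh sorts eigenvalues ascending).
        cov = np.cov(points[nb].T)
        _, eigvec = np.linalg.eigh(cov)
        n = eigvec[:, 0]
        # Orient consistently: flip if pointing away from the viewpoint.
        if np.dot(n, viewpoint - points[i]) < 0:
            n = -n
        normals[i] = n
    return normals
```

For a planar patch seen from above, all estimated normals come out pointing towards the camera, as required for the Poisson reconstruction later on.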
 
\begin{figure}[h]
	\centering
		\includegraphics[width=\textwidth]{meshlab0.png}
	\caption{One full set of scans (9 point clouds covering $360^\circ$ in $40^\circ$ intervals).}
	\label{fig:meshlab0}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.4\textwidth]{meshlab1.png}
	\caption{Estimate normals, and orient them consistently towards the camera (positive z-axis).}
	\label{fig:meshlab1}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.25\textwidth]{meshlab2.png}
	\caption{Flatten visible layers and retain ``unreferenced vertices'', i.e.\ points not part of any triangle.}
	\label{fig:meshlab2}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{meshlab3.png}
	\caption{Save the merged point clouds, and include the estimated normals in the output file.}
	\label{fig:meshlab3}
\end{figure}
 
If you have acquired multiple $360^\circ$ scans of your object in different positions, proceed as above for each set. You will then need to align and merge these point clouds. Meshlab integrates both manual coarse alignment and automated fine alignment (ICP). Note that the automatic alignment procedure in Meshlab requires high-quality point normal estimates for all point clouds to succeed; if these are not available, the alignment process fails without warnings or errors.
\begin{enumerate}
	\item Load the point clouds of interest (``File $\rightarrow$ Import Mesh''). The imported point clouds will not be properly aligned yet. Open the alignment tool (the big yellow ``A'' tool button); see figure \ref{fig:meshlab4} for an image of this tool. ``Glueing'' in Meshlab means setting an initial rough alignment. You can ``glue'' the first mesh, and roughly ``glue'' the others to it by selecting a small number (minimum 4) of surface point correspondences with the mouse. When all point clouds have been ``glued'', you can initiate automatic fine alignment (group-wise ICP) by pressing ``Process''. A good alignment can be confirmed by selecting ``False colors'' and seeing a good mix of colours in the overlap areas.
	\item Merge the aligned point clouds using ``Filters $\rightarrow$ Mesh Layer $\rightarrow$ Flatten Visible Layers''.
\end{enumerate}
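The rough ``glueing'' step above boils down to estimating a rigid transform from a handful of picked point correspondences, which the group-wise ICP then refines using many automatically matched points. A minimal sketch of that estimation, using the standard SVD-based (Kabsch) least-squares solution, is given below; it assumes \texttt{numpy}, the function name is ours, and it is not Meshlab's actual implementation:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto
    dst points, computed with the SVD/Kabsch method from paired
    correspondences (at least 3 non-collinear pairs; Meshlab's
    glueing asks for a minimum of 4 picked pairs)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Guard against a reflection solution (determinant -1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Applying the returned transform as \texttt{src @ R.T + t} maps the picked points of one cloud onto the other, giving the initial alignment that ICP starts from.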
\begin{figure}[h]
	\centering
		\includegraphics[width=.9\textwidth]{meshlab4.png}
	\caption{The alignment tool in Meshlab.}
	\label{fig:meshlab4}
\end{figure}

The next step is to reconstruct a surface from the merged point cloud. This can be done using the Poisson surface reconstruction built into Meshlab, accessible through ``Filters $\rightarrow$ Point Set $\rightarrow$ Surface Reconstruction: Poisson''. You will most probably have to vary the parameters of this step to obtain pleasing results for your particular data.

The full Poisson code is available at \url{http://www.cs.jhu.edu/~misha/Code/PoissonRecon/}, and is also installed on the scanner computer. The standalone software allows finer control over the process, including the removal of mesh membranes with little point support. We refer to the documentation provided by the authors of the PoissonRecon code.

The Poisson reconstruction algorithm does not preserve colour information. In order to obtain a coloured mesh, one needs to re-project the per-point colour information from the full point cloud onto the mesh. This can be done in Meshlab through ``Filters $\rightarrow$ Sampling $\rightarrow$ Vertex Attribute Transfer''.
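Conceptually, this re-projection amounts to a nearest-neighbour lookup from each mesh vertex into the coloured point cloud. The following Python sketch is a simplified stand-in for the filter (assuming \texttt{numpy} and \texttt{scipy}; the function name is ours, and Meshlab's actual filter is more general):

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_colours(cloud_xyz, cloud_rgb, mesh_vertices):
    """Give each mesh vertex the colour of its nearest point in the
    original coloured point cloud, since Poisson reconstruction
    discards per-point colour."""
    tree = cKDTree(cloud_xyz)
    _, nearest = tree.query(mesh_vertices)
    return cloud_rgb[nearest]
```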
 
\addcontentsline{toc}{section}{References}
{\setlength{\baselineskip}{0.75\baselineskip}
\begin{thebibliography}{99}

\bibitem{aanaes} Aanaes Henrik, 2014. `Lecture Notes on Computer Vision', DTU.

\bibitem{guhring} Guhring Jens, 2000. `Dense 3D surface acquisition by structured light using off-the-shelf components', Proceedings of SPIE Vol. 4309: Videometrics and Optical Methods for 3D Shape Measurement.

\bibitem{moreno} Moreno Daniel, Son Kilho \& Taubin Gabriel, 2015. `Embedded Phase Shifting: Robust Phase Shifting with Embedded Signals', Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2301--2309.

\bibitem{reich} Reich Carsten, Ritter Reinhold \& Thesing Jan, 1997. `White light heterodyne principle for 3D-measurement', Proceedings of SPIE Vol. 3100: Sensors, Sensor Systems, and Sensor Data Processing.

\end{thebibliography}
}

\end{document}