\documentclass[10pt,notitlepage]{report}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{url}
\usepackage{graphicx}
\usepackage{calc}
\usepackage{fullpage}
\usepackage{color}
\newcommand{\dolmes}[1]{\textcolor[rgb]{1,0.0,0}{#1}}
\newlength\myheight
\newlength\mydepth
\settototalheight\myheight{Xygp}
\settodepth\mydepth{Xygp}
\setlength\fboxsep{0pt}
% \renewcommand{\chaptermark}[1]{\markboth{#1}{}}
% \renewcommand{\sectionmark}[1]{\markright{\thesection\ #1}}
\title{The SeeMa Lab Structured Light Scanner}
\author{Jakob Wilm and Eyþór Rúnar Eiríksson\\
\url{{jakw,eruei}@dtu.dk}}
\date{\today}
\begin{document}
\maketitle
\begin{figure}[h]
\centering
\includegraphics[width=.9\textwidth]{mesh0.png}
\label{fig:mesh0}
\end{figure}
\begin{abstract}
This is the manual for the Seeing Machines Lab Structured Light Scanner (SeeMa-Scanner). The scanner consists of hardware components (including cameras, projector and rotation stage) and software for calibration, scanning and reconstruction. While most of the components should be self-explanatory, we describe the hardware and each software component, making it possible for students and staff to extend the scanner with new functionality. We also give a brief step-by-step guide on how to get from a physical object to a digital mesh model.
\end{abstract}
\chapter{The scanner}
\section{Getting started}
This section describes the main hardware and software parts of the system.
If your main objective is to digitise objects, you should be able to do so on your own by reading the chapter ``Practical Scanning'', which gives a step-by-step recipe for performing a complete object scan and reconstruction.
Technical projects and contributions are very welcome. Please get in touch with the authors if you plan any alterations to the hardware, or would like write access to the SVN repository containing the software. The public read-access URL of the SeeMaLab Scanner repository is \url{http://svn.compute.dtu.dk/svn/seema-scanner/}.
\section{Hardware parts}
\begin{table}
\begin{tabular}{l l l p{0.3\textwidth}}
\textbf{Part} & \textbf{Manufacturer} & \textbf{Model} & \textbf{Specifications} \\
\hline\\[0.2cm]
Industrial Cameras & Point Grey Research & GS3-U3-91S6C-C & Colour, 9.1 MP, Sony ICX814 CCD, 1'', 3.69 $\mu$m pixels, global shutter, $3376 \times 2704$ at 9 FPS \\[0.5cm]
Camera Lenses & Kowa & LM16SC & 1'', 16 mm, 6 MP \\[0.5cm]
Projector & LG & PF80G & DLP, 1080p Full HD resolution ($1920 \times 1080$), 1,000 ANSI lumen, LED light source \\[0.5cm]
Rotation Stage & Newmark & RM-5-110 & 0.36 arc-sec resolution, 70 arc-sec accuracy, 5 arc-sec repeatability, stepper motor, 72:1 gear ratio, home switch, no optical encoder \\[0.5cm]
Rotation Controller & Newmark & NSC-A1 & Single axis, serial over USB, C API \\[0.5cm]
Breadboard & Thorlabs & PBG11111 & $4' \times 2.5' \times 1.0''$, 21 kg, 1/4''-20 holes on 1'' centers\\[0.5cm]
Computer & Dell & Precision T1700 & 32 GB RAM, 256 GB SSD, 2 TB data-storage HDD, Ubuntu OS
\end{tabular}
\caption{Main hardware parts of the SeeMaLab 3D scanner.}
\label{tbl:hardwareparts}
\end{table}
Table \ref{tbl:hardwareparts} lists the main hardware parts of the SeeMaLab 3D scanner with their specifications. The hardware consists of a set of industrial cameras and a projector mounted on a sturdy aluminum optical breadboard. A micro-rotation stage holds the circular object plate, and can accurately rotate the scan object in order to capture point clouds from different angles.
The cameras, projector and rotation stage are mounted rigidly with respect to each other, which is important for high quality results. See figure \ref{fig:hardware0} for an image of the inside of the main scanner assembly. A darkening curtain can be lowered to prevent ambient light from interfering with the measurement procedure.
\begin{figure}[h]
\centering
\includegraphics[width=.9\textwidth]{hardware0.jpg}
\caption{The scanner hardware. Two industrial cameras and one projector constitute the optical parts. An angel figurine acts as the scan object, and is placed on top of the circular rotation plate. The calibration target is also seen on its holder.}
\label{fig:hardware0}
\end{figure}
The geometry of the scanner is illustrated in figure \ref{fig:hardwaredimensions}, which also indicates the minimum focus range of the cameras and projector.
\begin{figure}[h]
\centering
\includegraphics[width=.9\textwidth]{hardwaredimensions.pdf}
\caption{The physical dimensions of the breadboard, and throw angles of the cameras and projector.}
\label{fig:hardwaredimensions}
\end{figure}
\subsection{Projector}
The SeeMa-Scanner uses a standard commercial Full HD projector. This is very cost-effective, but brings a few challenges. The projector is configured to perform minimal image processing, and the HDMI port is set to ``Notebook'' mode, which gives the lowest possible input lag (approx.\ 80 ms). The projector contains a DLP micromirror array, which produces binary patterns at high refresh rates (kHz range). Intermediate gray values are created by the projector by altering the relative on--off cycles of each micromirror. Truthful capture of gray values with the camera therefore requires an integration time that is a multiple of the 16.7 ms refresh period of the projector.
Commercial projectors do not have the linear response which would be necessary for truthful capture of gray-value patterns. The gamma can be set to its lowest possible value of $1.6$, and if this is matched in the graphics card configuration of the scan computer, a close-to-linear response can be achieved. Using only binary patterns avoids this problem altogether.
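For illustration, the following small C++ function (a sketch, not part of the scanner code base) rounds a requested shutter time to the nearest valid multiple of the projector refresh period:
\begin{verbatim}
#include <algorithm>
#include <cmath>

// Round a requested camera shutter time to the nearest multiple of
// the projector refresh period (60 Hz, i.e. 1000/60 ms per frame).
double validShutterTimeMs(double requestedMs)
{
    const double periodMs = 1000.0 / 60.0; // approx. 16.666 ms
    double n = std::max(1.0, std::round(requestedMs / periodMs));
    return n * periodMs;
}
// Example: validShutterTimeMs(30.0) yields approx. 33.333 ms.
\end{verbatim}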
\subsection{Cameras}
The cameras are high-resolution 9.1 MP industrial CCD colour cameras. While colour information is usually not necessary in structured light, it allows us to texture the scanned object in full colour. In the program code, a fixed white balance is used for the cameras, chosen ad hoc to approximately match the colour profile of the projector. Capturing true colours would require a proper colour calibration.
\subsection{Rotation stage}
This is a so-called micro-rotation stage, commonly used in high-precision photonics research and production. A larger-diameter circular breadboard is fixed onto the rotation stage. The rotation stage has a stepper motor which drives a worm gear, giving high precision and very high repeatability. Note that the rotation stage does not have an optical encoder; it is reset to 0 degrees in software at each program start. The motor controller can be configured for different levels of microstepping and motor current. A higher motor current provides more torque and less risk of missing steps. The load on the plate must not exceed 20 kg, and the load's centre of mass should be close to the rotation axis. Objects can be stabilised on the plate using e.g.\ modelling clay.
\subsection{Calibration target}
A calibration target is also part of the scanner. It was produced by printing a checkerboard in vector format on a high quality laser printer, and gluing it onto a thick piece of float glass using spray adhesive. The target is asymmetrical, which is necessary to uniquely match checkerboard corners in both cameras. The calibration target was designed to fill the scan object space. If the scan area were to be made smaller, a smaller calibration target would need to be fabricated.
\section{Software components}
The SeeMaLab 3D scanner has a full graphical user interface for calibration and scanning. The output from this software is a number of colour point clouds in the PLY format, along with a Meshlab alignment project file (file suffix \texttt{.aln}). The \texttt{.aln} file contains orientation information as provided by the rotation stage parameters. This allows the user to import the point clouds for further processing in Meshlab, e.g.\ to produce a full mesh model of the surface. The rotation axis is determined during calibration, which means that usually no manual or algorithm-assisted alignment of partial surfaces is necessary.
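For reference, the \texttt{.aln} format is a simple text format which lists the number of point clouds and, for each of them, its file name followed by a $4\times4$ pose matrix. A sketch from memory (the exact contents written by the GUI may differ slightly):
\begin{verbatim}
2
pointcloud_0.ply
#
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
pointcloud_1.ply
#
0.766  0 0.643 0
0      1 0     0
-0.643 0 0.766 0
0      0 0     1
0
\end{verbatim}
In this sketch, the second point cloud is rotated by $40^\circ$ about the y-axis relative to the first.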
To get fine-grained control over the scan procedure, the user can modify the source code of the GUI application, or use the supplied Matlab wrappers. These wrappers provide basic functionality to capture images with the cameras, project a specific pattern with the projector, or rotate the rotation stage to a specific position. Using these components, a full structured light scanner can be implemented in Matlab with complete design freedom.
\section{Compiling and Installing}
A default user account is used on the SeeMaLab scanner computers, where the software is installed in a stable (tested) version. The software repository is checked out and built in the home folder (\texttt{{\textasciitilde}/seema-scanner}). An icon in the launcher bar links to this executable.
The software was developed using Qt, OpenCV and the Pointcloud Library (PCL).
In order to make modifications and test them (e.g.\ changing some parameters in the reconstruction process), the SVN repository should be checked out and compiled in a separate user account. The software is linked against the default versions of Qt, OpenCV and PCL in the current Ubuntu LTS release, which ensures easy compilation and installation.
\section{GUI}
The GUI enables the user to perform calibration of the scanner, and to acquire scan data. It is built in a modular fashion, to allow for new structured light strategies to be implemented. It is, however, supposed to be simple and stable, so please keep experimental builds in separate SVN branches.
GUI functionality heavily depends on Qt. Most other components, specifically those with Matlab wrappers, have minimal dependencies, and can be used outside of the GUI framework.
In the GUI program, the user can open a preference window to select a pattern sequence and configure the timing parameters. These preferences are stored in \texttt{{\textasciitilde}/.config/DTU/seema-scanner.conf}. Some preferences are not exposed in the GUI, but can be edited manually in this file before the program is started.
\section{\texttt{Projector} Class}
This class provides a fullscreen OpenGL context, and the ability to project any texture. The window/context creation is operating-system dependent. It works very well on Linux with the proprietary nVidia drivers, as found on the scan computer. In order to get a completely independent screen output, which does not interfere with the window manager, the projector needs to be set up as a separate X screen in \texttt{xorg.conf}. The absolute position of this second X screen must leave a small gap to the primary screen. This gives a secondary screen which is not recognised by Compiz (Unity in Ubuntu), but which can be accessed through the \texttt{Projector} class.
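As a rough sketch (all identifiers and coordinates below are placeholders; the actual configuration depends on the graphics hardware and driver), the relevant part of \texttt{xorg.conf} could look like this:
\begin{verbatim}
# Sketch only -- identifiers and geometry are placeholders.
Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "MonitorScreen" Absolute 0 0
    # The primary screen is 1920 px wide; starting the projector
    # screen at x = 2000 leaves a small gap between the two.
    Screen 1 "ProjectorScreen" Absolute 2000 0
EndSection

Section "Screen"
    Identifier "ProjectorScreen"
    Device "ProjectorDevice"   # defined in a matching Device section
    Monitor "ProjectorMonitor"
EndSection
\end{verbatim}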
\section{\texttt{Camera} Class}
An abstraction over the individual industrial camera APIs was created, in order to ease replacement and enhance modularity. A concrete implementation for Point Grey cameras is provided. The program currently uses ``software triggering'' of the cameras. Due to substantial input lag in the projector and cameras, a certain pause must be made in program execution between projecting a pattern and capturing an image. Close temporal synchronisation of the two cameras is achieved by calling the trigger method on both cameras, and collecting the images afterwards.
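The following self-contained C++ sketch illustrates the intended order of operations; the interface shown is hypothetical and only mimics the actual \texttt{Camera} class:
\begin{verbatim}
#include <chrono>
#include <thread>

// Hypothetical minimal camera interface (the real Camera class API
// may differ); shown only to illustrate the trigger/collect order.
struct CameraLike {
    virtual void trigger() = 0;        // issue software trigger
    virtual void collectFrame() = 0;   // retrieve the triggered image
    virtual ~CameraLike() {}
};

// Wait out the input lag, trigger both cameras back to back, and
// only then collect both images.
void synchronisedCapture(CameraLike& cam0, CameraLike& cam1)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    cam0.trigger();
    cam1.trigger();
    cam0.collectFrame();
    cam1.collectFrame();
}
\end{verbatim}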
\section{\texttt{RotationStage} Class}
Here, a C++ abstraction of the Newmark motion control API was implemented. The C API essentially sends serial commands over a serial-over-USB connection, and full documentation is provided on the Newmark website. An important thing to consider is the latency of many of these calls. Specifically, reading and writing ``hardware settings'' such as the microstep level and motor current takes a considerable amount of time. The motor controller's inherent positional unit is the ``number of microsteps''. This can be converted to an angular position, $\alpha$, by means of the following formula:
\[
\alpha = \frac{\textrm{XPOS} \cdot 1.8}{\textrm{MS} \cdot 72} \quad ,
\]
where XPOS is the rotation controller's position value, $1.8$ is the number of degrees per full step on the motor axis, MS is the current microstep setting, and $72$ is the worm-gear ratio. The \texttt{RotationStage} class interface abstracts away from this, and lets you rotate to a specific angle between $0^\circ$ and $360^\circ$ using the shortest direction.
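In code, the conversion reads as follows (a minimal C++ sketch of the formula above; the function names are ours, not part of the class interface):
\begin{verbatim}
// Conversion between the controller's microstep count (XPOS) and
// the angular position of the plate, following the formula above.
const double DEG_PER_STEP = 1.8;   // motor full-step angle [deg]
const double GEAR_RATIO   = 72.0;  // worm-gear reduction

double angleFromXpos(long xpos, int microsteps)
{
    return (xpos * DEG_PER_STEP) / (microsteps * GEAR_RATIO);
}

long xposFromAngle(double alphaDeg, int microsteps)
{
    return static_cast<long>(
        alphaDeg * microsteps * GEAR_RATIO / DEG_PER_STEP);
}
// With a microstep setting of 8, one full revolution corresponds
// to 360 * 8 * 72 / 1.8 = 115200 microsteps.
\end{verbatim}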
In order for the SeeMaLab computer to communicate with the rotation stage controller, appropriate udev permissions must be configured.
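One way to grant these permissions is a udev rule along the following lines (the vendor and product IDs are placeholders; determine the real ones with \texttt{lsusb}):
\begin{verbatim}
# /etc/udev/rules.d/99-rotationstage.rules -- sketch only; replace
# the idVendor/idProduct placeholders with the controller's values.
SUBSYSTEM=="tty", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="5678", MODE="0666"
\end{verbatim}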
\chapter{Practical scanning}
Please be very careful with this very expensive equipment, and be considerate: do not misplace any parts, and do not borrow any components of the scanner hardware.
The following guide explains the setup, and the steps involved in calibration and acquisition of a $360^\circ$ scan of an object.
\section{Setup}
In contrast to the SeeMa-Scanner located in the Image Lab, the Traveling SeeMa-Scanner first has to be assembled from the parts stored in a large box. Figures \ref{fig:setup0} and \ref{fig:setup1} illustrate the final setup of the Traveling SeeMa-Scanner. Take the following points into consideration:
\begin{itemize}
\item Choose a black, non-shiny background (e.g. provided black fabric). Cover (or even paint black) any shiny objects in the scan area such as screws, stands, etc. This will ensure that only the object of interest is scanned.
\item How should the distance between the projector and the circular rotation plate be chosen? This distance depends on the object size. Make sure the object is inside the field of view of both cameras and the projector for every rotation angle.
\item How should the distance between the two cameras (the baseline) be chosen? The closer the cameras, the better concavities can be scanned (i.e.\ there is less occlusion). However, this comes with a larger error in the determination of the point coordinates. As a rule of thumb, the baseline should lie in the interval $[\frac{1}{3}x, 3x]$, where $x$ denotes the distance between the projector and the object (the working distance); e.g.\ for $x = 0.5$ m, the baseline should lie roughly between $0.17$ m and $1.5$ m. Usually, we like to put the cameras rather close together.
\end{itemize}
\section{Calibration}\label{sec:calibration}
Calibration parameters consist of the camera focal lengths, central points, lens distortion parameters, the camera extrinsics (their relative position and angles), and the location and orientation of the rotation stage axis. These parameters are stored in the GUI, but in most cases it is recommended to perform a new calibration before acquiring new data. Also, the exact position of the cameras may be altered to better fit the object, in which case recalibration must be done. The calibration parameters can be exported into a \texttt{*.xml} file through the top bar menu. The global coordinate system, in which everything is expressed, coincides with that of the left camera.
\begin{enumerate}
\item The projector is turned on using the remote control or the touch interface on its top. Make sure the proper HDMI input is chosen as source. Remember to turn on the projector before the scanner computer (otherwise the computer screen is projected)!
\item The GUI application ``SeeMaLab 3D Scanner'' is started on the scanner computer by clicking on the icon \raisebox{-\mydepth}{\fbox{\includegraphics[height=\myheight]{images/icon1.png}}}. Some software settings can be altered through the ``SMScanner $\rightarrow$ Preferences'' menu, if necessary (see Figure \ref{fig:preferences_menu}). For the calibration part, choose the ``Calibration'' tab.
\item Make sure the projector is focused on the plane going approximately through the rotation axis of the rotation stage. This can be checked by putting the object to be scanned on the rotation stage, then pressing ``Calibration $\rightarrow$ Project Focussing Pattern''. Look at the object directly, NOT at the GUI: if the projected pattern is not sharp on the object, focus the projector by turning the focus ring, which is located in front of the projector lens. When using phase-shifting patterns (see Section \ref{sec:scan}), the projector usually does not need to be extremely well focused in order to obtain a good scan -- however, it is good practice to do so.
% holding a white paper or your hand close to the rotation axis, \dolmes{then pressing one of the arrow buttons on the projector}. If the projected image is not sharp, focus the projector by turning the focus ring, which is located in front of the projector lens.
\item Additionally, ensure that both cameras are in focus at the middle of the rotation stage. To do so, look at the camera images in the GUI, where you can zoom in by turning the mouse wheel, and check whether the projected pattern is sharp on the object (see Figure \ref{fig:projected_pattern}). If needed, the cameras can be focused by turning the corresponding focus ring, which is located in front of the camera's lens.
\item Make sure the size of the calibration target (checkerboard plate) fits the object. Then press ``SMScanner $\rightarrow$ Preferences'' and adjust the parameters of the calibration target (``Calibration pattern'' and ``Size''). In order to determine the calibration pattern of the checkerboard plate, do NOT count the number of squares in the horizontal and vertical directions, but the number of inner corners (saddle points). The parameter ``Size'' is given by the side length of a square. In Figure \ref{fig:calibration_target}, the calibration pattern is given by $13\times 22$ and the side length by $15$ mm.
\item Position the calibration target on the circular rotation plate, parallel to the projector and inside the field of view of both cameras and the projector; make sure the rotation axis approximately intersects the center of the calibration target. You can, for example, use four screws to mount the target on the rotation plate. White light is provided by the projector for guidance. The GUI should look similar to the one in Figure \ref{fig:calibration0}.
\item\label{item:light} Optimally, the camera images in the GUI show a rather grayish calibration plate, and the background is totally black. If any pixels of the calibration plate are completely white, there is too much light. There are two options for adjusting the light:
\begin{itemize}
\item Adjust the lens aperture by turning the aperture ring on the camera lens. The narrower the aperture, the less light reaches the image plane, and vice versa.
\item Adjust the shutter time in the ``SMScanner $\rightarrow$ Preferences'' menu. The shutter time has to be a multiple of 16.666 milliseconds: 16.666, 33.333, 50.000, 66.666, 83.333, 99.996, 116.666, 133.333, etc.\ (important: type 3 digits after the decimal point!). This is due to the fact that the projector has a frame rate of 60 images per second, which corresponds to a projection time of 1/60 second, i.e.\ 16.666 milliseconds per image. The longer the shutter time, the more light reaches the image plane, and vice versa. Note that changing the shutter time does not affect the calibration, i.e.\ the shutter time can also be changed after having done a calibration, without having to do a recalibration!
\end{itemize}
\item SeeMaLab-Scanner: the darkening curtain is lowered to improve the signal-to-noise ratio, and to avoid artifacts arising from ambient lighting.\\ Traveling SeeMa-Scanner: ambient light is usually not a problem; otherwise, darken the room.
\item\label{item:batch1} A number of calibration sets need to be acquired. The minimum is 3 sets, and more is beneficial. The calibration pattern needs to be fully visible and equally bright in both cameras. The viewing angle must not be too shallow. Press ``Batch acquisition'' in order to acquire a reasonable number of calibration sets using the default parameters. Figure \ref{fig:calibration0} shows that the default acquisition is at angles $340^\circ, 338^\circ, \dots, 20^\circ$. %The present ''batch acquisition'' gives a reasonable number of calibration sets.
\item\label{item:batch2} Depending on the object poses to be scanned (see Section \ref{sec:scan}), you might consider turning the calibration target by 90 degrees and redoing calibration step \ref{item:batch1}.
\item\label{item:batch3} In order to improve the calibration, additional single or batch acquisitions can be done as follows: try to cover regions in both cameras' fields of view that are not covered by the acquisitions described in calibration steps \ref{item:batch1} and \ref{item:batch2}, e.g.\ edges, the upper/lower part, and different depths (check the camera images in the GUI!). You could, for example, try to place the calibration target at the very back or front of the rotation plate, put it on the table, lean it against the wall, or put it on something. You might also consider moving the rotation plate away, in which case this calibration step should be performed before steps \ref{item:batch1} and \ref{item:batch2}.\footnote{For inspiration, you might have a look at \url{http://www.vision.caltech.edu/bouguetj/calib_doc/}, where different calibration examples are shown.} Note that the calibration target has to be inside the field of view of at least one of the cameras!
\item All the acquired images are listed above the ``Calibrate Camera'' and ``Calibrate Rotation Stage'' buttons. After acquisition, go through the steps described below in order to calibrate both the camera and the rotation stage:
\begin{enumerate}
\item Mark all images to be used for camera calibration (usually all images). By clicking the ``Calibrate Camera'' button, the calibration parameters are automatically determined. The log message (see Figure \ref{fig:log_message1}) will show the calibration parameters, and different errors which measure the quality of the calibration (e.g.\ reprojection errors; focal length and lens distortion uncertainties).
\item Mark all images to be used for the determination of the rotation axis, i.e.\ (usually all) images acquired in calibration step \ref{item:batch1}. Alternatively, you can also choose the images acquired in step \ref{item:batch2}, but stick to images acquired in only one of the two steps! By clicking the ``Calibrate Rotation Stage'' button, the rotation axis is automatically determined. The log message (see Figure \ref{fig:log_message2}) shows the calibration parameters and the error.
%untick all the images acquired in calibration steps \ref{item:batch2} and \ref{item:batch3}. By doing so, you make sure that only the images acquired in step \ref{item:batch1} are used for determination of the rotation axis, whereas all images are used for camera calibration. By clicking the ''Calibrate'' button, calibration parameters are then automatically determined. This procedure can take up to a few minutes. The terminal output will show recalibration errors, which measure the quality of calibration.
%After acquisition, individual calibration sets can be re-examined.
\end{enumerate}
\item A successful calibration goes along with a colourful pattern on the calibration target, as shown in Figure \ref{fig:successful_calibration}. If the calibration fails for a calibration set, that set is automatically ignored; thus, it does not matter if the calibration is unsuccessful for a few sets. In addition, the calibration result can be examined by changing to the ``Point Clouds'' tab in the GUI (see Figure \ref{fig:pointclouds0}). The left and right cameras are represented by coloured coordinate systems (the viewing direction is the positive z-axis, y points down, x to the right). The rotation axis, as determined by the calibration procedure, is shown as a white line.
\item The calibration parameters can be saved by pressing ``Calibration $\rightarrow$ Export Parameters'', and they can be loaded by pressing ``Calibration $\rightarrow$ Import Parameters''.
\end{enumerate}
\section{Making a 360 degree scan}\label{sec:scan}
Image acquisition consists of projecting a sequence of patterns onto the object, which are then converted to depth values by means of the chosen algorithm. You can choose among different pattern modes in the ``SMScanner $\rightarrow$ Preferences'' menu:
%
\begin{itemize}
\item Gray Coding \cite{aanaes} %GrayCode
\item Gray Coding Horizontal+Vertical (experimental)\footnote{This implementation of Gray encoding uses horizontal and vertical stripes, which adds some encoding redundancy, but avoids interpolation effects from rectifying homographies.} \cite{aanaes} %GrayCodeHorzVert
\item Phase Shifting 2 frequency heterodyne\footnote{Different from the paper, it uses only two different frequencies.} \cite{reich} %PhaseShiftTwoFreq
\item Phase Shifting 3 frequency (experimental) \cite{reich} %PhaseShiftThreeFreq
\item Phase Shifting 2 frequency horz.+vert. (experimental)\footnote{Based on Phase Shifting 2 frequency heterodyne, but uses horizontal and vertical fringes, which adds some encoding redundancy, but avoids interpolation effects from rectifying homographies.} \cite{reich} %PhaseShiftTwoFreqHorzVert
\item Embedded Phase Shifting (experimental) \cite{moreno} %PhaseShiftEmbedded
\item Line Shifting \cite{guhring} %LineShift
\end{itemize}
%
From experience, we know that the phase shifting algorithms work well with many objects, so they are a good starting point.\\
Depending on the surface complexity of the scan object (blind spots, holes, details, etc.), multiple $360^\circ$ scans may be necessary. In that case, the following procedure is carried out multiple times with the object in different orientations (poses), in order to cover the whole surface and capture all details. Consider changing the rotation angle step for a better result. In order to obtain a good quality scan, the number of poses, as well as the poses and rotation angles themselves, have to be chosen carefully.
\begin{enumerate}
\item Choose the ``Capture'' tab in the GUI.
\item The scan object is now placed on the rotation plate such that it is visible in both cameras. SeeMaLab-Scanner: Lower the darkening curtain.
\item Check the light conditions: Again, the object should appear grayish with a completely black background (see Figure \ref{fig:light}). If necessary, adjust the light conditions, preferably by changing the shutter time as described in the calibration part (see Section \ref{sec:calibration}, step \ref{item:light}), since recalibration is not needed.
\item Press ``Single Capture'' or ``Batch Capture'' in the GUI in order to scan the object.
\item Sequences of patterns are projected onto the object, and images are acquired. The captured images can be reviewed by clicking on the frames (see Figure \ref{fig:capture1}). Captured sequences are automatically reconstructed; the name of a reconstructed sequence appears in black, otherwise in grey (see Figure \ref{fig:capture0}).
\item The results can be inspected by choosing the ``Point Clouds'' tab in the GUI (see Figure \ref{fig:pointclouds1}). In order to zoom in on a specific point, hover the mouse over this point and press F. Single point clouds can be shown (ticked) or hidden (unticked). % click on a point + press F
\item All data can be exported from the GUI program by means of the top bar menus. By exporting the point clouds into a folder (``Point Clouds $\rightarrow$ Export Point Clouds''), a \texttt{*.aln} file is stored alongside them, which contains pose information in global coordinate space, aligning the point clouds correctly relative to each other. The captured images can be exported by pressing either ``Capture $\rightarrow$ Export Sequences'' (whole sequences) or ``Capture $\rightarrow$ Export White Frames'' (only images without projected patterns on the object).
It is good practice to use the following structure:
\begin{itemize}
\item Create a folder for each object to scan (e.g. \texttt{owl})
\item For an object, create a folder for each pose (e.g. \texttt{pose0} and \texttt{pose1}, using 2 poses)
\item For each pose, save images (e.g. folders \texttt{sequence\_0}, ..., \texttt{sequence\_8}, using a step size of 40 degrees) and point clouds (e.g. \texttt{pointcloud\_0.ply}, ..., \texttt{pointcloud\_8.ply}). These names are default names.
\item Save the calibration file for each object, or pose in case the scanner had to be recalibrated for different poses (e.g. \texttt{cal.xml})
\end{itemize}
\end{enumerate}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{images/setup0.jpg}
\caption{Traveling SeeMa-Scanner: Mounting the cameras and the projector}
\label{fig:setup0}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{images/setup1.jpg}
\caption{Traveling SeeMa-Scanner: Final setup}
\label{fig:setup1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.4\textwidth]{images/preferences_menu.png}
\caption{The "SMScanner $\rightarrow$ Preferences" menu}
\label{fig:preferences_menu}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{images/projected_pattern.png}
\caption{Projected pattern on the object after having pressed ``Calibration $\rightarrow$ Project Focussing Pattern''}
\label{fig:projected_pattern}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{images/calibration_target.jpg}
\caption{Calibration target}
\label{fig:calibration_target}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{images/calibration_tab.png}
\caption{GUI showing the ``Calibration'' tab}
\label{fig:calibration0}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{images/log_message1_.png}
\caption{Log message of camera calibration}
\label{fig:log_message1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{images/log_message2_.png}
\caption{Log message of rotation stage calibration}
\label{fig:log_message2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{images/successful_calibration.png}
\caption{GUI showing a successful calibration}
\label{fig:successful_calibration}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{images/pointclouds0.png}
\caption{GUI showing the calibration result in the ``Point Clouds'' tab}
\label{fig:pointclouds0}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{images/light.png}
\caption{Light condition/Shutter time: Not enough light/$16.666$ ms (left), good/$33.333$ ms (middle), too much light/$50.000$ ms (right).}
\label{fig:light}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{images/capture1.png}
\caption{GUI showing the ``Capture'' tab; captured images can be reviewed by clicking on the frames}
\label{fig:capture1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{images/capture0.png}
\caption{GUI showing the ``Capture'' tab; reconstructed sequences appear in black, not yet reconstructed ones in grey}
\label{fig:capture0}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{images/pointclouds1.png}
\caption{``Point Clouds'' tab with reconstructed point clouds}
\label{fig:pointclouds1}
\end{figure}
\clearpage
\chapter{Reconstructing a mesh surface}
Multiple point clouds can be merged into a single watertight mesh representation using Meshlab. Meshlab is available on the scanner computer, and is also freely available for download on multiple platforms. The basic steps involved in merging and reconstruction are outlined below. The input data consist of one or more sets of point clouds acquired with the SeeMaLab GUI. Note that if multiple object poses are desired (for complex geometries/blind spots, etc.), it is recommended to close and restart the GUI for each pose, to clear the captured sequences and free memory.
\begin{enumerate}
\item Load a set of point clouds by opening the \texttt{*.aln} file in Meshlab (``File $\rightarrow$ Open Project...''). See figure \ref{fig:meshlab0} for an illustration of one full set of scans loaded into Meshlab.
\item The PLY files contain XYZ and RGB values for all points. You will need to compute normals in order for the surface reconstruction to succeed. These normals can be estimated, and oriented consistently, by considering the camera viewpoint. Select each point cloud in turn, and for each, choose ``Filters $\rightarrow$ Point Sets $\rightarrow$ Compute Normals for Point Set''. Make sure the ``Flip normals...'' checkbox is ticked (see fig.\ \ref{fig:meshlab1}). Suitable neighbourhood values are on the order of $10$. You can visualise the estimated normals through the ``Render'' menu.
\item After estimating normals for all point clouds in a set, choose ``Filters $\rightarrow$ Mesh Layer $\rightarrow$ Flatten Visible Layers''. Make sure to retain unreferenced vertices, because at this point, none of the points are part of any triangle (see figure \ref{fig:meshlab2}). This process will alter all coordinates by applying the pose transformation to each point cloud before merging them.
\item Save the resulting merged point cloud. In the save dialog, make sure to include the normals in the output file (see fig. \ref{fig:meshlab3}).
\end{enumerate}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{meshlab0.png}
\caption{One full set of scans (9 point clouds covering $360^\circ$ in $40^\circ$ intervals).}
\label{fig:meshlab0}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.4\textwidth]{meshlab1.png}
\caption{Estimate normals, and orient them consistently towards the camera (positive z-axis).}
\label{fig:meshlab1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.25\textwidth]{meshlab2.png}
\caption{Flatten visible layers and retain ``unreferenced vertices'', i.e.\ points that are not part of any triangle.}
\label{fig:meshlab2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{meshlab3.png}
\caption{Save the merged point clouds, and include the estimated normals in the output file.}
\label{fig:meshlab3}
\end{figure}
If you have acquired multiple $360^\circ$ scans of your object in different positions, proceed as above for each set. Then you will need to align and merge these point clouds. Meshlab has manual coarse alignment and automated ICP alignment integrated. Note that the automatic alignment procedure in Meshlab requires high quality point normal estimates for all point clouds to succeed. If these are not available, the alignment process will fail without warnings or errors.
\begin{enumerate}
\item Load the point clouds of interest (``File $\rightarrow$ Import Mesh''). The imported point clouds will not be properly aligned. Open the alignment tool (the big yellow A tool button); see figure \ref{fig:meshlab4} for an image of this tool. ``Gluing'' in Meshlab means setting an initial rough alignment. You can ``glue'' the first mesh, and roughly ``glue'' the others to it by selecting a small number (minimum 4) of surface point correspondences with the mouse. When all point clouds have been ``glued'', you can initiate automatic fine alignment (group-wise ICP) by pressing ``Process''. A good alignment can be confirmed by selecting ``False colors'' and seeing a good mix of colours in the overlap areas.
\item Merge the aligned point clouds via ``Filters $\rightarrow$ Mesh Layer $\rightarrow$ Flatten Visible Layers''.
\end{enumerate}
\begin{figure}[h]
\centering
\includegraphics[width=.9\textwidth]{meshlab4.png}
\caption{The alignment tool in Meshlab.}
\label{fig:meshlab4}
\end{figure}
The next step is to reconstruct a surface from the merged point cloud. This can be done using the Poisson surface reconstruction built into Meshlab, accessible through ``Filters $\rightarrow$ Point Set $\rightarrow$ Surface Reconstruction: Poisson''. You will most probably have to vary the parameters of this step to obtain pleasing results for your particular data.
The full Poisson code is available at \url{http://www.cs.jhu.edu/~misha/Code/PoissonRecon/}, and is also installed on the scanner computer. The standalone software allows for finer control over the process, and can also remove mesh membranes with little point support. We refer to the documentation provided by the authors of the PoissonRecon code.
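As an illustration, a typical invocation could look as follows (the flag names depend on the PoissonRecon version installed; consult its documentation):
\begin{verbatim}
# Reconstruct a mesh from the merged, oriented point cloud; --density
# stores per-vertex support values needed for trimming afterwards.
PoissonRecon --in merged_points.ply --out mesh.ply --depth 10 --density
# Remove poorly supported membranes with the accompanying
# SurfaceTrimmer tool (the trim threshold is data dependent).
SurfaceTrimmer --in mesh.ply --out mesh_trimmed.ply --trim 7
\end{verbatim}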
The Poisson reconstruction algorithm does not keep colour information. In order to obtain a coloured mesh, one needs to re-project the per-point colour information from the full point cloud onto the mesh. This can be done in Meshlab through the ``Filters $\rightarrow$ Sampling $\rightarrow$ Vertex Attribute Transfer'' functionality.
\addcontentsline{toc}{section}{References}
{\setlength{\baselineskip}{0.75\baselineskip}
\begin{thebibliography}{99}
\bibitem{aanaes} Aanaes Henrik, 2014. `Lecture Notes on Computer Vision', DTU.
\bibitem{guhring} Guhring Jens, 2000. `Dense 3D surface acquisition by structured light using off-the-shelf components', Proceedings of SPIE Vol. 4309: Videometrics and Optical Methods for 3D Shape Measurement.
\bibitem{moreno} Moreno Daniel, Son Kilho \& Taubin Gabriel, 2015. `Embedded Phase Shifting: Robust Phase Shifting with Embedded Signals', Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2301--2309.
\bibitem{reich} Reich Carsten, Ritter Reinhold \& Thesing Jan, 1997. `White light heterodyne principle for 3D-measurement', Proceedings of SPIE Vol. 3100: Sensors, Sensor Systems, and Sensor Data Processing.
\end{thebibliography}
}
\end{document}