\documentclass[10pt,notitlepage]{report}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{url}
\usepackage{graphicx}
\usepackage{calc}
\usepackage{fullpage}

\usepackage{color}
\newcommand{\dolmes}[1]{\textcolor[rgb]{1,0.0,0}{#1}}

\newlength\myheight
\newlength\mydepth
\settototalheight\myheight{Xygp}
\settodepth\mydepth{Xygp}
\setlength\fboxsep{0pt}

% \renewcommand{\chaptermark}[1]{\markboth{#1}{}}
% \renewcommand{\sectionmark}[1]{\markright{\thesection\ #1}}

\title{The SeeMa Lab Structured Light Scanner}
\section{Compiling and Installing}
A default user account is used on the SeeMaLab scanner computers, and here the software is installed in a stable (tested) version. The software repository is checked out and built in the home folder (\texttt{{\textasciitilde}/seema-scanner}). An icon in the launcher bar links to this executable.

The software was developed using Qt, OpenCV and the Point Cloud Library (PCL).

In order to make modifications and test them (e.g. change some parameters in the reconstruction process), the SVN repository should be checked out and compiled in a separate user account. The software is linked against the default versions of Qt, OpenCV and PCL in the current Ubuntu LTS release. This ensures easy compilation and installation.

\section{GUI}
The GUI enables the user to perform calibration of the scanner and to acquire scan data. It is built in a modular fashion to allow new structured light strategies to be implemented. It is, however, supposed to be simple and stable, so please keep experimental builds in separate SVN branches.

GUI functionality heavily depends on Qt. Most other components, specifically those with Matlab wrappers, have minimal dependencies and can be used outside of the GUI framework.
\chapter{Practical scanning}
Please be very careful with this very expensive equipment, and be considerate: do not misplace any parts, and do not borrow any components of the scanner hardware.
The following guide explains the setup, and the steps involved in calibration and acquisition of a $360^\circ$ scan of an object.

\section{Setup}
In contrast to the SeeMa-Scanner located in the Image Lab, the Traveling SeeMa-Scanner first has to be assembled from the parts stored in a large box. Figures \ref{fig:setup0} and \ref{fig:setup1} illustrate the final setup of the Traveling SeeMa-Scanner. Take the following points into consideration:
\begin{itemize}
	\item Choose a black, non-shiny background (e.g. the provided black fabric). Cover (or even paint black) any shiny objects in the scan area such as screws, stands, etc. This ensures that only the object of interest is scanned.
	
	\item How to choose the distance between the projector and the circular rotation plate? This distance depends on the object size. Make sure the object is inside the field of view of both cameras and the projector for any rotation angle.
	
	\item How to choose the distance between the two cameras (the base line)? The closer the cameras, the better concavities can be scanned (i.e. there is less occlusion). However, this comes with a larger error in the determination of the point coordinates. As a rule of thumb, the base line should lie in the interval $[\frac{1}{3}x, 3x]$, where $x$ denotes the distance between the projector and the object (the working distance). Usually, we like to put the cameras rather close together.
\end{itemize}
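The base line rule of thumb above can be expressed as a quick sanity check. A minimal Python sketch (the function name and millimetre units are ours, not part of the scanner software):

```python
def baseline_ok(baseline_mm: float, working_distance_mm: float) -> bool:
    """Rule of thumb: the camera base line should lie in [x/3, 3x],
    where x is the projector-to-object working distance."""
    x = working_distance_mm
    return x / 3.0 <= baseline_mm <= 3.0 * x

# Example: with a working distance of 600 mm, base lines between
# 200 mm and 1800 mm satisfy the rule of thumb.
```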


\section{Calibration}\label{sec:calibration}
Calibration parameters consist of camera focal lengths, central points, lens distortion parameters, camera extrinsics (their relative position and angles), and the location and orientation of the rotation stage axis. These parameters are stored in the GUI, but in most cases it is recommended to perform a new calibration before acquiring new data. Also, the exact position of the cameras may be altered to better fit the object, in which case recalibration must be done. The calibration parameters can be exported into a \texttt{*.xml} file through the top bar menu. The global coordinate system, in which everything is expressed, coincides with that of the left camera.
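The rotation stage axis determined during calibration is what later aligns the individual views of a $360^\circ$ scan: points captured at stage angle $\theta$ are rotated about the axis (a point and a direction expressed in the left-camera frame) back into the reference position. A pure-Python sketch of that transform via Rodrigues' rotation formula (the function and argument names are illustrative; this is not the scanner's API):

```python
import math

def rotate_about_axis(p, axis_point, axis_dir, angle_rad):
    """Rotate point p about the line through axis_point with unit
    direction axis_dir by angle_rad (Rodrigues' rotation formula)."""
    # translate so the axis passes through the origin
    v = [p[i] - axis_point[i] for i in range(3)]
    k = axis_dir
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    kxv = [k[1]*v[2] - k[2]*v[1],   # cross product k x v
           k[2]*v[0] - k[0]*v[2],
           k[0]*v[1] - k[1]*v[0]]
    kdotv = sum(k[i] * v[i] for i in range(3))
    r = [v[i]*c + kxv[i]*s + k[i]*kdotv*(1.0 - c) for i in range(3)]
    # translate back
    return [r[i] + axis_point[i] for i in range(3)]
```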

\begin{enumerate}
	\item The projector is turned on using the remote control or the touch interface on its top. Make sure the proper HDMI input is chosen as source. Remember to turn on the projector before the scanner computer (otherwise the computer screen is projected)!
	
	\item The GUI application ``SeeMaLab 3D Scanner'' is started on the scanner computer by clicking on the icon \raisebox{-\mydepth}{\fbox{\includegraphics[height=\myheight]{images/icon1.png}}}. Some software settings can be altered through the ``SMScanner $\rightarrow$ Preferences'' menu, if necessary (see Figure \ref{fig:preferences_menu}). For the calibration part, choose the ``Calibration'' tab.

	\item Make sure the projector is focused on the plane going approximately through the rotation axis of the rotation stage. This can be checked by putting the object to be scanned on the rotation stage and then pressing ``Calibration $\rightarrow$ Project Focussing Pattern''. Look at the object directly, NOT at the GUI: if the projected pattern is not sharp on the object, focus the projector by turning the focus ring, which is located at the front of the projector lens. When using phase-shifting patterns (see Section \ref{sec:scan}), the projector usually does not need to be extremely well focused in order to obtain a good scan -- however, it is good practice to do so.
	
	\item Additionally, ensure that both cameras are in focus at the middle of the rotation stage. To do so, look at the camera images in the GUI, where you can zoom in by turning the mouse wheel, and check whether the projected pattern is sharp on the object (see Figure \ref{fig:projected_pattern}). If needed, the cameras can be focused by turning the corresponding focus ring, which is located at the front of the camera's lens.
	
	\item Make sure the size of the calibration target (checkerboard plate) fits the object. Then press ``SMScanner $\rightarrow$ Preferences'' and adjust the parameters of the calibration target (``Calibration pattern'' and ``Size''). To determine the calibration pattern of the checkerboard plate, do NOT count the numbers of squares in the horizontal and vertical directions, but the numbers of inner edges (saddle points). The ``Size'' parameter is given by the side length of a square. In Figure \ref{fig:calibration_target}, the calibration pattern is given by $13\times 22$ and the side length by $15$ mm.
	
	\item Position the calibration target on the circular rotation plate parallel to the projector -- make sure the rotation axis approximately intersects the center of the calibration target -- and inside the field of view of both cameras and the projector. You can for example use four screws to mount the target on the rotation plate. White light will be provided by the projector for guidance. The GUI should look similar to Figure \ref{fig:calibration0}.
	
	\item\label{item:light} Optimally, the camera images in the GUI show a rather grayish calibration plate, and the background is totally black. If any pixels of the calibration plate are completely white, there is too much light. There are two options for adjusting the light:
	\begin{itemize}
		\item Adjust the lens aperture by turning the aperture ring on the camera. The narrower the aperture, the less light reaches the image plane, and vice versa.
		\item Adjust the shutter time in the ``SMScanner $\rightarrow$ Preferences'' menu. The shutter time has to be a multiple of 16.666 milliseconds per image: 16.666, 33.333, 50.000, 66.666, 83.333, 99.996, 116.666, 133.333, etc. (important: type 3 digits after the decimal point!). This is due to the fact that the projector has a frame rate of 60 images per second, which corresponds to a projection time of 1/60 seconds, i.e. 16.666 milliseconds, per image. The longer the shutter time, the more light reaches the image plane, and vice versa. Note that changing the shutter time does not affect the calibration, i.e. the shutter time can also be changed after a calibration has been performed, without having to recalibrate!
	\end{itemize}
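The constraint above (shutter times that are multiples of the projector's 16.666 ms frame period) can be checked programmatically. A small sketch; the tolerance is an illustrative choice of ours, wide enough to accept the rounded values listed above:

```python
FRAME_PERIOD_MS = 16.666  # projector frame period at 60 Hz, as entered in the GUI

def shutter_time_valid(shutter_ms: float, tol_ms: float = 0.005) -> bool:
    """True if shutter_ms is, within tolerance, a positive multiple
    of the 16.666 ms projector frame period."""
    n = round(shutter_ms / FRAME_PERIOD_MS)
    return n >= 1 and abs(shutter_ms - n * FRAME_PERIOD_MS) <= tol_ms
```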
	
	\item SeeMaLab-Scanner: the darkening curtain is lowered to improve the signal-to-noise ratio and to avoid artifacts stemming from ambient lighting.\\ Traveling SeeMa-Scanner: the ambient light is usually no problem; otherwise darken the room.
	
	\item\label{item:batch1} A number of calibration sets need to be acquired. The minimum is 3 sets, and more is beneficial. The calibration pattern needs to be fully visible and equally bright in both cameras, and the viewing angle must not be too shallow. Press ``Batch acquisition'' in order to acquire a reasonable number of calibration sets using default parameters. Figure \ref{fig:calibration0} reveals that default acquisition is at angles $340^\circ, 338^\circ, \dots, 20^\circ$.

	\item\label{item:batch2} Depending on the object's poses to be scanned (see Section \ref{sec:scan}), you might consider turning the calibration target by 90 degrees and redoing calibration step \ref{item:batch1}.

	\item\label{item:batch3} In order to improve the calibration, additional single or batch acquisitions can be done as follows: try to cover regions in both cameras' fields of view that are not covered by the acquisition described in calibration steps \ref{item:batch1} and \ref{item:batch2}, e.g. the edges, the upper/lower part, and different depths (check the camera images in the GUI!). You could for example place the calibration target at the very back or front of the rotation plate, put it on the table, lean it against the wall, or put it on something. You might also consider moving the rotation plate away, in which case this calibration step should be performed before steps \ref{item:batch1} and \ref{item:batch2}.\footnote{For inspiration, you might have a look at \url{http://www.vision.caltech.edu/bouguetj/calib_doc/}, where different calibration examples are shown.} Note that the calibration target has to be inside the field of view of at least one of the cameras!
	
	\item All the acquired images are listed above the ``Calibrate Camera'' and ``Calibrate Rotation Stage'' buttons. After acquisition, go through the steps described below in order to calibrate both the cameras and the rotation stage:

	\begin{enumerate}
	\item Mark all images to be used for camera calibration (usually all images). By clicking the ``Calibrate Camera'' button, the calibration parameters are automatically determined. The log message (see Figure \ref{fig:log_message1}) will show the calibration parameters and several errors which measure the quality of the calibration (e.g. reprojection errors, and focal length and lens distortion uncertainties).

	\item Mark all images to be used for the determination of the rotation axis, i.e. (usually all) images acquired in calibration step \ref{item:batch1}. Alternatively, you can also choose the images acquired in step \ref{item:batch2}, but stick to images acquired in only one of the two steps! By clicking the ``Calibrate Rotation Stage'' button, the rotation axis is automatically determined. The log message (see Figure \ref{fig:log_message2}) shows the calibration parameters and the error.
	\end{enumerate}
	
	\item A successful calibration goes along with a colorful pattern on the calibration target, as shown in Figure \ref{fig:successful_calibration}. If the calibration fails for a calibration set, that set is automatically ignored; thus it does not matter if the calibration is not successful for a few sets. In addition, the calibration result can be examined by changing to the ``Point Clouds'' tab in the GUI (see Figure \ref{fig:pointclouds0}). The left and right cameras are represented by coloured coordinate systems (the viewing direction is the positive z-axis, y points down, x to the right). The rotation axis, as determined by the calibration procedure, is shown as a white line.

	\item The calibration parameters can be saved by pressing ``Calibration $\rightarrow$ Export Parameters'', and they can be loaded by pressing ``Calibration $\rightarrow$ Import Parameters''.
\end{enumerate}
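The reprojection errors reported in the calibration log are essentially root-mean-square distances (in pixels) between the detected checkerboard corners and the corners reprojected through the estimated calibration parameters. A minimal sketch of this metric (illustrative only, not the scanner's implementation):

```python
import math

def rms_reprojection_error(detected, reprojected):
    """RMS pixel distance between detected corner positions and the
    positions reprojected using the calibration parameters.
    Both arguments are lists of (x, y) tuples of equal length."""
    assert len(detected) == len(reprojected) and detected
    sq = [(dx - rx) ** 2 + (dy - ry) ** 2
          for (dx, dy), (rx, ry) in zip(detected, reprojected)]
    return math.sqrt(sum(sq) / len(sq))
```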


\section{Making a 360 degree scan}\label{sec:scan}
Image acquisition consists of projecting a sequence of patterns onto the object, which are then converted to depth values by means of the specific algorithm. You can choose among different pattern modes in the ``SMScanner $\rightarrow$ Preferences'' menu:
%
\begin{itemize}
\item Gray Coding \cite{aanaes} %GrayCode
\item Gray Coding Horizontal+Vertical (experimental)\footnote{This implementation of Gray encoding uses horizontal and vertical stripes, which adds some encoding redundancy, but avoids interpolation effects from rectifying homographies.} \cite{aanaes} %GrayCodeHorzVert
\item Phase Shifting 2 frequency heterodyne\footnote{Different from the paper, it uses only two different frequencies.} \cite{reich} %PhaseShiftTwoFreq
\item Phase Shifting 3 frequency (experimental) \cite{reich} %PhaseShiftThreeFreq
\item Phase Shifting 2 frequency horz.+vert. (experimental)\footnote{Based on Phase Shifting 2 frequency heterodyne, but uses horizontal and vertical fringes, which adds some encoding redundancy, but avoids interpolation effects from rectifying homographies.} \cite{reich} %PhaseShiftTwoFreqHorzVert
\item Embedded Phase Shifting (experimental) \cite{moreno} %PhaseShiftEmbedded
\item Line Shifting \cite{guhring} %LineShift
\end{itemize}
%
From experience, we know that the phase shifting algorithm works well with many objects, so one might want to start with this algorithm.

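To give an idea of how the phase shifting modes turn captured images into correspondences: for a classic three-step pattern, each camera pixel observes three intensities of a shifted sinusoid, from which the fringe phase (and hence the projector column) is recovered. A simplified single-pixel sketch -- the scanner's own implementations (two-frequency heterodyne etc.) are more elaborate:

```python
import math

def decode_three_step(i1: float, i2: float, i3: float) -> float:
    """Recover the fringe phase at one pixel from three intensities
    I_k = A + B*cos(phi + (k - 2)*2*pi/3), k = 1, 2, 3.
    Returns phi in (-pi, pi]; A (ambient) and B (modulation) cancel."""
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```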
Depending on the surface complexity of the scan object (blind spots, holes, details, etc.), multiple $360^\circ$ scans may be necessary. In that case, the following procedure is carried out multiple times with the object in different orientations (poses) in order to cover the whole surface and capture all details. Consider changing the rotation angle for a better result. In order to obtain a good quality scan, the number of poses, as well as the poses and rotation angles used for the scanning, have to be carefully chosen.
\begin{enumerate}
	\item Choose the ``Capture'' tab in the GUI.
	 
	\item The scan object is now placed on the rotation plate such that it is visible in both cameras. SeeMaLab-Scanner: lower the darkening curtain.
	
	\item Check the light conditions: again, the object should appear grayish with a completely black background (see Figure \ref{fig:light}). If necessary, adjust the light conditions, preferably by changing the shutter time as described in the calibration part (see Section \ref{sec:calibration}, step \ref{item:light}), since recalibration is not needed.
	
	\item Press ``Single Capture'' or ``Batch Capture'' in the GUI in order to scan the object.
	
	\item Sequences of patterns are projected onto the object, and images are acquired. The captured images can be reviewed by clicking on the frames (see Figure \ref{fig:capture1}). Captured sequences are automatically reconstructed; the name of a reconstructed sequence appears black, otherwise it is grey (see Figure \ref{fig:capture0}).
	
	\item The results can be inspected by choosing the ``Point Clouds'' tab in the GUI (see Figure \ref{fig:pointclouds1}). In order to zoom in at a specific point, hover the mouse over this point and press F. Single point clouds can be shown (ticked) or hidden (unticked).
	
	\item All data can be exported from the GUI program by means of the top bar menus. By exporting the point clouds into a folder (``Point Clouds $\rightarrow$ Export Point Clouds''), a \texttt{*.aln} file is stored alongside them, which contains pose information in the global coordinate space that aligns the point clouds correctly relative to each other. The captured images can be exported by pressing either ``Capture $\rightarrow$ Export Sequences'' (whole sequence) or ``Capture $\rightarrow$ Export White Frames'' (no images showing projected patterns on the object).
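For downstream processing it can be useful to read the exported \texttt{*.aln} file yourself. We assume here the MeshLab-style layout (a cloud count; then, per cloud, its filename, a \texttt{\#} separator and a $4\times4$ pose matrix; a terminating \texttt{0}) -- verify this against an actual exported file before relying on it:

```python
def read_aln(path):
    """Parse a MeshLab-style .aln file into {filename: 4x4 matrix}.
    Assumed layout: count; then per cloud: name, '#', four matrix rows."""
    with open(path) as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    poses = {}
    n = int(lines[0])
    idx = 1
    for _ in range(n):
        name = lines[idx]; idx += 1
        assert lines[idx] == "#"; idx += 1          # separator before the matrix
        rows = [[float(v) for v in lines[idx + r].split()] for r in range(4)]
        idx += 4
        poses[name] = rows
    return poses
```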
\end{enumerate}

It is good practice to use the following structure:
\begin{itemize}
	\item Create a folder for each object to scan (e.g. \texttt{owl}).
	\item For each object, create a folder for each pose (e.g. \texttt{pose0} and \texttt{pose1} when using 2 poses).
	\item For each pose, save the images (e.g. in folders \texttt{sequence\_0}, ..., \texttt{sequence\_8} when using a step size of 40 degrees) and the point clouds (e.g. \texttt{pointcloud\_0.ply}, ..., \texttt{pointcloud\_8.ply}). These are the default names.
	\item Save the calibration file for each object, or for each pose in case the scanner had to be recalibrated between poses (e.g. \texttt{cal.xml}).
\end{itemize}
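The suggested layout can be created up front, for example with a short script (the object name \texttt{owl} and the pose count are examples; sequences and point clouds exported from the GUI are then saved inside the pose folders):

```python
from pathlib import Path

def make_scan_layout(root: str, obj: str, n_poses: int = 2) -> None:
    """Create <root>/<obj>/pose<i>/ folders, one per pose."""
    for i in range(n_poses):
        (Path(root) / obj / f"pose{i}").mkdir(parents=True, exist_ok=True)
```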

\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/setup0.jpg}
	\caption{Traveling SeeMa-Scanner: Mounting the cameras and the projector}
	\label{fig:setup0}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/setup1.jpg}
	\caption{Traveling SeeMa-Scanner: Final setup}
	\label{fig:setup1}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.4\textwidth]{images/preferences_menu.png}
	\caption{The ``SMScanner $\rightarrow$ Preferences'' menu}
	\label{fig:preferences_menu}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/projected_pattern.png}
	\caption{Projected pattern on the object after pressing ``Calibration $\rightarrow$ Project Focussing Pattern''}
	\label{fig:projected_pattern}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/calibration_target.jpg}
	\caption{Calibration target}
	\label{fig:calibration_target}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/calibration_tab.png}
	\caption{GUI showing the ``Calibration'' tab}
	\label{fig:calibration0}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/log_message1_.png}
	\caption{Log message of camera calibration}
	\label{fig:log_message1}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/log_message2_.png}
	\caption{Log message of rotation stage calibration}
	\label{fig:log_message2}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/successful_calibration.png}
	\caption{GUI showing a successful calibration}
	\label{fig:successful_calibration}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/pointclouds0.png}
	\caption{GUI showing the calibration result in the ``Point Clouds'' tab}
	\label{fig:pointclouds0}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=\textwidth]{images/light.png}
	\caption{Light condition/shutter time: not enough light/$16.666$ ms (left), good/$33.333$ ms (middle), too much light/$50.000$ ms (right)}
	\label{fig:light}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/capture1.png}
	\caption{GUI showing the ``Capture'' tab}
	\label{fig:capture1}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/capture0.png}
	\caption{GUI showing the ``Capture'' tab}
	\label{fig:capture0}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{images/pointclouds1.png}
	\caption{``Point Clouds'' tab with reconstructed point clouds}
	\label{fig:pointclouds1}
\end{figure}
\clearpage

The next step is to reconstruct a surface from a point cloud. This can be done using the Poisson surface reconstruction built into Meshlab, accessible through ``File $\rightarrow$ Point Set $\rightarrow$ Surface Reconstruction: Poisson''. You will most likely have to vary the parameters of this step to obtain good results for your particular data.

The full Poisson code is available at \url{http://www.cs.jhu.edu/~misha/Code/PoissonRecon/} and is also installed on the scanner computer. It allows finer control over the reconstruction process, including the removal of mesh membranes with little point support. We refer to the documentation provided by the authors of the PoissonRecon code.

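
For batch processing, the PoissonRecon tools can also be driven from a script. The sketch below is only an illustration: the file names are hypothetical, the flag names (\texttt{--in}, \texttt{--out}, \texttt{--depth}, \texttt{--density}, \texttt{--trim}) follow the PoissonRecon distribution's documentation, and the numeric values are data dependent, so check them against the version actually installed.

```python
import subprocess

def poisson_pipeline(cloud, mesh, trimmed, depth=10, trim=7, dry_run=False):
    """Reconstruct a mesh with PoissonRecon, then cut away weakly
    supported membranes with SurfaceTrimmer.

    --density makes PoissonRecon store a per-vertex sample density,
    which SurfaceTrimmer uses as the trimming criterion.
    """
    commands = [
        ["PoissonRecon", "--in", cloud, "--out", mesh,
         "--depth", str(depth), "--density"],
        ["SurfaceTrimmer", "--in", mesh, "--out", trimmed,
         "--trim", str(trim)],
    ]
    for cmd in commands:
        if dry_run:
            print(" ".join(cmd))  # only show the command lines
        else:
            subprocess.run(cmd, check=True)
    return commands

poisson_pipeline("cloud.ply", "mesh.ply", "trimmed.ply", dry_run=True)
```

A higher \texttt{--depth} recovers more surface detail at the cost of run time and noise sensitivity; the \texttt{--trim} threshold must be tuned per data set.
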

The Poisson reconstruction algorithm does not preserve colour information. To obtain a coloured mesh, the per-point colour information of the full point cloud must be re-projected onto the mesh. This can be done in Meshlab through the ``Filters $\rightarrow$ Sampling $\rightarrow$ Vertex Attribute Transfer'' functionality.
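
The principle behind this filter is a nearest-neighbour transfer: each mesh vertex receives the colour of the closest point in the full cloud. A minimal, self-contained Python sketch (with made-up data, not part of the scanner software) illustrates it:

```python
import math

def transfer_colours(cloud_points, cloud_colours, mesh_vertices):
    """Assign to each mesh vertex the colour of its nearest cloud point.

    cloud_points:  list of (x, y, z) tuples from the full point cloud
    cloud_colours: list of (r, g, b) tuples, parallel to cloud_points
    mesh_vertices: list of (x, y, z) tuples from the reconstructed mesh
    """
    colours = []
    for v in mesh_vertices:
        # Index of the cloud point closest to vertex v.
        best_i = min(range(len(cloud_points)),
                     key=lambda i: math.dist(v, cloud_points[i]))
        colours.append(cloud_colours[best_i])
    return colours

# Tiny illustration: two coloured points, one mesh vertex near each.
points   = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
colours  = [(255, 0, 0), (0, 0, 255)]
vertices = [(0.1, 0.0, 0.0), (0.9, 0.1, 0.0)]
print(transfer_colours(points, colours, vertices))
# -> [(255, 0, 0), (0, 0, 255)]
```

The brute-force search is used for clarity only; for point clouds of realistic size, a k-d tree (e.g.\ \texttt{scipy.spatial.cKDTree}) makes the lookup fast enough in practice.
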


\addcontentsline{toc}{section}{References}
{\setlength{\baselineskip}{0.75\baselineskip}
\begin{thebibliography}{99}

\bibitem{aanaes} Aanaes Henrik, 2014. `Lecture Notes on Computer Vision', DTU.

\bibitem{guhring} Guhring Jens, 2000. `Dense 3D surface acquisition by structured light using off-the-shelf components', Proceedings of SPIE Vol. 4309: Videometrics and Optical Methods for 3D Shape Measurement.

\bibitem{moreno} Moreno Daniel, Son Kilho \& Taubin Gabriel, 2015. `Embedded Phase Shifting: Robust Phase Shifting with Embedded Signals', Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2301--2309.

\bibitem{reich} Reich Carsten, Ritter Reinhold \& Thesing Jan, 1997. `White light heterodyne principle for 3D-measurement', Proceedings of SPIE Vol. 3100: Sensors, Sensor Systems, and Sensor Data Processing.

\end{thebibliography}
}

\end{document}