\chapter{The scanner}
\section{Getting started}
This section describes the main hardware and software parts of the system.

If your main objective is to digitise objects, you should be able to do so on your own by reading the chapter ''Practical Scanning'', which gives a step-by-step recipe for a complete object scan and reconstruction.

Technical projects and contributions are very welcome. Please get in touch with the authors if you plan any alterations to the hardware, or would like write access to the SVN repository containing the software. The public read-access URL of the SeeMaLab scanner repository is \url{http://svn.compute.dtu.dk/svn/seema-scanner/}.

\section{Hardware parts}
\begin{table}
The SeeMa-Scanner uses a standard commercial Full-HD projector. This is very cost-effective, but brings a few challenges. The projector is configured to perform minimal image processing, and the HDMI port is set to ''Notebook'' mode, which gives the lowest possible input lag (approx. 80 ms). The projector contains a DLP micromirror array, which produces binary patterns at high refresh rates (kHz range). Intermediate grey values are created by altering the relative on-off cycles of each micromirror. Truthful capture of grey values with the camera therefore requires an integration time that is a multiple of the projector's 16.7 ms refresh period.
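
For illustration, the integration time can be chosen as the smallest multiple of the refresh period that covers the desired exposure. The following helper is our own sketch, not part of the scanner code:
\begin{verbatim}
#include <cmath>
#include <iostream>

// Round a requested exposure up to a whole number of projector
// refresh periods (16.7 ms), so grey values integrate over full
// mirror flip cycles.
double alignedExposureMs(double requestedMs, double refreshMs = 16.7)
{
    return std::ceil(requestedMs / refreshMs) * refreshMs;
}

int main()
{
    std::cout << alignedExposureMs(25.0) << " ms\n"; // prints 33.4 ms
    return 0;
}
\end{verbatim}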

Commercial projectors do not have the linear response that truthful capture of grey-value patterns would require. Gamma can be set to the lowest possible value of $1.6$, and if this is matched in the graphics card configuration of the scan computer, a close to linear response can be achieved. Using only binary patterns avoids this problem altogether.
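
Should grey-value patterns ever be needed, the power-law response can also be pre-compensated in software. A minimal sketch, assuming an ideal response $I_{out} = I_{in}^{\gamma}$ with $\gamma = 1.6$ (our own illustration, not scanner code):
\begin{verbatim}
#include <cmath>

// Pre-distort a pattern value in [0,1] so the projected light output
// becomes approximately linear, assuming the projector applies
// I_out = I_in^gamma with gamma = 1.6.
double gammaPrecorrect(double value, double gamma = 1.6)
{
    return std::pow(value, 1.0 / gamma);
}
\end{verbatim}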

\subsection{Cameras}
These are high resolution 9 MPx industrial CCD colour cameras. While colour information is usually not necessary in structured light, it allows us to texture the scanned object in full colour. The white balance used in the program code was chosen ad hoc to approximately match the colour profile of the projector; capturing true colours would require a colour calibration.

\subsection{Rotation stage}
This is a so-called micro-rotation stage, commonly used in high precision photonics research and production. A larger diameter plate was attached. The rotation stage has a stepper motor which drives a worm gear, giving high precision and very high repeatability. Note that the rotation stage does not have an optical encoder; it is reset to 0 degrees in software at each program start. The motor controller can be configured for different levels of microstepping and motor current. A higher motor current provides more torque and less risk of missed steps. The load on the plate should not exceed 20 kg and should be centred on the rotation axis. Objects can be stabilised on the plate using e.g. modelling clay.

\subsection{Calibration target}
A calibration target is also part of the scanner. It was produced by printing a checkerboard in vector format and gluing it onto a thick piece of float glass using spray adhesive. The target is asymmetrical, which is necessary to uniquely match chessboard corners in both cameras. The calibration target was designed to fill the space available for scan objects. If you need a smaller scan area, a smaller calibration target would be beneficial.

\section{Software components}
The SeeMaLab 3D scanner has a full graphical user interface for calibration and scanning. The output from this software is a number of colour point clouds in the PLY format, along with a Meshlab alignment project file (file suffix \texttt{.aln}), which contains the orientation information provided by the rotation stage parameters. This allows the user to import the point clouds for further processing in Meshlab, e.g. to produce a full mesh model of the surface. The rotation axis is determined during calibration, which means that usually no manual or algorithm-assisted alignment of partial surfaces is necessary.

To get fine-grained control over the scan procedure, the user can modify the source code of the GUI application, or use the supplied Matlab wrappers. These wrappers provide basic functionality to capture images with the cameras, project a specific pattern with the projector, or rotate the rotation stage to a specific position. Using these components, a full structured light scanner can be implemented in Matlab with full design freedom.
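
As a C++ illustration of how these components fit together, the following sketch shows one possible capture loop. The class interfaces here are hypothetical stand-ins, not the actual signatures from the repository:
\begin{verbatim}
#include <chrono>
#include <thread>
#include <vector>

// Hypothetical stand-in interfaces, for illustration only; the real
// classes live in the seema-scanner repository.
struct Projector     { void displayTexture(int /*patternId*/) {} };
struct Camera        { void trigger() {}
                       std::vector<unsigned char> getFrame() { return {}; } };
struct RotationStage { void moveAbsolute(float /*degrees*/) {} };

static void sleepMs(int ms)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
}

// One full sweep: at every stage position, project each pattern, wait
// out the projector/camera input lag, then software-trigger both
// cameras before collecting the images.
void sweep(Projector& projector, Camera& cam0, Camera& cam1,
           RotationStage& stage, int numPatterns)
{
    for (float angle = 0.0f; angle < 360.0f; angle += 30.0f) {
        stage.moveAbsolute(angle);
        for (int p = 0; p < numPatterns; ++p) {
            projector.displayTexture(p);
            sleepMs(200);      // allow for input lag
            cam0.trigger();    // trigger both cameras first ...
            cam1.trigger();
            cam0.getFrame();   // ... then collect the images
            cam1.getFrame();
        }
    }
}
\end{verbatim}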

\section{GUI}
The scanner GUI was developed using Qt, OpenCV and the Point Cloud Library (PCL). It enables the user to calibrate the scanner and to acquire scan data. It is built in a modular fashion, to allow for new structured light strategies to be implemented. It is, however, supposed to be simple and stable, so please keep experimental builds in separate SVN branches.

GUI functionality heavily depends on Qt. For interoperability with PCL, it is necessary to build against Qt 4.x. Most other components, specifically those with Matlab wrappers, have minimal dependencies and can be used outside of the GUI framework.

The GUI is installed on the SeeMaLab scanner computer under the seema-scanner user account. The software repository is checked out and built in the home folder (\texttt{{\textasciitilde}/seema-scanner}). An icon in the launcher bar links to the executable.
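
For a fresh setup, the checkout follows the usual SVN workflow. The build line below is an assumption (a qmake-based Qt project); consult the repository for the authoritative build instructions:
\begin{verbatim}
svn checkout http://svn.compute.dtu.dk/svn/seema-scanner/ ~/seema-scanner
cd ~/seema-scanner
qmake && make   # assumed build steps; see the repository documentation
\end{verbatim}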

In the GUI program, the user can open a preference window to select a pattern sequence and configure the timing parameters. These preferences are stored in \texttt{{\textasciitilde}/.config/DTU/seema-scanner.conf}. Some preferences are not exposed in the GUI (e.g. calibration board size and field count), but can be edited manually in this file before the program is started.
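
The file uses the Qt settings (INI) format. The entries below are purely hypothetical placeholders that only illustrate the layout; consult the actual file for the real key names:
\begin{verbatim}
[calibration]
; hypothetical keys, for illustration only
boardFieldSizeMm=10
boardFieldCountX=22
boardFieldCountY=13
\end{verbatim}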

\section{\texttt{Projector} Class}
This class provides a fullscreen OpenGL context and the ability to project any texture. The window/context creation is operating system dependent. It works very well on Linux with the proprietary nVidia drivers, as found on the scan computer. In order to get a completely independent screen output, which does not interfere with the window manager, the projector needs to be set up as a separate X screen in \texttt{xorg.conf}. The absolute position of this second X screen must leave a small gap to the primary screen. This gives a secondary screen which is not recognised by Compiz (Unity in Ubuntu), but which can be accessed through the \texttt{Projector} class.
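
A minimal sketch of the relevant \texttt{xorg.conf} idea is shown below. Identifiers and coordinates are illustrative; the actual configuration depends on the installed driver and hardware:
\begin{verbatim}
# Illustrative only: the projector as a second X screen, placed with
# a gap to the right of a 1920 px wide primary screen (2000 > 1920).
Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "PrimaryScreen"   0    0
    Screen 1 "ProjectorScreen" 2000 0
EndSection
\end{verbatim}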

\section{\texttt{Camera} Class}
An abstraction from the individual industrial camera APIs was created in order to ease replacement and enhance modularity. A concrete implementation for Point Grey cameras is provided. The program is currently designed for ''software triggering'' of the cameras. Due to substantial input lag in the projector and cameras, a pause must be made in program execution between projecting a pattern and capturing the image. Close temporal synchronisation of the two cameras is achieved by first calling the trigger method on both cameras, and then collecting the images.

\section{\texttt{RotationStage} Class}
This class is a C++ abstraction of the Newmark motion control API. The C API essentially wraps serial commands sent over serial-over-USB; full documentation is provided on the Newmark website. An important thing to consider is the latency of many of these calls. Specifically, reading and writing ''hardware settings'' such as the microstep level and motor current takes a considerable amount of time. The motor controller's inherent positional unit is the number of microsteps. This can be converted to an angular position, $\alpha$, by means of the following formula:
\[
	\alpha = \frac{\textrm{XPOS} \cdot 1.8}{\textrm{MS} \cdot 72} \quad ,
\]
where XPOS is the rotation controller's position value, $1.8$ is the number of degrees per full step on the motor axis, MS is the current microstep setting, and $72$ is the worm-gear ratio. The \texttt{RotationStage} class interface abstracts from this and lets you rotate to a specific angle between $0$ and $360$ degrees using the shortest direction.
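
As a worked example, at a microstep setting of $\textrm{MS} = 8$, a quarter turn ($\alpha = 90$) corresponds to $\textrm{XPOS} = 90 \cdot 8 \cdot 72 / 1.8 = 28800$ microsteps, and a single microstep moves the plate by $1.8 / (8 \cdot 72) = 0.003125$ degrees. A sketch of the conversion (our own illustration, not the actual class interface):
\begin{verbatim}
#include <cmath>

// Convert between the controller's microstep count (XPOS) and the
// plate angle in degrees, following the formula above.
double xposToDegrees(long xpos, int microsteps)
{
    return xpos * 1.8 / (microsteps * 72.0);
}

long degreesToXpos(double degrees, int microsteps)
{
    return std::lround(degrees * microsteps * 72.0 / 1.8);
}
\end{verbatim}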

In order for the SeeMaLab computer to communicate with the rotation stage controller, appropriate udev permissions must be configured.
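
A typical rule file in \texttt{/etc/udev/rules.d/} could look as follows. The vendor and product IDs are placeholders, not necessarily the controller's actual IDs:
\begin{verbatim}
# 99-rotation-stage.rules -- illustrative IDs, replace with real ones
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", MODE="0666"
\end{verbatim}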

\chapter{Practical scanning}
Please be very careful with this expensive equipment, and be considerate: do not misplace any parts and do not borrow any components of the scanner hardware.
The following guide explains the steps involved in calibration and acquisition of a $360^\circ$ scan of an object.

Calibration parameters consist of the camera focal lengths, central points, lens distortion parameters, camera extrinsics (their relative position and angles), and the location and orientation of the rotation stage axis. These parameters are stored in the GUI, but in most cases it is recommended to perform a new calibration before acquiring new data. Also, the exact position of the cameras may be altered to better fit the object, in which case a recalibration must be done. The calibration parameters can be exported into a \texttt{*.xml} file through the top bar menu. The global coordinate system, in which everything is expressed, coincides with that of the left camera.

Image acquisition consists of projecting a sequence of patterns onto the object, which are then converted to depth values by means of the specific structured light algorithm.

\section{Calibration}
\begin{enumerate}
	\item The GUI application is started on the scanner computer. The projector is turned on using the remote control or the touch interface on its top. Make sure the proper HDMI input is chosen as source. Some software settings can be altered through the ''File $\rightarrow$ Preference'' menu, if necessary (the GUI needs to be restarted after altering these settings).
	\item Position the calibration target on the circular rotation plate, inside the field of view of the cameras and projector. White light will be provided from the projector for guidance. The GUI will appear as shown in figure \ref{fig:calibration0}.
	\item The darkening curtain is lowered to improve the signal-to-noise ratio and to avoid artifacts arising from ambient lighting.
	\item A number of calibration sets need to be acquired. The minimum is 3 sets, and more is beneficial. The calibration pattern needs to be fully visible and equally bright in both cameras. The viewing angle must not be too shallow. The preset ''batch acquisition'' gives a reasonable number of calibration sets.
	\item After acquisition, individual calibration sets can be re-examined. Calibration parameters are automatically determined by clicking the ''Calibrate'' button. This procedure can take up to a few minutes. The terminal output will show reprojection errors, which measure the quality of the calibration.
	\item The calibration result can be examined by changing to the ''Point Clouds'' tab in the GUI (see fig. \ref{fig:pointclouds0}). The left and right cameras are represented by coloured coordinate systems (the viewing direction is the positive z-axis, y points down, x to the right). The rotation axis, as determined by the calibration procedure, is shown as a white line section.
\end{enumerate}

\section{Making a 360 degree scan}
Depending on the surface complexity (blind spots, etc.), multiple $360^\circ$ scans may be necessary. In that case, the following procedure is performed multiple times with the object in different orientations.
\begin{enumerate}
	\item Choose the ''Capture'' tab in the GUI -- see figure \ref{fig:capture0} for an illustration.
	\item The scan object is now placed on the rotation plate such that it is visible in both cameras, and the darkening curtain is again lowered.
	\item Press ''Single Capture'' or ''Batch Capture'' in the GUI.
	\item Sequences of patterns are projected onto the object. The captured images can be reviewed, and one or multiple captured sequences reconstructed using the ''Reconstruct'' button.
	\item The results will show up in the ''Point Clouds'' tab. Single point clouds can be shown or hidden, see figure \ref{fig:pointclouds1}.
	\item All data can be exported from the GUI program by means of the top bar menus. By exporting the point clouds into a folder, a \texttt{*.aln} file is stored alongside them, which contains pose information in the global coordinate space, aligning the point clouds correctly relative to each other (the layout of this file is sketched after this list).
\end{enumerate}
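
To our understanding, the \texttt{*.aln} format is plain text: the first line gives the number of point clouds; each entry then names a PLY file, followed by a ''\#'' line and its $4\times4$ pose matrix; a final ''0'' line ends the file. A sketch with illustrative file names and poses (the second pose is a $30^\circ$ rotation about the y-axis):
\begin{verbatim}
2
pointcloud_0.ply
#
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
pointcloud_1.ply
#
0.866 0 0.5   0
0     1 0     0
-0.5  0 0.866 0
0     0 0     1
0
\end{verbatim}
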
\begin{figure}[H]
	\centering
		\includegraphics[width=.7\textwidth]{calibration0.png}
	\caption{The GUI showing the ''Calibration'' tab.}
	\label{fig:pointclouds1}
\end{figure}
\clearpage

\section{Reconstructing a mesh surface}
Multiple point clouds can be merged into a single watertight mesh representation using Meshlab. Meshlab is available on the scanner computer, and also freely available for download on multiple platforms. The basic steps involved in merging and reconstructing are outlined below. The input data will consist of one or more sets of point clouds acquired with the SeeMaLab GUI. Note that if multiple object poses are desired (for complex geometries/blind spots, etc.), it is recommended to close and restart the GUI for each pose, to clear the captured sequences and free memory.
\begin{enumerate}
	\item Load a set of point clouds by opening the \texttt{*.aln} file in Meshlab (''File $\rightarrow$ Open Project...''). See figure \ref{fig:meshlab0} for an illustration of one full set of scans loaded into Meshlab.
	\item The PLY files contain XYZ and RGB values for all points. You will need to compute normals in order for the surface reconstruction to succeed. These normals can be estimated and consistently oriented by considering the camera viewpoint. Select each point cloud in turn and choose ''Filters $\rightarrow$ Point Sets $\rightarrow$ Compute Normals for Point Set''. Make sure the ''Flip normals...'' checkbox is ticked (see fig. \ref{fig:meshlab1}). Suitable neighbourhood values are in the order of $10$. You can visualise the estimated normals through the ''Render'' menu.
	\item After estimating normals for all point clouds in a set, choose ''Filters $\rightarrow$ Mesh Layer $\rightarrow$ Flatten Visible Layers''. Make sure to retain unreferenced vertices, because at this point none of the points will be part of any triangles (see figure \ref{fig:meshlab2}). This process will alter all coordinates by applying the pose transformation to all point clouds before merging them.
	\item Save the resulting merged point cloud. In the save dialog, make sure to include the normals in the output file (see fig. \ref{fig:meshlab3}).
\end{enumerate}

\begin{figure}[H]
	\centering
		\includegraphics[width=.7\textwidth]{meshlab3.png}
	\caption{Save the merged point clouds, and include the estimated normals in the output file.}
	\label{fig:meshlab3}
\end{figure}

If you have acquired multiple $360^\circ$ scans of your object in different positions, proceed as above for each set. Then, you will need to align and merge these point clouds. Meshlab has manual coarse and automated ICP alignment integrated. Note that the automatic alignment procedure in Meshlab requires high quality point normal estimates for all point clouds to succeed. If these are not available, the alignment process will fail without warning or errors.
\begin{enumerate}
	\item Load the point clouds of interest (''File $\rightarrow$ Import Mesh''). The imported point clouds will not be properly aligned initially. Open the alignment tool (the big yellow ''A'' tool button). See figure \ref{fig:meshlab4} for an image of this tool. ''Glueing'' in Meshlab means setting an initial rough alignment. You can ''glue'' the first mesh, and roughly ''glue'' the others to it by selecting a small number (minimum 4) of surface point correspondences with the mouse. When all point clouds have been ''glued'', you can initiate automatic fine alignment (group-wise ICP) by pressing ''Process''. A good alignment should be confirmed by selecting ''False colors'' and seeing a good mix of colours in the overlap areas.
	\item Merge the aligned point clouds using ''Filters $\rightarrow$ Mesh Layer $\rightarrow$ Flatten Visible Layers''.
\end{enumerate}
\begin{figure}[h]
	\centering
		\includegraphics[width=.9\textwidth]{meshlab4.png}

The next step is to reconstruct a surface from the point cloud. This can be done using the Poisson surface reconstruction built into Meshlab, accessible through ''Filters $\rightarrow$ Point Set $\rightarrow$ Surface Reconstruction: Poisson''. You will most probably have to vary the parameters for this step to obtain pleasing results for your particular data.

The full Poisson code is available at \url{http://www.cs.jhu.edu/~misha/Code/PoissonRecon/Version6.11/}, and is also installed on the scanner computer. The standalone software allows for finer control over the process, and also makes it possible to remove mesh membranes with little point support. We refer to the documentation provided by the authors of the PoissonRecon code.
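
A typical invocation could look as follows; the flags reflect our reading of the PoissonRecon documentation, so check against the version installed on the scanner computer:
\begin{verbatim}
# Reconstruct a mesh from the oriented point cloud; --depth sets detail.
PoissonRecon --in merged_points.ply --out mesh.ply --depth 10 --density
# Trim poorly supported membranes using the density estimates.
SurfaceTrimmer --in mesh.ply --out mesh_trimmed.ply --trim 7
\end{verbatim}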

The Poisson reconstruction algorithm does not keep colour information. In order to obtain a coloured mesh, one needs to re-project the per-point colour information from the full point cloud onto the mesh. This can be done in Meshlab through the ''Filters $\rightarrow$ Sampling $\rightarrow$ Vertex Attribute Transfer'' functionality.
\end{document}