\usepackage[T1]{fontenc}
\usepackage{url}
\usepackage{graphicx}
\usepackage{fullpage}

\usepackage{color}
\newcommand{\dolmes}[1]{\textcolor[rgb]{1,0.0,0}{#1}}

% \renewcommand{\chaptermark}[1]{\markboth{#1}{}}
% \renewcommand{\sectionmark}[1]{\markright{\thesection\ #1}}

\title{The SeeMa Lab Structured Light Scanner}
% ...
\[
\mathrm{angle} = \frac{\mathrm{XPOS} \cdot 1.8}{\mathrm{MS} \cdot 72}
\]
where XPOS is the rotation controller's position value, $1.8$ is the number of degrees per full step on the motor axis, MS is the current microstep setting, and $72$ is the worm-gear ratio. The \texttt{RotationStage} class interface abstracts away from this and lets you rotate to a specific angle between $0^\circ$ and $360^\circ$ using the shortest direction.
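
For illustration, the conversion and the shortest-direction behaviour can be sketched in a few lines of Python. This is only a sketch based on the numbers above, not the actual \texttt{RotationStage} implementation, and the function names are made up for the example.

\begin{verbatim}
# Minimal sketch (not the real RotationStage code): convert a controller
# position to a stage angle and pick the shortest rotation direction.
DEG_PER_STEP = 1.8   # full-step angle of the motor
GEAR_RATIO   = 72    # worm-gear ratio

def position_to_angle(xpos, ms):
    """Stage angle in degrees for controller position xpos at microstep setting ms."""
    return (xpos * DEG_PER_STEP / (ms * GEAR_RATIO)) % 360.0

def shortest_move(current_angle, target_angle):
    """Signed rotation (degrees) that reaches target_angle the short way."""
    diff = (target_angle - current_angle) % 360.0
    return diff - 360.0 if diff > 180.0 else diff
\end{verbatim}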

In order for the SeeMaLab computer to communicate with the rotation stage controller, appropriate udev permissions must be configured.
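
As an illustration only, such permissions are typically granted by a rule file under \texttt{/etc/udev/rules.d/}. The vendor and product IDs below are placeholders and must be replaced with the controller's actual IDs (as reported by \texttt{lsusb}):

\begin{verbatim}
# Example rule, e.g. /etc/udev/rules.d/99-rotationstage.rules
# (xxxx/yyyy are placeholders -- use the IDs reported by lsusb).
SUBSYSTEM=="tty", ATTRS{idVendor}=="xxxx", ATTRS{idProduct}=="yyyy", MODE="0666"
\end{verbatim}

After adding or changing a rule, reload the rules with \texttt{sudo udevadm control --reload-rules} and \texttt{sudo udevadm trigger}, or simply replug the device.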

\chapter{Practical scanning}
Please be very careful with this expensive equipment, and be considerate: do not misplace any parts and do not borrow any components of the scanner hardware.
The following guide explains the setup and the steps involved in calibration and acquisition of a $360^\circ$ scan of an object.

\section{Setup}
In contrast to the SeeMa-Scanner located in the Image Lab, the Traveling SeeMa-Scanner first has to be assembled from the individual parts shown in Figure \dolmes{XXX (Figure of box)}. Figure \dolmes{XXX} illustrates the final setup of the Traveling SeeMa-Scanner. Take the following points into consideration:
\begin{itemize}
\item Choose a black, non-shiny background (e.g. the provided black fabric). Cover (or paint black) any shiny objects in the scan area, such as screws, stands, etc. This ensures that only the object of interest is scanned.

\item How to choose the distance between the projector and the circular rotation plate? This distance depends on the object size. Make sure the object is inside the field of view of the cameras and the projector for any rotation angle. \dolmes{Would you rather want a small distance?}

\item How to choose the distance between the two cameras? The closer the cameras, the better concavities can be scanned (i.e. there is less occlusion). However, this comes with a larger error in the determination of the point coordinates. As a rule of thumb, this distance should lie in the interval $[\frac{1}{3}x, 3x]$, where $x$ denotes the distance between the projector and the object (a worked example is given below this list). Usually, we like to place the cameras rather close together.
\end{itemize}
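
For example, if the projector is placed about $x = 60$ cm from the object, the camera baseline should lie roughly between $20$ cm and $180$ cm.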

\section{Calibration}
Calibration parameters consist of the camera focal lengths, central points, lens distortion parameters, the camera extrinsics (their relative position and angles), and the location and orientation of the rotation stage axis. These parameters are stored in the GUI, but in most cases it is recommended to perform a new calibration before acquiring new data. Also, the exact position of the cameras may be altered to better fit the object, in which case recalibration must be done. The calibration parameters can be exported into a \texttt{*.xml} file through the top bar menu. The global coordinate system, in which everything is expressed, coincides with the left camera.

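If the exported calibration needs to be used outside the GUI, it can typically be read programmatically. The following minimal Python sketch assumes that the export uses OpenCV's \texttt{FileStorage} XML format; the node names are placeholders and must be matched to the names actually present in the exported file.

\begin{verbatim}
import cv2

# Sketch only: assumes an OpenCV FileStorage XML export.
# "K0" and "dist0" are placeholder node names -- open the *.xml file in a
# text editor to see which names the GUI actually writes.
fs = cv2.FileStorage("calibration.xml", cv2.FILE_STORAGE_READ)
K0 = fs.getNode("K0").mat()        # e.g. left camera matrix
dist0 = fs.getNode("dist0").mat()  # e.g. left lens distortion coefficients
fs.release()
\end{verbatim}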
\begin{enumerate}
\item The projector is turned on using the remote control or the touch interface on its top. Make sure the proper HDMI input is chosen as the source. Remember to turn on the projector before the scanner computer (otherwise the computer screen is projected)!

\item The GUI application is started on the scanner computer \dolmes{name/icon?}. Some software settings can be altered through the ``File $\rightarrow$ Preference'' menu, if necessary (the GUI needs to be restarted after altering these settings).

\item Ensure that both the cameras and the projector are in focus at the middle of the rotation stage. \dolmes{You can check this by looking at the camera images in the GUI, and by holding your hand over the rotation plate for the projector.}

\item Make sure the size of the calibration plate fits the object. \dolmes{where/how to adjust the parameters of the calibration plate?}

\item Position the calibration target on the circular rotation plate, parallel to the projector and inside the field of view of the cameras and the projector; make sure the rotation axis approximately intersects the center of the calibration target. White light is provided by the projector for guidance. The GUI will look as shown in figure \ref{fig:calibration0}.

\item Ideally, the camera images in the GUI show a rather grayish calibration plate, and the background is totally black. If any pixels of the calibration plate are completely white, there is too much light. There are two options for adjusting the light:
\begin{itemize}
\item Adjust the lens aperture by turning the aperture ring on the camera. The narrower the aperture, the less light reaches the image plane, and vice versa.
\item Adjust the \dolmes{camera/projector?} shutter time by \dolmes{how?}. The shutter time has to be a multiple of 16.666 milliseconds per image: 16.666, 33.333, 50.000, 66.666, 83.333, 100.000, 116.666, 133.333, etc.\ (type \dolmes{how many?} digits after the decimal point!). This is because the projector has a \dolmes{frame rate?} of 60 images per second, which corresponds to a \dolmes{projection time?} of 1/60 seconds per image, i.e. 16.666 milliseconds per image (a short example of valid shutter times is given after this list of steps). The longer the shutter time, the more light reaches the image plane, and vice versa. Note that changing the shutter time does not affect calibration!
\end{itemize}

\item SeeMaLab-Scanner: The darkening curtain is lowered to improve the signal-to-noise ratio and to avoid artifacts arising from ambient lighting.\\ Traveling SeeMa-Scanner: The ambient light is usually not a problem; otherwise, darken the room.

\item A number of calibration sets need to be acquired. The minimum is 3 sets, and more is beneficial. The calibration pattern needs to be fully visible and equally bright in both cameras. The viewing angle must not be too shallow. Press ``batch acquisition'' in order to acquire a reasonable number of calibration sets using default parameters. \dolmes{Figure \ref{fig:calibration0} reveals that default acquisition is at angles $330^\circ, 335^\circ, \dots, 30^\circ$}. %The preset ``batch acquisition'' gives a reasonable number of calibration sets.

\item After acquisition, individual calibration sets can be re-examined. Calibration parameters are automatically determined by clicking the ``Calibrate'' button. This procedure can take up to a few minutes. The terminal output will show the reprojection errors, which measure the quality of the calibration.

\item A successful calibration goes along with a colorful pattern on the calibration target, as shown in Figure \dolmes{XXX}. \dolmes{If the calibration fails for a calibration set, it is automatically ignored. Thus, it does not matter if the calibration is not successful for a few sets.} In addition, the calibration result can be examined by changing to the ``Point Clouds'' tab in the GUI (see Figure \ref{fig:pointclouds0}). Left and right cameras are represented by coloured coordinate systems (the viewing direction is the positive z-axis, y points down, x to the right). The rotation axis, as determined by the calibration procedure, is shown as a white line segment.
\end{enumerate}
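
The shutter-time constraint can be summarised as $t = n \cdot \frac{1}{60}$ s $\approx n \cdot 16.666$ ms for a whole number of projected frames $n$. The following minimal Python sketch, which is only an illustration and not part of the scanner software, snaps a desired exposure to the nearest valid value:

\begin{verbatim}
# Illustration only: snap a desired shutter time to a whole number of
# 60 Hz projector frames (1/60 s = 16.666... ms per frame).
FRAME_MS = 1000.0 / 60.0

def valid_shutter_ms(desired_ms):
    n = max(1, round(desired_ms / FRAME_MS))
    return n * FRAME_MS

print(valid_shutter_ms(40.0))   # -> 33.333... ms, i.e. 2 frames
\end{verbatim}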

\section{Making a 360 degree scan}
Image acquisition consists of projecting a sequence of patterns onto the object, which are then converted to depth values by the reconstruction algorithm.\\
Depending on the surface complexity (blind spots, holes, details, etc.), multiple $360^\circ$ scans may be necessary. In that case, the following procedure is done multiple times with the object in different orientations (poses). Also consider changing the rotation angle in order to obtain a better result.
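
As an illustration of how projected patterns can be turned into geometry, the following minimal Python sketch shows the decoding step of a generic three-step phase-shifting scheme. It is only an example of one common structured-light coding strategy, not necessarily the pattern method used by the scanner software, and the function name is made up for the example.

\begin{verbatim}
import numpy as np

# Illustration of a generic 3-step phase-shifting decode (not necessarily
# the pattern strategy used by the GUI). I1, I2, I3 are camera images of
# the scene under sinusoidal patterns shifted by 120 degrees.
def wrapped_phase(I1, I2, I3):
    I1, I2, I3 = (np.asarray(I, dtype=float) for I in (I1, I2, I3))
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

# The wrapped phase is subsequently unwrapped and triangulated against the
# projector to obtain depth values.
\end{verbatim}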
\begin{enumerate}
\item Choose the ``Capture'' tab in the GUI -- see figure \ref{fig:capture0} for an illustration.

\item The scan object is now placed on the rotation plate such that it is visible in both cameras. SeeMaLab-Scanner: lower the darkening curtain.

\item Check the \dolmes{light conditions}: again, the object should appear grayish with a completely black background. \dolmes{If necessary, adjust the light conditions}, preferably by changing the shutter time as described in the calibration section, since this does not require re-calibration.

\item Press ``Single Capture'' or ``Batch Capture'' in the GUI.

\item Sequences of patterns are projected onto the object \dolmes{different methods? where to choose?}. The captured images can be reviewed, and one or multiple captured sequences reconstructed using the ``Reconstruct'' button.

\item The results will show up in the ``Point Clouds'' tab \dolmes{how to zoom in -- does a button have to be pressed?}. Single point clouds can be shown or hidden; see figure \ref{fig:pointclouds1}.

\item All data can be exported from the GUI program by means of the top bar menus. When exporting the point clouds into a folder, a \texttt{*.aln} file is stored alongside them; it contains the pose of each point cloud in global coordinates, which aligns the point clouds correctly relative to each other (a small example of applying such a pose is given after this list).
\end{enumerate}
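
To illustrate what the pose information does, the following sketch applies a $4 \times 4$ pose matrix to a point cloud with NumPy. It is only an example; tools such as Meshlab read the \texttt{*.aln} file and perform this step automatically.

\begin{verbatim}
import numpy as np

# Illustration: transform an (N, 3) point cloud into the global frame
# using a 4x4 pose matrix such as the ones stored per scan in the *.aln file.
def apply_pose(points, T):
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ T.T)[:, :3]
\end{verbatim}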

\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{calibration0.png}
\caption{The GUI showing the ``Calibration'' tab.}
\label{fig:calibration0}
% ...
\caption{``Point Clouds'' tab with reconstructed point clouds.}
\label{fig:pointclouds1}
\end{figure}
\clearpage

\chapter{Reconstructing a mesh surface}
Multiple point clouds can be merged into a single watertight mesh representation using Meshlab. Meshlab is available on the scanner computer, and is also freely available for download on multiple platforms. The basic steps involved in merging and reconstructing are outlined below. The input data will consist of one or more sets of point clouds acquired with the SeeMaLab GUI. Note that if multiple object poses are desired (for complex geometries, blind spots, etc.), it is recommended to close and restart the GUI for each pose, to clear the captured sequences and free memory.
\begin{enumerate}
\item Load a set of point clouds by opening the \texttt{*.aln} file in Meshlab (``File $\rightarrow$ Open Project...''). See figure \ref{fig:meshlab0} for an illustration of one full set of scans loaded into Meshlab.
\item The PLY files contain XYZ and RGB values for all points. You will need to compute normals in order for the surface reconstruction to succeed. These normals can be estimated and consistently oriented by considering the camera viewpoint. Select each point cloud in turn and, for each, choose ``Filters $\rightarrow$ Point Sets $\rightarrow$ Compute Normals for Point Set''. Make sure the ``Flip normals...'' checkbox is ticked (see fig. \ref{fig:meshlab1}). Suitable neighbourhood values are on the order of $10$. You can visualise the estimated normals through the ``Render'' menu.
\item After estimating normals for all point clouds in a set, choose ``Filters $\rightarrow$ Mesh Layer $\rightarrow$ Flatten Visible Layers''. Make sure to retain unreferenced vertices, because at this point, none of the points will be part of any triangles (see figure \ref{fig:meshlab2}). This process will alter all coordinates by applying the pose transformation to all point clouds before merging them.