\documentclass[10pt]{article}
\usepackage{url}
\usepackage{graphicx}
\usepackage{fullpage}

\title{The SeeMa Lab Structured Light Scanner}
\author{Eytor Eirikson and Jakob Wilm\\
		\url{{eruei, jakw}@dtu.dk}}
\date{\today}

\begin{document}

\maketitle

\begin{figure}[h]
	\centering
		\includegraphics[width=.9\textwidth]{mesh0.png}
	\caption{A complete digital meshed model produced with the scanner.}
	\label{fig:mesh0}
\end{figure}
\begin{abstract}
This document is the official manual for the Seeing Machines Lab Structured Light Scanner, SeeMa-Scanner for short. The scanner comprises both hardware components (the physical device, including cameras, projector and rotation stage) and the software GUI needed to perform object surface digitizations in full color with high precision. While most of these components should be self-explanatory, we describe the functional principles of the scanner and give a brief introduction to getting from a physical object to a complete digital meshed model of it. This document also describes the software components involved, making it possible for students and staff to implement scan software, and possibly extend the existing software.
\end{abstract}
\section{Getting started}
Welcome to the SeeMaLab 3D scanner documentation. This document describes the main hardware and software parts of the system, and provides short directions for performing scans and reconstructing surfaces. Please be very careful with this expensive equipment, and be considerate: do not misplace any parts and do not borrow any components of the scanner hardware.
 
If your main objective is to digitize objects, you should be able to do so on your own by reading this documentation and familiarizing yourself with the scanner. In case you have any doubts, please don't hesitate to contact the authors.
 
Technical projects and contributions are very welcome. Please get in touch with the authors if you plan any alterations to the hardware, or would like write access to the SVN repository containing the software.
\section{Hardware parts}
\begin{table}
	\begin{tabular}{l l l p{0.3\textwidth}}
		\textbf{Part}              & \textbf{Manufacturer} & \textbf{Model} & \textbf{Specifications} \\
		\hline
		Industrial Cameras & Point Grey Research & GS3-U3-91S6C-C & Color, 9.1 MP, Sony ICX814 CCD, 1'', 3.69~$\mu$m pixels, global shutter, $3376 \times 2704$ at 9 FPS \\[0.5cm]
		Camera Lenses & Kowa & LM12SC & 1'', 12 mm, 6 MP \\[0.5cm]
		Projector		& LG & PF80G & DLP, 1080p HD resolution ($1920 \times 1080$), 1,000 ANSI lumen, LED light source \\[0.5cm]
		Rotation Stage & Newmark & RM-5-110 & 0.36 arc-sec resolution, 70 arc-sec accuracy, 5 arc-sec repeatability, stepper motor, 72:1 gear ratio, home switch, no optical encoder \\[0.5cm]
		Rotation Controller & Newmark & NSC-A1 & Single axis, serial over USB, C API \\[0.5cm]
		Breadboard & Thorlabs & PBG11111 & 4' x 2.5' x 1.0'', 21 kg, 1/4''-20 holes on 1'' centers \\[0.5cm]
		Computer & Dell & Precision T1700 & 32 GB RAM, 256 GB SSD, 2 TB data storage HDD, Ubuntu OS
	\end{tabular}
	\caption{The main hardware parts of the SeeMaLab 3D scanner.}
	\label{tbl:hardwareparts}
\end{table}
Table \ref{tbl:hardwareparts} lists the main hardware parts of the SeeMaLab 3D scanner with their specifications. The hardware consists of a set of industrial cameras and a projector mounted on a sturdy aluminum optical breadboard. A microtranslation stage holds the circular object plate, which can accurately rotate the scan object, in order to capture point clouds from different angles. 
 
The cameras, projector and rotation stage are mounted rigidly with respect to each other, which is important for high quality results. See figure \ref{fig:hardware0} for an image of the inside of the main scanner assembly. A darkening curtain can be lowered, to prevent ambient light from interfering with the measurement procedure. 
\begin{figure}[h]
	\centering
		\includegraphics[width=.9\textwidth]{hardware0.JPG}
	\caption{The scanner hardware. Two industrial cameras and one projector constitute the optical parts. An angle figure acts as the scan object, and is placed on top of the circular rotation plate. This plate is screwed onto a microrotation stage. The projector remote control and the calibration target are also seen.}
	\label{fig:hardware0}
\end{figure}
The geometry of the scanner is illustrated in figure \ref{fig:hardwaredimensions}, which also indicates the minimum focus range of the cameras and projector.
\begin{figure}[h]
	\centering
		\includegraphics[width=.9\textwidth]{hardwaredimensions.pdf}
	\caption{The physical dimensions of the breadboard, and throw angles of the cameras and projector.}
	\label{fig:hardwaredimensions}
\end{figure}
A custom calibration target was produced by printing a checkerboard in vector format, and gluing it onto the outer glass surface of a standard picture frame using spray adhesive.
\section{Software components}
The SeeMaLab 3D scanner has a full graphical user interface for calibration and scanning. The output of this software is a number of color point clouds in the PLY format, along with a Meshlab alignment project file (file suffix \texttt{.aln}), which contains orientation information as provided by the rotation stage. This allows the user to import the point clouds for further processing in Meshlab, e.g.\ to produce a full mesh model of the surface. The rotation axis is determined during calibration, which means that usually no manual or algorithm-assisted alignment of partial surfaces is necessary.
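The orientation information amounts to a rigid pose per point cloud, applied as a $4 \times 4$ homogeneous transform. The following numpy sketch shows only this operation; parsing of the \texttt{.aln} file itself is omitted, and the example matrix below is made up for illustration:

```python
import numpy as np

def apply_pose(points, T):
    """Apply a 4x4 rigid pose matrix T to an (N, 3) array of points."""
    homog = np.hstack([points, np.ones((len(points), 1))])  # (N, 4) homogeneous
    return (homog @ T.T)[:, :3]

# Made-up example pose: rotate 40 degrees about the z-axis, translate in x.
a = np.deg2rad(40.0)
T = np.array([[np.cos(a), -np.sin(a), 0.0, 10.0],
              [np.sin(a),  np.cos(a), 0.0,  0.0],
              [0.0,        0.0,       1.0,  0.0],
              [0.0,        0.0,       0.0,  1.0]])
pts = np.array([[1.0, 0.0, 0.0]])
print(apply_pose(pts, T))  # the point rotated about z, then shifted by 10 in x
```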
 
To get fine-grained control over the scan procedure, the user can modify the source code of the GUI application, or use the supplied Matlab wrappers. These wrappers provide basic functionality to capture images with the cameras, project a specific pattern with the projector, or rotate the rotation stage to a specific position. Using these components, a full structured light scanner can be implemented in Matlab with full design freedom.
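With these wrappers, the core of a scan is a simple nested loop over stage positions and patterns. The sketch below shows this control flow in Python; the \texttt{Camera}, \texttt{Projector} and \texttt{RotationStage} classes are stand-ins so the sketch is self-contained, not the actual wrapper API:

```python
# Sketch of the scan loop the wrappers make possible. The three classes
# below are stand-ins, NOT the real wrapper API; they only mimic the calls.

class Camera:
    def capture(self):
        return "frame"          # a real wrapper would return an image array

class Projector:
    def project(self, pattern):
        pass                    # a real wrapper would display the pattern

class RotationStage:
    def move_to(self, angle_deg):
        pass                    # a real wrapper would rotate the stage

def scan(cameras, projector, stage, patterns, step_deg=40):
    """Capture one pattern sequence per stage position over 360 degrees."""
    sequences = []
    for angle in range(0, 360, step_deg):
        stage.move_to(angle)
        frames = []
        for pattern in patterns:
            projector.project(pattern)
            # capture the projected pattern with both cameras
            frames.append([cam.capture() for cam in cameras])
        sequences.append((angle, frames))
    return sequences

seqs = scan([Camera(), Camera()], Projector(), RotationStage(),
            patterns=["pat%d" % i for i in range(10)])
print(len(seqs))  # 9 positions at 40 degree intervals
```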
\section{Acquiring scans}
The following procedure explains the steps involved in calibration and acquisition of a $360^\circ$ scan of an object.
 
Calibration parameters consist of camera focal lengths, central points, lens distortion parameters, camera extrinsics (their relative position and angles), and the location and orientation of the rotation stage axis. These parameters are stored in the GUI, but it is highly recommended to perform a new calibration before acquiring new data. Also, the exact position of the cameras may be altered to better fit the object, in which case recalibration is mandatory. The calibration parameters can be exported into a \texttt{*.xml} file through the top bar menu. The global coordinate system, in which everything is expressed, coincides with that of the left camera.
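For reference, the camera-related parameters enter through the standard pinhole projection model (a textbook relation, not code from the scanner software): a world point $(X, Y, Z)$ projects to pixel coordinates $(u, v)$ via
\[
	s \left(\begin{array}{c} u \\ v \\ 1 \end{array}\right) =
	\left(\begin{array}{ccc} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{array}\right)
	\left(\begin{array}{cc} R & t \end{array}\right)
	\left(\begin{array}{c} X \\ Y \\ Z \\ 1 \end{array}\right) ,
\]
where $(f_x, f_y)$ are the focal lengths in pixels, $(c_x, c_y)$ is the central point, $(R, t)$ is the extrinsic pose, and $s$ is an arbitrary scale factor. The lens distortion parameters describe a nonlinear warp applied in normalized image coordinates before this linear projection.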
 
Image acquisition consists of projecting a sequence of patterns onto the object; the captured images are then converted to depth values by means of the specific structured light algorithm.
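The pattern coding is implemented inside the GUI; as an illustration of the principle only (one common choice, binary Gray coding, not necessarily the scanner's actual algorithm), the sketch below encodes projector column indices as stripe patterns and decodes them again. In practice, the captured camera images would first be binarized, e.g.\ by also projecting the inverse patterns and comparing:

```python
import numpy as np

def gray_code_patterns(width, n_bits):
    """One stripe pattern per bit; pattern[b][x] is 0 or 1 for projector column x."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                 # binary-reflected Gray code
    return [(gray >> b) & 1 for b in range(n_bits - 1, -1, -1)]

def decode_columns(bit_images):
    """Recover projector column indices from a stack of binarized images."""
    gray = np.zeros_like(bit_images[0])
    for bits in bit_images:                   # reassemble Gray code, MSB first
        gray = (gray << 1) | bits
    binary = gray.copy()                      # convert Gray code back to binary
    s = gray >> 1
    while s.any():
        binary ^= s
        s >>= 1
    return binary

# 11 bits suffice for 1920 projector columns (2^11 = 2048).
patterns = gray_code_patterns(1920, 11)
columns = decode_columns(patterns)
print((columns == np.arange(1920)).all())  # True
```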
 
Depending on the surface complexity (blind spots, etc.), multiple $360^\circ$ scans may be necessary. In that case, the following procedure is done multiple times.
\begin{enumerate}
	\item The GUI application is started on the scanner computer. The projector is turned on using the remote control or the touch interface on its top. Make sure the proper HDMI input is chosen as source. Some software settings can be altered through the ``File $\rightarrow$ Preference'' menu, if necessary (the GUI needs to be restarted after altering these settings).
	\item Position the calibration target on the circular rotation plate, inside the field of view of both cameras and the projector. White light will be provided from the projector for guidance. The GUI will appear as shown in figure \ref{fig:calibration0}.
	\item The darkening curtain is lowered to improve the signal to noise ratio, and to avoid artifacts stemming from ambient lighting.
	\item A number of calibration sets need to be acquired. The bare minimum is 3 sets, and more is beneficial. The calibration pattern needs to be fully visible and bright in both cameras, and the viewing angle must not be too shallow. The preset ``batch acquisition'' gives a reasonable number of calibration sets.
	\item After acquisition, individual calibration sets can be re-examined. Calibration parameters are automatically determined by clicking the ``Calibrate'' button. This procedure can take up to a few minutes. The terminal output will show reprojection errors, which measure the quality of the calibration.
	\item The calibration result can be examined by changing to the ``Point Clouds'' tab in the GUI (see fig. \ref{fig:pointclouds0}). Left and right cameras are represented by colored coordinate systems (the viewing direction is the positive z-axis, y points down, x to the right). The rotation axis, as determined by the calibration procedure, is shown as a white line segment.
	\item After successful calibration, data can be acquired for later point cloud reconstruction. This is done in the ``Capture'' tab, see figure \ref{fig:capture0} for an illustration.
	\item The scan object is now placed on the rotation plate, and the darkening curtain lowered again.
	\item Sequences of patterns are projected onto the object. The captured images can be reviewed, and one or multiple captured sequences reconstructed using the ``Reconstruct'' button.
	\item The results will show up in the ``Point Clouds'' tab. Single point clouds can be shown or hidden.
	\item All data can be exported from the GUI program by means of the top bar menus. By exporting the point clouds into a folder, a \texttt{*.aln} file is stored alongside them, which contains pose information in the global coordinate space, aligning the point clouds correctly relative to each other.
\end{enumerate}
\begin{figure}[h]
	\centering
		\includegraphics[width=.9\textwidth]{calibration0.png}
	\caption{The GUI showing the ``Calibration'' tab.}
	\label{fig:calibration0}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.9\textwidth]{pointclouds0.png}
	\caption{GUI showing the result of calibration in the ``Point Clouds'' tab.}
	\label{fig:pointclouds0}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.9\textwidth]{capture0.png}
	\caption{The ``Capture'' tab in the GUI.}
	\label{fig:capture0}
\end{figure}	
\section{Reconstructing a surface}
Multiple point clouds can be fused into a single watertight mesh representation using Meshlab. Meshlab is available on the scanner computer, and freely available for download for multiple platforms. The basic steps involved in merging and reconstruction are outlined below. The input data consist of one or more sets of point clouds acquired with the SeeMaLab GUI. Note that if multiple object poses are desired (for complex geometries/blind spots, etc.), it is recommended to close and restart the GUI for each pose, to clear the captured sequences from memory.
\begin{enumerate}
	\item Load a set of point clouds by opening the \texttt{*.aln} file in Meshlab (``File $\rightarrow$ Open Project...''). See figure \ref{fig:meshlab0} for an illustration of one full set of scans loaded into Meshlab.
	\item The PLY files contain XYZ and RGB values for all points. You will need to estimate normals in order for the surface reconstruction to succeed. These normals can be estimated and consistently oriented by considering the camera viewpoint. Select each point cloud in turn and choose ``Filters $\rightarrow$ Point Sets $\rightarrow$ Compute Normals for Point Set''. Make sure the ``Flip normals...'' checkbox is ticked (see fig. \ref{fig:meshlab1}). Suitable neighborhood values are on the order of $10$. You can visualize the estimated normals through the ``Render'' menu.
	\item After estimating normals for all point clouds in a set, choose ``Filters $\rightarrow$ Mesh Layer $\rightarrow$ Flatten Visible Layers''. Make sure to retain unreferenced vertices, because at this point, none of the points are part of any triangles (see figure \ref{fig:meshlab2}). This process applies the pose transformation to all point clouds before merging them, altering all coordinates.
	\item Save the resulting merged point cloud. In the save dialog, make sure to include the normals in the output file (see fig. \ref{fig:meshlab3}).
\end{enumerate}
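The normal estimation above is essentially a local plane fit. As an illustration of the principle (not Meshlab's actual code), the following numpy sketch estimates the normal at one point from its $k$ nearest neighbors by PCA, and flips it toward the camera viewpoint, mirroring the ``Flip normals...'' option:

```python
import numpy as np

def estimate_normal(points, index, k=10, viewpoint=(0.0, 0.0, 0.0)):
    """Normal at points[index] from a PCA plane fit of its k nearest neighbors,
    oriented so it points toward the given viewpoint (e.g. the camera center)."""
    p = points[index]
    dists = np.linalg.norm(points - p, axis=1)
    nbrs = points[np.argsort(dists)[:k]]
    cov = np.cov(nbrs.T)                       # 3x3 covariance of the patch
    eigvals, eigvecs = np.linalg.eigh(cov)
    normal = eigvecs[:, 0]                     # direction of least variance
    if np.dot(normal, np.asarray(viewpoint) - p) < 0:
        normal = -normal                       # flip toward the viewpoint
    return normal

# Points on the plane z = 5; a camera at the origin looks along +z, so
# consistently oriented normals should point back toward -z.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(50, 2))
pts = np.column_stack([xy, np.full(50, 5.0)])
print(estimate_normal(pts, 0))  # approximately (0, 0, -1)
```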
\begin{figure}[h]
	\centering
		\includegraphics[width=\textwidth]{meshlab0.png}
	\caption{One full set of scans (9 point clouds covering $360^\circ$ in $40^\circ$ intervals).}
	\label{fig:meshlab0}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.9\textwidth]{meshlab1.png}
	\caption{Estimate normals, and orient them consistently towards the camera (positive z-axis).}
	\label{fig:meshlab1}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.9\textwidth]{meshlab2.png}
	\caption{Flatten visible layers and retain ``unreferenced vertices'', i.e.\ points not in a triangle.}
	\label{fig:meshlab2}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.9\textwidth]{meshlab3.png}
	\caption{Save the merged point clouds, and include the estimated normals in the output file.}
	\label{fig:meshlab3}
\end{figure}
If you have acquired multiple $360^\circ$ scans of your object in different positions, proceed as above for each set. Then, you will need to align and merge these point clouds. Meshlab has manual coarse alignment and automated ICP alignment integrated. Note that the automatic alignment procedure in Meshlab requires high quality point normal estimates for all point clouds to succeed. If these are not available, the alignment process will fail without warnings or errors.
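The fine-alignment is an iterative closest point (ICP) procedure. The numpy sketch below shows a minimal pairwise variant for illustration only (Meshlab's groupwise implementation is considerably more robust): nearest-neighbor matching alternates with a closed-form least-squares rigid fit:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iterations=20):
    """Align point cloud src to dst by alternating matching and rigid fitting."""
    cur = src.copy()
    for _ in range(iterations):
        # match each source point to its nearest destination point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Example: recover a small rotation + translation of a random cloud.
rng = np.random.default_rng(1)
dst = rng.uniform(-1, 1, size=(100, 3))
a = np.deg2rad(5.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
src = dst @ Rz.T + np.array([0.05, -0.02, 0.01])
aligned = icp(src, dst)
print(np.abs(aligned - dst).max())  # residual alignment error
```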
\begin{enumerate}
	\item Load the point clouds of interest (``File $\rightarrow$ Import Mesh''). The imported point clouds will not be properly aligned. Open the alignment tool (the big yellow ``A'' tool button). See figure \ref{fig:meshlab4} for an image of this tool. ``Glueing'' in Meshlab means setting an initial rough alignment. You can ``glue'' the first mesh, and roughly ``glue'' the others to it by selecting a small number (minimum 4) of surface point correspondences with the mouse. When all point clouds have been ``glued'', you can initiate automatic fine-alignment (groupwise ICP) by pressing ``Process''. A good alignment should be confirmed by selecting ``False colors'' and seeing a good mix of colors in the overlap areas.
	\item Merge the aligned point clouds with ``Filters $\rightarrow$ Mesh Layer $\rightarrow$ Flatten Visible Layers''.
\end{enumerate}
\begin{figure}[h]
	\centering
		\includegraphics[width=.9\textwidth]{meshlab4.png}
	\caption{The alignment tool in Meshlab.}
	\label{fig:meshlab4}
\end{figure}
The final step is to reconstruct a surface from the merged point cloud. This can be done using the Poisson surface reconstruction built into Meshlab. It is accessible through ``File $\rightarrow$ Point Set $\rightarrow$ Surface Reconstruction: Poisson''. You will most probably have to vary the parameters for this step to obtain pleasing results for your particular data.
 
The full Poisson code is available at \url{http://www.cs.jhu.edu/~misha/Code/PoissonRecon/Version6.11/}, and is also installed on the scanner computer. The standalone software allows finer control over the process, and can also remove mesh membranes with little point support.
\end{document}