\documentclass[10pt,notitlepage]{report}
\usepackage{url}
\usepackage{graphicx}
\usepackage{fullpage}

\renewcommand{\chaptermark}[1]{\markboth{#1}{}}
\renewcommand{\sectionmark}[1]{\markright{\thesection\ #1}}

\title{The SeeMa Lab Structured Light Scanner}
\author{Eytor Eirikson and Jakob Wilm\\
		\url{{eruei, jakw}@dtu.dk}}
\date{\today}

\begin{document}
\maketitle

\begin{abstract}
This document is the official manual for the Seeing Machines Lab Structured Light Scanner, SeeMa-Scanner for short. The scanner consists of both hardware components (the physical device, including cameras, projector and rotation stage), and the software GUI needed to perform object surface digitizations in full color with high precision. While most of these components should be self-explanatory, we will describe the functional principles of the scanner and give a brief introduction to how to get from a physical object to a complete digital meshed model of it. This document also describes the software components involved, making it possible for students and staff to implement scan software, and possibly to extend the existing software.
\end{abstract}

\chapter{The scanner}
\section{Getting started}
Welcome to the SeeMaLab 3D scanner documentation. This document describes the main hardware and software parts of the system, and provides short directions for performing scans and reconstructing surfaces. Please be very careful with this expensive equipment, and be considerate: do not misplace or borrow any parts of the scanner hardware.

If your main objective is to digitize objects, you should be able to do so on your own by reading this documentation and familiarizing yourself with the scanner. The chapter ``Practical scanning'' gives a step-by-step recipe for performing a complete object digitization.

Technical projects and contributions are very welcome. Please get in touch with the authors if you plan any alterations to the hardware, or would like write access to the SVN repository containing the software.

\section{Hardware parts}
\begin{table}
		\includegraphics[width=.9\textwidth]{hardwaredimensions.pdf}
	\caption{The physical dimensions of the breadboard, and throw angles of the cameras and projector.}
	\label{fig:hardwaredimensions}
\end{figure}

\subsection{Projector}
The SeeMa-Scanner uses a standard commercial Full-HD projector. This is a very cost-effective solution for high resolution projection. The projector is configured for minimal image processing, and the HDMI port is configured for ``Notebook'' use, which gives the lowest possible input lag. It should be noted that commercial projectors do not possess a linear response, which is necessary for many structured light purposes. Gamma can be set to $1.6$ on the projector, and if this is matched in the graphics card configuration of the scan computer, a close to linear response can be achieved.
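
The reasoning behind this pairing is simple (an idealized model, not a measurement of this particular projector): if the projector applies a power law with exponent $\gamma_p$ and the graphics card applies a pre-correction with exponent $\gamma_c$, the displayed intensity behaves as
\[
	I_\mathrm{out} \propto \left(I_\mathrm{in}^{\gamma_c}\right)^{\gamma_p} = I_\mathrm{in}^{\gamma_c \gamma_p},
\]
which is linear exactly when $\gamma_c = 1/\gamma_p$, i.e. a graphics card gamma of $1/1.6 \approx 0.63$ for the setting above.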

\subsection{Cameras}
These are high resolution 9\,MPx industrial CCD color cameras. While color information is usually not necessary in structured light, it enables us to capture the color of the object alongside its geometry. In the program code, a white balance is applied to the cameras, which approximately matches the white light of the projector. To achieve true coloring, a rigorous color calibration would have to be done.
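
Such a white balance amounts to simple per-channel gains. The sketch below shows the idea using OpenCV; the function and the gain values are illustrative only, and do not reflect the actual numbers used in the program code:
\begin{verbatim}
// White-balance sketch: scale the blue and red channels by fixed
// gains chosen to match the projector lamp (values are made up).
#include <opencv2/core/core.hpp>
#include <vector>

cv::Mat balanceWhite(const cv::Mat &bgr)
{
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch);   // split into B, G, R channels
    ch[0] *= 1.6;         // blue gain (assumed)
    ch[2] *= 1.1;         // red gain (assumed)
    cv::Mat out;
    cv::merge(ch, out);
    return out;
}
\end{verbatim}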

\subsection{Rotation stage}
This is a so-called micro rotation stage, commonly used in high precision photonics research and production. A larger diameter plate was attached to it. The rotation stage has a stepper motor which drives a worm gear. This gives high precision and very high repeatability. Note that the rotation stage does not have an optical encoder; it is reset to $0^\circ$ in software at each program start. The motor controller can be configured for different levels of microstepping and motor current. Higher motor current provides more torque and less risk of missing steps. The load on the plate should not exceed 20 kg, and should be centered around the rotation axis. Objects can be stabilized on the plate using e.g. modeling clay.
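
To get a feeling for the achievable angular resolution, consider some hypothetical but typical numbers (the actual motor and gear parameters of this stage may differ): a $1.8^\circ$ stepper motor with $1/16$ microstepping behind a $72:1$ worm gear yields
\[
	\frac{1.8^\circ}{16 \cdot 72} \approx 0.0016^\circ
\]
per microstep, which illustrates why such stages are popular in high precision work.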

\subsection{Calibration target}
A calibration target is also part of the scanner. It was produced by printing a checkerboard in vector format, and gluing it onto the outer glass surface of a standard picture frame using spray adhesive. Please note that the target is asymmetrical, which is necessary to uniquely match checkerboard corners in both cameras. The calibration target was designed to fill the scan object space. If you need a smaller scan area, a smaller calibration target would be beneficial; note, however, that the physical dimensions of the target are currently hardcoded in the scanner GUI. Please also note the minimum focus distances of the projector and cameras.

\section{Software components}
The SeeMaLab 3D scanner has a full graphical user interface for calibration and scanning. The output from this software is a number of color point clouds in the PLY format, along with a Meshlab alignment project file (file suffix \texttt{.aln}) which contains the pose information provided by the rotation stage parameters. This allows the user to import the point clouds for further processing in Meshlab, e.g. to produce a full mesh model of the surface. The rotation axis is determined during calibration, which means that usually no manual or algorithm-assisted alignment of partial surfaces is necessary.
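
For reference, the \texttt{.aln} format is plain text: a count of point clouds, then for each cloud its file name, a separator line starting with \texttt{\#}, and a $4 \times 4$ pose matrix, terminated by a line containing \texttt{0}. The example below is invented for illustration (an identity pose and a $40^\circ$ rotation about the vertical axis):
\begin{verbatim}
2
scan0.ply
#
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
scan1.ply
#
0.766 0 0.643 0
0 1 0 0
-0.643 0 0.766 0
0 0 0 1
0
\end{verbatim}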

To get fine-grained control over the scan procedure, the user can modify the source code for the GUI application, or use the supplied Matlab wrappers. These wrappers provide basic functionality to capture images with the cameras, project a specific pattern on the projector, or rotate the rotation stage to a specific position. Using these components, a full structured light scanner can be implemented in Matlab with full design freedom.

\section{GUI}
The scanner GUI was developed using Qt, OpenCV and the Point Cloud Library (PCL). It enables the user to perform calibration of the scanner, and to acquire scan data. It is built in a modular fashion, to allow for new structured light strategies to be implemented. It is, however, supposed to be simple and stable, so please keep experimental builds in separate SVN branches.

GUI functionality depends heavily on Qt. For interoperability with PCL, it is necessary to build against Qt 4.x. Most other components, specifically those with Matlab wrappers, have minimal dependencies, and can be used outside of the GUI framework.

\section{\texttt{Projector} Class}
This class provides a fullscreen OpenGL context, and the ability to project any texture. The window/context creation is operating system dependent. It works very well on Linux with proprietary nVidia drivers, as found on the scan computer. In order to get a completely independent screen output, which does not interfere with the window manager, the projector needs to be set up as a separate X screen in \texttt{xorg.conf}. The absolute position of this second X screen must leave a small gap to the primary screen. This gives a secondary screen which is not recognized by Compiz (Unity in Ubuntu), but which can be accessed through the \texttt{Projector} class.
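
The relevant \texttt{xorg.conf} fragment could look roughly like the following sketch, assuming a 1920 pixel wide primary screen; the identifiers and offsets are illustrative, not copied from the scan computer:
\begin{verbatim}
Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "MonitorScreen" 0 0
    # Projector on its own X screen, offset past the
    # 1920 px monitor so the two screens do not touch:
    Screen 1 "ProjectorScreen" Absolute 2000 0
EndSection
\end{verbatim}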

\section{\texttt{Camera} Class}
An abstraction from the individual industrial camera APIs was created, in order to ease replacement and enhance modularity. A concrete implementation for Point Grey cameras is provided. The program is currently designed for ``software triggering'' of the cameras. Due to substantial input lag in the projector and cameras, a pause must be made in program execution between projecting a pattern and capturing the image. Close temporal synchronization of both cameras is achieved by calling the trigger method on both cameras, and subsequently collecting the images.
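
The following sketch illustrates this trigger-then-collect pattern; the class and method names are hypothetical and do not necessarily match the actual interface in the repository:
\begin{verbatim}
// Hypothetical Camera abstraction and capture sequence.
#include <opencv2/core/core.hpp>
#include <unistd.h>

class Camera {
public:
    virtual ~Camera() {}
    virtual void trigger() = 0;           // issue a software trigger
    virtual cv::Mat retrieveFrame() = 0;  // collect the triggered frame
};

void captureStereoPair(Camera &cam0, Camera &cam1,
                       cv::Mat &frame0, cv::Mat &frame1)
{
    // The pattern is assumed to be on screen already; wait out
    // the input lag of projector and cameras (length is assumed).
    usleep(100 * 1000);
    cam0.trigger();       // trigger both cameras back to back,
    cam1.trigger();       // giving close temporal synchronization
    frame0 = cam0.retrieveFrame();
    frame1 = cam1.retrieveFrame();
}
\end{verbatim}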

\section{\texttt{RotationStage} Class}
This class is a C++ abstraction over the Newmark motion control API. The C API essentially sends serial commands over a serial-over-USB connection, and full documentation is provided on the Newmark website. An important thing to consider is the latency of many of these calls. Specifically, reading and writing ``hardware settings'' such as microstep level and motor current takes a considerable amount of time.
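
A consequence of these latencies is that such settings should be read once and cached, rather than queried on every move. The outline below is hypothetical (the names are not taken from the repository) and only illustrates this design:
\begin{verbatim}
// Hypothetical outline of the RotationStage wrapper.
class RotationStage {
public:
    RotationStage();                   // open the serial-over-USB link;
                                       // the position is defined as 0 degrees
                                       // here, since the stage has no encoder
    void moveAbsolute(float degrees);  // blocking move to an absolute angle
    float position() const { return position_; }
private:
    float position_;        // tracked in software between moves
    int microstepLevel_;    // cached at startup: reading it back is slow
};
\end{verbatim}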

\chapter{Practical scanning}
The following procedure explains the steps involved in calibration and acquisition of a $360^\circ$ scan of an object.

Calibration parameters consist of camera focal lengths, central points, lens distortion parameters, camera extrinsics (their relative position and angles), and the location and orientation of the rotation stage axis. These parameters are stored in the GUI, but it is highly recommended to perform a new calibration before acquiring new data. Also, the exact position of the cameras may be altered to better fit the object, in which case recalibration is mandatory. The calibration parameters can be exported into a \texttt{*.xml} file through the top bar menu. The global coordinate system, in which everything is expressed, coincides with that of the left camera.
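
In the usual pinhole notation, which OpenCV-based calibration also follows, the focal lengths $f_x, f_y$ and the central point $(c_x, c_y)$ of each camera form the intrinsic matrix
\[
	K = \left[\begin{array}{ccc} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{array}\right],
\]
while the extrinsics consist of the rotation and translation relating the two camera coordinate systems.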

Image acquisition consists of projecting a sequence of patterns onto the object, which are then converted to depth values by means of the specific structured light algorithm.
	\item All data can be exported from the GUI program by means of the top bar menus. When the point clouds are exported into a folder, a \texttt{*.aln} file is stored alongside them, containing pose information in the global coordinate space which aligns the point clouds correctly relative to each other.
\end{enumerate}
	
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{calibration0.png}
	\caption{The GUI showing the ``Calibration'' tab.}
	\label{fig:calibration0}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{pointclouds0.png}
	\caption{GUI showing the result of calibration in the ``Point Clouds'' tab.}
	\label{fig:pointclouds0}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{capture0.png}
	\caption{The ``Capture'' tab in the GUI.}
	\label{fig:capture0}
\end{figure}

\section{Reconstructing a surface}
	\caption{One full set of scans (9 point clouds covering $360^\circ$ in $40^\circ$ intervals).}
	\label{fig:meshlab0}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.4\textwidth]{meshlab1.png}
	\caption{Estimate normals, and orient them consistently towards the camera (positive z-axis).}
	\label{fig:meshlab1}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.4\textwidth]{meshlab2.png}
	\caption{Flatten visible layers and retain ``unreferenced vertices'', i.e. points not contained in any triangle.}
	\label{fig:meshlab2}
\end{figure}
\begin{figure}[h]
	\centering
		\includegraphics[width=.7\textwidth]{meshlab3.png}
	\caption{Save the merged point clouds, and include the estimated normals in the output file.}
	\label{fig:meshlab3}
\end{figure}

If you have acquired multiple $360^\circ$ scans of your object in different positions, proceed as above for each set. Then, you will need to align and merge these point clouds. Meshlab has manual coarse alignment and automated ICP alignment integrated. Note that the automatic alignment procedure in Meshlab requires high quality point normal estimates for all point clouds to succeed. If these are not available, the alignment process will fail without warnings or errors.
		\includegraphics[width=.9\textwidth]{meshlab4.png}
	\caption{The alignment tool in Meshlab.}
	\label{fig:meshlab4}
\end{figure}

The next step is to reconstruct a surface from the point cloud. This can be done using the Poisson surface reconstruction built into Meshlab. It is accessible through ``File $\rightarrow$ Point Set $\rightarrow$ Surface Reconstruction: Poisson''. You will most probably have to vary the parameters of this step to obtain pleasing results for your particular data.

The full Poisson code is available at \url{http://www.cs.jhu.edu/~misha/Code/PoissonRecon/Version6.11/}, and is also installed on the scanner computer. The standalone software allows for finer control over the process, and also makes it possible to remove mesh membranes with little point support.
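
On the command line, a typical invocation looks as follows; the file names are placeholders, and a depth of 10 is merely a common starting point (flags may vary slightly between versions):
\begin{verbatim}
PoissonRecon --in merged_points.ply --out mesh.ply --depth 10 --density
SurfaceTrimmer --in mesh.ply --out trimmed.ply --trim 7
\end{verbatim}
The second line uses the SurfaceTrimmer companion tool, if present in the installed version, to remove poorly supported membrane areas based on the density values written by \texttt{--density}.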

The Poisson reconstruction algorithm does not keep color information. In order to obtain a colored mesh, one needs to reproject the per-point color information from the full point cloud onto the mesh. This can be done in Meshlab through the ``Filters $\rightarrow$ Color Creation and Processing $\rightarrow$ Disk Vertex Coloring'' functionality.

\end{document}