\usepackage[T1]{fontenc}
\usepackage{url}
\usepackage{graphicx}
\usepackage{fullpage}

% \renewcommand{\chaptermark}[1]{\markboth{#1}{}}
% \renewcommand{\sectionmark}[1]{\markright{\thesection\ #1}}

\title{The SeeMa Lab Structured Light Scanner}
\author{Jakob Wilm and Eyþór Rúnar Eiríksson\\
% ...
	\label{fig:mesh0}
\end{figure}

\begin{abstract}
This is the manual for the Seeing Machines Lab Structured Light Scanner (SeeMa-Scanner). The scanner consists of both hardware components (cameras, projector, and rotation stage) and software for calibration, scanning, and reconstruction. While most of the components should be self-explanatory, we describe the hardware and each software component, making it possible for students and staff to extend the scanner with new functionality. We also give a brief step-by-step guide on how to get from a physical object to a digital mesh model of it.
\end{abstract}

\chapter{The scanner}
\section{Getting started}
This section describes the main hardware and software parts of the system.

% ...
The cameras, projector and rotation stage are mounted rigidly with respect to each other, which is important for high quality results. See figure \ref{fig:hardware0} for an image of the inside of the main scanner assembly. A darkening curtain can be lowered to prevent ambient light from interfering with the measurement procedure.
\begin{figure}[h]
	\centering
		\includegraphics[width=.9\textwidth]{hardware0.jpg}
	\caption{The scanner hardware. Two industrial cameras and one projector constitute the optical parts. An angel figurine acts as the scan object, and is placed on top of the circular rotation plate. This plate is screwed onto a microrotation stage. The calibration target is also seen on its holder.}
	\label{fig:hardware0}
\end{figure}

The geometry of the scanner is illustrated in figure \ref{fig:hardwaredimensions}, which also indicates the minimum focus range of the cameras and projector.
\begin{figure}[h]
% ...
The SeeMa-Scanner uses a standard commercial Full-HD projector. This is very cost-effective, but brings a few challenges. The projector is configured to perform minimal image processing, and the HDMI port is set to ``Notebook'' mode, which gives the lowest possible input lag (approx.\ 80~ms). The projector contains a DLP micromirror array which produces binary patterns at high refresh rates (kHz range). Intermediate gray-values are created by the projector by altering the relative on-off cycles of each micromirror. A truthful capture of gray-values with the camera requires an integration time that is a multiple of the 16.7~ms refresh period of the projector.
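In other words, suitable integration times are
\[
	T_{\textrm{exp}} = n \cdot 16.7~\textrm{ms} , \quad n = 1, 2, 3, \ldots ,
\]
so that the camera always integrates over a whole number of refresh periods of the (nominally 60~Hz) projector; e.g.\ $n = 2$ gives an exposure of approximately $33.3$~ms.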

Commercial projectors do not have the linear response which would be necessary for truthful capture of gray-value patterns. The gamma value can be set to its lowest possible value of $1.6$, and if this is matched in the graphics card configuration of the scan computer, a close to linear response can be achieved. By using only binary patterns, this problem is avoided altogether.
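Assuming an ideal power-law (gamma) response, the projected intensity relates to the input value as
\[
	I_{\textrm{out}} \propto I_{\textrm{in}}^{\gamma} \quad ,
\]
so with $\gamma = 1.6$, pre-mapping the input as $I_{\textrm{in}} \mapsto I_{\textrm{in}}^{1/1.6}$ in the graphics card approximately cancels the nonlinearity, making the combined response close to linear.
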
\subsection{Cameras}
These are high resolution 9~MPx industrial CCD color cameras. While color information is usually not necessary in structured light, it enables us to capture a full color texture of the scanned object. In the program code, a white balance is applied to the camera images, which was chosen ad hoc to approximately match the color profile of the projector. To capture truly accurate colors, a rigorous color calibration would have to be performed.

\subsection{Rotation stage}
This is a so-called micro-rotation stage, commonly used in high precision photonic research and production. A larger diameter plate was attached to it. The rotation stage has a stepper motor which drives a worm-gear, giving high precision and very high repeatability. Note that the rotation stage does not have an optical encoder; it is reset to 0 degrees in software at each program start. The motor controller can be configured for different levels of microstepping and motor current. A higher motor current provides more torque and less risk of missed steps. The load on the plate should not exceed 20~kg, and should be centered around the rotation axis. Objects can be stabilized on the plate using e.g.\ modeling clay.

\subsection{Calibration target}
A calibration target is also part of the scanner. It was produced by printing a checkerboard in vector format and gluing it onto a thick piece of float glass using spray adhesive. The target is asymmetrical, which is necessary to uniquely match chessboard corners in both cameras. The calibration target was designed to fill the space available for scan objects. If you need a smaller scan area, a smaller calibration target would be beneficial. In order to use a different chessboard, the field size and count parameters in the GUI configuration file (\texttt{{\textasciitilde}/.config/DTU/seema-scanner.conf}) need to be changed. Also note the minimum focus distance of the projector and cameras.
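The file is a plain-text key--value file; as an illustration, the relevant entries might look as follows (the key names and values below are hypothetical -- consult the actual file for the authoritative names):
\begin{verbatim}
[calibration]
; Hypothetical keys; check seema-scanner.conf for the real names.
checkerCountX=22      ; number of inner chessboard corners, horizontal
checkerCountY=13      ; number of inner chessboard corners, vertical
checkerSize=10.0      ; side length of one chessboard field in mm
\end{verbatim}
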
\section{Software components}
The SeeMaLab 3D scanner has a full graphical user interface for calibration and scanning. The output from this software is a number of color point clouds in the PLY format, along with a Meshlab alignment project file (file suffix \texttt{.aln}) which contains orientation information as provided from the rotation stage parameters. This allows the user to import the point clouds for further processing in Meshlab, e.g.\ to produce a full mesh model of the surface. The rotation axis is determined during calibration, which means that usually no manual or algorithm-assisted alignment of partial surfaces is necessary.
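The \texttt{.aln} project file itself is plain text; a minimal sketch of a two-scan project could look as follows (file names and transforms are illustrative -- each mesh entry carries a $4 \times 4$ rigid transform, here the identity and a rotation about the vertical axis):
\begin{verbatim}
2
scan0.ply
#
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
scan1.ply
#
0.866 -0.500 0 0
0.500  0.866 0 0
0.000  0.000 1 0
0      0     0 1
0
\end{verbatim}
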
To get fine-grained control over the scan procedure, the user can modify the source code of the GUI application, or use the supplied Matlab wrappers. These wrappers provide basic functionality to capture images with the cameras, project a specific pattern with the projector, or rotate the rotation stage to a specific position. Using these components, a full structured light scanner can be implemented in Matlab with full design freedom.

% ...
\section{\texttt{RotationStage} Class}
Here a C++ abstraction for the Newmark motion control API was implemented. The C API essentially sends serial commands over serial-over-USB, and full documentation is provided on the Newmark website. An important thing to consider is the latency of many of these calls. Specifically, reading and writing ``hardware settings'' such as microstep levels and motor current takes a considerable amount of time. The motor controller's inherent positional unit is the ``number of microsteps''. This can be converted to an angular position, $\alpha$, by means of the following formula:
\[
	\alpha = \frac{\textrm{XPOS} \cdot 1.8}{\textrm{MS} \cdot 72} \quad ,
\]
where XPOS is the rotation controller's position value, $1.8$ is the number of degrees per step on the motor axis, MS is the current microstep setting, and $72$ is the worm-gear ratio. The \texttt{RotationStage} class interface abstracts away from this and lets you rotate to a specific angle between $0$ and $360$ degrees using the shortest direction.
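As an illustration of the bookkeeping involved, a minimal C++ sketch of these conversions and the shortest-path logic could look as follows (function and constant names are hypothetical, not the actual class interface):
\begin{verbatim}
#include <cmath>

// Hypothetical constants; MS = 16 is just an example setting.
const double stepAngle  = 1.8;   // degrees per full motor step
const double gearRatio  = 72.0;  // worm-gear reduction
const double microSteps = 16.0;  // current microstep setting (MS)

// Controller position (XPOS, in microsteps) to angle in degrees.
double xposToDegrees(long xpos) {
    return (xpos * stepAngle) / (microSteps * gearRatio);
}

// Angle in degrees to controller position, rounded to whole microsteps.
long degreesToXpos(double alpha) {
    return std::lround(alpha * microSteps * gearRatio / stepAngle);
}

// Signed shortest rotation from 'current' to 'target' (both in [0, 360)),
// result in [-180, 180).
double shortestRotation(double current, double target) {
    return std::fmod(target - current + 540.0, 360.0) - 180.0;
}
\end{verbatim}
Note that with $\textrm{MS} = 16$, one microstep corresponds to $1.8 / (16 \cdot 72) \approx 0.0016$ degrees, which gives an idea of the angular resolution of the stage.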

\chapter{Practical scanning}
Please be very careful with this very expensive equipment, and be considerate: do not misplace any parts, and do not borrow any components of the scanner hardware.
The following guide explains the steps involved in the calibration and acquisition of a $360^\circ$ scan of an object.

% ...

\section{Reconstructing a mesh surface}
Multiple point clouds can be merged into a single watertight mesh representation using Meshlab. Meshlab is available on the scanner computer, but it is also freely available for download on multiple platforms. The basic steps involved in merging and reconstructing are outlined below. The input data will consist of one or more sets of point clouds acquired with the SeeMaLab GUI. Note that if multiple object poses are desired (for complex geometries/blind spots, etc.), it is recommended to close and restart the GUI for each pose, to clear the captured sequences and memory.
\begin{enumerate}
	\item Load a set of point clouds by opening the \texttt{*.aln} file in Meshlab (``File $\rightarrow$ Open Project...''). See figure \ref{fig:meshlab0} for an illustration of one full set of scans loaded into Meshlab.
	\item The PLY files contain XYZ and RGB values for all points. You will need to compute normals in order for the surface reconstruction to succeed. These normals can be estimated and consistently oriented by considering the camera viewpoint. Select each point cloud in turn, and for each choose ``Filters $\rightarrow$ Point Sets $\rightarrow$ Compute Normals for Point Set''. Make sure the ``Flip normals...'' checkbox is ticked (see fig. \ref{fig:meshlab1}). Suitable neighborhood values are on the order of $10$. You can visualize the estimated normals through the ``Render'' menu.
	\item After estimating normals for all point clouds in a set, choose ``Filters $\rightarrow$ Mesh Layer $\rightarrow$ Flatten Visible Layers''. Make sure to retain unreferenced vertices, because at this point none of the points will be part of any triangles (see figure \ref{fig:meshlab2}). This process will alter all coordinates by applying the pose transformation to all point clouds before merging them.
	\item Save the resulting merged point cloud. In the save dialog, make sure to include the normals in the output file (see fig. \ref{fig:meshlab3}); a sketch of the resulting file header is shown below.
\end{enumerate}
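For reference, the saved merged cloud is a standard PLY file; with normals and colors included, its header lists per-vertex properties along these lines (the vertex count is just an example):
\begin{verbatim}
ply
format binary_little_endian 1.0
element vertex 1200000
property float x
property float y
property float z
property float nx
property float ny
property float nz
property uchar red
property uchar green
property uchar blue
end_header
\end{verbatim}
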
\begin{figure}[H]