
Automatica

Volume 50, Issue 7, July 2014, Pages 1835-1842

Brief paper
Non-vector space approach for nanoscale motion control

https://doi.org/10.1016/j.automatica.2014.04.018

Abstract

With the advancement of nanotechnology, it is now possible to manipulate structures at the nanoscale with various nanomanipulation tools such as scanning probe microscopes. To achieve successful manipulations, precise motion control is required, especially for objects with sizes ranging from the subnanometer level to several nanometers. To address this issue, this paper presents an image based non-vector space control approach. By considering the images obtained from the microscopes as sets, the dynamics of the system can be formulated in the space of sets. Since this space lacks the linear structure of a vector space, the method is called non-vector space control. With the dynamics in the non-vector space, we formulate the stabilization problem and design the controller. The stabilization controller is tested with images obtained by atomic force microscopes, and the results verify the proposed theory. The method presented in this paper does not rely on external sensors for position feedback. Moreover, unlike traditional image based control methods, we do not need to extract features from images and track them during the control process. Finally, the control precision can be as good as the imaging resolution. The approach presented in this paper can also be extended to other systems whose states can be represented as sets.

Introduction

Scanning probe microscopes (SPMs), such as the atomic force microscope (AFM) and the scanning tunneling microscope (STM), are powerful imaging tools at the nanoscale. Recently, SPMs, especially AFMs, have been widely utilized for nanomanipulation, using a sharp tip at the end of a probe as a nanoscale robotic manipulator to mechanically push, pull, or cut structures (Requicha, 2003). With the help of an augmented reality system (Song, Xi, Yang, Lai, & Qu, 2010), the AFM has been employed for quantitative cell analysis (Fung et al., 2008), automated nanomanufacturing (Lai et al., 2009), and nanosensor fabrication (Chen, Xi, Lai, Fung, & Yang, 2010).

Accurate point-to-point position control, or motion control, at the nanoscale is a critical requirement for SPM based nanomanipulation because it relies on precisely moving the probe’s tip from one position to a desired one. For example, in AFM based nano-sensor fabrication, carbon nanotubes were pushed to a desired position by a nano-manipulator to form photodetectors, and the position accuracy of the manipulation had to be within 10 nm to allow integration with a nano-antenna that has a gap of 30 nm (Chen et al., 2012). Likewise, an SPM based manipulator was utilized to modify DNA molecules, which demands nanoscale resolution because their diameters are smaller than 5 nm (Zhang et al., 2008).

Although the imaging resolution of SPMs can reach the subnanometer level (Requicha, 2003), it is challenging, if not impossible, to achieve such precision in nanoscale motion control due to the spatial uncertainty of the probe’s tip. The main reason for this deficiency is the piezoelectric actuation used in SPM systems. The inherent nonlinearities of piezo actuators, such as hysteresis, creep, vibration, and thermal drift, make position control within one nanometer extremely difficult (Croft, Shed, & Devasia, 2001). In addition, modeling errors, including parameter variation, unmodeled dynamics, and coupling effects, add further difficulty to precise position control (Devasia, Eleftheriou, & Reza Moheimani, 2007).

Generally, researchers address the nanoscale motion control problem using closed-loop control with position feedback from external sensors, which can achieve a high feedback rate (Devasia et al., 2007). There are two potential issues with such approaches. The first is inaccurate feedback from the position sensors: they cannot provide the tip’s true position because they can only be attached to the piezo actuators rather than to the tip itself. The second issue lies in the control methods. Most methods try to eliminate the adverse effects through feedforward compensation based on model inversion (Clayton, Tien, Leang, Zou, & Devasia, 2009). This approach, however, requires model identification, and the model may also be time varying. Recent efforts combine feedback and feedforward control with advanced techniques such as robust or adaptive control (Abramovitch, Andersson, Pao, & Schitter, 2007). Nevertheless, the performance of such controllers is still limited by the feedback sensor.

Different from the traditional approaches, we propose an image based closed-loop control method that eliminates external position sensors. In fact, the tip can be considered a single-pixel camera with two translational degrees of freedom. By moving the tip locally within a small area, a local scan image can be obtained (Liu et al., 2008). Since the image is obtained from the local scan, it accurately reflects the tip’s true position. If a desired local scan image around a desired tip position is given, a controller can then be designed to steer the tip to the desired position based on the image feedback.

The image based control method belongs to the literature on visual servoing, which uses vision to control the motion of a mechanical system. In traditional image based visual servoing, prominent features are first extracted from the image, and a controller is then designed to make the vector of feature positions converge to a desired value (Chaumette & Hutchinson, 2006). Two issues arise with this feature based vector control method. On the one hand, robust feature extraction and tracking are difficult in natural environments (Marchand & Chaumette, 2005). On the other hand, feature extraction suffers from information loss because only the feature information is used for control.
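For reference, the standard feature based servo law from this literature can be summarized as follows; the notation is the conventional one from Chaumette and Hutchinson (2006) and is given here only for comparison, not as this paper’s formulation:

    e(t) = s(t) - s^*,    \dot{e} = L_s \, v,    v = -\lambda \, L_s^{+} \, e,

where s is the vector of extracted feature coordinates, s^* its desired value, L_s the interaction (image Jacobian) matrix, L_s^{+} its Moore–Penrose pseudoinverse, v the camera velocity, and \lambda > 0 a gain. The dependence on the extracted feature vector s is precisely what the non-vector space approach avoids.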

Recently, direct or featureless visual servoing methods have been proposed to address these two issues. Such methods design the controllers directly on all the image intensities instead of on features extracted from the image. Examples include the kernel based method (Kallem, Dewan, Swensen, Hager, & Cowan, 2007), the sum-of-squared-difference method (Collewet & Marchand, 2011), and the mutual information method (Dame & Marchand, 2011).

Different from the above direct visual servoing methods, we present a non-vector space control method in this paper. The general idea is to form a set from an image and to formulate the image dynamics in the space of sets. This space is called the non-vector space because it lacks the linear structure of a vector space. Based on the image dynamics, a controller can be designed directly on the image sets. Initial results for the non-vector space controller have been reported in Zhao, Song, Xi, and Lai (2011) and Zhao et al. (2012).
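To make the idea concrete before the formal development in Section 2, a gray-scale image can be turned into a set by, for example, collecting every pixel’s coordinates together with its intensity, and two such sets can be compared with the Hausdorff distance. The representation below is an illustrative choice, not necessarily the exact construction used later in the paper:

    K = { (x, y, I(x, y)) : (x, y) \in \Omega } \subset \mathbb{R}^3,

    d_H(K, \hat{K}) = \max\{ \sup_{a \in K} \inf_{b \in \hat{K}} \|a - b\|, \ \sup_{b \in \hat{K}} \inf_{a \in K} \|a - b\| \},

where \Omega is the pixel grid of the local scan, I(x, y) is the measured intensity (or height) at pixel (x, y), and \hat{K} is the desired image set. A non-vector space controller then acts on K directly rather than on a feature vector extracted from it.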

The non-vector space control originates from a general framework called mutational analysis for set evolution (Aubin, 1998). Mutational analysis provides a natural way to describe various physical phenomena because objects such as shapes and images are essentially sets. Since its introduction, mutational analysis has been applied to image segmentation (Lorenz, 2001), visual servoing (Doyen, 1995), and surveillance networks (Goradia, Xi, Cen, & Mutka, 2005).

Visual servoing using mutational analysis was first proposed in Doyen (1995). Nevertheless, possibly due to its abstract nature, no further extensions have been pursued since. In this paper, we extend those results and apply the method to nanoscale motion control. Three major extensions are carried out. First, we establish a general framework for non-vector space control. Second, the original formulation only deals with binary images, while gray-scale images are considered in this paper. Third, we apply the theory to nanoscale motion control.

Fig. 1 shows the schematic for the image based nanoscale motion control in the non-vector space with the AFM as an example. A desired image set corresponding to the desired tip position is first given. Based on the current image feedback, the non-vector space controller generates a control signal to drive the tip to a new position. An updated current image is obtained at the new position, and the same process is repeated until the tip reaches the desired position.
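A minimal sketch of this closed loop is given below in Python. The functions local_scan, move_tip, and control_law stand in for the AFM scanning interface and the controller designed in Section 3; they are hypothetical placeholders, and the image sets are assumed to be stored as N x 3 arrays of (row, column, intensity) points.

    import numpy as np
    from scipy.spatial.distance import cdist

    def hausdorff_distance(K, K_hat):
        # Hausdorff distance between two image sets given as N x 3 point arrays.
        D = cdist(K, K_hat)
        return max(D.min(axis=1).max(), D.min(axis=0).max())

    def servo_to_desired(local_scan, move_tip, control_law, K_hat, tol, max_iters=1000):
        # Closed loop of Fig. 1: local scan -> compare with desired set -> move tip.
        for _ in range(max_iters):
            K = local_scan()                       # image set around the current tip position
            if hausdorff_distance(K, K_hat) < tol:
                return True                        # tip has reached the desired position
            u = control_law(K, K_hat)              # control signal computed from the two sets
            move_tip(u)                            # command the piezo actuators
        return False                               # did not converge within max_iters

Passing the scanning and actuation routines in as arguments keeps the sketch independent of any particular AFM driver.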

Since images are used for nanoscale motion control in this paper, we also briefly review image based methods for applications in micro/nano environments. Clayton and Devasia have conducted extensive research on enabling high-speed SPMs by using images of standard calibration samples to compensate for dynamic effects (Clayton & Devasia, 2005, 2007, 2009). However, they focus on accurate and fast imaging, whereas we emphasize precise position control for nanomanipulation. Another closely related work uses direct visual servoing for automated microassembly with an optical microscope (Tamadazte, Le-Fort Piat, & Marchand, 2012). Nevertheless, the non-vector space approach differs from these vector space methods, and the characteristics of optical microscopes differ from those of SPMs.

The major contributions of this paper can be summarized in three aspects. First, image based motion control is proposed for SPM systems to improve precision. This approach excludes external position sensors, which reduces the cost of the system and mitigates measurement noise. Second, a general framework for the stabilization problem in the non-vector space is presented. The framework can also be employed to stabilize other systems whose states can be represented as sets. Third, we apply the non-vector space control method to nanoscale motion control, which, unlike traditional image based control methods, does not require feature extraction and tracking.

The rest of this paper is organized as follows. First, the dynamics in the non-vector space are introduced with tools from mutational analysis in Section 2. Then, the stabilization problem in the non-vector space is formulated in Section 3, where the stabilizing controller is designed. Finally, testing results using AFM images are given in Section 4 to validate the theory.

Section snippets

Dynamics in the non-vector space

Before examining the motion control problem with the non-vector space approach, we first need to formulate the dynamics. For SPM systems, the governing equation for the probe’s tip motion can be modeled as a differential equation in the vector space. If the local scan image is considered as a set, then this set evolves with the tip’s movement. In other words, the differential equation for the tip motion induces the set evolution, which can be considered as the dynamics in the space of image sets. …
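As an illustration of how the tip dynamics induce the set dynamics, consider the simplest case where the image set simply translates with the tip; the notation below is a simplified sketch in the spirit of mutational analysis (Aubin, 1998), not the paper’s full derivation:

    \dot{x}(t) = u(t),    K(t) = { p + x(t) - x(0) : p \in K(0) },

so the image set is transported by the same velocity field that drives the tip (up to a sign fixed by the frame convention). In mutational notation this can be written as a mutation equation, \mathring{K}(t) \ni f(\cdot, u(t)) with f(p, u) = u, meaning that the reachable set of the field f started from K(t) approximates K(t + h) to first order in the Hausdorff distance.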

Stabilization control in the non-vector space

With the dynamics modeled in the non-vector space, we formulate the stabilization problem and design the stabilizing controller in this section.
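The structure of the argument, developed in full in this section, can be indicated with a generic sketch: a Lyapunov-like functional is built from a set distance between the current and desired image sets, and the control is chosen so that this functional decreases along the set evolution. The specific functional and controller used in the paper may differ; the following is only illustrative:

    V(K) = (1/2) d(K, \hat{K})^2,    choose u such that \dot{V}(K(t)) \le -\alpha V(K(t)),  \alpha > 0,

where d(\cdot, \cdot) is a distance on the space of sets (for instance the Hausdorff distance), \hat{K} is the desired image set, and the derivative of V along the set evolution is understood in the mutational sense. Decrease of V then drives K(t) toward \hat{K}, that is, the tip toward the desired position.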

Testing results

The general strategy for nanoscale motion control using the non-vector space approach is as follows (Zhao et al., 2011). First, a large area of interest is scanned to obtain a large image I. We specify a goal tip position in I, which may be, for example, the center of a cell. Then, we choose a small rectangular patch from image I, centered at the goal tip position, as the desired image set K̂. After that, the system performs a local scan around its current tip position to obtain the …
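A short sketch of this preparation step is given below; the array-based image access, the patch half-size, and the (row, column) goal coordinates are illustrative assumptions rather than values from the paper. The returned (row, column, intensity) points play the role of the desired set K̂ used by the control loop sketched earlier.

    import numpy as np

    def desired_image_set(I, goal_rc, half_size):
        # I: large pre-scanned image as a 2-D array of intensities (or heights)
        # goal_rc: (row, col) of the goal tip position inside I, e.g. a cell center
        # half_size: half the side length of the rectangular patch, in pixels
        r, c = goal_rc
        patch = I[r - half_size : r + half_size + 1,
                  c - half_size : c + half_size + 1]
        # Represent the gray-scale patch as a set of (row, col, intensity) points.
        rows, cols = np.indices(patch.shape)
        return np.column_stack((rows.ravel(), cols.ravel(), patch.ravel()))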

Conclusions

In this paper, a non-vector space approach for nanoscale motion control is presented. The control method formulates the system dynamics in the space of sets. Due to the lack of linear structure in such a space, new tools from mutational analysis are employed. Lyapunov theory can also be extended to this space. Based on the dynamics and the Lyapunov theory, the stabilization problem is formulated and a stabilizing controller is designed for a general system. The designed controller is …

References (34)

  • Hongzhi Chen et al. Development of infrared detectors using single carbon-nanotube-based field-effect transistors. IEEE Transactions on Nanotechnology (2010).
  • Luc Doyen. Shape Lyapunov functions and stabilization of reachable tubes of control problems. Journal of Mathematical Analysis and Applications (1994).
  • E. Marchand et al. Feature tracking for visual servoing purposes. Robotics and Autonomous Systems (2005).
  • Daniel Y. Abramovitch, Sean B. Andersson, Lucy Y. Pao, & Georg Schitter (2007). A tutorial on the mechanisms,...
  • Jean-Pierre Aubin. Mutational and Morphological Analysis: Tools for Shape Evolution and Morphogenesis (1998).
  • Francois Chaumette et al. Visual servo control, Part I: Basic approaches. IEEE Robotics & Automation Magazine (2006).
  • Hongzhi Chen et al. Gate dependent photo-responses of carbon nanotube field effect phototransistors. Nanotechnology (2012).
  • Garrett M. Clayton et al. Image-based control of dynamic effects in scanning tunneling microscopes. Nanotechnology (2005).
  • Garrett M. Clayton et al. Iterative image-based modeling and control for higher scanning probe microscope performance. The Review of Scientific Instruments (2007).
  • Garrett M. Clayton et al. Conditions for image-based identification of SPM-nanopositioner dynamics. IEEE/ASME Transactions on Mechatronics (2009).
  • Garrett M. Clayton et al. A review of feedforward control approaches in nanopositioning for high-speed SPM. ASME Journal of Dynamic Systems, Measurement, and Control (2009).
  • Christophe Collewet et al. Photometric visual servoing. IEEE Transactions on Robotics (2011).
  • D. Croft et al. Creep, hysteresis, and vibration compensation for piezoactuators: atomic force microscopy application. ASME Journal of Dynamic Systems, Measurement, and Control (2001).
  • Amaury Dame et al. Mutual information-based visual servoing. IEEE Transactions on Robotics (2011).
  • Michael C. Delfour et al. Shapes and Geometries: Metrics, Analysis, Differential Calculus, and Optimization (2011).
  • Santosh Devasia et al. A survey of control issues in nanopositioning. IEEE Transactions on Control Systems Technology (2007).
  • Luc Doyen. Mutational equations for shapes and vision-based control. Journal of Mathematical Imaging and Vision (1995).

    Jianguo Zhao received the B.E. degree in Mechanical Engineering from Harbin Institute of Technology, Harbin, China, in 2005 and the M.E. degree in Mechatronic Engineering from Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, China, in 2007. He is currently working toward the Ph.D. degree with the Robotics and Automation Laboratory, Michigan State University, East Lansing, MI, USA. His research interests include bio-inspired robotics, dynamics and control, visual servoing, compressive sensing, and cyber physical systems.

    Bo Song received the B.S. degree in Mechanical Engineering from Dalian University of Technology, Dalian, China, in 2005, and his M.S. degree in Electrical Engineering from University of Science and Technology of China, Hefei, China, in 2009. He is currently working toward the Ph.D. degree in the Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI. His current research interests include micro/nanorobotics and systems, micro/nanomanufacturing, nanomechanics, biomechanics, imaging and characterization in nanoscale, compressive sensing and control with limited information.

    Ning Xi received the D.Sc. degree in Systems Science and Mathematics from Washington University in St. Louis, St. Louis, MO, USA in 1993 and the B.S. degree in Electrical Engineering from the Beijing University of Aeronautics and Astronautics, Beijing, China. He is the University Distinguished Professor and John D. Ryder Professor of Electrical and Computer Engineering with Michigan State University, East Lansing, MI, USA. His research interests include robotics, manufacturing automation, micro/nanomanufacturing, nano sensors and devices, and intelligent control and systems.

    Liang Sun received the B.E. and the M.E. degrees from Harbin Institute of Technology and the Ph.D. degree from Beihang University. He works at Beihang University and was a visiting scholar at Michigan State University. His research interests include nonlinear dynamics and control, robot control, spacecraft dynamics and control.

    Hongzhi Chen received the B.Eng. degree in Information Engineering from the Guangdong University of Technology, Guangzhou, China, in 2005, the M.Sc. degree in Electronics from Queen’s University Belfast, Belfast, UK, in 2006, and the Ph.D. degree in Electrical Engineering from Michigan State University, East Lansing, MI, USA, in 2012. He currently works as an engineer at Intel Corporation. His current research interests include nanoelectronics, nanophotonics, sensors, micro/nanofabrication and manufacturing, scanning probe microscopy, device characterization, MEMS/NEMS, and micro/nanorobotics and systems.

    Yunyi Jia received his M.S. in Control Theory and Control Engineering from South China University of Technology in 2008 and B.S. in Automation from National University of Defense Technology in 2005. He is currently pursuing his Ph.D. degree in the Department of Electrical and Computer Engineering at Michigan State University. His research interests include robotics, teleoperation, multi-robot systems, and human–robot interactions.

    This work is partially supported by NSF Grant No. IIS-0713346 and ONR Grant Nos. N00014-07-1-0935 and N00014-04-1-0799. The material in this paper was partially presented at the 50th IEEE Conference on Decision and Control (CDC) and European Control Conference, December 12–15, 2011, Orlando, FL. This paper was recommended for publication in revised form by Associate Editor Huaguang Zhang under the direction of Editor Toshiharu Sugie.
