Unmanned aerial vehicle short-range laser radar (ladar) imager

The Army Research Laboratory (ARL) has been developing short-range ladar imagers for small unmanned ground vehicles (UGVs) and small unmanned aerial vehicles (UAVs). The current lidar testbed is based on a microelectromechanical systems (MEMS) mirror coupled to a low-cost pulsed erbium-doped fiber laser. Its parameters are: 5-6 Hz frame rate, image size of 256 (h) x 128 (v) pixels, field of view of 42° x 21°, 35-meter range, eye-safe operation, and 40 cm range resolution. Experience from driving experiments with small ground robots, together with the effort to extend the lidar to UAV applications, motivates further improvements to its performance.
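
As a rough sanity check on these figures, the short Python sketch below derives a few quantities they imply (pixel rate and per-pixel angular step); the derived numbers are back-of-envelope estimates, not published specifications.

```python
import math

# Brassboard parameters quoted in the text above.
FRAME_RATE_HZ = 6          # upper end of the 5-6 Hz frame rate
H_PIXELS, V_PIXELS = 256, 128
H_FOV_DEG, V_FOV_DEG = 42.0, 21.0

# Derived, back-of-envelope figures (not published specs).
pixels_per_frame = H_PIXELS * V_PIXELS
pixel_rate = pixels_per_frame * FRAME_RATE_HZ            # range measurements per second
h_ifov_mrad = math.radians(H_FOV_DEG / H_PIXELS) * 1e3   # per-pixel angular step, horizontal
v_ifov_mrad = math.radians(V_FOV_DEG / V_PIXELS) * 1e3   # per-pixel angular step, vertical

print(f"pixels per frame : {pixels_per_frame}")           # 32768
print(f"pixel rate       : {pixel_rate:.0f} pixels/s")    # ~197,000 at 6 Hz
print(f"angular step     : {h_ifov_mrad:.2f} x {v_ifov_mrad:.2f} mrad")  # ~2.86 x 2.86
```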

The data acquisition system can now capture three return pulses per pixel (the first, last, and strongest echoes), along with elapsed time, operating parameters, and data from the inertial navigation system. The authors describe the subsystems added to obtain eye-safety certification and report their performance. To meet the range requirements of UAV applications, the authors describe a new receiver circuit that increases the signal-to-noise ratio (SNR) several times over the existing design.
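
To make the per-pixel contents concrete, here is a hypothetical sketch of how such a record might be organized; the field names and structure are illustrative assumptions, since the text does not describe the actual ARL data format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PixelRecord:
    """Hypothetical layout of one pixel's acquisition record (illustrative only)."""
    first_return_m: float   # range of the first detected echo
    last_return_m: float    # range of the last detected echo
    peak_return_m: float    # range of the strongest echo
    peak_amplitude: float   # amplitude of the strongest echo
    elapsed_time_s: float   # time since the start of the collection

@dataclass
class FrameRecord:
    """One 256 x 128 frame plus the housekeeping data mentioned in the text."""
    pixels: List[PixelRecord]
    ins_position: tuple       # e.g. (lat, lon, alt) from the inertial navigation system
    ins_attitude: tuple       # e.g. (roll, pitch, yaw)
    operating_parameters: dict
```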

In conjunction with this work, the authors discuss the development of a low-capacitance, large-area detector, which may further improve the receiver's signal-to-noise ratio. Finally, the authors outline the process of building a test lidar to demonstrate that the range can be extended to 160 meters. If successful, the lidar will be integrated with a color camera and an inertial navigation system into a data-collection package to evaluate its imaging performance on a small drone.
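
As a rough indication of what the jump from 35 m to 160 m demands, the sketch below applies the usual 1/R² power falloff for an extended, beam-filling target; that scaling assumption is not stated in the article.

```python
# Back-of-envelope check of the range extension, assuming an extended
# (beam-filling) Lambertian target so received power falls off as 1/R^2.
current_range_m = 35.0
target_range_m = 160.0

snr_factor = (target_range_m / current_range_m) ** 2
print(f"required SNR improvement: ~{snr_factor:.0f}x")   # ~21x

# For a target smaller than the beam, received power falls off as 1/R^4
# instead, and the required factor would be roughly the square of this (~437x).
```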

650nm 10mW Red Light Laser Pointer Pocket

Security measures sometimes require continuous surveillance of government, military, and public sites. Borders, bridges, stadiums, airports, and similar locations are usually monitored by low-cost cameras. Their low-light performance can be enhanced with laser illuminators, but some operating scenarios require illumination at intensities so low that the light scattered by the scene falls below the sensitivity of the imaging detector. This article discusses a new type of high-gain optical image amplifier for such cases.

This method achieves time synchronization between the incoming and amplified signals to within 1 nanosecond. The technique allows the input signal to be amplified without matching its spectrum to a cavity mode, as long as the signal lies within the amplifier's spectral band. The authors measured the amplifier's performance experimentally: a gain of 40 dB and a field of view of 20 milliradians.
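
To put the quoted figures in more familiar terms, the short sketch below converts the 40 dB gain to a linear factor and the 1 ns synchronization accuracy to an equivalent round-trip range; the round-trip interpretation is an assumption, not stated in the article.

```python
# Figures quoted for the optical image amplifier.
gain_db = 40.0
sync_accuracy_s = 1e-9      # <= 1 ns synchronization of input and amplified signals
c = 2.998e8                 # speed of light, m/s

gain_linear = 10 ** (gain_db / 10)        # 40 dB -> factor of 10,000 in optical power
range_equiv_m = c * sync_accuracy_s / 2   # round-trip range equivalent of 1 ns

print(f"linear gain         : {gain_linear:.0f}x")                       # 10000x
print(f"1 ns corresponds to : {range_equiv_m * 100:.0f} cm round trip")  # ~15 cm
```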

The importance of 3D imagery is growing in application areas such as disaster response, digital terrain models, target recognition, and cultural heritage. Several methods have been proposed to generate texture-pixel (texel) images, including fusing lidar with digital imagery, but previous methods were limited to two texture images, or to multiple textures each paired with a single lidar dataset.

A remaining focus is the use of multiple texture images to create 3D models, and the article describes the process of building a true 3D image from them. The texture camera combines two-dimensional digital images with calibrated three-dimensional lidar data to form a texture image, and such images are captured from multiple viewpoints. Compared with purely 3D or purely 2D methods, the advantage of using multiple full-frame texture images is better registration, because the 3D points and the 2D texture are acquired jointly and therefore overlap. The position and orientation of each image are computed, mapped into a common coordinate system, and then refined. The proposed method uses bundle adjustment to jointly optimize the registration of the multiple images; because the parameters of different cameras interact only weakly, a sparse formulation of the adjustment is used, as sketched below. An example 3D model is presented and its numerical accuracy is analyzed.
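
The bundle-adjustment step can be illustrated with a minimal pose-refinement toy. The sketch below assumes known 3D lidar points, a simple pinhole camera with an assumed focal length, and synthetic data, and it refines only the camera poses by minimizing reprojection error with SciPy's least_squares; it is not the authors' implementation, which would also refine structure and exploit the sparse Jacobian of the problem (in SciPy this can be supplied through the jac_sparsity argument).

```python
"""Minimal sketch of pose refinement by bundle adjustment (illustrative toy)."""
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

f = 1000.0  # assumed focal length in pixels (not from the article)

def project(points_3d, rvec, tvec):
    """Project world points into a pinhole camera with pose (rvec, tvec)."""
    cam = Rotation.from_rotvec(rvec).apply(points_3d) + tvec
    return f * cam[:, :2] / cam[:, 2:3]

def residuals(params, points_3d, observations):
    """Stack 2D reprojection errors over all cameras."""
    n_cams = len(observations)
    poses = params.reshape(n_cams, 6)
    errs = []
    for pose, obs in zip(poses, observations):
        rvec, tvec = pose[:3], pose[3:]
        errs.append((project(points_3d, rvec, tvec) - obs).ravel())
    return np.concatenate(errs)

# Synthetic example: 20 "lidar" points seen by 3 cameras from different viewpoints.
rng = np.random.default_rng(0)
points_3d = rng.uniform(-1, 1, (20, 3)) + np.array([0, 0, 10])
true_poses = np.array([[0, 0.1 * i, 0, 0.5 * i, 0, 0] for i in range(3)], dtype=float)
observations = [project(points_3d, p[:3], p[3:]) + rng.normal(0, 0.5, (20, 2))
                for p in true_poses]

# Start from perturbed poses and jointly optimize all of them.
x0 = (true_poses + rng.normal(0, 0.05, true_poses.shape)).ravel()
result = least_squares(residuals, x0, args=(points_3d, observations))
print("refined poses:\n", result.x.reshape(3, 6).round(3))
```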