Simultaneous Localization and Mapping

In this article we make a brief comparison of the various flavors, categories, and methods of Simultaneous Localization and Mapping (SLAM).

In an article titled “How are Visual SLAM and LiDAR used in Robotic Navigation,” a high-level comparison is made between VSLAM and LiDAR SLAM, and the distinction is presented as whether the system uses a camera or a LiDAR as its sensor. While the type of sensor is one criterion, there are other (and perhaps better) classification criteria, such as whether the algorithm uses parametric or non-parametric methods, whether the system runs in real time, or whether it combines SLAM with a navigation stack. A further criterion is the intended use of the algorithm and its final application, which could be commercial robotics, self-driving cars, augmented reality, mixed reality, real estate showings, or robotic vacuums.
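
As an illustration only, these classification criteria could be captured in a small data structure. The field names, categories, and example entries below are my own invention for clarity, not something drawn from the cited article:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Sensor(Enum):
    CAMERA = auto()            # visual SLAM
    LIDAR = auto()             # LiDAR SLAM
    CAMERA_PLUS_IMU = auto()   # visual-inertial SLAM

@dataclass
class SlamSystem:
    """Toy taxonomy of a SLAM system using the criteria discussed above."""
    name: str
    sensor: Sensor
    parametric: bool           # e.g. EKF-based (parametric) vs. particle-based
    real_time: bool
    includes_navigation: bool  # bundled with path planning / a navigation stack
    application: str           # e.g. "robotic vacuum", "AR", "self-driving"

# Illustrative entries only, not claims about real products:
systems = [
    SlamSystem("ekf-vslam", Sensor.CAMERA, True, True, False, "AR"),
    SlamSystem("lidar-slam", Sensor.LIDAR, False, True, True, "robotic vacuum"),
]
```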

For example, Airbnb holds a patent describing a method by which homeowners can create a complete 3D presentation of the house they intend to rent out using their cell phone.

While 3D mapping is the term widely used, most SLAM methods are really 2.5D, meaning they are limited in the height (third) dimension. 3D mapping can also be combined with 2D navigation and path planning. A gyroscope or IMU can be used to enhance the robot's odometry readings, giving rise to terms such as VI-SLAM (Visual-Inertial SLAM) and VIO (Visual-Inertial Odometry). Visual odometry takes advantage of optical flow, structure from motion, or bundle adjustment. A single camera used alone does not provide reliable depth and performs poorly at loop closure; therefore, if an IMU, gyroscope, or optical tracking sensor (OTS) is not used, a second camera (stereo vision) is often required.
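
To make the visual-odometry idea concrete, here is a minimal monocular sketch using OpenCV: it tracks features with optical flow between two consecutive frames and recovers the relative camera rotation and (scale-ambiguous) translation. The file names and intrinsic matrix are placeholders, and the sketch omits the keyframing, scale handling, and loop closure a real system needs:

```python
import cv2
import numpy as np

# Placeholder camera intrinsics; a real system needs calibrated values.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file names
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect corners in the previous frame, then track them into the current frame.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01, minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)

good0 = p0[status.ravel() == 1]
good1 = p1[status.ravel() == 1]

# Essential matrix from the tracked correspondences, then the relative pose.
E, mask = cv2.findEssentialMat(good1, good0, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, good1, good0, K, mask=mask)

# t is only known up to scale with a single camera -- this is exactly why
# an IMU or a second (stereo) camera is usually added.
print("Rotation:\n", R)
print("Unit-scale translation:\n", t.ravel())
```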

Diving a little deeper, N. Karlsson et al. referred to Visual Simultaneous Localization and Mapping with the abbreviation vSLAM.

The journal Robotics and Autonomous Systems published an article in Volume 72 (pages 29-36) that treats SLAM and path planning in one holistic approach. In the abstract, the authors write: “In this paper, we aim to integrate the two attributes for application on a humanoid robot. The SLAM problem is solved with the EKF-SLAM algorithm whereas the path planning problem is tackled via Q-learning.” Q-learning is a model-free reinforcement learning algorithm that learns the value of taking an action in a particular state.
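
For readers unfamiliar with Q-learning, a minimal tabular sketch follows. The environment here is a hypothetical toy grid with a step() function I made up for illustration; it is not taken from the cited paper:

```python
import numpy as np

n_states, n_actions = 25, 4          # e.g. a 5x5 grid with moves up/down/left/right
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))

def step(state, action):
    """Hypothetical environment: returns (next_state, reward, done).
    A real robot would derive this from its map and sensors."""
    next_state = (state + [5, -5, 1, -1][action]) % n_states
    reward = 1.0 if next_state == n_states - 1 else -0.01
    return next_state, reward, next_state == n_states - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Core Q-learning update: move Q(s, a) toward reward + gamma * max_a' Q(s', a').
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state
```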

According to Florentin Woergoetter and Bernd Porr (2008), Scholarpedia, 3(3):1448, reinforcement learning (RL) is [defined as] learning by interacting with an environment. With every trial the agent receives either a positive or a negative reward, which influences its subsequent actions.

However, in his doctoral dissertation published in September 2010 (Princeton University), Umar Ali Syed introduces “Reinforcement Learning Without Rewards.” The main focus of that thesis is the design and analysis of reinforcement learning algorithms which do not require complete knowledge of the rewards.

In addition to partial observability of rewards, partial observability of the environment was discussed by Azizzadenesheli (UC Irvine, 2019) in his doctoral dissertation. In the abstract the author writes: “While most of the practical application of interests in RL are high dimensions, we study RL problems from theory to practice in high dimensional, structured, and partially observable settings.”

Combining Q-learning with SLAM, and relaxing the reward requirements on the learning side, may have paved the road for QSLAM.

In commercial applications, obstacle recognition, real-time operation, and a lightweight implementation were added as further requirements of Q-SLAM, giving it the meaning of a complete navigation stack.

Shalini et al. published an article on April 11, 2021 titled “Comparison of FAST SLAM 2.0 and QSLAM.” It presents a good comparison of the Fast SLAM algorithm with FAST SLAM 2.0 without diving into too much detail, offering a few sources and leaving the reader to find out more.

Collaborative AI Technology (CAIT), or collaborative SLAM, seems to be yet another classification criterion.

An article published in The Telegraph states: “We solved this issue using Hebbian learning, a weight-variable describing the strength of each relationship which evolves based on the outcome of previous states and rewards.” It then introduces yet another definition: “Quantum SLAM considers the complete state of the mechanical system at a given time, encoded as a phase point or a pure quantum state vector along with an equation of motion which carries the state forward in time.” It further boasts about robots being able to collaborate as a team.
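
The Telegraph piece does not spell out its Hebbian update, but the classical Hebbian rule it alludes to strengthens a connection weight in proportion to the co-activity of the two units it links. The sketch below is a generic illustration of that rule, with the learning rate and activity values chosen arbitrarily; the article's variant reportedly also folds in previous states and rewards, which is not shown here:

```python
import numpy as np

eta = 0.05                     # learning rate (arbitrary)
w = np.zeros((3, 3))           # weights between 3 "pre" and 3 "post" units

# Classical Hebbian rule: delta_w[i, j] = eta * pre[i] * post[j],
# i.e. connections between co-active units are strengthened.
for _ in range(100):
    pre = np.random.rand(3)    # stand-in activity/state signals
    post = np.random.rand(3)
    w += eta * np.outer(pre, post)
```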

I will continue to add to this article as I learn more about the topic, and will write a follow-up article about the future of Simultaneous Localization and Mapping and collaborative systems.

  1. Charles Pao, https://www.ceva-dsp.com/ourblog/how-are-visual-slam-and-lidar-used-in-robotic-navigation/
  2. https://en.wikipedia.org/wiki/Real-time_computing
  3. https://uspto.report/patent/grant/10,930,057
  4. https://ieeexplore.ieee.org/document/1570091
  5. https://www.journals.elsevier.com/robotics-and-autonomous-systems
  6. https://www.sciencedirect.com/science/article/abs/pii/S0921889015000858
  7. Shuhuan Wen et al.
  8. https://en.wikipedia.org/wiki/Q-learning
  9. http://www.scholarpedia.org/article/Reinforcement_learning
  10. https://www.cs.princeton.edu/research/techreps/TR-883-10
  11. https://escholarship.org/uc/item/4sx3s1ph
  12. https://patents.justia.com/patent/20200225673
  13. https://uspto.report/patent/app/20210089040
  14. https://www.telegraph.co.uk/business/business-reporter/ai-robots/