Current journal: arXiv - CS - Robotics
  • Multi-agent Motion Planning for Dense and Dynamic Environments via Deep Reinforcement Learning
    arXiv.cs.RO Pub Date : 2020-01-18
    Samaneh Hosseini Semnani; Hugh Liu; Michael Everett; Anton de Ruiter; Jonathan P. How

    This paper introduces a hybrid algorithm of deep reinforcement learning (RL) and Force-based motion planning (FMP) to solve the distributed motion planning problem in dense and dynamic environments. Individually, RL and FMP algorithms each have their own limitations. FMP is not able to produce time-optimal paths, and existing RL solutions are not able to produce collision-free paths in dense environments. Therefore, we first tried improving the performance of recent RL approaches by introducing a new reward function that not only eliminates the requirement of a pre-supervised learning (SL) step but also decreases the chance of collision in crowded environments. This improved performance, but many failure cases remained. We therefore developed a hybrid approach that leverages the simpler FMP approach in stuck, simple, and high-risk cases, and continues using RL for normal cases in which FMP cannot produce an optimal path. We also extend the GA3C-CADRL algorithm to 3D environments. Simulation results show that the proposed algorithm outperforms both the deep RL and FMP algorithms, producing up to 50% more successful scenarios than deep RL and requiring up to 75% less extra time to reach the goal than FMP.
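    The switching rule described above can be pictured with a short sketch. The state fields, thresholds, and the rl_policy/fmp_policy callables below are hypothetical stand-ins for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def hybrid_action(state, rl_policy, fmp_policy,
                      risk_dist=0.5, stuck_speed=0.05):
        """Choose between an RL policy and a force-based planner (FMP).

        Hypothetical switching rule: fall back to FMP when the agent is stuck
        (very low speed) or in a high-risk situation (an obstacle closer than
        risk_dist); otherwise use the RL policy for time-efficient motion.
        """
        nearest_obstacle = np.min(state["obstacle_distances"])
        speed = np.linalg.norm(state["velocity"])
        if nearest_obstacle < risk_dist or speed < stuck_speed:
            return fmp_policy(state)   # simple, collision-averse behaviour
        return rl_policy(state)        # time-efficient behaviour in normal cases
    ```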

    Updated: 2020-01-22
  • A Transfer Learning Approach to Cross-Modal Object Recognition: From Visual Observation to Robotic Haptic Exploration
    arXiv.cs.RO Pub Date : 2020-01-18
    Pietro Falco; Shuang Lu; Ciro Natale; Salvatore Pirozzi; Dongheui Lee

    In this work, we introduce the problem of cross-modal visuo-tactile object recognition with robotic active exploration. By this term, we mean that the robot observes a set of objects with visual perception and, later on, is able to recognize such objects using only tactile exploration, without having touched any object before. In machine learning terminology, our application has a visual training set and a tactile test set, or vice versa. To tackle this problem, we propose an approach consisting of four steps: finding a visuo-tactile common representation, defining a suitable set of features, transferring the features across the domains, and classifying the objects. We show the results of our approach using a set of 15 objects, collecting 40 visual examples and five tactile examples for each object. The proposed approach achieves an accuracy of 94.7%, which is comparable with the accuracy of the monomodal case, i.e., when using visual data as both training set and test set. Moreover, it performs well compared to the human ability, which we roughly estimated by carrying out an experiment with ten participants.

    Updated: 2020-01-22
  • Effects of sparse rewards of different magnitudes in the speed of learning of model-based actor critic methods
    arXiv.cs.RO Pub Date : 2020-01-18
    Juan Vargas; Lazar Andjelic; Amir Barati Farimani

    Actor-critic methods with sparse rewards in model-based deep reinforcement learning typically require a deterministic binary reward function that reflects only two possible outcomes: whether, at each step, the goal has been achieved or not. Our hypothesis is that we can influence an agent to learn faster by applying an external environmental pressure during training, which adversely impacts its ability to get higher rewards. As such, we deviate from the classical paradigm of sparse rewards and add a uniformly sampled reward value to the baseline reward to show that (1) the sample efficiency of the training process can be correlated to the adversity experienced during training, (2) it is possible to achieve higher performance in less time and with fewer resources, (3) we can reduce the performance variability experienced from seed to seed, (4) there is a maximum point after which more pressure will not generate better results, and (5) random positive incentives have an adverse effect when using a negative reward strategy, making an agent under those conditions learn poorly and more slowly. These results have been shown to be valid for Deep Deterministic Policy Gradients using Hindsight Experience Replay in a well-known MuJoCo environment, but we argue that they could be generalized to other methods and environments as well.
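    As a rough illustration of the reward modification described above, a wrapper like the following adds a uniformly sampled term to a sparse binary reward. The 0/-1 baseline and the sampling range are assumptions for illustration, not values from the paper.

    ```python
    import numpy as np

    def pressured_reward(goal_reached, low=-0.1, high=0.0, rng=np.random):
        """Sparse baseline reward (0 on success, -1 otherwise) plus a uniformly
        sampled term acting as a constant environmental pressure during training."""
        base = 0.0 if goal_reached else -1.0
        return base + rng.uniform(low, high)
    ```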

    Updated: 2020-01-22
  • Gradient Surgery for Multi-Task Learning
    arXiv.cs.RO Pub Date : 2020-01-19
    Tianhe Yu; Saurabh Kumar; Abhishek Gupta; Sergey Levine; Karol Hausman; Chelsea Finn

    While deep learning and deep reinforcement learning (RL) systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge. Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks to enable more efficient learning. However, the multi-task setting presents a number of optimization challenges, making it difficult to realize large efficiency gains compared to learning tasks independently. The reasons why multi-task learning is so challenging compared to single-task learning are not fully understood. In this work, we identify a set of three conditions of the multi-task optimization landscape that cause detrimental gradient interference, and develop a simple yet general approach for avoiding such interference between task gradients. We propose a form of gradient surgery that projects a task's gradient onto the normal plane of the gradient of any other task that has a conflicting gradient. On a series of challenging multi-task supervised and multi-task RL problems, this approach leads to substantial gains in efficiency and performance. Further, it is model-agnostic and can be combined with previously-proposed multi-task architectures for enhanced performance.
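    The projection step described in the abstract can be sketched in a few lines of NumPy: whenever two task gradients conflict (negative inner product), one is projected onto the normal plane of the other. This is a minimal illustration of the idea, not the authors' released implementation.

    ```python
    import numpy as np

    def gradient_surgery(task_grads):
        """Project each task gradient onto the normal plane of any other task
        gradient it conflicts with (negative dot product)."""
        projected = [g.astype(float).copy() for g in task_grads]
        for i, g_i in enumerate(projected):
            for j, g_j in enumerate(task_grads):
                if i == j:
                    continue
                dot = np.dot(g_i, g_j)
                if dot < 0.0:  # conflicting gradients: remove the component along g_j
                    g_i -= dot / (np.dot(g_j, g_j) + 1e-12) * g_j
        return projected
    ```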

    Updated: 2020-01-22
  • Optimization-Based On-Road Path Planning for Articulated Vehicles
    arXiv.cs.RO Pub Date : 2020-01-19
    Rui Oliveira; Oskar Ljungqvist; Pedro F. Lima; Bo Wahlberg

    Maneuvering an articulated vehicle on narrow road stretches is often a challenging task for a human driver. Unless the vehicle is accurately steered, parts of the vehicle's bodies may leave its assigned drive lane, resulting in an increased risk of collision with surrounding traffic. In this work, an optimization-based path-planning algorithm is proposed targeting on-road driving scenarios for articulated vehicles composed of a tractor and a trailer. To this end, we model the tractor-trailer vehicle in a road-aligned coordinate frame suited for on-road planning. Based on driving heuristics, a set of different optimization objectives is proposed, with the overall goal of designing a path planner that computes paths which minimize the off-track of the area swept by the vehicle bodies, while remaining on the road and avoiding collision with obstacles. The proposed optimization-based path-planning algorithm, together with the different optimization objectives, is evaluated and analyzed in simulations on a set of complicated and practically relevant on-road planning scenarios using the most challenging tractor-trailer dimensions.

    Updated: 2020-01-22
  • RGB-D Odometry and SLAM
    arXiv.cs.RO Pub Date : 2020-01-19
    Javier Civera; Seong Hun Lee

    The emergence of modern RGB-D sensors has had a significant impact on many application fields, including robotics, augmented reality (AR) and 3D scanning. They are low-cost, low-power, and compact alternatives to traditional range sensors such as LiDAR. Moreover, unlike RGB cameras, RGB-D sensors provide the additional depth information that removes the need for frame-by-frame triangulation for 3D scene reconstruction. These merits have made them very popular in mobile robotics and AR, where it is of great interest to estimate ego-motion and 3D scene structure. Such spatial understanding can enable robots to navigate autonomously without collisions and allow users to insert virtual entities consistent with the image stream. In this chapter, we review common formulations of odometry and Simultaneous Localization and Mapping (known by its acronym SLAM) using RGB-D stream input. The two topics are closely related, as the former aims to track the incremental camera motion with respect to a local map of the scene, and the latter to jointly estimate the camera trajectory and the global map with consistency. In both cases, the standard approaches minimize a cost function using nonlinear optimization techniques. This chapter consists of three main parts: In the first part, we introduce the basic concepts of odometry and SLAM and motivate the use of RGB-D sensors. We also give mathematical preliminaries relevant to most odometry and SLAM algorithms. In the second part, we detail the three main components of SLAM systems: camera pose tracking, scene mapping and loop closing. For each component, we describe different approaches proposed in the literature. In the final part, we provide a brief discussion of advanced research topics with references to the state of the art.

    Updated: 2020-01-22
  • Reinforcement Learning with Probabilistically Complete Exploration
    arXiv.cs.RO Pub Date : 2020-01-20
    Philippe Morere; Gilad Francis; Tom Blau; Fabio Ramos

    Balancing exploration and exploitation remains a key challenge in reinforcement learning (RL). State-of-the-art RL algorithms suffer from high sample complexity, particularly in the sparse reward case, where they can do no better than to explore in all directions until the first positive rewards are found. To mitigate this, we propose Rapidly Randomly-exploring Reinforcement Learning (R3L). We formulate exploration as a search problem and leverage widely-used planning algorithms such as Rapidly-exploring Random Tree (RRT) to find initial solutions. These solutions are used as demonstrations to initialize a policy, then refined by a generic RL algorithm, leading to faster and more stable convergence. We provide theoretical guarantees of R3L exploration finding successful solutions, as well as bounds for its sampling complexity. We experimentally demonstrate the method outperforms classic and intrinsic exploration techniques, requiring only a fraction of exploration samples and achieving better asymptotic performance.

    Updated: 2020-01-22
  • BARNet: Bilinear Attention Network with Adaptive Receptive Field for Surgical Instrument Segmentation
    arXiv.cs.RO Pub Date : 2020-01-20
    Zhen-Liang Ni; Gui-Bin Bian; Guan-An Wang; Xiao-Hu Zhou; Zeng-Guang Hou; Xiao-Liang Xie; Zhen Li; Yu-Han Wang

    Surgical instrument segmentation is extremely important for computer-assisted surgery. Unlike common object segmentation, it is more challenging due to the large illumination and scale variation caused by the special surgical scenes. In this paper, we propose a novel bilinear attention network with an adaptive receptive field to address these two challenges. For the illumination variation, the bilinear attention module can capture second-order statistics to encode global contexts and semantic dependencies between local pixels. With them, semantic features in challenging areas can be inferred from their neighbors and the distinction between various semantics can be boosted. For the scale variation, our adaptive receptive field module aggregates multi-scale features and automatically fuses them with different weights. Specifically, it encodes the semantic relationship between channels to emphasize feature maps with appropriate scales, changing the receptive field of subsequent convolutions. The proposed network achieves the best performance of 97.47% mean IOU on Cata7 and takes first place on EndoVis 2017, overtaking the second-ranking method by 10.10% IOU.

    Updated: 2020-01-22
  • Extent-Compatible Control Barrier Functions
    arXiv.cs.RO Pub Date : 2020-01-20
    Mohit Srinivasan; Matthew Abate; Gustav Nilsson; Samuel Coogan

    Safety requirements in dynamical systems are commonly enforced with set invariance constraints over a safe region of the state space. Control barrier functions, which are Lyapunov-like functions for guaranteeing set invariance, are an effective tool to enforce such constraints and guarantee safety when the system is represented as a point in the state space. In this paper, we introduce extent-compatible control barrier functions as a tool to enforce safety for the system including its volume (extent) in the physical world. In order to implement the extent-compatible control barrier functions framework, a sum-of-squares based optimization program is proposed. Since sum-of-squares programs can be computationally prohibitive, we additionally introduce a sampling based method in order to retain the computational advantage of a traditional barrier function based quadratic program controller. We prove that the proposed sampling based controller retains the guarantee for safety. Simulation and robotic implementation results are also provided.
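    To make the quadratic-program controller mentioned above concrete, here is a minimal sketch for a single-integrator robot kept outside a circular obstacle. The one-constraint QP admits a closed-form solution; this illustrates the standard CBF-QP idea only, not the extent-compatible formulation proposed in the paper.

    ```python
    import numpy as np

    def cbf_qp_single_integrator(x, u_nom, obstacle, radius, alpha=1.0):
        """Solve min ||u - u_nom||^2  s.t.  dh/dt >= -alpha * h(x),
        with h(x) = ||x - obstacle||^2 - radius^2 and dynamics x_dot = u."""
        h = np.dot(x - obstacle, x - obstacle) - radius ** 2
        a = 2.0 * (x - obstacle)        # gradient of h, so dh/dt = a . u
        b = -alpha * h                  # linear constraint: a . u >= b
        slack = b - np.dot(a, u_nom)
        if slack <= 0.0:                # nominal input is already safe
            return u_nom
        return u_nom + slack * a / (np.dot(a, a) + 1e-12)

    # Example: a nominal controller drives straight toward the origin through an obstacle.
    x = np.array([2.0, 0.1])
    u_safe = cbf_qp_single_integrator(x, u_nom=-x, obstacle=np.array([1.0, 0.0]), radius=0.5)
    ```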

    Updated: 2020-01-22
  • Counter-example Guided Learning of Bounds on Environment Behavior
    arXiv.cs.RO Pub Date : 2020-01-20
    Yuxiao Chen; Sumanth Dathathri; Tung Phan-Minh; Richard M. Murray

    There is a growing interest in building autonomous systems that interact with complex environments. The difficulty associated with obtaining an accurate model of such environments poses a challenge to the task of assessing and guaranteeing the system's performance. We present a data-driven solution that allows a system to be evaluated for specification conformance without an accurate model of the environment. The approach begins by learning, from data and the specification of the system's desired behavior, a conservative reactive bound on the environment's actions that captures its possible behaviors with high probability. This bound is then used to assist verification; if verification fails under this bound, the algorithm returns counter-examples that show how failure occurs and then uses these to refine the bound. We demonstrate the applicability of the approach through two case studies: i) verifying controllers for a toy multi-robot system, and ii) verifying an instance of human-robot interaction during a lane-change maneuver given real-world human driving data.

    Updated: 2020-01-22
  • Autocamera Calibration for traffic surveillance cameras with wide angle lenses
    arXiv.cs.RO Pub Date : 2020-01-20
    Aman Gajendra Jain; Nicolas Saunier

    We propose a method for automatic calibration of a traffic surveillance camera with wide-angle lenses. Video footage of a few minutes is sufficient for the entire calibration process to take place. This method takes the height of the camera above the ground plane as the only user input to overcome the scale ambiguity. The calibration is performed in two stages: (1) intrinsic calibration and (2) extrinsic calibration. Intrinsic calibration is achieved by assuming an equidistant fisheye distortion and an ideal camera model. Extrinsic calibration is accomplished by estimating the two vanishing points, on the ground plane, from the motion of vehicles at perpendicular intersections. The first stage of intrinsic calibration is also valid for thermal cameras. Experiments have been conducted to demonstrate the effectiveness of this approach on visible as well as thermal cameras. Index Terms: fish-eye, calibration, thermal camera, intelligent transportation systems, vanishing points
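    The equidistant fisheye assumption in the intrinsic stage maps the angle of incidence linearly to image radius, r = f * theta. A small sketch of converting a fisheye image point to its pinhole equivalent under that model (the focal length and principal point below are assumed values, not the paper's):

    ```python
    import numpy as np

    def undistort_equidistant(u, v, f, cx, cy):
        """Map an image point from the equidistant fisheye model (r = f * theta)
        to the ideal pinhole model (r = f * tan(theta))."""
        x, y = u - cx, v - cy
        r = np.hypot(x, y)
        if r < 1e-9:
            return u, v
        theta = r / f                    # angle of incidence
        scale = (f * np.tan(theta)) / r  # radius ratio between the two models
        return cx + scale * x, cy + scale * y

    # Example with assumed intrinsics: f = 300 px, principal point (640, 360).
    print(undistort_equidistant(900.0, 500.0, f=300.0, cx=640.0, cy=360.0))
    ```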

    Updated: 2020-01-22
  • Lyceum: An efficient and scalable ecosystem for robot learning
    arXiv.cs.RO Pub Date : 2020-01-21
    Colin Summers; Kendall Lowrey; Aravind Rajeswaran; Siddhartha Srinivasa; Emanuel Todorov

    We introduce Lyceum, a high-performance computational ecosystem for robot learning. Lyceum is built on top of the Julia programming language and the MuJoCo physics simulator, combining the ease-of-use of a high-level programming language with the performance of native C. In addition, Lyceum has a straightforward API to support parallel computation across multiple cores and machines. Overall, depending on the complexity of the environment, Lyceum is 5-30x faster compared to other popular abstractions like OpenAI's Gym and DeepMind's dm-control. This substantially reduces training time for various reinforcement learning algorithms; and is also fast enough to support real-time model predictive control through MuJoCo. The code, tutorials, and demonstration videos can be found at: www.lyceum.ml.

    Updated: 2020-01-22
  • From Planes to Corners: Multi-Purpose Primitive Detection in Unorganized 3D Point Clouds
    arXiv.cs.RO Pub Date : 2020-01-21
    Christiane Sommer; Yumin Sun; Leonidas Guibas; Daniel Cremers; Tolga Birdal

    We propose a new method for segmentation-free joint estimation of orthogonal planes, their intersection lines, relationship graph and corners lying at the intersection of three orthogonal planes. Such unified scene exploration under orthogonality allows for multitudes of applications such as semantic plane detection or local and global scan alignment, which in turn can aid robot localization or grasping tasks. Our two-stage pipeline involves a rough yet joint estimation of orthogonal planes followed by a subsequent joint refinement of plane parameters respecting their orthogonality relations. We form a graph of these primitives, paving the way to the extraction of further reliable features: lines and corners. Our experiments demonstrate the validity of our approach in numerous scenarios from wall detection to 6D tracking, both on synthetic and real data.

    Updated: 2020-01-22
  • Instance Segmentation of Visible and Occluded Regions for Finding and Picking Target from a Pile of Objects
    arXiv.cs.RO Pub Date : 2020-01-21
    Kentaro Wada; Shingo Kitagawa; Kei Okada; Masayuki Inaba

    We present a robotic system for picking a target from a pile of objects that is capable of finding and grasping the target object by removing obstacles in the appropriate order. The fundamental idea is to segment instances with both visible and occluded masks, which we call 'instance occlusion segmentation'. To achieve this, we extend an existing instance segmentation model with a novel 'relook' architecture, in which the model explicitly learns the inter-instance relationship. Also, by using image synthesis, we make the system capable of handling new objects without human annotations. The experimental results show the effectiveness of the relook architecture when compared with a conventional model, and of the image synthesis when compared to a human-annotated dataset. We also demonstrate the capability of our system to pick a target in a cluttered environment with a real robot.

    Updated: 2020-01-22
  • Joint Learning of Instance and Semantic Segmentation for Robotic Pick-and-Place with Heavy Occlusions in Clutter
    arXiv.cs.RO Pub Date : 2020-01-21
    Kentaro Wada; Kei Okada; Masayuki Inaba

    We present joint learning of instance and semantic segmentation for visible and occluded region masks. Sharing the feature extractor with instance occlusion segmentation, we introduce semantic occlusion segmentation into the instance segmentation model. This joint learning fuses the instance- and image-level reasoning of the mask prediction on the different segmentation tasks, which was missing in previous work that learned instance segmentation only (instance-only). In the experiments, we evaluated the proposed joint learning against instance-only learning on the test dataset. We also applied the joint learning model to two different types of robotic pick-and-place tasks (random and target picking) and evaluated its effectiveness in achieving real-world robotic tasks.

    Updated: 2020-01-22
  • Design, Analysis & Prototyping of a Semi-Automated Staircase-Climbing Rehabilitation Robot
    arXiv.cs.RO Pub Date : 2018-01-10
    Siddharth Jha; Himanshu Chaudhary; Swapnil Satardey; Piyush Kumar; Ankush Roy; Aditya Deshmukh; Dishank Bansal; Gopabandhu Hota; Saurabh Mirani

    In this paper, we describe the mechanical design, system overview, integration and control techniques associated with SKALA, a unique large-sized robot for carrying a person with physical disabilities, up and down staircases. As a regular wheelchair is unable to perform such a maneuver, the system functions as a non-conventional wheelchair with several intelligent features. We describe the unique mechanical design and the design choices associated with it. We showcase the embedded control architecture that allows for several different modes of teleoperation, all of which have been described in detail. We further investigate the architecture associated with the autonomous operation of the system.

    Updated: 2020-01-22
  • Sensor Aware Lidar Odometry
    arXiv.cs.RO Pub Date : 2019-07-22
    Dmitri Kovalenko; Mikhail Korobkin; Andrey Minin

    A lidar odometry method, integrating into the computation the knowledge about the physics of the sensor, is proposed. A model of measurement error enables higher precision in estimation of the point normal covariance. Adjacent laser beams are used in an outlier correspondence rejection scheme. The method ranks on the KITTI leaderboard with 1.37% positioning error, and achieves 3.67% on an internal dataset in comparison with the LOAM method.

    Updated: 2020-01-22
  • Bayesian Local Sampling-based Planning
    arXiv.cs.RO Pub Date : 2019-09-08
    Tin Lai; Philippe Morere; Fabio Ramos; Gilad Francis

    Sampling-based planning is the predominant paradigm for motion planning in robotics. Most sampling-based planners use a global random sampling scheme to guarantee probabilistic completeness. However, such schemes are often inefficient, as the samples are drawn from a global proposal distribution and do not exploit relevant local structures. Local sampling-based motion planners, on the other hand, take sequential random-walk decisions to sample valid trajectories in configuration space. However, current approaches do not adapt their strategies according to the successes and failures of past samples. In this work, we introduce a local sampling-based motion planner with a Bayesian learning scheme for modelling an adaptive sampling proposal distribution. The proposal distribution is sequentially updated based on previous samples, consequently shaping it according to local obstacles and constraints in the configuration space. Thus, through learning from past observed outcomes, we maximise the likelihood of sampling in regions that have a higher probability of forming trajectories within narrow passages. We provide the formulation of a sample-efficient distribution, along with a theoretical foundation for sequentially updating this distribution. We demonstrate experimentally that by using a Bayesian proposal distribution, a solution is found faster, requiring fewer samples, and without any noticeable performance overhead.
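    One way to picture the adaptive proposal is a Beta posterior per candidate step direction, updated from the successes and failures of past local samples and sampled via Thompson sampling. This is a toy sketch of that general idea, not the paper's exact proposal distribution.

    ```python
    import numpy as np

    class DirectionalProposal:
        """Toy adaptive proposal over a discrete set of step directions. Each
        direction keeps a Beta(alpha, beta) posterior over its probability of
        producing a valid (collision-free) extension."""

        def __init__(self, n_directions=8):
            self.alpha = np.ones(n_directions)
            self.beta = np.ones(n_directions)

        def sample_direction(self, rng=np.random):
            # Thompson sampling: draw a success probability per direction and
            # pick the direction with the highest draw.
            return int(np.argmax(rng.beta(self.alpha, self.beta)))

        def update(self, direction, success):
            if success:
                self.alpha[direction] += 1.0
            else:
                self.beta[direction] += 1.0
    ```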

    Updated: 2020-01-22
  • DeepTIO: A Deep Thermal-Inertial Odometry with Visual Hallucination
    arXiv.cs.RO Pub Date : 2019-09-16
    Muhamad Risqi U. Saputra; Pedro P. B. de Gusmao; Chris Xiaoxuan Lu; Yasin Almalioglu; Stefano Rosa; Changhao Chen; Johan Wahlström; Wei Wang; Andrew Markham; Niki Trigoni

    Visual odometry shows excellent performance in a wide range of environments. However, in visually-denied scenarios (e.g. heavy smoke or darkness), pose estimates degrade or even fail. Thermal cameras are commonly used for perception and inspection when the environment has low visibility. However, their use in odometry estimation is hampered by the lack of robust visual features. In part, this is a result of the sensor measuring the ambient temperature profile rather than scene appearance and geometry. To overcome this issue, we propose a Deep Neural Network model for thermal-inertial odometry (DeepTIO) that incorporates a visual hallucination network to provide the thermal network with complementary information. The hallucination network is taught to predict fake visual features from thermal images by using a Huber loss. We also employ selective fusion to attentively fuse the features from three different modalities, i.e., thermal, hallucination, and inertial features. Extensive experiments are performed on hand-held and mobile robot data in benign and smoke-filled environments, showing the efficacy of the proposed model.
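    For reference, the Huber loss used to train the hallucination network is quadratic for small residuals and linear for large ones; a minimal NumPy version follows (the threshold delta is the usual free parameter, not a value taken from the paper).

    ```python
    import numpy as np

    def huber_loss(pred, target, delta=1.0):
        """Huber loss: quadratic for |error| <= delta, linear beyond it."""
        err = np.abs(pred - target)
        quadratic = np.minimum(err, delta)
        linear = err - quadratic
        return np.mean(0.5 * quadratic ** 2 + delta * linear)
    ```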

    Updated: 2020-01-22
  • Robust Humanoid Contact Planning with Learned Zero- and One-Step Capturability Prediction
    arXiv.cs.RO Pub Date : 2019-09-19
    Yu-Chi Lin; Ludovic Righetti; Dmitry Berenson

    Humanoid robots maintain balance and navigate by controlling the contact wrenches applied to the environment. While it is possible to plan dynamically-feasible motion that applies appropriate wrenches using existing methods, a humanoid may also be affected by external disturbances. Existing systems typically rely on controllers to reactively recover from disturbances. However, such controllers may fail when the robot cannot reach contacts capable of rejecting a given disturbance. In this paper, we propose a search-based footstep planner which aims to maximize the probability of the robot successfully reaching the goal without falling as a result of a disturbance. The planner considers not only the poses of the planned contact sequence, but also alternative contacts near the planned contact sequence that can be used to recover from external disturbances. Although this additional consideration significantly increases the computation load, we train neural networks to efficiently predict multi-contact zero-step and one-step capturability, which allows the planner to generate robust contact sequences efficiently. Our results show that our approach generates footstep sequences that are more robust to external disturbances than a conventional footstep planner in four challenging scenarios.

    Updated: 2020-01-22
  • Modeling, Identification and Control of Model Jet Engines for Jet Powered Robotics
    arXiv.cs.RO Pub Date : 2019-09-29
    Giuseppe L'Erario; Luca Fiorio; Gabriele Nava; Fabio Bergonti; Hosameldin Awadalla Omer Mohamed; Silvio Traversaro; Daniele Pucci

    The paper contributes towards the modeling, identification, and control of model jet engines. We propose a nonlinear, second-order model to capture the governing dynamics of model jet engines. The model structure is identified by applying sparse identification of nonlinear dynamics, and the parameters of the model are then found via gray-box identification procedures. Once the model has been identified, we approach the control of the model jet engine by designing two control laws. The first is based on the classical Feedback Linearization technique, while the second is based on Sliding Mode control. The overall methodology has been verified by modeling, identifying and controlling two model jet engines, i.e. the P100-RX and P220-RXi developed by JetCat, which provide a maximum thrust of 100 N and 220 N, respectively.

    Updated: 2020-01-22
  • Sparse tree search optimality guarantees in POMDPs with continuous observation spaces
    arXiv.cs.RO Pub Date : 2019-10-10
    Michael H. Lim; Claire J. Tomlin; Zachary N. Sunberg

    Partially observable Markov decision processes (POMDPs) with continuous state and observation spaces have powerful flexibility for representing real-world decision and control problems but are notoriously difficult to solve. Recent online sampling-based algorithms that use observation likelihood weighting have shown unprecedented effectiveness in domains with continuous observation spaces. However, there has been no formal theoretical justification for this technique. This work offers such a justification, proving that a simplified algorithm, partially observable weighted sparse sampling (POWSS), will estimate Q-values accurately with high probability and can be made to perform arbitrarily close to the optimal solution by increasing computational power.
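    The observation likelihood weighting that the analysis covers can be illustrated with a small particle-weighting step: with continuous observations, simulated particles almost never reproduce the received observation exactly, so instead of rejecting them, each particle is weighted by the likelihood of that observation. The Gaussian observation model below is an assumption for illustration only.

    ```python
    import numpy as np

    def reweight_particles(particles, weights, observation, obs_fn, obs_std=0.1):
        """Weight each particle by the likelihood of the received observation
        under an assumed scalar Gaussian observation model, then renormalise."""
        predicted = np.array([obs_fn(p) for p in particles])
        likelihood = np.exp(-0.5 * ((observation - predicted) / obs_std) ** 2)
        new_weights = np.asarray(weights) * likelihood
        total = new_weights.sum()
        if total <= 0.0:
            return np.full(len(particles), 1.0 / len(particles))
        return new_weights / total
    ```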

    Updated: 2020-01-22
  • A Hybrid Compact Neural Architecture for Visual Place Recognition
    arXiv.cs.RO Pub Date : 2019-10-15
    Marvin Chancán; Luis Hernandez-Nunez; Ajay Narendra; Andrew B. Barron; Michael Milford

    State-of-the-art algorithms for visual place recognition, and related visual navigation systems, can be broadly split into two categories: computer-science-oriented models including deep learning or image retrieval-based techniques with minimal biological plausibility, and neuroscience-oriented dynamical networks that model temporal properties underlying spatial navigation in the brain. In this letter, we propose a new compact and high-performing place recognition model that bridges this divide for the first time. Our approach comprises two key neural models of these categories: (1) FlyNet, a compact, sparse two-layer neural network inspired by brain architectures of fruit flies, Drosophila melanogaster, and (2) a one-dimensional continuous attractor neural network (CANN). The resulting FlyNet+CANN network incorporates the compact pattern recognition capabilities of our FlyNet model with the powerful temporal filtering capabilities of an equally compact CANN, replicating entirely in a hybrid neural implementation the functionality that yields high performance in algorithmic localization approaches like SeqSLAM. We evaluate our model, and compare it to three state-of-the-art methods, on two benchmark real-world datasets with small viewpoint variations and extreme environmental changes - achieving 87% AUC results under day to night transitions compared to 60% for Multi-Process Fusion, 46% for LoST-X and 1% for SeqSLAM, while being 6.5, 310, and 1.5 times faster, respectively.

    Updated: 2020-01-22
  • Teaching Vehicles to Anticipate: A Systematic Study on Probabilistic Behavior Prediction using Large Data Sets
    arXiv.cs.RO Pub Date : 2019-10-17
    Florian Wirthmüller; Julian Schlechtriemen; Jochen Hipp; Manfred Reichert

    Observations of traffic participants and their environment enable humans to drive road vehicles safely. However, when being driven, there is a notable difference between having an inexperienced vs. an experienced driver. One may get the feeling that the latter anticipates what may happen in the next few moments and considers these foresights in their driving behavior. To make the driving style of automated vehicles comparable to a human driver in the sense of comfort and perceived safety, the aforementioned anticipation skills need to become a built-in feature of self-driving vehicles. This article provides a systematic comparison of methods and strategies to generate this intention for self-driving cars using machine learning techniques. To implement and test these algorithms we use a large data set collected over more than 30000 km of highway driving and containing approximately 40000 real-world driving situations. Moreover, we show that it is possible to reliably detect more than 47% of all lane changes on German highways 3 or more seconds in advance with a false positive rate of less than 1%. This enables us to predict the lateral position with a prediction horizon of 5 s with a median error of less than 0.21 m.

    Updated: 2020-01-22
  • Self-Supervised Learning of State Estimation for Manipulating Deformable Linear Objects
    arXiv.cs.RO Pub Date : 2019-11-14
    Mengyuan Yan; Yilin Zhu; Ning Jin; Jeannette Bohg

    We demonstrate model-based, visual robot manipulation of linear deformable objects. Our approach is based on a state-space representation of the physical system that the robot aims to control. This choice has multiple advantages, including the ease of incorporating physics priors in the dynamics model and perception model, and the ease of planning manipulation actions. In addition, physical states can naturally represent object instances of different appearances. Therefore, dynamics in the state space can be learned in one setting and directly used in other visually different settings. This is in contrast to dynamics learned in pixel space or latent space, where generalization to visual differences is not guaranteed. Challenges in taking the state-space approach are the estimation of the high-dimensional state of a deformable object from raw images, where annotations are very expensive on real data, and finding a dynamics model that is accurate, generalizable, and efficient to compute. We are the first to demonstrate self-supervised training of rope state estimation on real images, without requiring expensive annotations. This is achieved by our novel self-supervised learning objective, which is generalizable across a wide range of visual appearances. With estimated rope states, we train a fast and differentiable neural network dynamics model that encodes the physics of mass-spring systems. Our method has a higher accuracy in predicting future states compared to models that do not involve explicit state estimation and do not use any physics prior, while only using 3% of the training data. We also show that our approach achieves more efficient manipulation, both in simulation and on a real robot, when used within a model predictive controller.

    Updated: 2020-01-22
  • Blockchain-Powered Collaboration in Heterogeneous Swarms of Robots
    arXiv.cs.RO Pub Date : 2019-11-23
    Jorge Peña Queralta; Tomi Westerlund

    One of the key challenges in collaboration within heterogeneous multi-robot systems is the optimization of the amount and type of data to be shared between robots with different sensing capabilities and computational resources. In this paper, we present a novel approach to managing collaboration terms in heterogeneous multi-robot systems with blockchain technology. Leveraging the extensive research on consensus algorithms in the blockchain domain, we exploit key technologies from this field and integrate them for consensus in robotic systems. We propose the utilization of proof-of-work systems to obtain an online estimate of the available computational resources at different robots. Furthermore, we define smart contracts that integrate information about the environment from different robots in order to evaluate and rank the quality and accuracy of each robot's sensor data. This means that the key parameters involved in heterogeneous robotic collaboration are integrated within the blockchain and estimated at all robots equally, without explicitly sharing information about the robots' hardware or sensors. Trustability is based on the verification of data samples that are submitted to the blockchain within each data exchange transaction and validated by other robots operating in the same environment. Initial results are reported which show the viability of the concepts presented in this paper.
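    A proof-of-work puzzle of the kind used to gauge each robot's spare compute can be as simple as searching for a nonce whose hash has a given number of leading zeros; the SHA-256 sketch below only illustrates the mechanism and is not the scheme used in the paper.

    ```python
    import hashlib

    def proof_of_work(data: bytes, difficulty: int = 4) -> int:
        """Find a nonce such that sha256(data || nonce) starts with `difficulty`
        hex zeros; the rate at which such puzzles are solved indicates how much
        spare computational capacity a robot has."""
        prefix = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
            if digest.startswith(prefix):
                return nonce
            nonce += 1

    print(proof_of_work(b"robot-1 environment summary", difficulty=4))
    ```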

    Updated: 2020-01-22
  • Measurement Scheduling for Cooperative Localization in Resource-Constrained Conditions
    arXiv.cs.RO Pub Date : 2019-12-10
    Qi Yan; Li Jiang; Solmaz Kia

    This paper studies the measurement scheduling problem for a group of N mobile robots moving on a flat surface that are performing cooperative localization (CL). We consider a scenario in which, due to limited on-board resources such as battery life and communication bandwidth, only a given number of relative measurements per robot are allowed at the observation and update stage. Optimal selection of which teammates a robot should take relative measurements from, such that the updated joint localization uncertainty of the team is minimized, is an NP-hard problem. In this paper, we propose a suboptimal greedy approach that allows each robot to choose its landmark robots locally in polynomial time. Our method, unlike the known results in the literature, does not assume full observability of the CL algorithm. Moreover, it does not require inter-robot communication at the scheduling stage. That is, there is no need for the robots to collaborate to carry out the landmark robot selections. We discuss the application of our method in the context of a state-of-the-art decentralized CL algorithm and demonstrate its effectiveness through numerical simulations. Even though our solution does not come with rigorous performance guarantees, its low computational cost along with no communication requirement makes it an appealing solution for operations with resource-constrained robots.
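    The greedy selection can be pictured as each robot scoring candidate landmark robots by the drop in its position uncertainty and committing to the best one k times in a row. The EKF-style covariance update below is a simplified stand-in for the paper's criterion, not its actual implementation.

    ```python
    import numpy as np

    def greedy_landmark_selection(P, candidate_H, R, k=2):
        """Greedily pick k relative measurements (Jacobian rows in candidate_H)
        that most reduce trace(P) under an EKF-style covariance update."""
        P = P.copy()
        n = P.shape[0]
        chosen = []
        for _ in range(k):
            best, best_P, best_trace = None, None, np.inf
            for idx, H in enumerate(candidate_H):
                if idx in chosen:
                    continue
                H = np.atleast_2d(H)
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)
                P_new = (np.eye(n) - K @ H) @ P
                if np.trace(P_new) < best_trace:
                    best, best_P, best_trace = idx, P_new, np.trace(P_new)
            chosen.append(best)
            P = best_P  # commit the best update before the next pick
        return chosen
    ```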

    Updated: 2020-01-22
  • The Penetration of Internet of Things in Robotics: Towards a Web of Robotic Things
    arXiv.cs.RO Pub Date : 2020-01-15
    Andreas Kamilaris; Nicolo Botteghi

    As the Internet of Things (IoT) penetrates different domains and application areas, it has recently entered also the world of robotics. Robotics constitutes a modern and fast-evolving technology, increasingly being used in industrial, commercial and domestic settings. IoT, together with the Web of Things (WoT), could provide many benefits to robotic systems. Some of the benefits of IoT in robotics have been discussed in related work. This paper moves one step further, studying the actual current use of IoT in robotics through various real-world examples encountered through bibliographic research. The paper also examines the potential of WoT together with robotic systems, investigating which concepts, characteristics, architectures, hardware, software and communication methods of IoT are used in existing robotic systems, which sensors and actions are incorporated in IoT-based robots, as well as in which application areas. Finally, the current application of WoT in robotics is examined and discussed.

    Updated: 2020-01-17
  • Synergetic Reconstruction from 2D Pose and 3D Motion for Wide-Space Multi-Person Video Motion Capture in the Wild
    arXiv.cs.RO Pub Date : 2020-01-16
    Takuya Ohashi; Yosuke Ikegami; Yoshihiko Nakamura

    Although many studies have been made on markerless motion capture, it has not been applied to real sports or concerts. In this paper, we propose a markerless motion capture method with spatiotemporal accuracy and smoothness from multiple cameras, even in wide and multi-person environments. The key idea is predicting each person's 3D pose and determining a sufficiently small bounding box in the multi-camera images. This prediction and spatiotemporal filtering based on human skeletal structure eases 3D reconstruction of the person and yields accuracy. The accurate 3D reconstruction is then used to predict the bounding box of each camera image in the next frame. This is a feedback from 3D motion to 2D pose, and provides a synergetic effect to the total performance of video motion capture. We demonstrate the method using various datasets and a real sports field. The experimental results show that the mean per-joint position error was 31.6 mm and the percentage of correct parts was 99.3% with five people moving dynamically, while satisfying the range of motion. Video demonstrations, datasets, and additional materials are posted on our project page.

    Updated: 2020-01-17
  • Predicting Target Feature Configuration of Non-stationary Objects for Grasping with Image-Based Visual Servoing
    arXiv.cs.RO Pub Date : 2020-01-16
    Jesse Haviland; Feras Dayoub; Peter Corke

    In this paper we consider the problem of the final approach stage of closed-loop grasping where RGB-D cameras are no longer able to provide valid depth information. This is essential for grasping non-stationary objects; a situation where current robotic grasping controllers fail. We predict the image-plane coordinates of observed image features at the final grasp pose and use image-based visual servoing to guide the robot to that pose. Image-based visual servoing is a well established control technique that moves a camera in 3D space so as to drive the image-plane feature configuration to some goal state. In previous works the goal feature configuration is assumed to be known but for some applications this may not be feasible, if for example the motion is being performed for the first time with respect to a scene. Our proposed method provides robustness with respect to scene motion during the final phase of grasping as well as to errors in the robot kinematic control. We provide experimental results in the context of dynamic closed-loop grasping.

    Updated: 2020-01-17
  • Probabilistic 3D Multi-Object Tracking for Autonomous Driving
    arXiv.cs.RO Pub Date : 2020-01-16
    Hsu-kuang Chiu; Antonio Prioletti; Jie Li; Jeannette Bohg

    3D multi-object tracking is a key module in autonomous driving applications that provides a reliable dynamic representation of the world to the planning module. In this paper, we present our on-line tracking method, which made the first place in the NuScenes Tracking Challenge, held at the AI Driving Olympics Workshop at NeurIPS 2019. Our method estimates the object states by adopting a Kalman Filter. We initialize the state covariance as well as the process and observation noise covariance with statistics from the training set. We also use the stochastic information from the Kalman Filter in the data association step by measuring the Mahalanobis distance between the predicted object states and current object detections. Our experimental results on the NuScenes validation and test set show that our method outperforms the AB3DMOT baseline method by a large margin in the Average Multi-Object Tracking Accuracy (AMOTA) metric.
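    The data association step described above reduces to comparing each detection with each predicted track via the Mahalanobis distance under the filter's innovation covariance. The greedy assignment and gating threshold below are simplifying assumptions for illustration.

    ```python
    import numpy as np

    def mahalanobis(detection, prediction, S):
        """Mahalanobis distance between a detection and a predicted state,
        with S = H P H^T + R the Kalman innovation covariance."""
        diff = detection - prediction
        return float(np.sqrt(diff @ np.linalg.inv(S) @ diff))

    def associate(detections, predictions, S, gate=9.0):
        """Greedy association: each detection is matched to the closest
        predicted track, provided the distance falls inside the gate."""
        matches = []
        for i, det in enumerate(detections):
            dists = [mahalanobis(det, pred, S) for pred in predictions]
            j = int(np.argmin(dists))
            if dists[j] < gate:
                matches.append((i, j))
        return matches
    ```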

    Updated: 2020-01-17
  • A Markerless Deep Learning-based 6 Degrees of Freedom Pose Estimation for Mobile Robots using RGB Data
    arXiv.cs.RO Pub Date : 2020-01-16
    Linh Kästner; Daniel Dimitrov; Jens Lambrecht

    Augmented Reality has been subject to various integration efforts within industries due to its ability to enhance human-machine interaction and understanding. Neural networks have achieved remarkable results in areas of computer vision, which bear great potential to assist and facilitate an enhanced Augmented Reality experience. However, most neural networks are computationally intensive and demand huge processing power, and are thus not suitable for deployment on Augmented Reality devices. In this work we propose a method to deploy state-of-the-art neural networks for real-time 3D object localization on augmented reality devices. As a result, we provide a more automated method of calibrating the AR devices with mobile robotic systems. To accelerate the calibration process and enhance user experience, we focus on fast 2D detection approaches which extract the 3D pose of the object quickly and accurately using only 2D input. The results are implemented into an Augmented Reality application for intuitive robot control and sensor data visualization. For the 6D annotation of 2D images, we developed an annotation tool, which is, to our knowledge, the first open source tool to be available. We achieve feasible results which are generally applicable to any AR device, thus making this work promising for further research in combining high-demanding neural networks with Internet of Things devices.

    Updated: 2020-01-17
  • Probabilistic 3D Multilabel Real-time Mapping for Multi-object Manipulation
    arXiv.cs.RO Pub Date : 2020-01-16
    Kentaro Wada; Kei Okada; Masayuki Inaba

    Probabilistic 3D maps have been applied to object segmentation with multiple camera viewpoints; however, conventional methods lack real-time efficiency and the functionality of multilabel object mapping. In this paper, we propose a method to generate a three-dimensional map with multilabel occupancy in real time. Extending our previous work, in which only target label occupancy is mapped, we achieve multilabel object segmentation in a single looking-around action. We evaluate our method by testing segmentation accuracy with 39 different objects, and by applying it to a manipulation task of multiple objects in the experiments. Our mapping-based method outperforms the conventional projection-based method by 40-96% relative (12.6 mean $IU_{3d}$), and the robot successfully recognizes (86.9%) and manipulates multiple objects (60.7%) in an environment with heavy occlusions.

    Updated: 2020-01-17
  • Human Action Recognition and Assessment via Deep Neural Network Self-Organization
    arXiv.cs.RO Pub Date : 2020-01-04
    German I. Parisi

    The robust recognition and assessment of human actions are crucial in human-robot interaction (HRI) domains. While state-of-the-art models of action perception show remarkable results in large-scale action datasets, they mostly lack the flexibility, robustness, and scalability needed to operate in natural HRI scenarios which require the continuous acquisition of sensory information as well as the classification or assessment of human body patterns in real time. In this chapter, I introduce a set of hierarchical models for the learning and recognition of actions from depth maps and RGB images through the use of neural network self-organization. A particularity of these models is the use of growing self-organizing networks that quickly adapt to non-stationary distributions and implement dedicated mechanisms for continual learning from temporally correlated input.

    Updated: 2020-01-17
  • End-to-End Pixel-Based Deep Active Inference for Body Perception and Action
    arXiv.cs.RO Pub Date : 2019-12-28
    Cansu Sancaktar; Pablo Lanillos

    We present a pixel-based deep Active Inference algorithm (PixelAI) inspired by human body perception and successfully validated in robot body perception and action as a use case. Our algorithm combines the free energy principle from neuroscience, rooted in variational inference, with deep convolutional decoders to scale the algorithm to deal directly with image input and provide online adaptive inference. The approach enables the robot to perform 1) dynamical body estimation of its arm using only raw monocular camera images and 2) autonomous reaching to "imagined" arm poses in the visual space. We statistically analyzed the algorithm's performance in a simulated and a real Nao robot. Results show how the same algorithm deals with both perception and action, modelled as an inference optimization problem.

    Updated: 2020-01-17
  • Domain Independent Unsupervised Learning to grasp the Novel Objects
    arXiv.cs.RO Pub Date : 2020-01-09
    Siddhartha Vibhu Pharswan; Mohit Vohra; Ashish Kumar; Laxmidhar Behera

    One of the main challenges in vision-based grasping is the selection of feasible grasp regions while interacting with novel objects. Recent approaches exploit the power of convolutional neural networks (CNNs) to achieve accurate grasping at the cost of high computational power and time. In this paper, we present a novel unsupervised-learning-based algorithm for the selection of feasible grasp regions. Unsupervised learning infers patterns in a dataset without any external labels. We apply k-means clustering on the image plane to identify the grasp regions, followed by an axis assignment method. We define a novel concept of a Grasp Decide Index (GDI) to select the best grasp pose in the image plane. We have conducted several experiments in cluttered and isolated environments on standard objects of the Amazon Robotics Challenge 2017 and Amazon Picking Challenge 2016. We compare the results with prior learning-based approaches to validate the robustness and adaptive nature of our algorithm for a variety of novel objects in different domains.
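    The clustering step can be pictured with k-means applied to the foreground pixel coordinates of an object mask; the mask, the number of regions, and the use of scikit-learn below are assumptions for illustration, not the authors' code.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def grasp_region_centroids(object_mask, n_regions=3):
        """Cluster the pixel coordinates of a binary object mask into candidate
        grasp regions and return the cluster centroids as (row, col) points."""
        pixels = np.argwhere(object_mask > 0)            # (N, 2) pixel coordinates
        kmeans = KMeans(n_clusters=n_regions, n_init=10).fit(pixels)
        return kmeans.cluster_centers_

    # Example on a synthetic binary mask.
    mask = np.zeros((100, 100), dtype=np.uint8)
    mask[20:80, 40:60] = 1
    print(grasp_region_centroids(mask))
    ```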

    Updated: 2020-01-17
  • Establishing Human-Robot Trust through Music-Driven Robotic Emotion Prosody and Gesture
    arXiv.cs.RO Pub Date : 2020-01-11
    Richard Savery; Ryan Rose; Gil Weinberg

    As human-robot collaboration opportunities continue to expand, trust becomes ever more important for full engagement and utilization of robots. Affective trust, built on emotional relationship and interpersonal bonds, is particularly critical as it is more resilient to mistakes and increases the willingness to collaborate. In this paper we present a novel model built on music-driven emotional prosody and gestures that encourages the perception of a robotic identity, designed to avoid the uncanny valley. Symbolic musical phrases were generated and tagged with emotional information by human musicians. These phrases controlled a synthesis engine playing back pre-rendered audio samples generated through interpolation of phonemes and electronic instruments. Gestures were also driven by the symbolic phrases, encoding the emotion from the musical phrase to low degree-of-freedom movements. Through a user study we showed that our system was able to accurately portray a range of emotions to the user. We also showed with a significant result that our non-linguistic audio generation achieved an 8% higher mean of average trust than using a state-of-the-art text-to-speech system.

    Updated: 2020-01-17
  • Adversarially Guided Self-Play for Adopting Social Conventions
    arXiv.cs.RO Pub Date : 2020-01-16
    Mycal Tucker; Yilun Zhou; Julie Shah

    Robotic agents must adopt existing social conventions in order to be effective teammates. These social conventions, such as driving on the right or left side of the road, are arbitrary choices among optimal policies, but all agents on a successful team must use the same convention. Prior work has identified a method of combining self-play with paired input-output data gathered from existing agents in order to learn their social convention without interacting with them. We build upon this work by introducing a technique called Adversarial Self-Play (ASP) that uses adversarial training to shape the space of possible learned policies and substantially improves learning efficiency. ASP only requires the addition of unpaired data: a dataset of outputs produced by the social convention without associated inputs. Theoretical analysis reveals how ASP shapes the policy space and the circumstances (when behaviors are clustered or exhibit some other structure) under which it offers the greatest benefits. Empirical results across three domains confirm ASP's advantages: it produces models that more closely match the desired social convention when given as few as two paired datapoints.

    Updated: 2020-01-17
  • Convex Controller Synthesis for Robot Contact
    arXiv.cs.RO Pub Date : 2019-09-10
    Hung Pham; Quang-Cuong Pham

    Controlling contacts is truly challenging, and this has been a major hurdle to deploying industrial robots into unstructured/human-centric environments. More specifically, the main challenges are: (i) how to ensure stability at all times; (ii) how to satisfy task-specific performance specifications; (iii) how to achieve (i) and (ii) under environment uncertainty, robot parameters uncertainty, sensor and actuator time delays, external perturbations, etc. Here, we propose a new approach -- Convex Controller Synthesis (CCS) -- to tackle the above challenges based on robust control theory and convex optimization. In two physical interaction tasks -- robot hand guiding and sliding on surfaces with different and unknown stiffnesses -- we show that CCS controllers outperform their classical counterparts in an essential way.

    Updated: 2020-01-17
  • Path tracking control of self-reconfigurable robot hTetro with four differential drive units
    arXiv.cs.RO Pub Date : 2019-11-20
    Yuyao Shi; Mohan Rajesh Elara; Anh Vu Le; Veerajagadheswar Prabakaran; Kristin L. Wood

    The research interest in mobile robots with independent steering wheels has been increasing over recent years due to their high mobility and better payload capacity compared with systems using omnidirectional wheels. However, with more controllable degrees of freedom, almost all of the platforms include redundancy, which is modeled using the instantaneous center of rotation (ICR). This paper deals with a Tetris-inspired floor cleaning robot, hTetro, which consists of four interconnected differential-drive units, i.e., each module has a differential drive unit, which can steer individually. Differing from most other steerable wheeled mobile robots, the wheel arrangement of this robot changes because of its self-reconfigurability. In this paper, we propose a robust path tracking controller that can handle discontinuous trajectories and sudden orientation changes. Singularity problems are resolved on both the mechanical aspect and the control aspect. The controller is tested experimentally with the self-reconfigurable robotic platform hTetro, and results are discussed.

    Updated: 2020-01-17
  • Pose-Assisted Multi-Camera Collaboration for Active Object Tracking
    arXiv.cs.RO Pub Date : 2020-01-15
    Jing Li; Jing Xu; Fangwei Zhong; Xiangyu Kong; Yu Qiao; Yizhou Wang

    Active Object Tracking (AOT) is crucial to many vision-based applications, e.g., mobile robots and intelligent surveillance. However, there are a number of challenges when deploying active tracking in complex scenarios, e.g., the target is frequently occluded by obstacles. In this paper, we extend single-camera AOT to a multi-camera setting, where cameras track a target in a collaborative fashion. To achieve effective collaboration among cameras, we propose a novel Pose-Assisted Multi-Camera Collaboration System, which enables a camera to cooperate with the others by sharing camera poses for active object tracking. In the system, each camera is equipped with two controllers and a switcher: the vision-based controller tracks targets based on observed images; the pose-based controller moves the camera in accordance with the poses of the other cameras. At each step, the switcher decides which action to take from the two controllers according to the visibility of the target. The experimental results demonstrate that our system outperforms all the baselines and is capable of generalizing to unseen environments. The code and demo videos are available on our website https://sites.google.com/view/pose-assistedcollaboration.

    Updated: 2020-01-16
  • Direct Visual-Inertial Ego-Motion Estimation via Iterated Extended Kalman Filter
    arXiv.cs.RO Pub Date : 2020-01-15
    Shangkun Zhong; Pakpong Chirarattananon

    This letter proposes a reactive navigation strategy for recovering the altitude, translational velocity and orientation of Micro Aerial Vehicles. The main contribution lies in the direct and tight fusion of Inertial Measurement Unit (IMU) measurements with monocular feedback under an assumption of a single planar scene. An Iterated Extended Kalman Filter (IEKF) scheme is employed. The state prediction makes use of IMU readings while the state update relies directly on photometric feedback as measurements. Unlike feature-based methods, the photometric difference for the innovation term renders an inherent and robust data association process in a single step. The proposed approach is validated using real-world datasets. The results show that the proposed method offers better robustness, accuracy, and efficiency than a feature-based approach. Further investigation suggests that the accuracy of the flight velocity estimates from the proposed approach is comparable to those of two state-of-the-art Visual Inertial Systems (VINS) while the proposed framework is $\approx15-30$ times faster thanks to the omission of reconstruction and mapping.

    Updated: 2020-01-16
  • DGCM-Net: Dense Geometrical Correspondence Matching Network for Incremental Experience-based Robotic Grasping
    arXiv.cs.RO Pub Date : 2020-01-15
    Timothy Patten; Kiru Park; Markus Vincze

    This article presents a method for grasping novel objects by learning from experience. Successful attempts are remembered and then used to guide future grasps such that more reliable grasping is achieved over time. To generalise the learned experience to unseen objects, we introduce the dense geometric correspondence matching network (DGCM-Net). This applies metric learning to encode objects with similar geometry nearby in feature space. Retrieving relevant experience for an unseen object is thus a nearest neighbour search with the encoded feature maps. DGCM-Net also reconstructs 3D-3D correspondences using the view-dependent normalised object coordinate space to transform grasp configurations from retrieved samples to unseen objects. In comparison to baseline methods, our approach achieves an equivalent grasp success rate. However, the baselines are significantly improved when fusing the knowledge from experience with their grasp proposal strategy. Offline experiments with a grasping dataset highlight the capability to generalise within and between object classes as well as to improve success rate over time from increasing experience. Lastly, by learning task-relevant grasps, our approach can prioritise grasps that enable the functional use of objects.

    Update date: 2020-01-16
  • 3D Object Segmentation for Shelf Bin Picking by Humanoid with Deep Learning and Occupancy Voxel Grid Map
    arXiv.cs.RO Pub Date : 2020-01-15
    Kentaro Wada; Masaki Murooka; Kei Okada; Masayuki Inaba

    Picking objects in a narrow space such as a shelf bin is an important task for a humanoid to extract a target object from its environment. In such situations, however, there are many occlusions between the camera and the objects, which makes it difficult to segment the target object in three dimensions because of the lack of three-dimensional sensor inputs. We address this problem by accumulating segmentation results from multiple camera angles and generating a voxel model of the target object. Our approach consists of two components: the first is object probability prediction for the input image with convolutional networks, and the second is generation of a voxel grid map designed for object segmentation. We evaluated the method with a picking task experiment for target objects in narrow shelf bins. Our method generates dense 3D object segments even with occlusions, and the real robot successfully picked target objects from the narrow space.
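    A common way to accumulate per-view object-probability predictions into a voxel model is a per-voxel log-odds update; the sketch below illustrates that idea on a toy 4x4x4 grid with assumed probabilities, and is not necessarily the paper's exact fusion rule.

```python
import numpy as np

def accumulate(log_odds, voxel_idx, p_object):
    """Fuse a per-view object-probability prediction into a voxel grid
    using log-odds, so repeated observations from different camera
    angles reinforce (or suppress) each voxel."""
    p = np.clip(p_object, 1e-3, 1 - 1e-3)
    log_odds[voxel_idx] += np.log(p / (1 - p))
    return log_odds

grid = np.zeros((4, 4, 4))                 # log-odds, 0 = unknown
grid = accumulate(grid, (1, 2, 3), 0.9)    # view 1 says "object"
grid = accumulate(grid, (1, 2, 3), 0.8)    # view 2 agrees
prob = 1 / (1 + np.exp(-grid[1, 2, 3]))    # back to a probability
print(round(prob, 3))                      # ~0.973
```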

    Update date: 2020-01-16
  • Robotic Grasp Manipulation Using Evolutionary Computing and Deep Reinforcement Learning
    arXiv.cs.RO Pub Date : 2020-01-15
    Priya Shukla; Hitesh Kumar; G. C. Nandi

    Intelligent object manipulation for grasping is a challenging problem for robots. Unlike robots, humans almost immediately know how to manipulate objects for grasping, owing to learning over the years. An adult can grasp objects more skilfully than a child because of skills developed over the years; the absence of such learned skill in present-day robotic grasping keeps performance well below human benchmarks. In this paper we take up the challenge of developing learning-based pose estimation by decomposing the problem into position and orientation learning. More specifically, for grasp position estimation we explore three different methods: a Genetic Algorithm (GA) based optimization method that minimizes the error between calculated image points and the predicted end-effector (EE) position; a regression-based method (RM) in which collected data points of robot EE and image points are regressed with a linear model; and a PseudoInverse (PI) model formulated as a mapping matrix between robot EE positions and image points over several observations. Further, for grasp orientation learning, we develop a deep reinforcement learning (DRL) model which we name Grasp Deep Q-Network (GDQN) and benchmark our results against a Modified VGG16 (MVGG16). Rigorous experiments show that, due to its inherent capability of producing very high-quality solutions for optimization and search problems, the GA-based predictor performs much better than the other two models for position estimation. For orientation learning, results indicate that off-policy learning through GDQN outperforms MVGG16, since the GDQN architecture is specifically tailored to reinforcement learning. Based on our proposed architectures and algorithms, the robot is capable of grasping all rigid objects with regular shapes.
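    For the GA-based position estimator, the abstract only states that a mapping error between image points and EE positions is minimized; the sketch below shows a deliberately simplified GA (selection plus Gaussian mutation, no crossover) fitting a toy one-dimensional linear map, purely to illustrate the optimization style rather than the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy calibration data: image x-coordinate -> end-effector x-position,
# generated from a "true" map y = 0.5 * u + 2 plus noise.
u = rng.uniform(0, 100, 30)
y = 0.5 * u + 2 + rng.normal(0, 0.1, 30)

def fitness(pop):
    # Negative mean squared error of each candidate (a, b) map.
    pred = pop[:, :1] * u + pop[:, 1:2]
    return -np.mean((pred - y) ** 2, axis=1)

def ga(generations=200, size=50, sigma=0.1):
    pop = rng.uniform(-5, 5, (size, 2))            # candidate (a, b) pairs
    for _ in range(generations):
        f = fitness(pop)
        parents = pop[np.argsort(f)[-size // 2:]]  # keep the best half
        children = parents + rng.normal(0, sigma, parents.shape)
        pop = np.vstack([parents, children])       # elitism + mutation
    return pop[np.argmax(fitness(pop))]

print(ga())   # should approach (0.5, 2.0)
```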

    Update date: 2020-01-16
  • CIAO$^\star$: MPC-based Safe Motion Planning in Predictable Dynamic Environments
    arXiv.cs.RO Pub Date : 2020-01-15
    Tobias Schoels; Per Rutquist; Luigi Palmieri; Andrea Zanelli; Kai O. Arras; Moritz Diehl

    Robots have been operating in dynamic environments and shared workspaces for decades. Most optimization-based motion planning methods, however, do not consider the movement of other agents, e.g., humans or other robots, and therefore do not guarantee collision avoidance in such scenarios. This paper builds upon the Convex Inner ApprOximation (CIAO) method and proposes a motion planning algorithm that guarantees collision avoidance in predictable dynamic environments. Furthermore, it generalizes CIAO's free region concept to arbitrary norms and proposes a cost function to approximate time-optimal motion planning. The proposed method, CIAO$^\star$, finds kinodynamically feasible and collision-free trajectories for constrained robots using a model predictive control (MPC) framework and accounts for the predicted movement of other agents. The experimental evaluation shows that CIAO$^\star$ reaches close to time-optimal behavior.

    Update date: 2020-01-16
  • Offline Grid-Based Coverage path planning for guards in games
    arXiv.cs.RO Pub Date : 2020-01-15
    Wael Al Enezi; Clark Verbrugge

    Algorithmic approaches to exhaustive coverage have applications in video games, enabling automatic game-level exploration. Current designs use simple heuristics that frequently result in poor performance or exhibit unnatural behaviour. In this paper, we introduce a novel algorithm for covering a 2D polygonal area (with holes). We assume prior knowledge of the map layout and use a grid-based world representation. Experimental analysis over several scenarios, ranging from simple layouts to more complex maps used in actual games, shows good performance. This work serves as an initial step towards building a more efficient coverage path planning algorithm for non-player characters.
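    A grid-based coverage planner can be sketched as "repeatedly walk to the nearest uncovered free cell"; the code below implements that greedy baseline on a toy occupancy grid. It is only a stand-in for the paper's algorithm, whose details are not given in the abstract.

```python
from collections import deque

def coverage_path(grid, start):
    """Greedy coverage on a 2D occupancy grid (0 = free, 1 = obstacle):
    repeatedly BFS to the nearest unvisited free cell and append that
    path, until every reachable free cell has been covered."""
    rows, cols = len(grid), len(grid[0])
    free = {(r, c) for r in range(rows) for c in range(cols) if grid[r][c] == 0}
    visited, path, cur = {start}, [start], start
    while visited != free:
        prev, q, goal = {cur: None}, deque([cur]), None
        while q and goal is None:                  # BFS to nearest uncovered cell
            r, c = q.popleft()
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (nr, nc) in free and (nr, nc) not in prev:
                    prev[(nr, nc)] = (r, c)
                    if (nr, nc) not in visited:
                        goal = (nr, nc)
                        break
                    q.append((nr, nc))
        if goal is None:                           # remaining cells unreachable
            break
        seg, node = [], goal                       # rebuild BFS path back to cur
        while node != cur:
            seg.append(node)
            node = prev[node]
        path.extend(reversed(seg))
        visited.update(seg)
        cur = goal
    return path

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(coverage_path(grid, (0, 0)))
```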

    Update date: 2020-01-16
  • A laser-microfabricated electrohydrodynamic thruster for centimeter-scale aerial robots
    arXiv.cs.RO Pub Date : 2019-06-24
    Hari Krishna Hari Prasad; Ravi Sankar Vaddi; Yogesh M Chukewad; Elma Dedic; Igor Novosselov; Sawyer B Fuller

    To date, insect-scale robots capable of controlled flight have used flapping wings for generating lift, but this requires a complex and failure-prone mechanism. A simpler alternative is electrohydrodynamic (EHD) thrust, which requires no moving mechanical parts. In EHD, a corona discharge generates a flow of ions in an electric field between two electrodes; the high-velocity ions transfer their kinetic energy to neutral air molecules through collisions, accelerating the gas and creating thrust. We introduce a fabrication process for EHD thrusters based on 355 nm laser micromachining; our approach allows for greater flexibility in materials selection. Our four-thruster device measures 1.8 x 2.5 cm and is composed of steel emitters and a lightweight carbon fiber mesh. The current and thrust characteristics of each individual thruster of the quad thruster are determined and agree with the Townsend relation. The mass of the quad thruster is 37 mg and the measured thrust is greater than its weight (362.6 uN). The robot is able to lift off at a voltage of 4.6 kV with a thrust-to-weight ratio of 1.38.
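    The reported weight and thrust-to-weight figures can be checked with a couple of lines of arithmetic (taking g = 9.8 m/s^2, which reproduces the quoted 362.6 uN):

```python
mass_kg = 37e-6                     # 37 mg quad thruster
g = 9.8                             # m/s^2
weight_uN = mass_kg * g * 1e6       # -> 362.6 uN, matching the abstract
thrust_uN = 1.38 * weight_uN        # thrust-to-weight ratio of 1.38
print(round(weight_uN, 1), round(thrust_uN, 1))   # 362.6, ~500.4
```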

    Update date: 2020-01-16
  • Gaussians on Riemannian Manifolds for Robot Learning and Adaptive Control
    arXiv.cs.RO Pub Date : 2019-09-12
    Sylvain Calinon; Noémie Jaquier

    This paper presents an overview of robot learning and adaptive control applications that can benefit from a joint use of Riemannian geometry and probabilistic representations. We first discuss the roles of Riemannian manifolds, geodesics and parallel transport in robotics. We then present several forms of manifolds that are already employed in robotics, also listing manifolds that have been underexploited so far but that have potential in future robot learning applications. A varied range of techniques employing Gaussian distributions on Riemannian manifolds is then introduced, covering clustering, regression, information fusion, planning and control problems. Two examples of applications are presented, involving the control of a prosthetic hand from surface electromyography (sEMG) data, and the teleoperation of a bimanual underwater robot. Further perspectives are finally discussed, with suggestions of promising research directions.
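    The basic trick behind Gaussians on Riemannian manifolds, working in a tangent space and wrapping samples back onto the manifold with the exponential map, can be sketched for the unit sphere S^2; the mean, covariance and tangent basis below are arbitrary toy values, not from the paper.

```python
import numpy as np

def exp_map(mu, v):
    """Map a tangent vector v at mu (on the unit sphere S^2) back to
    the manifold along the geodesic."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return mu
    return np.cos(n) * mu + np.sin(n) * (v / n)

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.0, 1.0])                         # mean on the sphere
cov = np.diag([0.05, 0.05])                            # covariance in the tangent plane
basis = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # tangent basis at mu

# Sample in the 2-D tangent space at mu, then wrap onto the sphere.
samples = np.array([exp_map(mu, basis.T @ rng.multivariate_normal(np.zeros(2), cov))
                    for _ in range(5)])
print(np.round(samples, 3), np.round(np.linalg.norm(samples, axis=1), 3))
```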

    Update date: 2020-01-16
  • An NMPC Approach using Convex Inner Approximations for Online Motion Planning with Guaranteed Collision Avoidance
    arXiv.cs.RO Pub Date : 2019-09-18
    Tobias Schoels; Luigi Palmieri; Kai O. Arras; Moritz Diehl

    Even though mobile robots have been around for decades, trajectory optimization and continuous-time collision avoidance remain subjects of active research. Existing methods trade off between path quality, computational complexity, and kinodynamic feasibility. This work approaches the problem using a model predictive control (MPC) framework that is based on a novel convex inner approximation of the collision avoidance constraint. The proposed Convex Inner ApprOximation (CIAO) method finds kinodynamically feasible and continuous-time collision-free trajectories in few iterations, typically one. For a feasible initialization, the approach is guaranteed to find a feasible solution, i.e., it preserves feasibility. Our experimental evaluation shows that CIAO outperforms state-of-the-art baselines in terms of planning efficiency and path quality. Experiments on a robot with 12 states show that it also scales to high-dimensional systems. Furthermore, real-world experiments demonstrate its capability of unifying trajectory optimization and tracking for safe motion planning in dynamic environments.

    Update date: 2020-01-16
  • Shared Autonomy in Web-based Human Robot Interaction
    arXiv.cs.RO Pub Date : 2019-12-21
    Yug Ajmera; Arshad Javed

    In this paper, we aim to achieve a human-robot work balance by implementing shared autonomy through a web interface. Shared autonomy integrates user input with the autonomous capabilities of the robot and therefore increases the overall performance of the robot. Presenting only the relevant information to the user on the web page lowers the cognitive load of the operator. Through our web interface, we provide a mechanism for the operator to directly interact using the displayed information by applying a point-and-click paradigm. Further, we propose our idea to employ a human-robot mutual adaptation in a shared autonomy setting through our web interface for effective team collaboration.

    Update date: 2020-01-16
  • Examining the Effects of Emotional Valence and Arousal on Takeover Performance in Conditionally Automated Driving
    arXiv.cs.RO Pub Date : 2020-01-13
    Na Du; Feng Zhou; Elizabeth Pulver; Dawn M. Tilbury; Lionel P. Robert; Anuj K. Pradhan; X. Jessie Yang

    In conditionally automated driving, drivers have difficulty in takeover transitions as they become increasingly decoupled from the operational level of driving. Factors influencing takeover performance, such as takeover lead time and the engagement of non-driving related tasks, have been studied in the past. However, despite the important role emotions play in human-machine interaction and in manual driving, little is known about how emotions influence drivers' takeover performance. This study, therefore, examined the effects of emotional valence and arousal on drivers' takeover timeliness and quality in conditionally automated driving. We conducted a driving simulation experiment with 32 participants. Movie clips were played for emotion induction. Participants with different levels of emotional valence and arousal were required to take over control from automated driving, and their takeover time and quality were analyzed. Results indicate that positive valence led to better takeover quality in the form of a smaller maximum resulting acceleration and a smaller maximum resulting jerk. However, high arousal did not yield an advantage in takeover time. This study contributes to the literature by demonstrating how emotional valence and arousal affect takeover performance. The benefits of positive emotions carry over from manual driving to conditionally automated driving, while the benefits of arousal do not.

    Update date: 2020-01-15
  • Multi-Robot Formation Control Using Reinforcement Learning
    arXiv.cs.RO Pub Date : 2020-01-13
    Abhay Rawat; Kamalakar Karlapalem

    In this paper, we present a machine learning approach to moving a group of robots in formation. We model the problem as a multi-agent reinforcement learning problem. Our aim is to design a control policy for maintaining a desired formation among a number of agents (robots) while moving towards a desired goal. This is achieved by training our agents to track two agents of the group and maintain the formation with respect to those agents. We consider all agents to be homogeneous and model them as unicycles [1]. In contrast to the leader-follower approach, where each agent has an independent goal, our approach aims to train the agents to be cooperative and work towards the common goal. Our motivation for using this method is to make the multi-agent formation system fully decentralized and scalable to a large number of agents.
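    The abstract does not spell out the reward, but the idea of "track two agents and keep the formation while moving toward a shared goal" can be illustrated with a hypothetical shaped reward; the weights, desired distances and names below are illustrative assumptions only.

```python
import numpy as np

def formation_reward(p_i, p_a, p_b, d_a, d_b, goal, w_form=1.0, w_goal=0.2):
    """Reward an agent for keeping desired distances d_a, d_b to the two
    tracked agents while the group progresses toward a shared goal."""
    form_err = (abs(np.linalg.norm(p_i - p_a) - d_a)
                + abs(np.linalg.norm(p_i - p_b) - d_b))
    goal_dist = np.linalg.norm(p_i - goal)
    return -w_form * form_err - w_goal * goal_dist

# Agent halfway between its two tracked neighbours, 4 m from the goal.
print(formation_reward(np.array([1.0, 0.0]), np.array([0.0, 0.0]),
                       np.array([2.0, 0.0]), 1.0, 1.0, np.array([5.0, 0.0])))
```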

    Update date: 2020-01-15
  • Simulate forest trees by integrating L-system and 3D CAD files
    arXiv.cs.RO Pub Date : 2020-01-13
    M. Hassan Tanveer; Antony Thomas; Xiaowei Wu; Hongxiao Zhu

    In this article, we propose a new approach for simulating trees, including their branches, sub-branches, and leaves. This approach combines the theory of biological development, mathematical models, and computer graphics, producing simulated trees and forests with full geometry. Specifically, we adopt the Lindenmayer process to simulate the branching pattern of trees and modify the available measurements and dimensions of 3D CAD object files to create natural-looking sub-branches and leaves. Randomization has been added to the placement of all branches, sub-branches and leaves. To simulate a forest, we adopt an inhomogeneous Poisson process to generate random tree locations. Our approach can be used to create complex, structured 3D virtual environments for testing new sensors and training robotic algorithms. We look forward to applying this approach to test biosonar sensors that mimic bats' flight in the simulated environment.
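    The Lindenmayer (L-system) rewriting that drives the branching pattern is easy to sketch; the bracketed rule below is a textbook example rather than the authors' grammar, and in the full pipeline the resulting string would be interpreted geometrically (turtle-style) and dressed with the CAD-derived sub-branches and leaves.

```python
def lsystem(axiom, rules, iterations):
    """Expand an L-system string by applying the production rules in
    parallel at each iteration (the branching pattern of the tree)."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A classic bracketed L-system: F = grow, +/- = turn, [ ] = branch.
rules = {"F": "F[+F]F[-F]F"}
print(lsystem("F", rules, 2))
```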

    Update date: 2020-01-15
  • Companion Unmanned Aerial Vehicles: A Survey
    arXiv.cs.RO Pub Date : 2020-01-14
    Chun Fui Liew; Takehisa Yairi

    Recent technological advancements in small-scale unmanned aerial vehicles (UAVs) have led to the development of companion UAVs. Similar to conventional companion robots, companion UAVs have the potential to assist us in our daily lives and to help alleviate social loneliness. In contrast to ground companion robots, companion UAVs have the capability to fly and possess unique interaction characteristics. Our goals in this work are to take a bird's-eye view of companion UAV research and to identify lessons learned and guidelines for the design of companion UAVs. We tackle two major challenges towards these goals: we first use a coordinated procedure to gather top-quality human-drone interaction (HDI) papers from three sources, and then propose a perceptual map of UAVs to summarize current research efforts in HDI. While simple, the proposed perceptual map covers the efforts that have been made to realize companion UAVs in a comprehensive manner and leads our discussion coherently. We also discuss patterns we noticed in the literature and some lessons learned throughout the review. In addition, we recommend several areas that are worth exploring and suggest a few guidelines to enhance HDI research with companion UAVs.

    Update date: 2020-01-15
  • Wasserstein Distributionally Robust Motion Control for Collision Avoidance Using Conditional Value-at-Risk
    arXiv.cs.RO Pub Date : 2020-01-14
    Astghik Hakobyan; Insoon Yang

    In this paper, a risk-aware motion control scheme is considered for mobile robots to avoid randomly moving obstacles when the true probability distribution of uncertainty is unknown. We propose a novel model predictive control (MPC) method for limiting the risk of unsafety even when the true distribution of the obstacles' movements deviates, within an ambiguity set, from the empirical distribution obtained using a limited amount of sample data. By choosing the ambiguity set as a statistical ball with its radius measured by the Wasserstein metric, we achieve a probabilistic guarantee of the out-of-sample risk, evaluated using new sample data generated independently of the training data. To resolve the infinite-dimensionality issue inherent in the distributionally robust MPC problem, we reformulate it as a finite-dimensional nonlinear program using modern distributionally robust optimization techniques based on the Kantorovich duality principle. To find a globally optimal solution in the case of affine dynamics and output equations, a spatial branch-and-bound algorithm is designed using McCormick relaxation. The performance of the proposed method is demonstrated and analyzed through simulation studies using a nonlinear car-like vehicle model and a linearized quadrotor model.
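    For intuition, the Conditional Value-at-Risk that the controller bounds can be estimated from samples as the mean of the worst (1 - alpha) fraction of losses; the snippet below uses this simple empirical estimator on synthetic data and is not the paper's reformulation, which works over an ambiguity set rather than a fixed empirical distribution.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical CVaR_alpha: the average of the worst (1 - alpha)
    fraction of sampled losses (e.g., signed penetration into a
    safety margin around an obstacle)."""
    losses = np.sort(np.asarray(losses))
    k = int(np.ceil((1 - alpha) * len(losses)))
    return losses[-k:].mean()

samples = np.random.default_rng(1).normal(loc=-0.5, scale=0.3, size=1000)
print(cvar(samples, alpha=0.95))   # tail average; <= 0 means the constraint holds in the tail
```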

    Update date: 2020-01-15
  • Knowledge Representations in Technical Systems -- A Taxonomy
    arXiv.cs.RO Pub Date : 2020-01-14
    Kristina Scharei; Florian Heidecker; Maarten Bieshaar

    The recent use of technical systems in human-centric environments leads to the question of how to teach technical systems, e.g., robots, to understand, learn, and perform tasks desired by humans. An accurate representation of knowledge is therefore essential for the system to work as expected. This article mainly gives insight into different knowledge representation techniques and their categorization into various problem domains in artificial intelligence. Additionally, applications of the presented knowledge representations are introduced for everyday robotics tasks. By means of the provided taxonomy, the search for a proper knowledge representation technique for a specific problem should be facilitated.

    Update date: 2020-01-15
  • Parameterized and GPU-Parallelized Real-Time Model Predictive Control for High Degree of Freedom Robots
    arXiv.cs.RO Pub Date : 2020-01-14
    Phillip Hyatt; Connor S. Williams; Marc D. Killpack

    This work presents and evaluates a novel input parameterization method which improves the tractability of model predictive control (MPC) for high degree of freedom (DoF) robots. Experimental results demonstrate that by parameterizing the input trajectory, more than three quarters of the optimization variables used in traditional MPC can be eliminated with practically no effect on system performance. This parameterization also leads to trajectories which are more conservative, producing less overshoot in underdamped systems with modeling error. In this paper we present two MPC solution methods that make use of this parameterization. The first uses a convex solver, and the second makes use of parallel computing on a graphics processing unit (GPU). We show that both approaches drastically reduce solve times for large-DoF, long-horizon MPC problems, allowing solutions at real-time rates. Through simulation and hardware experiments, we show that the parameterized convex-solver MPC has faster solve times than traditional MPC for high-DoF cases while still achieving similar performance. For the GPU-based MPC solution method, we use an evolutionary algorithm that we call Evolutionary MPC (EMPC). EMPC is shown to have even faster solve times for high-DoF systems. Solve times for EMPC are shown to decrease even further through the use of a more powerful GPU. This suggests that parallelized MPC methods will become even more advantageous with the improvement and prevalence of GPU technology.
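    One way to picture an input parameterization is to optimize only a few knot values and interpolate them over the horizon, which is how the decision-variable count can drop by more than three quarters; the linear interpolation below is an illustrative choice, not necessarily the parameterization used in the paper.

```python
import numpy as np

def parameterized_inputs(knots, horizon):
    """Map a handful of decision variables (input values at a few knot
    times) to a full-horizon input trajectory by linear interpolation,
    shrinking the MPC optimization vector."""
    knot_times = np.linspace(0, horizon - 1, num=len(knots))
    return np.interp(np.arange(horizon), knot_times, knots)

# A 50-step horizon controlled by only 4 optimization variables.
u = parameterized_inputs(np.array([0.0, 1.0, 0.5, 0.0]), horizon=50)
print(u.shape, u[:5])
```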

    Update date: 2020-01-15
  • Learning to Prevent Monocular SLAM Failure using Reinforcement Learning
    arXiv.cs.RO Pub Date : 2016-07-26
    Vignesh Prasad; Karmesh Yadav; Rohitashva Singh Saurabh; Swapnil Daga; Nahas Pareekutty; K. Madhava Krishna; Balaraman Ravindran; Brojeshwar Bhowmick

    Monocular SLAM refers to using a single camera to estimate robot ego-motion while building a map of the environment. While monocular SLAM is a well-studied problem, automating it by integrating it with trajectory planning frameworks is particularly challenging. This paper presents a novel formulation based on Reinforcement Learning (RL) that generates fail-safe trajectories wherein the SLAM-generated outputs do not deviate largely from their true values. Quintessentially, the RL framework successfully learns the otherwise complex relation between perceptual inputs and motor actions and uses this knowledge to generate trajectories that do not cause SLAM failure. We show systematically in simulations how the quality of the SLAM dramatically improves when trajectories are computed using RL. Our method scales effectively across monocular SLAM frameworks in both simulation and real-world experiments with a mobile robot.

    Update date: 2020-01-15
  • Graph Optimization Approach to Range-based Localization
    arXiv.cs.RO Pub Date : 2018-02-28
    Xu Fang; Chen Wang; Thien-Minh Nguyen; Lihua Xie

    In this paper, we propose a general graph-optimization-based framework for localization, which can accommodate different types of measurements with varying measurement time intervals. Special emphasis is placed on range-based localization. Range and trajectory smoothness constraints are constructed in a position graph, and the robot trajectory over a sliding window is then estimated by a graph-based optimization algorithm. Moreover, convergence analysis of the algorithm is provided, and the effects of the number of iterations and the window size on the localization accuracy are analyzed. Extensive experiments on a quadcopter under a variety of scenarios verify the effectiveness of the proposed algorithm and demonstrate a much higher localization accuracy than existing range-based localization methods, especially in the altitude direction.
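    The sliding-window estimation can be pictured as a small nonlinear least-squares problem with range residuals to known anchors plus smoothness residuals between consecutive positions; the sketch below sets up such a toy 2D problem with made-up anchors, trajectory and weights, and solves it with a generic solver rather than the paper's graph optimizer.

```python
import numpy as np
from scipy.optimize import least_squares

# Anchors (known positions) and noisy range measurements to an
# unknown 2D trajectory over a short sliding window.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_traj = np.array([[2.0, 3.0], [2.5, 3.2], [3.0, 3.5], [3.5, 3.9]])
rng = np.random.default_rng(0)
ranges = (np.linalg.norm(true_traj[:, None, :] - anchors[None, :, :], axis=2)
          + rng.normal(0, 0.05, (len(true_traj), len(anchors))))

def residuals(flat, w_smooth=1.0):
    traj = flat.reshape(-1, 2)
    r_range = (np.linalg.norm(traj[:, None, :] - anchors[None, :, :], axis=2)
               - ranges).ravel()                         # range constraints
    r_smooth = w_smooth * np.diff(traj, axis=0).ravel()  # smoothness constraints
    return np.concatenate([r_range, r_smooth])

x0 = np.ones(true_traj.size)                # rough initial guess
est = least_squares(residuals, x0).x.reshape(-1, 2)
print(np.round(est, 2))                     # close to true_traj
```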

    Update date: 2020-01-15
Contents have been reproduced by permission of the publishers.