Found 116077 documents. Showing the first 1000 results.

Autonomy software architecture for LORAX (Life On ice Robotic Antarctic eXplorer) Ari Jonsson1, Conor McGann2, Liam Pedersen2, Michael Iatauro2, Srikanth Rajagopalan2 NASA Ames Research Center, Mailstop 269-2 Moffett Field, CA 94035 LORAX is a robotic astrobiological study of the ice field surrounding the Carapace Nunatak near the Allan Hills in Antarctica. The study culminates in a 100 km traverse, sampling the ice at various depths (from the surface to 10 cm) at over 100 sites to survey microbial ecology and to record environmental parameters. Numerous factors drive the need for autonomy in the LORAX mission. First, it will demonstrate robotic science technologies applicable to future Mars missions, and as such must operate with limited human oversight and interaction. Second, the LORAX science goals require minimizing the risk of biological contamination from human presence. Finally, the mission takes place in a highly uncertain and dynamic environment with limited resources, requiring the rover to adapt its plan of action to ensure successful mission completion while maximizing the number of samples analyzed. The autonomy requirements of LORAX are shared by many robotic exploration tasks. Consequently, the LORAX autonomy architecture is a general architecture for on-board planning and execution in environments where science return is to be maximized against resource limitations and other constraints. Three key elements set it apart from other general planning-execution architectures used for rover operations: 1. Flexible plans describe families of plans having the same structure and outcomes. This flexibility increases the applicability of a plan in changing environments and reduces the need for re-planning due to minor variations. 2. Continuous re-planning performs on-line plan optimization, allowing the autonomy system to seamlessly modify plans in response to outcomes that differ from what was expected. 3. Resource envelopes bound expected resource profiles. These ...
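To make the three elements above concrete, here is a minimal, purely illustrative Python sketch (not the LORAX or EUROPA flight code; the step names, bounds, and battery figure are invented) of a flexible plan whose steps carry start-time and energy bounds, with a resource-envelope check that would trigger the continuous re-planner when the pessimistic profile runs out of energy.

```python
# Illustrative sketch only -- not the LORAX software. It mimics the three
# ideas from the abstract: flexible plans (bounded intervals instead of fixed
# times), continuous re-planning (repairing the plan when an envelope breaks),
# and resource envelopes (bounds on the expected resource profile).
from dataclasses import dataclass

@dataclass
class FlexibleStep:
    name: str
    earliest_start: float   # hours into the traverse (flexible, not a fixed time)
    latest_start: float
    energy_min: float       # Wh, optimistic bound on the step's draw
    energy_max: float       # Wh, pessimistic bound on the step's draw

def within_envelope(steps, battery_wh):
    """Check that the pessimistic energy profile stays inside the envelope."""
    remaining = battery_wh
    for step in steps:
        remaining -= step.energy_max          # worst-case draw for this step
        if remaining < 0:
            return False, step.name           # envelope violated at this step
    return True, None

plan = [
    FlexibleStep("drive_to_site", 0.0, 2.0, 40.0, 60.0),
    FlexibleStep("drill_sample", 2.0, 5.0, 20.0, 35.0),
    FlexibleStep("analyze_sample", 5.0, 8.0, 10.0, 15.0),
]

ok, violated = within_envelope(plan, battery_wh=100.0)
if not ok:
    # A continuous re-planner would repair the plan here, e.g. defer or drop
    # the violating step, rather than rebuilding the whole plan from scratch.
    print(f"re-plan needed before step: {violated}")
```

In this toy version, re-planning just means deciding which step to drop or defer; the architecture described above reasons over whole families of plans rather than a single fixed sequence.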


ROBOSPHERE: SELF-SUSTAINING ROBOTIC ECOLOGIES AS PRECURSORS TO HUMAN PLANETARY EXPLORATION Silvano P. Colombano* Automation and Robotics Area Computational Science Division NASA Ames Research Center Moffett Field CA 94035, USA [email protected] ABSTRACT The present sequential "mission oriented" approach to robotic planetary exploration could be changed to an "infrastructure building" approach in which a robotic presence is permanent, self-sustaining and growing with each mission. We call this self-sustaining robotic ecology approach "robosphere" and discuss the technological issues that need to be addressed before this concept can be realized. One of the major advantages of this approach is that a robosphere would include much of the infrastructure required by human explorers and would thus lower the preparation and risk threshold inherent in the transition from robotic to human exploration. In this context we discuss some implications for space architecture. 1. INTRODUCTION Human presence on planetary surfaces or in deep space colonies will need to be preceded by robotic explorers and builders. This will be needed for a complete understanding of the environment to be explored and for preparing a safe habitation complex for the first human explorers, including the means for in situ resource utilization. Robotic exploration of Mars has been a "one shot" approach where each surface mission is planned typically with a lander or rover that will perform a series of experiments for a few weeks, until the robot becomes unable to operate in the harsh Mars conditions and simply "dies". It would clearly be desirable to have robots on Mars that can last for much longer periods of time. I propose that there is an approach to sustained robotic exploration that can also pave the way to future human presence. The idea is to continue building a robotic infrastructure with every mission we send. The approach is to build teams of modular robots that could repair individual ...


Graphical Animation of Behavior Models Jeff Magee, Nat Pryce, Dimitra Giannakopoulou and Jeff Kramer {jnm, np2, dg1, jk}@doc.ic.ac.uk Department of Computing, Imperial College of Science, Technology and Medicine 180 Queen's Gate, London SW7 2BZ, UK. ABSTRACT Graphical animation is a way of visualizing the behavior of design models. This visualization is of use in validating a design model against informally specified requirements and in interpreting the meaning and significance of analysis results in relation to the problem domain. In this paper we describe how behavior models specified by Labeled Transition Systems (LTS) can drive graphical animations. The semantic framework for the approach is based on Timed Automata. Animations are described by an XML document that is used to generate a set of JavaBeans. The elaborated JavaBeans perform the animation actions as directed by the LTS model. Keywords Labeled Transition System, Graphic Animation, Behavior Analysis 1 INTRODUCTION A model-based design approach involves building analysis models early in the software lifecycle. These models can be developed shortly after the initial requirements capture and refined in parallel with further requirements elicitation, so that early feedback on the operation of a proposed system can be given to customers and potential design problems are highlighted early. We have proposed such an approach, in relation to Software Architecture [1, 2], in which component behavior is modeled using Labeled Transition Systems (LTS) and the overall behavior of a system can be formed by the parallel composition of these component models. We have developed the Labelled Transition System Analysis (LTSA) tool to support the approach. The behavior of a model can be interactively explored using the LTSA tool. The output of such an execution is essentially a trace of action names. Each action is the abstract representation in the model of an input or output of the proposed system. In common ...
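As a rough illustration of the approach described (not the LTSA tool or its XML/JavaBeans machinery; the switch model and lamp mapping are invented), the sketch below encodes a component behavior as a labeled transition system and drives a stand-in "animation" from a trace of action names.

```python
# Minimal sketch of the ideas in the abstract, not the LTSA tool itself:
# a component behaviour as a labelled transition system, plus an animation
# mapping that reacts to each action in an execution trace.
class LTS:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions     # {(state, action): next_state}

    def step(self, action):
        key = (self.state, action)
        if key not in self.transitions:
            raise ValueError(f"action '{action}' not enabled in state {self.state}")
        self.state = self.transitions[key]
        return self.state

# A two-state switch: off --on--> on --off--> off
switch = LTS("off", {("off", "on"): "on", ("on", "off"): "off"})

# Stand-in for the JavaBeans animation layer: map model actions to
# presentation updates.
animation = {
    "on":  lambda: print("lamp graphic lit"),
    "off": lambda: print("lamp graphic dark"),
}

for action in ["on", "off", "on"]:         # a trace produced by exploring the model
    switch.step(action)                    # advance the behavior model
    animation[action]()                    # drive the animation from the trace
```

The point of the sketch is only the division of labor: the LTS supplies legal traces of abstract actions, and a separate presentation layer decides what each action looks like on screen.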


Automated Data Processing as an AI Planning Problem Keith Golden, Wanlin Pang1, Ramakrishna Nemani, Petr Votava2 NASA Ames Research Center Moffett Field, CA 94035 [email protected] 1 Introduction NASA's vision for Earth Science is to build a "sensor web": an adaptive array of heterogeneous satellites and other sensors that will track important events, such as storms, and provide real-time information about the state of the Earth to a wide variety of customers. Achieving this vision will require automation not only in the scheduling of the observations but also in the processing of the resulting data. To address this need, we have developed a planner-based agent to automatically generate and execute data-flow programs to produce the requested data products. Data processing domains are substantially different from other planning domains that have been explored, and this has led us to substantially different choices in terms of representation and algorithms. We discuss some of these differences and describe the approach we have adopted. 1.1 TOPS Case Study As a demonstration of our approach, we are applying our agent, called IMAGEbot, to the Terrestrial Observation and Prediction System (TOPS, http://www.forestry.umt.edu/ntsg/Projects/TOPS/), an ecological forecasting system that assimilates data from Earth-orbiting satellites and ground weather stations to model and forecast conditions on the surface, such as soil moisture, vegetation growth and plant stress (Nemani et al. 2002). Prospective customers of TOPS include scientists, farmers and fire fighters. With such a variety of customers and data sources, there is a strong need for a flexible mechanism for producing the desired data products for the customers, taking into account the information needs of the customer, data availability, deadlines, resource usage (some scientific models take many hours to execute) and constraints based on context (a scientist with a palmtop computer in the field has different display ...
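A toy sketch of the underlying idea, treating data processing as planning: each operator declares the products it needs and the product it yields, and a simple backward-chaining search assembles an executable data-flow order for a requested product. This is not IMAGEbot; the operator names and the TOPS-flavored products are invented for illustration.

```python
# Hypothetical operator catalog: product name -> the inputs needed to make it.
# Names are invented; they only echo the kinds of products TOPS deals with.
OPERATORS = {
    "soil_moisture_map": {"needs": ["satellite_radiance", "weather_obs"]},
    "satellite_radiance": {"needs": ["raw_swath"]},
    "weather_obs": {"needs": []},          # already available from ground stations
    "raw_swath": {"needs": []},            # already available from the archive
}

def plan(goal, ordered=None, seen=None):
    """Return product names in an order that can be executed to yield `goal`."""
    ordered = [] if ordered is None else ordered
    seen = set() if seen is None else seen
    if goal in seen:
        return ordered
    seen.add(goal)
    for prereq in OPERATORS[goal]["needs"]:    # satisfy inputs before the output
        plan(prereq, ordered, seen)
    ordered.append(goal)
    return ordered

print(plan("soil_moisture_map"))
# ['raw_swath', 'satellite_radiance', 'weather_obs', 'soil_moisture_map']
```

A real planner for this domain must additionally weigh deadlines, resource usage, and customer context when several operator chains can produce the same product; the sketch only shows the dependency-chaining core.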


Gigapan Voyage for Robotic Recon Susan Young Lee SGT, Inc. / NASA Ames [email protected] Ted Morse CMU / NASA Ames [email protected] Eric Park CMU / NASA Ames [email protected] ABSTRACT Gigapan Voyage (GV) is a self-contained, remotely operable Gigapan capturing system that is currently being developed by the Intelligent Robotics Group (IRG) at NASA Ames Research Center. Gigapan Voyage was primarily designed to be integrated onto Johnson Space Center's Lunar Electric Rovers (LER). While on LER, Gigapan Voyage was used by scientists and astronauts during the 2009 and 2010 Desert RATS field tests. The concept behind Gigapan Voyage is to merge all the sub-components of the commercial GigaPan system into an all-in-one system that can capture, stitch, and display Gigapans in an automated way via a simple web interface. The GV system enables NASA to quickly and easily add remote-controlled Gigapan capturing capability onto rovers with minimal integration effort. Keywords Geology, NASA, Black Point Lava Flow, Robot, K10, LER, Gigapan Voyage, Desert RATS, Intelligent Robotics Group INTRODUCTION Gigapan Voyage (GV) is a web-controllable, stand-alone Gigapan capturing, stitching, and browsing system developed at NASA Ames Research Center by the Intelligent Robotics Group. In 2009, the Gigapan Voyage system was installed on Johnson Space Center's Lunar Electric Rover (LER) for the Desert Research and Technology Studies (D-RATS) field test. During the two-and-a-half-week field test, approximately 275 Gigapans were captured by the science and astronaut teams. Gigapan Voyage encapsulates all the sub-components of the commercial GigaPan system and delivers an all-in-one package that can be remotely controlled via a simple web interface. (Figure 1: Gigapan Voyage mounted on the Lunar Electric Rover in 2010.) By designing the system in this way, integration onto the two Lunar Electric Rovers only required mounting the hardware, accessing power ...
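A hypothetical sketch of the kind of capture loop such a system runs; none of the function names come from the actual Gigapan Voyage software, and the pan/tilt grid is invented. The point is only that a gigapixel panorama is a sweep over a grid of overlapping tiles that are then handed to a stitcher.

```python
# Invented sketch -- not the GV codebase. A panorama capture is a grid sweep
# of the pan/tilt unit, collecting one overlapping tile per position.
import itertools

def capture_gigapan(pan_steps, tilt_steps, move_to, take_photo):
    """Sweep a pan/tilt grid, collect tiles, and return them for stitching."""
    tiles = []
    for pan, tilt in itertools.product(range(pan_steps), range(tilt_steps)):
        move_to(pan, tilt)                 # command the pan/tilt unit
        tiles.append(take_photo())         # grab one overlapping tile
    return tiles                           # hand off to the stitcher

# Dummy hardware stand-ins so the sketch runs as-is.
tiles = capture_gigapan(
    pan_steps=4, tilt_steps=2,
    move_to=lambda p, t: None,
    take_photo=lambda: object(),
)
print(f"captured {len(tiles)} tiles for stitching")
```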


Describing the Design Contributors to Mode Error Asaf Degani1 and Alex Kirlik2 1NASA Ames Research Center, Moffett Field, CA USA [email protected] 2Center for Human-Machine Systems Research School of Industrial & Systems Engineering Georgia Institute of Technology, Atlanta, GA USA [email protected] Abstract Although mode error has attracted a great deal of recent interest from those involved with complex systems, the design factors that contribute to this problem are not well understood. In this paper we provide an introduction to a modeling framework we use to describe complex, multi-modal human-machine systems. The modeling approach is based on the statechart formalism, which is an extension of the finite state machine formalism to allow representation of concurrency, hierarchy, default transitions, and broadcast of parameter information. After having used this approach to model a number of human-machine systems known to create mode control problems for the operator, we have identified a number of system design features that contribute to mode error. Introduction The purpose of this paper is to describe a methodology we have developed for the analysis and design of the interface to complex, high-technology control systems. This methodology has been motivated by our experiences over the past 10 years conducting cognitive engineering research in domains such as commercial aviation, military command and control, and emergency medical systems in the health care industry. In domains such as these, responsibility for system control is shared by human operators and a variety of semi-automated control systems (e.g., autopilots, decision support systems, automatic blood pressure measurement devices). We have observed that a key contributor to operator confusion and error in these systems is the design of the interface between the human operator and the semi-automated control systems which jointly share responsibility for effective system performance. In ...
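A small invented example (not one of the authors' statechart models) of the kind of design feature the paper associates with mode error: the same operator action is interpreted differently depending on the active mode, so a missed mode change silently changes the meaning of the input.

```python
# Illustrative only; the panel, modes, and scaling are assumptions chosen to
# show mode-dependent interpretation of an identical operator action.
class AutopilotPanel:
    def __init__(self):
        self.mode = "VERT_SPEED"           # concurrency and hierarchy omitted

    def toggle_mode(self):
        self.mode = "FLIGHT_PATH" if self.mode == "VERT_SPEED" else "VERT_SPEED"

    def dial(self, value):
        # Identical operator action, mode-dependent interpretation.
        if self.mode == "VERT_SPEED":
            return f"target vertical speed {value * 100} ft/min"
        return f"target flight path angle {value / 10} deg"

panel = AutopilotPanel()
print(panel.dial(25))      # interpreted as 2500 ft/min
panel.toggle_mode()        # a mode change the operator may not notice
print(panel.dial(25))      # the same input now means 2.5 deg
```

Modeling such a device as a statechart makes this overloading explicit: two concurrent or hierarchical states share a transition label but lead to different behavior, which is exactly the structure an analyst can search for.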


Policy Transfer in Mobile Robots using Neuro-Evolutionary Navigation Matt Knudson Carnegie Mellon University [email protected] Kagan Tumer Oregon State University [email protected] ABSTRACT In this paper, we first present a state/action representation that not only allows robots to learn good navigation policies, but also allows them to transfer those policies to new and more complex situations. In particular, we show how the evolved policies can transfer to situations with: (i) new tasks (different obstacle and target configurations and densities); and (ii) new sets of sensors (different resolution). Our results show that in all cases, policies evolved in simple environments and transferred to more complex situations outperform policies directly evolved in the complex situation, both in terms of overall performance (up to 30%) and convergence speed (up to 90%). Categories and Subject Descriptors I.2.6 [AI]: Learning Keywords Neural Networks, Incremental Evolution, Navigation, Robotics 1. INTRODUCTION Advances in mobile autonomous robots have provided solutions to complex tasks previously considered achievable only by humans. Such domains include planetary exploration and unmanned flight, where autonomous navigation plays a key role in the success of the robots. One of the most popular ways to mitigate the complex nature of robotic tasks is to focus on simple tasks first and then transfer that knowledge to more complex tasks. Such an approach is termed transfer learning and has been gaining attention of late [6]. The primary reason for the success of transfer learning is the decomposition of a task either into stages [6] or into subtasks whose knowledge is combined to accomplish the more complex task [5]. In this work, we explore a neuro-evolutionary approach whereby policies are incrementally evolved through tasks with increasing degrees of difficulty [4]. Neuro-evolutionary approaches fall into the policy search category, where the aim is to search ...
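One way to picture the sensor-transfer claim, as a hedged sketch (the sectorization scheme below is an assumption for illustration, not necessarily the authors' exact encoding): if the policy's inputs are a fixed number of sectors summarizing the range scan, the same evolved network can be reused when the sensor resolution changes.

```python
# Invented illustration: collapse a range scan of any resolution into a fixed
# number of sector inputs (nearest obstacle per sector), so a policy evolved
# with one sensor suite still receives inputs of the same size on another.
import math

def sectorize(ranges, n_sectors=4):
    """Collapse a range scan of any resolution into n_sectors inputs."""
    per = max(1, math.ceil(len(ranges) / n_sectors))
    return [min(ranges[i:i + per]) for i in range(0, len(ranges), per)][:n_sectors]

coarse = [2.0, 0.5, 3.0, 1.0]                      # 4-beam sensor
fine = [2.1, 1.9, 0.6, 0.5, 2.8, 3.1, 1.2, 0.9]    # 8-beam sensor on the new robot

print(sectorize(coarse))   # [2.0, 0.5, 3.0, 1.0]
print(sectorize(fine))     # [1.9, 0.5, 2.8, 0.9] -- same input size, policy reusable
```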


LISA Framework for Enhancing Gravitational Wave Signal Extraction Techniques David E. Thompson1 and Rajkumar Thirumalainambi2 1NASA Ames Research Center, MS 269-1, Moffett Field, CA 94035-1000 2QSS Group at NASA Ames, MS 269-2, Moffett Field, CA 94035-1000 Abstract. This paper describes the development of a Framework for benchmarking and comparing signal-extraction and noise-interference-removal methods that are applicable to interferometric Gravitational Wave detector systems. The primary use is towards comparing signal and noise extraction techniques at LISA frequencies from multiple (possibly confused) gravitational wave sources. The Framework includes extensive hybrid learning/classification algorithms, as well as post-processing regularization methods, and is based on a unique plug-and-play (component) architecture. Published methods for signal extraction and interference removal at LISA frequencies are being encoded, as well as multiple source noise models, so that the stiffness of GW Sensitivity Space can be explored under each combination of methods. Furthermore, synthetic datasets and source models can be created and imported into the Framework, and specific degraded numerical experiments can be run to test the flexibility of the analysis methods. The Framework also supports use of full current LISA Testbeds, Synthetic data systems, and Simulators already in existence through plug-ins and wrappers, thus preserving those legacy codes and systems intact. Keywords: component-based exploration framework; LISA signal and noise identification methods; MLDC; intelligent systems classification techniques; regularization of time-series patterns PACS: 95.75.-z; 95.55.Ym; 95.85.Sz; 95.75.Wx; 95.30.Sf. INTRODUCTION A team at NASA Ames is developing a Framework for benchmarking and comparing signal-extraction and noise-interference-removal methods that are applicable to interferometric Gravitational Wave (GW) detector systems. The main target is developing new concepts ...
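A toy illustration of a plug-and-play benchmarking framework in the spirit of the one described; the registry, method names, synthetic signal, and figure of merit are invented rather than taken from the Ames code. Extraction methods register as components and are all scored against the same synthetic noisy signal.

```python
# Invented sketch of a component registry for comparing extraction methods.
import math
import random

REGISTRY = {}

def register(name):
    """Decorator: add an extraction method to the framework's registry."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

@register("running_mean")
def running_mean(samples, window=5):
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

@register("identity")
def identity(samples):
    return list(samples)

# Synthetic dataset: a slow sinusoid buried in noise, standing in for an
# imported source model plus an instrument noise model.
random.seed(0)
truth = [math.sin(i / 15.0) for i in range(300)]
noisy = [t + random.gauss(0, 0.5) for t in truth]

# Benchmark every registered method with the same figure of merit.
for name, method in REGISTRY.items():
    est = method(noisy)
    rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth))
    print(f"{name:14s} rmse = {rmse:.3f}")
```

New methods (or wrappers around legacy codes) plug in by registering under a name, which is the essence of the component architecture: the benchmarking loop never changes when a method is added.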


Portfolios in Stochastic Local Search: Efficiently Computing Most Probable Explanations in Bayesian Networks Ole J. Mengshoel Carnegie Mellon University NASA-Ames Research Center Mail Stop 269-3 Bldg. T35-B, Rm. 107 P.O. Box 1 Moffett Field, CA 94035-0001 [email protected] Dan Roth Department of Computer Science University of Illinois at Urbana-Champaign 3322 Siebel Center 1304 West Springfield Avenue Urbana, IL 61801 [email protected] David C. Wilkins Symbolic Systems Program Stanford University Bldg. 460, Rm. 127 Stanford, CA 94305 [email protected] Abstract Portfolio methods support the combination of different algorithms and heuristics, including stochastic local search (SLS) heuristics, and have been identified as a promising approach to solve computationally hard problems. While successful in experiments, theoretical foundations and analytical results for portfolio-based SLS heuristics are less developed. This article aims to improve the understanding of the role of portfolios of heuristics in SLS. We emphasize the problem of computing most probable explanations (MPEs) in Bayesian networks (BNs). Algorithmically, we discuss a portfolio-based SLS algorithm for MPE computation, Stochastic Greedy Search (SGS). SGS supports the integration of different initialization operators (or initialization heuristics) and different search operators (greedy and noisy heuristics), thereby enabling new analytical and experimental results. Analytically, we introduce a novel Markov chain model tailored to portfolio-based SLS algorithms including SGS, thereby enabling us to analytically form expected hitting time results that explain empirical run time results. For a specific BN, we show the benefit of using a homogenous initialization portfolio. To further illustrate the portfolio approach, we consider novel additive search heuristics for handling determinism in the form of zero entries in conditional probability tables in BNs. Our additive approach adds rather than ...
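A compact, invented sketch of portfolio-based stochastic local search in the spirit of SGS (not the authors' implementation): initialization operators and search operators (greedy and noisy) are drawn according to portfolio probabilities, and the scoring function below merely stands in for the log-probability of an assignment in a Bayesian network.

```python
# Toy portfolio-based SLS; the weighted-sum score is a placeholder objective,
# not an actual MPE computation over a BN.
import random

random.seed(1)
N = 12
WEIGHTS = [random.uniform(-1, 1) for _ in range(N)]

def score(x):
    # Higher is better; a stand-in for the (log-)probability of an assignment.
    return sum(w * xi for w, xi in zip(WEIGHTS, x))

INIT_PORTFOLIO = [
    (0.5, lambda: [random.randint(0, 1) for _ in range(N)]),   # uniform random init
    (0.5, lambda: [0] * N),                                    # all-false heuristic init
]
SEARCH_PORTFOLIO = [
    (0.8, "greedy"),   # flip the single most improving variable
    (0.2, "noise"),    # flip a random variable to escape local optima
]

def pick(portfolio):
    """Draw one item according to the portfolio's selection probabilities."""
    r, acc = random.random(), 0.0
    for prob, item in portfolio:
        acc += prob
        if r <= acc:
            return item
    return portfolio[-1][1]

x = pick(INIT_PORTFOLIO)()
for _ in range(100):
    if pick(SEARCH_PORTFOLIO) == "greedy":
        flips = [(score(x[:i] + [1 - x[i]] + x[i + 1:]), i) for i in range(N)]
        best, i = max(flips)
        if best <= score(x):
            continue                       # no improving flip; wait for a noise step
    else:
        i = random.randrange(N)
    x[i] = 1 - x[i]

print("best score found:", round(score(x), 3))
```

The portfolio probabilities are exactly the quantities the article's Markov chain analysis reasons about when relating operator mixes to expected hitting times.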


ALPS: The Age-Layered Population Structure for Reducing the Problem of Premature Convergence [Genetic Programming Track] Gregory S. Hornby University Affiliated Research Center, UC Santa Cruz NASA Ames Research Center, Mailstop 269-3 Moffett Field, CA [email protected] ABSTRACT To reduce the problem of premature convergence we define a new attribute of an individual, its age, and propose the Age-Layered Population Structure (ALPS), in which age is used to restrict competition and breeding between members of the population. ALPS differs from a typical EA by segregating individuals into different age-layers by their "age" - a measure of how long the genetic material has been in the population - and by regularly replacing all individuals in the bottom layer with randomly generated ones. The introduction of new, randomly generated individuals at regular intervals results in an EA that is never completely converged and is always looking at new parts of the fitness landscape. By using age to restrict competition and breeding, search is able to develop promising young individuals without them being dominated by older ones. We demonstrate the effectiveness of the ALPS algorithm on an antenna design problem in which evolution with ALPS produces antennas more than twice as good as does evolution with two other types of EAs. Further analysis shows that the ALPS model does allow the offspring of newly generated individuals to move the population out of mediocre local optima to better parts of the fitness landscape. Categories and Subject Descriptors I.2 [Artificial Intelligence]: General General Terms Algorithms, Design Keywords Computer-Automated Design, Design, Evolutionary Algorithms, Evolutionary Design, Open-ended design
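A compact sketch of the age-layering idea, simplified relative to the paper (tournament selection, crossover, and the exact age-gap schedule are omitted; the one-max fitness, layer sizes, and age limits are invented): individuals carry an age, competition is restricted to one layer, over-age offspring compete in the next layer up, and the bottom layer is periodically restarted with random individuals.

```python
# Simplified ALPS-style loop on a toy one-max problem; parameters are
# illustrative, not those used in the antenna-design experiments.
import random

random.seed(2)
GENES, LAYER_SIZE = 20, 8
AGE_LIMITS = [3, 6, 12, 10**9]            # top layer is effectively unbounded

def random_individual():
    return {"genome": [random.randint(0, 1) for _ in range(GENES)], "age": 0}

def fitness(ind):
    return sum(ind["genome"])             # toy one-max fitness

def mutate(parent):
    genome = parent["genome"][:]
    i = random.randrange(GENES)
    genome[i] = 1 - genome[i]
    return {"genome": genome, "age": parent["age"] + 1}   # age travels with the lineage

layers = [[random_individual() for _ in range(LAYER_SIZE)] for _ in AGE_LIMITS]

for gen in range(1, 61):
    if gen % 10 == 0:
        # Regularly restart the bottom layer with fresh random individuals.
        layers[0] = [random_individual() for _ in range(LAYER_SIZE)]
    for li in range(len(layers)):
        child = mutate(random.choice(layers[li]))
        # Over-age children compete in the next layer up, never with the young.
        target = li + 1 if child["age"] > AGE_LIMITS[li] and li + 1 < len(layers) else li
        worst = min(range(LAYER_SIZE), key=lambda j: fitness(layers[target][j]))
        if fitness(child) >= fitness(layers[target][worst]):
            layers[target][worst] = child  # competition restricted to one layer

print("best fitness:", max(fitness(ind) for layer in layers for ind in layer))
```

The periodic bottom-layer restart is what keeps the search from fully converging, while the age limits keep those fresh individuals from being immediately crowded out by long-evolved ones.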
