Learning to coordinate fuzzy behaviors for autonomous agents
| Content Provider | Semantic Scholar |
|---|---|
| Author | Bonarini, Andrea |
| Copyright Year | 2007 |
| Abstract | We developed a system that learns behaviors represented as sets of fuzzy rules for autonomous agents. In the past, we used our approach to successfully learn simple reactive behaviors, even in cases where the evaluation function used in our reinforcement learning schema judges unevenly the different situations the autonomous agents operate in. In this paper we present a new version of our approach that can learn to coordinate many different behaviors organized in classes of mutually exclusive behaviors. The present version of our algorithm gives satisfactory results on this new task as well. 1. Introduction In (Bonarini, 1993), (Bonarini, 1994a), and (Bonarini, 1994b) we presented ELF (Evolutionary Learning for Fuzzy rules), a Reinforcement Learning approach we used to generate behaviors for autonomous agents. Our behaviors are sets of fuzzy rules; therefore they have the desirable features of Fuzzy Logic Controllers, such as smoothness of the output, robustness, and so on. Up to now, we have applied ELF to simple reactive behaviors, obtaining interesting results. In particular, in (Bonarini, 1994a) we showed in detail that the ELF approach to learning is especially suited to support the development of autonomous agents, since it converges rapidly and is resistant to variations of the learning parameters. Moreover, the approach has been designed specifically to recover from problems caused by ill-defined evaluation functions. The value provided by the evaluation function is used to evaluate the state reached by performing the action proposed by some rules. On the basis of this evaluation, the rules receive a reward that makes them survive or die, leaving space for new rules. If the evaluation function is ill-defined and gives evaluations without considering the possibilities of the agent in a given environment, a reinforcement learning schema may produce wrong populations of rules, in particular eliminating the rules corresponding to the under-evaluated states. ELF is based on the cooperation (through the traditional fuzzy combination) of different populations of rules, each population covering a given set of situations described by fuzzy sets. The competition necessary to evaluate the system is present only within the sub-populations. This provides resistance to the problems of ill-defined evaluation functions, and efficiency in the convergence to an optimal solution. In a real application, an autonomous agent may have to perform complex tasks. According to the Behavior Engineering approach (Bonarini, 1994), it is better to decompose a complex behavior into simpler ones, each working on a sub-task. The decomposition is done in accordance with the approach first proposed by Brooks (Brooks, 1991). Taking this approach, we first have to identify how to decompose a behavior into simpler behaviors, and then we have to decide how to compose the simpler behaviors we obtained. We assume that the simple behaviors may be either programmed or learnt independently of each other. Here we treat the case where we have a set of simple behaviors and we have to coordinate them to achieve a complex one. In the following, we first introduce some simple definitions concerning Behavior Engineering, showing their role in the organization of an architecture for complex behaviors. Then we present such an architecture, and show how ELF may learn to coordinate the simple behaviors within it. Finally, we present some of the results we obtained in a simulated environment. At the time of writing, we have started experiments with a real robot (the "Fuzzy CAT"), whose model has been used in the simulation. |
| File Format | PDF, HTML |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
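The abstract describes a rule life cycle in which rules that proposed an action share the evaluation of the reached state, weak rules die, and new rules take their place. The following is a minimal sketch of that idea, not ELF itself: the `Rule` class, the strength-update formula, and the action names are all hypothetical, and ELF's actual credit-assignment and cover-detection mechanisms are not reproduced here.

```python
import random

class Rule:
    """Hypothetical stand-in for a fuzzy rule: an antecedent label for the
    fuzzy-set-described situation it covers, an action, and a strength."""
    def __init__(self, antecedent, action, strength=0.5):
        self.antecedent = antecedent
        self.action = action
        self.strength = strength  # running estimate of the rule's usefulness

def reinforce(triggered, reward, lr=0.2):
    """Move each triggered rule's strength toward the received reward."""
    for rule in triggered:
        rule.strength += lr * (reward - rule.strength)

def cull_and_refill(population, actions, min_strength=0.2, rng=random):
    """Drop rules below min_strength; if a fuzzy state loses all of its
    rules, refill it with a fresh random rule so it stays covered."""
    survivors = [r for r in population if r.strength >= min_strength]
    states = {r.antecedent for r in population}
    covered = {r.antecedent for r in survivors}
    for state in states - covered:
        survivors.append(Rule(state, rng.choice(actions)))
    return survivors

# Usage: two competing rules in the "obstacle_near" sub-population.
pop = [Rule("obstacle_near", "turn_left"), Rule("obstacle_near", "go_straight")]
reinforce([pop[0]], reward=1.0)   # turning was judged good  -> strength 0.6
reinforce([pop[1]], reward=0.0)   # going straight was judged bad
reinforce([pop[1]], reward=0.0)   # strength decays to 0.32
pop = cull_and_refill(pop, actions=["turn_left", "go_straight", "turn_right"],
                      min_strength=0.35)  # the weak rule is dropped
```

Note how competition happens only inside the sub-population for one fuzzy state, matching the paper's claim that this containment limits the damage an ill-defined evaluation function can do to rules covering other states.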