How I Learned to Lightweight Vision, or: Stop Worrying and Love My Camera
| Field | Value |
|---|---|
| Content Provider | Semantic Scholar |
| Author | Horswill |
| Copyright Year | 2001 |
| Abstract | Sensing is often the limiting factor in mobile robot performance, particularly for robots that operate in dynamic environments. Both the robot's opportunities for behavior and its ability to respond to contingencies are limited by the breadth and reliability of its percepts. While it can be difficult to get people to admit to it in writing, there is a common attitude in both the AI and robotics communities that vision is not even worth considering as a sensor because of its expense and unreliability. I will refer to this as "fear of vision." Vision has had a particularly bad reputation in some communities because of the huge volume of data it produces (a single full-resolution frame grabber produces about 27 MBytes/second) and the complex, iterative, floating-point computations required to interpret the data. Vision is, of course, often expensive by necessity. If one is faced with the task of constructing digital terrain maps from high-resolution satellite images, then the only option is to build a full stereo or shape-from-shading system and use whatever computational resources are necessary to do the job. If one only has to make a robot drive down the corridor, however, then one has many more options. A vision researcher might wish to use an expensive technique because of its intrinsic research interest. An engineer building a simple vacuum-cleaning robot, however, will presumably be content with anything that does the job. Many robot builders have put up with sensors that give them less information than they need, simply because they felt that vision was impractical. I believe that it is practical to build very simple and reliable vision systems to perform a variety of tasks. Over the past five years I have developed a number of simple vision systems for piloting mobile robots in real time by taking advantage of the structure of the robot's task and environment. The most recent of these systems, the Polly system (see Horswill [5][6]), gives simple tours in an unmodified office environment using real-time vision processing performed by an inexpensive on-board computer. The robot was built for less than $20K. Gavin and Yamamoto have recently developed a delivery platform which we hope will be able to run many of the algorithms from Polly for less than $1K (see Gavin and Yamamoto [4]). Such a system is sufficiently inexpensive that it can realistically be incorporated into mass-market consumer products. To date, we have ported the low-level navigation code from Polly to the platform and used it to control an IS Robotics R2 robot. Polly's vision algorithms operate on low-resolution (64 × 64 or smaller) images and involve no iterative optimization computations whatsoever. In fact, there are not even any floating-point computations. |
| File Format | PDF, HTM/HTML |
| Alternate Webpage(s) | http://www.aaai.org/Papers/Symposia/Fall/1993/FS-93-03/FS93-03-014.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
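
The abstract's 27 MBytes/second figure follows from frame-grabber arithmetic: a 640 × 480 RGB frame is 640 × 480 × 3 ≈ 0.92 MBytes, and at 30 frames/second that comes to roughly 27.6 MBytes/second. A 64 × 64 greyscale image, by contrast, is only 4,096 bytes, small enough to scan in full on every frame using integer arithmetic alone. The sketch below illustrates that style of processing: a column-by-column free-space scan over a roughly uniform floor, in the spirit of (but not identical to) Polly's low-level navigation; the threshold value, function names, and image representation are assumptions for illustration, not the paper's actual code.

```python
# Minimal sketch of integer-only, low-resolution visual navigation,
# loosely in the spirit of Polly's free-space finder. Assumptions:
# the floor is roughly uniform in brightness, the camera looks ahead
# and down, and `image` is a 64x64 greyscale frame given as a list of
# rows of ints in 0..255. THRESHOLD is a made-up tuning constant.

THRESHOLD = 12  # max brightness step still treated as "floor" (assumed)

def free_space_profile(image):
    """For each column, scan upward from the bottom row and count the
    floor-like pixels before the first strong brightness edge.
    Taller counts mean more open space in that direction."""
    height, width = len(image), len(image[0])
    profile = []
    for x in range(width):
        depth = 0
        for y in range(height - 1, 0, -1):  # bottom row upward
            if abs(image[y][x] - image[y - 1][x]) > THRESHOLD:
                break  # strong edge: likely an obstacle boundary
            depth += 1
        profile.append(depth)
    return profile

def steer(profile):
    """Turn toward the third of the image with the most free space:
    -1 = left, 0 = straight, +1 = right. Pure integer arithmetic."""
    third = len(profile) // 3
    left = sum(profile[:third])
    center = sum(profile[third:2 * third])
    right = sum(profile[2 * third:])
    if center >= left and center >= right:
        return 0
    return -1 if left > right else 1
```

Everything here is 8-bit subtraction, comparison, and addition over about 4K pixels per frame, which is consistent with the abstract's point that such a system can run in real time on a very modest on-board computer with no floating-point hardware at all.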