| Content Provider | IEEE Xplore Digital Library |
|---|---|
| Author | Nair, Ravi |
| Copyright Year | 2010 |
| Description | Author affiliation: IBM Thomas J. Watson Research Center, Yorktown Heights, NY 10598 (Nair, Ravi) |
| Abstract | We are at the threshold of an explosion in new data, produced not only by large, powerful scientific and commercial computers, but also by the billions of low-power devices, from notebooks and mobile interface devices to cell phones and sensors of various kinds. The traditional techniques of processing such information by first storing it in databases and then manipulating and serving it through large computers are becoming too expensive. These complex large systems have a high acquisition cost but also suffer from high running costs, especially in power consumption. Both of these costs can be contained by recognizing that there is a precision implied by traditional computing that is not needed in the processing of most new types of data. The relaxation of precision, such as the strict guarantees of memory coherence and consistency, can help in the wider exploitation of known energy-efficient modes of computing like throughput computing. More important, relaxing the requirement of deterministic execution provides an opportunity to deploy, in the processing of this vast new data, the same low-power, low-cost technology that was used to generate the data in the first place. Such energy-efficient circuits suffer from greater unreliability and variability in performance when used in the high-throughput mode, but these problems can be addressed by changing the way we design such systems, changing the nature of the algorithms for such systems, and modifying the expectation of the quality of results produced by such systems. We have called this the approximate computing paradigm [1]. Solutions that provide non-exact results have long been used in computing. The precision of numbers represented in computers is limited, and hence values of quantities represented in a computer are only an approximation of their actual values. Heuristic algorithms produce solutions to hard problems in much shorter time than optimal algorithms, in exchange for results that may only approach in quality those produced by optimal algorithms. The difference is that these approaches, while approximate, produce the same results each time they are used on a piece of data, whereas in approximate computing we do not expect the same results each time the algorithm is exercised on the same data. There are two sources of imperfection in approximate computing. The first arises from imperfect execution of an algorithm. This can be due to problems with the design of the algorithm or of the hardware, due to faults that occur after deployment of the hardware, due to the variability of operation of circuits when pushed to their design limits, or due to malicious attacks on systems. The second arises from imperfection in the data stream itself because of missing or modified data, produced intentionally, as through data compression, or unintentionally, as through faulty communication channels. All these imperfections could potentially be rectified through the use of expensive techniques such as redundancy, conservative design, or a conservative device operating range. The goal of approximate computing, however, is to combat these sources of imperfection inexpensively and in an energy-efficient manner while producing results that may be different, yet acceptable. Computing models that achieve this goal have to address both the detection and the correction of such imperfections. The detection of such imperfections can be done either by the user observing and reacting to a wrong result, as in media applications; by the algorithm expecting a range of correct results, as in the estimation technique of [2]; or by run-time monitoring of the execution of the system, as in [3]. The correction of system behavior can be done either by attempting a different algorithm as in [2], by patching the code as in [3], or by repeating the execution. We will argue in this talk that future systems will need to combine all these techniques and integrate new ones into a single dynamically optimized system that employs feedback from the user to guide the high-level choice of energy-efficient algorithms, and that employs prediction based on past experience to guide the low-level energy-efficient execution of the system. This has a tantalizing similarity to some models of the functioning of a remarkably efficient approximate computing appliance we all know: the human brain. [1] R. Nair and D. A. Prener, "Computing, Approximately," Wild and Crazy Ideas VI, ASPLOS-XIII, Seattle, WA, March 2008. [2] S. P. Narayanan et al., "Computation as estimation: Estimation-theoretic IC design improves robustness and reduces power," Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, March 30-April 4, 2008. [3] J. H. Perkins et al., "Automatically Patching Errors in Deployed Software," 22nd ACM Symposium on Operating Systems Principles, Big Sky, Montana, October 2009. |
| Starting Page | 359 |
| Ending Page | 360 |
| File Size | 365858 |
| Page Count | 2 |
| File Format | |
| ISBN | 9781424485888 |
| DOI | 10.1145/1840845.1840921 |
| Language | English |
| Publisher | Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| Publisher Date | 2010-08-18 |
| Publisher Place | USA |
| Access Restriction | Subscribed |
| Rights Holder | Association for Computing Machinery, Inc. (ACM) |
| Subject Keyword | Approximation algorithms; Energy efficiency; Algorithm design and analysis; Approximation methods; Heuristic algorithms; Computers; Signal processing algorithms; Prediction systems; Approximate computing; Energy-efficient computing |
| Content Type | Text |
| Resource Type | Article |
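The abstract above describes a detect-and-correct pattern: imperfections are detected by checking results against an expected range, and corrected by repeating the execution, falling back to an exact path if needed. A minimal sketch of that loop, with all function names and the fault model hypothetical (the paper itself proposes no specific code):

```python
import random

def unreliable_square(x):
    """Hypothetical stand-in for an energy-efficient but unreliable
    computation: it occasionally returns a perturbed result,
    simulating the circuit variability the abstract describes."""
    result = x * x
    if random.random() < 0.3:                      # simulated transient fault
        result += random.uniform(-0.5, 0.5) * x * x
    return result

def square_with_retry(x, retries=5, tolerance=0.05):
    """Detection by expected range, correction by re-execution.
    Here the 'expected' value is computed exactly for illustration;
    in practice it would be a cheap estimate."""
    expected = x * x
    for _ in range(retries):
        candidate = unreliable_square(x)
        # Detection: accept any result inside the tolerance band.
        if abs(candidate - expected) <= tolerance * abs(expected):
            return candidate
    # Correction fallback: expensive exact recomputation.
    return expected

random.seed(1)                                     # repeatable demonstration
print(square_with_retry(10))                       # a value within 5% of 100
```

Accepted results can differ slightly from run to run, mirroring the abstract's point that approximate computing yields non-deterministic but acceptable outputs rather than bit-identical ones.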
National Digital Library of India (NDLI) is a virtual repository of learning resources which is not just a repository with search/browse facilities but also provides a host of services for the learner community. It is sponsored and mentored by the Ministry of Education, Government of India, through its National Mission on Education through Information and Communication Technology (NMEICT). Filtered and federated searching is employed to facilitate focused searching so that learners can find the right resource with the least effort and in minimum time. NDLI provides user group-specific services such as Examination Preparatory for school and college students and job aspirants. Services for researchers and general learners are also provided. NDLI is designed to hold content in any language and provides interface support for the 10 most widely used Indian languages. It is built to support all academic levels, including researchers and life-long learners, all disciplines, all popular forms of access devices, and differently-abled learners. It is designed to enable people to learn and prepare from best practices from all over the world and to facilitate researchers in performing inter-linked exploration from multiple sources. It is developed, operated, and maintained by the Indian Institute of Technology Kharagpur.
NDLI is a conglomeration of freely available, institutionally contributed, donated, or publisher-managed contents. Almost all of these contents are hosted on and accessed from their respective sources. The responsibility for the authenticity, relevance, completeness, accuracy, reliability, and suitability of these contents rests with the respective organizations, and NDLI has no responsibility or liability for them. Every effort is made to keep the NDLI portal up and running smoothly, except when unavoidable technical issues arise.
Ministry of Education, through its National Mission on Education through Information and Communication Technology (NMEICT), has sponsored and funded the National Digital Library of India (NDLI) project.
| Sl. | Authority | Responsibilities | Communication Details |
|---|---|---|---|
| 1 | Ministry of Education (GoI), Department of Higher Education | Sanctioning Authority | https://www.education.gov.in/ict-initiatives |
| 2 | Indian Institute of Technology Kharagpur | Host Institute of the Project: The host institute of the project is responsible for providing infrastructure support and hosting the project | https://www.iitkgp.ac.in |
| 3 | National Digital Library of India Office, Indian Institute of Technology Kharagpur | The administrative and infrastructural headquarters of the project | Dr. B. Sutradhar bsutra@ndl.gov.in |
| 4 | Project PI / Joint PI | Principal Investigator and Joint Principal Investigators of the project | Dr. B. Sutradhar bsutra@ndl.gov.in; Prof. Saswat Chakrabarti (will be added soon) |
| 5 | Website/Portal (Helpdesk) | Queries regarding NDLI and its services | support@ndl.gov.in |
| 6 | Contents and Copyright Issues | Queries related to content curation and copyright issues | content@ndl.gov.in |
| 7 | National Digital Library of India Club (NDLI Club) | Queries related to NDLI Club formation, support, user awareness program, seminar/symposium, collaboration, social media, promotion, and outreach | clubsupport@ndl.gov.in |
| 8 | Digital Preservation Centre (DPC) | Assistance with digitizing and archiving copyright-free printed books | dpc@ndl.gov.in |
| 9 | IDR Setup or Support | Queries related to establishment and support of Institutional Digital Repository (IDR) and IDR workshops | idr@ndl.gov.in |