Contents: Proceedings of the 7th Forum for Information Retrieval Evaluation (FIRE '15)

Context-driven Dimensionality Reduction for Clustering Text Documents
An Empirical Comparison of Statistical Term Association Graphs with DBpedia and ConceptNet for Query Expansion
Document Retrieval Metrics for Program Understanding
HBE: Hashtag-Based Emotion Lexicons for Twitter Sentiment Analysis
Automatic Identification of Conceptual Structures using Deep Boltzmann Machines
Construction of a Semi-Automated model for FAQ Retrieval via Short Message Service
OnForumS: The Shared Task on Online Forum Summarisation at MultiLing'15
word2vec or JoBimText?: A Comparison for Lexical Expansion of Hindi Words
A Comparative Study on Different Translation Approaches for Query Formation in the Source Retrieval Task
MESS: A Multilingual Error based String Similarity measure for transliterated name variants

Similar Documents

  • Efficiently denoising SMS text for FAQ retrieval (Article)
  • Frequently asked question 06-0026 (2007)
  • Frequently asked questions retrieval for Croatian based on semantic textual similarity (Article)
  • An Effective Similarity Measurement for FAQ Question Answering System (Article)
  • Question Answering from Frequently-Asked Question Files: Experiences with the FAQ Finder System (1997) (Technical Report)
  • Question Answering from Frequently-Asked Question Files: Experiences with the FAQ Finder System (1996) (Technical Report)
  • Retrieving answers from frequently asked questions pages on the web (Article)
  • Modular automated transport -- frequently asked questions (FAQ) (2000)
  • Knowledge-Based Information Retrieval From Semi-Structured Text (1996) (Article)

Construction of a Semi-Automated model for FAQ Retrieval via Short Message Service

Content Provider ACM Digital Library
Author Agarwal, Amit; Bhatt, Gaurav; Mittal, Ankush; Gupta, Bhumika
Abstract Mobile phones are currently one of the most widespread media for communicating information of any kind to the general public. As one of the fastest-spreading technologies, reaching even the remotest of areas, this highly sought-after contemporary resource has found applications in areas such as healthcare, education, banking and internet crime. On this account, Short Message Service (SMS) via mobile phones can serve as an efficient tool for retrieving answers to various Frequently Asked Questions (FAQs) in multiple domains. This application of text messaging can be truly substantial only if the limitations caused by the large amount of noise in SMS text can be eliminated. The solution proposed in this paper denoises the text using a similarity measure that aggregates results from prefix matching, suffix matching and a similarity ratio. To further refine these results, supervised machine learning with the Naïve Bayes theorem on an N-gram Markov model is applied, using a training database of FAQs in various domains to compute the probabilities of consecutive occurrence of word bigrams. Then, using set operations such as intersection and difference, the corrected query is matched against the FAQ corpus to generate the questions most proximate to it. To demonstrate the accuracy of the proposed algorithm, it was evaluated on a set of queries collected from mobile phone users, and the results were compared with those of existing methodologies. (A minimal illustrative sketch of this pipeline is given after the metadata record below.)
Starting Page 35
Ending Page 38
Page Count 4
File Format PDF
ISBN 9781450340045
DOI 10.1145/2838706.2838717
Language English
Publisher Association for Computing Machinery (ACM)
Publisher Date 2015-12-04
Publisher Place New York
Access Restriction Subscribed
Subject Keyword Suffix; Similarity measure; Frequently asked question (FAQ); Noise removal; Naïve Bayes; Prefix; N-gram model; Short message service (SMS)
Content Type Text
Resource Type Article
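
The abstract above outlines a three-stage pipeline: denoise each SMS token with prefix/suffix matching plus a similarity ratio, refine the corrections with bigram probabilities learned from the FAQ training set (the Naïve Bayes / N-gram Markov step), and match the corrected query against the FAQ corpus with set operations. The sketch below is a minimal toy re-creation of that idea, not the authors' implementation; the dictionary, FAQ list, scoring weights and all names (DICTIONARY, FAQS, BIGRAM_COUNTS, token_score, denoise, retrieve) are illustrative assumptions.

```python
# Illustrative sketch only: a toy version of the SMS-denoising and FAQ-matching
# pipeline described in the abstract. All data and names are assumptions.
from collections import defaultdict
from difflib import SequenceMatcher

DICTIONARY = {"what", "is", "the", "fee", "for", "admission", "exam", "date"}
FAQS = ["what is the admission fee", "what is the exam date"]

# Bigram counts from the FAQ training corpus (stand-in for the N-gram Markov model).
BIGRAM_COUNTS = defaultdict(int)
for q in FAQS:
    words = q.split()
    for a, b in zip(words, words[1:]):
        BIGRAM_COUNTS[(a, b)] += 1

def common_prefix_len(a, b):
    """Length of the shared prefix of two strings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def token_score(noisy, word):
    """Aggregate prefix match, suffix match and an overall similarity ratio."""
    prefix = common_prefix_len(noisy, word)
    suffix = common_prefix_len(noisy[::-1], word[::-1])
    ratio = SequenceMatcher(None, noisy, word).ratio()
    return prefix + suffix + ratio

def denoise(tokens):
    """Map each noisy SMS token to a dictionary word; bigram counts break near-ties."""
    corrected = []
    for tok in tokens:
        prev = corrected[-1] if corrected else None
        best = max(
            DICTIONARY,
            key=lambda w: (token_score(tok, w),
                           BIGRAM_COUNTS[(prev, w)] if prev else 0),
        )
        corrected.append(best)
    return corrected

def retrieve(sms_text):
    """Rank FAQ questions by word-set overlap (intersection) with the corrected query."""
    query = set(denoise(sms_text.lower().split()))
    return sorted(FAQS, key=lambda q: len(query & set(q.split())), reverse=True)

print(retrieve("wat iz da admissn fe"))  # the admission-fee FAQ should rank first
```

In this toy run the noisy query "wat iz da admissn fe" is corrected token by token and the admission-fee FAQ ranks first by word-set overlap; a real system would use a much larger dictionary and FAQ corpus and proper smoothed bigram probabilities rather than raw counts.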