Similar Documents
  • Quadratic convex reformulations for quadratic 0–1 programming (Article)
  • Partial Lagrangian relaxation for general quadratic programming (Article)
  • Learning to rank: a ROC-based graph-theoretic approach (Article)
  • Progressive methods in multiple criteria decision analysis (Article)
  • Multiple criteria decision aiding: a dialectical perspective (Article)
  • Stochastic semidefinite programming: a new paradigm for stochastic optimization (Article)
  • Nonsmooth optimization: theory and algorithms (Article)
  • Optimization of modular machining lines (Article)
  • Compact linearization for binary quadratic problems (Article)

Kernel-based learning methods for preference aggregation

Content Provider Springer Nature Link
Author Waegeman, Willem; De Baets, Bernard; Boullart, Luc
Copyright Year 2008
Abstract The mathematical representation of human preferences has been a subject of study for researchers in different fields. In multi-criteria decision making (MCDM) and fuzzy modeling, preference models are typically constructed by interacting with the human decision maker (DM). However, a DM often finds it difficult to specify precise values for certain parameters of the model and feels more comfortable giving holistic judgements for some of the alternatives. Inference and elicitation procedures then assist the DM in finding a satisfactory model and in assessing unjudged alternatives. In a related but more statistical way, machine learning algorithms can also infer preference models with similar setups and purposes, but with less interaction with the DM required or allowed. In this article we discuss the main differences between the two types of inference and, in particular, present a hybrid approach that combines the best of both worlds. This approach consists of a very general kernel-based framework for constructing and inferring preference models. Additive models, for which interpretability is preserved, and utility models can be considered as special cases. Besides generality, important benefits of the approach are its robustness to noise and good scalability. We show in detail how the framework can be used to aggregate single-criterion outranking relations, resulting in a flexible class of preference models for which domain knowledge can be specified by a DM. (A minimal, illustrative sketch of this pairwise, kernel-based setup is given after the record below.)
Starting Page 169
Ending Page 189
Page Count 21
File Format PDF
ISSN 1619-4500
Journal 4OR
Volume Number 7
Issue Number 2
e-ISSN 1614-2411
Language English
Publisher Springer-Verlag
Publisher Date 2008-09-18
Publisher Place Berlin, Heidelberg
Access Restriction One Nation One Subscription (ONOS)
Subject Keyword Preference relations; Kernel methods; Aggregation of criteria; Inference procedures; Quadratic programming; Preference learning; Industrial and Production Engineering; Optimization; Operations Research/Decision Theory
Content Type Text
Resource Type Article
Subject Theoretical Computer Science; Management Science and Operations Research; Management Information Systems; Computational Theory and Mathematics
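
Illustrative Sketch

The abstract describes inferring a kernel-based preference model from pairwise holistic judgements by solving a quadratic program. The Python sketch below is only a hedged illustration of that general idea, not the authors' formulation: it uses the classical pairwise-difference (RankSVM-style) reduction with synthetic criteria scores, and scikit-learn's SVC serves as a stand-in for the quadratic programming solver. Every name, parameter, and number in it is invented for illustration.

# Minimal, illustrative sketch (not the paper's exact method): learn a
# preference model from pairwise judgements via the pairwise-difference
# reduction, which is itself solved as a quadratic program inside SVC.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic alternatives, each described by scores on 3 criteria.
X = rng.normal(size=(30, 3))
hidden_utility = X @ np.array([0.6, 0.3, 0.1])  # unknown additive utility

# Holistic judgements: pairs (i, j) meaning "alternative i is preferred to j".
pairs = [(i, j) for i in range(30) for j in range(30)
         if hidden_utility[i] > hidden_utility[j] + 0.5]

# Pairwise-difference reduction: each judgement yields one example x_i - x_j
# labelled +1, plus its mirrored counterpart labelled -1.
diffs = np.vstack([X[i] - X[j] for i, j in pairs] +
                  [X[j] - X[i] for i, j in pairs])
labels = np.array([1] * len(pairs) + [-1] * len(pairs))

# A linear kernel recovers an interpretable additive (utility) model; the
# paper's framework allows richer kernels and the aggregation of
# per-criterion outranking relations, which this toy script does not attempt.
model = SVC(kernel="linear", C=10.0).fit(diffs, labels)

def prefers(a, b):
    """Predict whether alternative a is preferred to alternative b."""
    return model.predict((X[a] - X[b]).reshape(1, -1))[0] == 1

# Should print True: the highest-utility alternative beats the lowest.
print(prefers(int(np.argmax(hidden_utility)), int(np.argmin(hidden_utility))))

Training a binary classifier on difference vectors x_i - x_j is the standard reduction from pairwise preference data to a quadratic program; the kernel choice trades interpretability (the additive and utility special cases mentioned in the abstract) against flexibility.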