Discrimination, Bias, Fairness, and Trustworthy AI
| Content Provider | MDPI |
|---|---|
| Author | Varona, Daniel; Suárez, Juan Luis |
| Copyright Year | 2022 |
| Description | In this study, we analyze “Discrimination”, “Bias”, “Fairness”, and “Trustworthiness” as working variables in the context of the social impact of AI. It has been identified that there exists a set of specialized variables, such as security, privacy, responsibility, etc., that are used to operationalize the principles in the Principled AI International Framework. These variables are defined in such a way that they contribute to others of more general scope, for example, the ones examined in this study, in what appears to be a generalization–specialization relationship. Our aim is to comprehend how we can use the available notions of bias, discrimination, fairness, and other related variables that must be assured during the software project’s lifecycle (security, privacy, responsibility, etc.) when developing trustworthy algorithmic decision-making systems (ADMS). Bias, discrimination, and fairness are mainly approached with an operational interest by the Principled AI International Framework, so we included sources from outside the framework to complement (from a conceptual standpoint) their study and their relationship with each other. |
| Starting Page | 5826 |
| e-ISSN | 2076-3417 |
| DOI | 10.3390/app12125826 |
| Journal | Applied Sciences |
| Issue Number | 12 |
| Volume Number | 12 |
| Language | English |
| Publisher | MDPI |
| Publisher Date | 2022-06-08 |
| Access Restriction | Open |
| Subject Keyword | Applied Sciences; Information and Library Science; Discrimination; Bias; Fairness; Trustworthy ADMS; Principled AI; Social Impact of AI; Ethics and AI |
| Content Type | Text |
| Resource Type | Article |