On Fairness and Machine Learning: Can (and Should) the Algorithm Be Fair?
DOI:
https://doi.org/10.30827/acfs.v57i.25250
Keywords:
Fairness, Machine Learning, Algorithm, Bias, Equality
Abstract
The increasingly frequent use of Artificial Intelligence in the field of law forces us to consider whether automated decisions can, and should, be fair. In Machine Learning, the algorithm has the capacity to learn, which gives it a certain degree of autonomy. The biases, discrimination and inequalities that derive from automated decisions expose the myth of the fair algorithm. The standard of justice required of law in its analogue dimension must also be required in the digital dimension. In this paper, starting from the initial difficulty that there is no agreement on what fairness is, I examine how fairness can be incorporated into the algorithm. This requires a prior analysis of the legal-philosophical foundations and of some theories of justice (utilitarian, contractualist, communitarian, egalitarian) from which parameters, correctives and guarantees can be established to achieve the essential correlation between artificial fairness and legal fairness.
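As an illustration of what incorporating fairness into the algorithm can mean in formal terms, the sketch below (Python, with invented toy data) computes two group-fairness criteria from the machine-learning literature cited in the references: a demographic-parity gap, in the spirit of Dwork et al. (2012), and an equal-opportunity gap, in the spirit of Hardt et al. (2016). It is a minimal sketch for orientation only, not the method defended in this paper; the variable names and data are hypothetical.

    # Minimal sketch (hypothetical toy data): two group-fairness criteria
    # often discussed in the fairness-in-ML literature.
    import numpy as np

    # y_true: actual outcomes, y_pred: automated decisions, group: protected attribute (0/1)
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
    group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

    def demographic_parity_gap(y_pred, group):
        # Difference in the rate of favourable decisions between the two groups.
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    def equal_opportunity_gap(y_true, y_pred, group):
        # Difference in true-positive rates between the two groups (cf. Hardt et al., 2016).
        tpr = []
        for g in (0, 1):
            mask = (group == g) & (y_true == 1)
            tpr.append(y_pred[mask].mean())
        return abs(tpr[0] - tpr[1])

    print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
    print("Equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))

A rule such as "the gap must not exceed a given threshold" is one way such formal criteria are turned into the parameters, correctives and guarantees discussed in the paper.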
References
Anderson, Elizabeth S. (1999). What Is the Point of Equality? Ethics. International Journal of Social, Political, and Legal Philosophy, 109(2), 287-337.
Añón Roig, María José (2022). Desigualdades algorítmicas: conductas de alto riesgo para los derechos humanos. Derechos y Libertades, Época II, 47, 17-49.
Barocas, Solon & Selbst, Andrew D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671-732.
Barocas, Solon; Hardt, Moritz & Narayanan, Arvind (2018). Fairness and Machine Learning. fairmlbook.org.
Barrio Andrés, M. (ed.) (2019). Legal Tech. La transformación digital de la abogacía. Wolters Kluwer.
Barry, Brian (1989). Theories of Justice. Hemel Hempstead: Harvester-Wheatsheaf.
Barry, Brian (1995). Justice as Impartiality. Oxford: Oxford University Press.
Bellver Capella, Vicente (2021). Transhumanismo, discurso transgénero y digitalismo: ¿exigencias de justicia o efectos del espíritu de abstracción? Persona y Derecho, 84, 17-53. https://revistas.unav.edu/index.php/persona-yderecho/article/view/40651
Binns, Reuben (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of Machine Learning Research, 81, 149-159.
Borges de Macedo, U. (2001). A ética do futuro. En: A presença da moral na cultura brasileira. Ensaio de Ética e Historia das Idéias no Brasil. Londrina: UEL.
Buchanan, Bruce G. & Headrick, Thomas E. (1970). Some Speculation about Artificial Intelligence and Legal Reasoning, Stanford Law Review, 23(1), 40-62.
Campione, Roger (2021). La plausibilidad del Derecho en la era de la Inteligencia Artificial. Filosofía carbónica y filosofía silícica del derecho. Madrid: Dykinson.
Carey, Alycia N. & Wu, Xintao (2022). The fairness field guide: perspectives from social and formal sciences. arXiv:2201.05216v2 [cs.AI].
Cárdenas Krenz, Ronald (2021). ¿Jueces robots? Inteligencia artificial y derecho / Judges robots? Artificial intelligence and law. Justicia & Derecho, 4, 1-10.
Casadei, Thomas & Pietropaoli, Stefano (2021). Intelligenza artificiale: fine o confine del diritto? En: Casadei, Thomas; Pietropaoli, Stefano (a cura di), Diritto e Tecnologie Informatiche (pp. 219-232). Milán: Wolters Kluwer.
Caton, Simon & Haas, Christian (2020). Fairness in Machine Learning: a survey, arXiv preprint. arXiv:2010.04053, 1-33.
Cohen, G.A. (2011). Fairness and Legitimacy in Justice, and: Does Option Luck ever Preserve Justice? En: On the Currency of Egalitarian Justice and Other Essays in Political Philosophy, edited by Michael Otsuka. Princeton NJ: Princeton University Press.
Courtland, Rachel (2018). Bias detectives: the researchers striving to make algorithms fair. Nature, 558, 357-360.
Crawford, Kate (2021). Atlas of AI. Power, Politics and the Planetary Costs of Artificial Intelligence.
Crenshaw, Kimberle (1991). Mapping the Margins: Intersectionality, Identity Politics, and Violence against Women of Color, Stanford Law Review, 43(6) 1241-1299. https://doi.org/10.2307/1229039
Chouldechova, Alexandra (2016). Fair Prediction with Disparate Impact: A study of Bias in recidivism prediction instruments, Big Data.
Dastin, Jeffrey (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/amazoncom-jobsautomation-idUSL2N1VB1FQ
Dhasarathy, A., Jain, S. & Khan, N. (2020). When governments turn to AI: Algorithms, trade-offs, and trust. McKinsey & Company. https://www.mckinsey.com/industries/public-and-social-sector/our-insights/when-governments-turnto-ai-algorithms-trade-offs-and-trust.
De Asís Roig, Rafael (2014). Una mirada a la robótica desde los derechos humanos. Madrid: Dykinson.
De Asís Roig, Rafael (2022). Derechos y tecnologías. Madrid: Dykinson.
Dietvorst, Berkeley J.; Simmons, Joseph P. & Massey, Cade (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114-126. doi: 10.1037/xge0000033.
Dignum, Virginia (2021). The Myth of Complete AI-Fairness, cs.CY, 1-6.
Dwork, Cynthia; Hardt, Moritz; Pitassi, Toniann; Reingold, Omer & Zemel, Richard (2012). Fairness through awareness. Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 214-226.
Dworkin, Ronald (2003). Virtud soberana: la teoría y la práctica de la igualdad. Barcelona: Paidós.
Eubanks, Virginia (2021). La automatización de la desigualdad. Herramientas de tecnología avanzada para supervisar y castigar a los pobres, 2.ª ed., trad. de Gemma Deza. Madrid: Capitan Swing.
Ferguson, A. G. (2017). The rise of big data policing: Surveillance, race, and the future of law enforcement. New York University Press.
Floridi, Luciano (2022). Etica dell’intelligenza artificiale. Sviluppi, opportunità, sfide. Raffaello Cortina.
Friedler, Sorelle A; Scheidegger, Carlos & Venkatasubramanian, Suresh (2016). On the (im)possibility of fairness. https://arxiv.org/abs/1609.07236
Goldman, Barry & Cropanzano, Russell (2015). “Justice” and “fairness” are not the same thing. Journal of Organizational Behavior, 36, 313-318. DOI: 10.1002/job.1956
Hardt, Moritz; Price, Eric & Srebro, Nati (2016). Equality of opportunity in supervised learning. En: Advances in Neural Information Processing Systems.
Hobson, Zoë; Yesberg, Julia A.; Bradford, Ben & Jackson, Jonathan (2021). Artificial fairness? Trust in algorithmic police decision-making, J Exp Criminol, 1-25.
Huang, Wenxuan (2022). Reduce model unfairness with maximal-correlation-based fairness optimization. Master Thesis. https://repository.tudelft.nl/islandora/object/uuid%3A8f40561a-80be-4047-9760-ab27207ffc
Katz, Yarden (2017). Manufacturing an Artificial Intelligence Revolution: Neoliberalism and the ‘new’ big data. Harvard University.
Katz, Yarden (2020). Artificial Whiteness. Politics and Ideology in Artificial Intelligence. Columbia University Press.
Kleinberg, Jon; Mullainathan, Sendhil & Raghavan, Manish (2017). Inherent tradeoffs in the fair determination of risk scores. Proceedings of Innovations in Theoretical Computer Science (ITCS 2017). https://arxiv.org/abs/1609.05807
Kleinberg, Jon; Ludwig, Jens; Mullainathan, Sendhil & Rambachan, Ashesh (2018). Advances in big data research in economics. Algorithmic fairness. AEA Papers and Proceedings, 108, 22-27.
Konstantinov, Nikola (2022). “Encontrar la equidad en la IA” (entrevista realizada por Sandrine Ceurstemont, 17 de mayo de 2022). https://topbigdata.es/encontrar-la-equidad-en-la-ia-noticias/
Kraus, Rachel (2018). Amazon used AI to promote diversity. Too bad it’s plagued with gender bias. Algorithms reflect societal biases in more ways than one. October 10, 2018. https://mashable.com/article/amazon-sexist-recruitingalgorithm-gender-bias-ai
Kusner, Matt; Loftus, Joshua; Russell, Chris & Silva, Ricardo (2017). Counterfactual fairness. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. https://proceedings.neurips.cc/paper/2017/file/a486cd07e4ac3d270571622f4f316ec5-Paper.pdf
Galeotti, Mattia (2018). Discriminazione e algoritmi. Incontri e scontri tra diverse idee di fairness, The lab’s quarterly, XX(4) 73-96.
Llano Alonso, Fernando Higinio (2018). Homo excelsior. Los límites jurídicos del transhumanismo, Valencia: Tirant lo Blanch.
Lledó Yagüe, Francisco (2022). Los nuevos esclavos digitales del siglo xxi y la superación del hombre óptimo. ¿Hacia un nuevo derecho robótico? Madrid: Dykinson.
MacIntyre, Alasdair [1988] (2001). Justicia y racionalidad: conceptos y contextos. Trad. de Alejo José G. Sison. Eiunsa.
Martínez García, Jesús Ignacio (2020). La respuesta jurídica. Anuario de Filosofía del Derecho, 36, 347-371.
Mehrabi, Ninareh; Morstatter, Fred; Saxena, Nripsuta; Lerman, Kristina & Galstyan, Aram (2021). A survey on bias and fairness in Machine Learning. arXiv:1908.09635v3 [cs.LG]. https://arxiv.org/abs/1908.09635
Miller, David (2021). Justice. En: Edward N. Zalta (editor), The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.
Nussbaum, Martha (2006). Las fronteras de la justicia. Consideraciones sobre la exclusión. Barcelona: Paidós.
O’Neil, Cathy (2016). Weapons of math destruction. How big data increases inequality and threatens democracy. New York: Penguin [trad. española (2018): Armas de destrucción matemática: cómo el big data aumenta la desigualdad y amenaza la democracia, trad. de Violeta Arranz de la Torre. Capitán Swing].
Pérez-Luño, Antonio Enrique (1996). Manual de Informática y Derecho. Barcelona: Ariel.
Rafanelli, Lucía M. (2022). Justice, injustice, and artificial intelligence: Lessons from political theory and philosophy, Big Data & Society, January-June: 1-5. https://journals.sagepub.com/doi/pdf/10.1177/20539517221080676
Rawls, John [1971] (2006). Teoría de la Justicia, trad. de María Dolores González, 6.ª reimpresión.
Rawls, John [2001] (2012). La justicia como equidad. Una reformulación, trad. Andrés de Francisco. Edición a cargo de Erin Kelly. Barcelona: Paidós/Estado y Sociedad.
Santangelo, Antonio (2020). Equità degli algoritmi e democrazia, DigitCult, 5(2), 21-30. http://dx.doi.org/10.53136/979125994120634
Sandel, Michael (2009). Liberalismo y los límites de la justicia. Barcelona: Gedisa.
Sandel, Michael (2018). Justicia. ¿Hacemos lo que debemos? Barcelona: Penguin Random House.
Saxena, Nripsuta Ani; Huang, Karen; DeFilippis, Evan; Radanovic, Goran; Parkes, David C. & Liu, Yang (2019). How Do Fairness Definitions Fare? Examining Public Attitudes Towards Algorithmic Definitions of Fairness, 1-8. https://econcs.seas.harvard.edu/files/econcs/files/saxena_ai19.pdf
Scanlon, Thomas M. (1998). What We Owe to Each Other, Belknap Press of Harvard University Press.
Scanlon, Thomas M. (2020). Why does inequality matter? Oxford: Oxford University Press.
Sen, Amartya (1980). Equality of What? En: McMurrin, S. (ed.), Tanner Lectures on Human Values, Volume 1. Cambridge: Cambridge University Press.
Solar Cayón, Jesús Ignacio (2019). La Inteligencia Artificial Jurídica. El impacto de la innovación tecnológica en la práctica del Derecho y el mercado de servicios jurídicos, Cizur Menor (Navarra): Aranzadi.
Soriano, Alba (2021). Decisiones automatizadas y discriminación: aproximación y propuestas generales, Revista General de Derecho Administrativo, 56, 1-45.
Stewart, Matthew (2020). Cómo lograr la equidad en los algoritmos. https://www.codetd.com/es/article/12010244
Tang, Zeyu; Zhang, Jiji & Zhang, Kun. (2022). What-Is and How-To for Fairness in Machine Learning: A Survey, Reflection, and Perspective, arXiv preprint. arXiv:2010.04053
Verma, Sahil & Rubin, Julia (2018). Fairness Definitions Explained, 2018 ACM/IEEE International Workshop on Software Fairness, 1-7.
Walzer, Michael (2001). Las esferas de la justicia: una defensa del pluralismo y la igualdad, 2.ª edición. México: Fondo de Cultura Económica.
Wang, Zhao (2021). Fairness-aware multi-task and meta learning. Doctoral dissertation in computer science, The University of Texas at Dallas.
Wang, Zhao & Shu, Kai (2021). Enhancing Model Robustness and Fairness with Causality: A Regularization Approach. Conference: Proceedings of the First Workshop on Causal Inference and NLP, DOI:10.18653/v1/2021.cinlp-1.3
Xiang, Alice & Raji, Inioluwa Deborah (2019). On the Legal Compatibility of Fairness Definitions, arXiv preprint arXiv:1912.00761, 1-6.
Young, Iris Marion (2011). Responsibility for Justice, New York: Oxford University Press.
Young, Iris Marion (2000). La justicia y la política de la diferencia. Trad. de Silvina Álvarez. Madrid: Ediciones Cátedra.
Zafar, Muhammad Bilal; Valera, Isabel; Gómez Rodríguez, Manuel & Gummadi, Krishna P. (2017). Fairness beyond disparate treatment and disparate impact: Learning classification without disparate mistreatment. En: Proceedings of the 26th International Conference on World Wide Web.
Reports
Fairness e Machine Learning. Il concetto di equità e relative formalizzazioni nel campo dell’apprendimento automatico. Nexa Center for Internet & Society Working Paper no. 2/2018, Politecnico di Torino. https://nexa.polito.it/nexacenterfiles/Articolo%20TIM.pdf
IBM Policy Lab. Cómo mitigar el sesgo en los sistemas de Inteligencia Artificial (8 de junio de 2021). https://www.ibm.com/blogs/policy/latin-america/2021/06/08/como-mitigar-el-sesgo-en-los-sistemas-de-inteligencia-artificial/
Nuevas normas sobre la Inteligencia Artificial: preguntas y respuestas. Comisión Europea (21 de abril de 2021). https://ec.europa.eu/commission/presscorner/detail/es/qanda_21_1683
Informe Automating Society 2020. Algorithm Watch. https://automatingsociety.algorithmwatch.org/
Propuesta de Reglamento del Parlamento Europeo y del Consejo por el que se establecen normas armonizadas en materia de inteligencia artificial (Ley de inteligencia artificial) y se modifican determinados actos legislativos de la Unión, COM (2021) 206 final, 2021/0106 (COD), 21/4/2021. https://eur-lex.europa.eu/legal-content/ES/TXT/?uri=celex:52021PC0206
Copyright (c) 2023 Anales de la Cátedra Francisco Suárez
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Authors retain the rights to their works. ACFS requests that, if a work is later published elsewhere, its prior publication in ACFS be acknowledged.