Big Policy Canvas Research Roadmap

[...] * Algorithms are designed by humans and increasingly learn by observing human behaviour through data; they therefore tend to adopt the biases of their developers and of society as a whole. Algorithmic decision making can thus reinforce the prejudice and bias present in the data it is fed, ultimately compromising basic human rights such as the right to a fair process. Bias is typically not written into the code, but develops through machine learning based on data.
* For this reason bias is particularly difficult to detect, and can be uncovered only through ex-post auditing and simulation rather than ex-ante analysis of the code. There is a need for common practices and tools for controlling data quality, bias and transparency in algorithms. Furthermore, as required by the GDPR, there is a need for ways to explain machine decisions in a human-understandable form.
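The ex-post auditing mentioned above can be illustrated with a minimal sketch: comparing the rate of favourable decisions a model produces across demographic groups (a demographic-parity check). The function name and the sample data below are hypothetical, for illustration only; real audits would use established fairness toolkits and far richer metrics.

```python
from collections import defaultdict

def demographic_parity_gap(groups, predictions):
    """Ex-post audit: gap in positive-outcome rates across groups.

    groups: list of group labels (e.g. values of a protected attribute)
    predictions: list of binary model decisions (1 = favourable outcome)
    Returns the difference between the highest and lowest positive rate.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for g, p in zip(groups, predictions):
        totals[g] += 1
        positives[g] += p
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: group "A" receives 3/4 favourable decisions,
# group "B" only 1/4 -- a gap of 0.5 that code inspection alone would miss.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
preds = [1, 1, 1, 0, 1, 0, 0, 0]
print(demographic_parity_gap(groups, preds))  # 0.5
```

Note that the audit treats the model as a black box: it inspects only inputs and decisions, which is exactly why such checks can be run ex post even when the code itself reveals no bias.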
A great scientific and philosophical discussion. On the one hand, algorithms trained on real data come closest to "objectivity". On the other, deliberately adapting behaviour can bias the result.
Sardan Palides, 24/07/2019 09:43