Binary Governance: Lessons from the GDPR’s Approach to Algorithmic Accountability (Q2401)


Revision as of 09:42, 8 December 2019

Language: English
Label: Binary Governance: Lessons from the GDPR’s Approach to Algorithmic Accountability
Description: No description defined

    Statements

    Secret profiles and decisions made based on secret profiling can threaten personhood and thus dignity by proscribing active individual involvement in the construction of this objectified version of the self.
    the “ ‘data shadows’ . . . threaten to usurp the constitutive authority of the physical self despite their relatively attenuated and often misleading nature”
    Algorithmic decision-making founded on individual profiling limits the choices and, thus, the freedom a person will have.
    Limiting the choices we see—whether by failing to show opportunities or by offering only bad options—limits our freedom to make choices.
    Failing to be transparent about the fact that individuals are being targeted, or about the reasons why they are targeted, may itself threaten autonomy. Secret profiling and decision-making can lead to manipulation. Without knowing how we are being targeted or why, we can be manipulated into making choices that are not autonomous at all.
    Concerns about autonomy and the potential for manipulation, to a great degree, motivated the indignation around Cambridge Analytica’s targeted manipulation of U.S. voters prior to the 2016 election (and motivated the California legislature to enact the California Consumer Privacy Act in 2018).
    The dominant rationale for regulating algorithmic decision-making is an instrumental (or consequentialist) rationale. We should regulate algorithms, this reasoning goes, to prevent the consequences of baked-in bias and discrimination and other kinds of error.
    The instrumental rationale for regulating algorithmic decision-making counsels that regulation should try to correct these problems, often by using systemic accountability mechanisms, such as ex ante technical requirements, audits, or oversight boards, to do so.
    The other two rationales for regulating algorithmic decision-making, however, suggest that systemic oversight is not enough. Both dignitary and justificatory reasoning point towards including individual rights.