The efficiency and objectivity that algorithms promise account for the growing reliance on algorithmic decision-making. Decisions that were historically made by humans are now made by computer systems: they count votes, approve loan and credit card applications, decide welfare and financial aid claims, and grant visas.[1]

These decision-making processes might lead to fairer decisions than those made by humans, who may be influenced by greed, prejudice, or fatigue. However, they have been criticized for their opacity and for their potential to entrench discrimination. Even with the best intentions, data-driven algorithmic decision-making can reproduce existing patterns of discrimination, inherit the prejudice of human decision-makers, or simply reflect the biases that persist in our society.[2]

Discrimination via algorithmic decision-making can occur in two ways. First, two people may be alike in all relevant respects yet be treated differently: two defendants may commit the same crime, for instance, but one receives a lighter sentence. Second, relevant differences between two people may go unaccounted for, so that they are treated the same. A person’s zip code, for example, may be used as a factor in estimating the likelihood of default on a loan, and this failure to acknowledge relevant details about the individual makes the outcome unfair.[3]
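To make the zip code example concrete, below is a minimal Python sketch of a toy credit-scoring function. The weights, zip codes, and penalty values are all invented for illustration; the point is only the mechanism: once zip code enters the model as a feature, two financially identical applicants can receive different scores based on location alone.

```python
# Hypothetical credit-scoring sketch. All weights and zip codes are invented
# for illustration; real models are far more complex.

def credit_score(income: float, debt: float, zip_code: str) -> float:
    """Toy linear score: higher is better."""
    # A learned penalty attached to certain zip codes. In practice, such
    # weights can emerge from historical data that encodes past discrimination.
    zip_penalty = {"10001": 0.0, "60601": 15.0}
    return 0.002 * income - 0.004 * debt - zip_penalty.get(zip_code, 5.0)

# Two applicants identical in every financially relevant respect:
a = credit_score(income=50_000, debt=10_000, zip_code="10001")
b = credit_score(income=50_000, debt=10_000, zip_code="60601")
print(a, b)  # 60.0 vs. 45.0 -- zip code alone produces the different outcomes
```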

How, then, can we take advantage of what these systems have to offer while holding them accountable? The Data Privacy Act (DPA) offers a tool: the right to an explanation.[4] Explanations serve many purposes. They help data subjects understand why a particular decision was reached and what could be changed to obtain a desired result in the future. Explanations also provide grounds to contest an adverse decision.[5]

Kinds of Explanations

Explanations may be “white box” or “black box”. White box explanations describe how a system produces a recommendation, disclosing details about the system’s operation to the user. They increase users’ belief in the competence and benevolence of a system, as well as its perceived usefulness.[6]

Black box explanations, on the other hand, provide justifications for a system and its outcomes. They explain the motivations behind the system but do not disclose how it works.[7] By exposing the logic behind a decision, black box explanations can be used to prevent errors. They can also be used to determine, in case of a dispute, whether certain criteria were applied appropriately or inappropriately.[8]
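The following sketch contrasts the two kinds of explanations for an invented linear credit model. The feature names, weights, and approval threshold are assumptions made purely for this example: the white box explanation discloses the system’s internals (each feature’s weight and contribution), while the black box explanation justifies the outcome, here as a counterfactual, without revealing how the score is computed.

```python
# Hypothetical contrast between white box and black box explanations for an
# invented linear credit model. All names and numbers are assumptions.

WEIGHTS = {"income": 0.002, "debt": -0.004}
THRESHOLD = 50.0  # minimum score for approval

def score(applicant: dict) -> float:
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def white_box_explanation(applicant: dict) -> dict:
    """Discloses how the system works: each feature's weight and its
    contribution to the final score."""
    return {f: {"weight": WEIGHTS[f], "contribution": WEIGHTS[f] * applicant[f]}
            for f in WEIGHTS}

def black_box_explanation(applicant: dict) -> str:
    """Justifies the outcome without revealing internals: a counterfactual
    stating what change would have flipped the decision."""
    shortfall = THRESHOLD - score(applicant)
    if shortfall <= 0:
        return "Approved: your application met the required score."
    extra_income = shortfall / WEIGHTS["income"]
    return (f"Denied: had your annual income been about {extra_income:,.0f} "
            "higher, you would have been approved.")

applicant = {"income": 40_000, "debt": 12_000}
print(white_box_explanation(applicant))  # exposes weights and contributions
print(black_box_explanation(applicant))  # justification only, no internals
```

Which form is appropriate depends on the audience: the white box output is what a technical auditor would need, while the counterfactual is closer to what an ordinary data subject could act on.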

Is there a Right in the Philippines?

Unlike in the EU General Data Protection Regulation (GDPR),[9] the existence of a right to white or black box explanations may be clearly inferred from the provisions of the DPA. For white box explanations, Section 16(b)(3) states that a data subject is entitled to be furnished information on the scope and method of the personal information processing before his personal information enters the processing system. After entry into the system, the law also grants the data subject the right to access the manner by which his data were processed.[10]

Meanwhile, the DPA provides for a limited right to black box explanations. Section 16(c)(6) states that the data subject has the right to access “information on automated processes where the data will or likely to be made as the sole basis for any decision significantly affecting or will affect the data subject.” The provision is broad enough to cover both black and white box explanations. However, the right to black box explanations is available only where the automated process is the sole basis for the decision.

Practical Considerations

The right to an explanation is only one tool for scrutinizing, challenging, and restraining algorithmic decision-making. While it affords transparency and enables users to challenge decisions, it has several practical flaws.[11]

First, relying on explanations to bolster individual rights places a heavy burden on users to challenge bad decisions. Ordinary requests for access to information in the EU require an enormous amount of time and persistence.[12] Moreover, historically, very few data subjects in the EU have made use of their right of access; even fewer will likely use the right to an explanation in the Philippines.[13]

Second, even if obtained, an explanation may not be helpful in mounting a challenge. Explaining algorithmic models, inputs, and weightings is technically challenging,[14] and most individuals are ill-equipped to review how computerized decisions were made.[15]

Finally, requiring data controllers to explain their algorithms may not always be advisable. For instance, the process for deciding which tax returns to audit, or whom to pull aside for secondary security screening at the airport, may need to be partly opaque to prevent tax cheats or terrorists from gaming the system. When the decision being regulated is commercial in nature, such as an offer of credit, full transparency may defeat the legitimate protection of consumer data, commercial proprietary information, or trade secrets.[16] Companies that develop algorithms may also consider them intellectual property: ownership of their algorithms gives them an edge over competitors operating with similar ones.[17]

While a legal right to an explanation exists in the Philippines, it is by no means easy to exercise. The hype around the right may create the belief that no other solution is needed. Indeed, the right to an explanation in the DPA, though beguiling, is rife with technical difficulties. Rights become dangerous when they are unreasonably hard to exercise or ineffective in their results, for they give the illusion that something has been done when things are in fact no better.[18]

[1] Joshua Kroll et al., Accountable Algorithms, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2765268 (last visited August 6, 2018).

[2] Bruno Lepri et al., Fair, Transparent, and Accountable Algorithmic Decision-Making Processes, available at https://www.researchgate.net/publication/319127327_Fair_Transparent_and_Accountable_Algorithmic_Decision-making_Processes_The_Premise_the_Proposed_Solutions_and_the_Open_Challenges (last visited August 6, 2018).

[3] World Wide Web Foundation, A Smart Web for a More Equal Future, available at https://webfoundation.org/docs/2017/07/Algorithms_Report_WF.pdf (last visited August 6, 2018).

[4] Finale Doshi-Velez & Mason Kortz, Accountability of AI under the Law: The Role of Explanation, available at https://arxiv.org/pdf/1711.01134.pdf (last visited August 6, 2018).

[5] Sandra Wachter et al., Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, available at https://arxiv.org/pdf/1711.00399.pdf (last visited August 6, 2018).

[6] Emilee Rader et al., Explanations as Mechanisms for Supporting Algorithmic Transparency, available at http://bierdoctor.com/papers/rader_chi18.pdf (last visited August 6, 2018).

[7] Id.

[8] Supra note 4.

[9] See Bryan Casey, Rethinking Explainable Machines: The GDPR’s “Right to Explanation” Debate and the Rise of Algorithmic Audits in Enterprise, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3143325 (last visited August 6, 2018).

[10] Section 16(c)(4) of the Data Privacy Act.

[11] Lilian Edwards & Michael Veale, Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”, available at http://discovery.ucl.ac.uk/10042153/1/SSRN-id3052831%281%29.pdf (last visited August 6, 2018).

[12] Id.

[13] Id.

[14] Id.

[15] Supra note 1.

[16] Id.

[17] Paul de Laat, Big Data and Algorithmic Decision-making: Can Transparency Restore Accountability?, available at https://www.researchgate.net/publication/320072822_Big_data_and_algorithmic_decision-making_can_transparency_restore_accountability (last visited September 1, 2018).

[18] Supra note 11.

Disini & Disini Law Office
info@privacy.com.ph