Shoshana Zuboff, a leading academic in the field, coined the term ‘surveillance capitalism’ to describe the way in which human experience is transformed into behavioral data, and that data into capital. She argues that while some of this data is clearly important for service improvement, much of it is treated as a proprietary behavioral surplus, owned by corporations and used to improve machine intelligence rather than the specific project it was originally intended for.



Ethics

Ethics is a particular branch of moral philosophy which concerns itself with what is right and wrong and how we should act. It comprises the rules that define our moral conduct. Normative ethics is a class of perspectives that help us consider how we should behave, rather than how we do behave. Within this type there are three distinct perspectives: deontological, teleological and virtue ethics.

Deontological ethics: All about rules, duty and obligations. We believe that there can be universal ethical rules that we all agree on.

Teleological ethics: Focus on the actions, outcomes and consequences of our actions. We should do the thing that results in the most good or happiness for the greatest number of people.

Virtue ethics: A person’s character, their virtue and their moral reasoning.

There has been tension between rule-based and consequence-based ethics. The moral problems that arise in the context of machine intelligence are tricky because they occur in the social world; they aren’t like traditional thought experiments, because there are so many variables and perspectives involved.

Data is the core enabler when it comes to surfacing, accessing, and understanding previously invisible human behaviors. This understanding has become a competitive market, stimulating creative data analytics, surfacing core insights, and allowing the development of intelligent products and services. While the concept of an algorithm is familiar to some, for many in society algorithms still raise concern and unease about their nature and application. Algorithms are emerging as a powerful means of social control – limiting access to broader information and decisions, reducing our ability to choose freely, signaling certainty in uncertain situations, and pushing us towards actions that we would not otherwise have taken.

Data ethics is a new branch of ethics that studies and evaluates moral problems related to data in order to formulate and support morally good solutions. It looks at the issues around the generation, recording, curation, processing, dissemination, sharing and use of data. It also includes issues around algorithms, programming, hacking and professional codes. Existing ethics frameworks provide guidelines to address these issues:

  1. We should do good
  2. Minimize harm
  3. Respect human autonomy
  4. Be just or fair

However, the question has been raised whether these frameworks alone are enough to guide our society, since they are not regulatory or in any way mandatory. Large corporations may find ways to avoid regulation and to sanitize their practices.



Jury Manipulation

There are two main schools of thought in legal theory: formalism and realism.

Formalism: A court decision should depend on two things and two things only:
1. What the law says, and
2. What you have done

Legal reasoning is, therefore, driven by logic: purely mechanical, and every observer would come to the same result.
Realism: As long as law is administered by humans, human frailty and limitations will play a role.
Knowing your court can be as important as knowing your law, because humans, including judges, are not robots; they don’t work mechanically.

Data science is now increasingly used to predict court decisions, and in the US many tools have been developed to assist lawyers in that task. There are systems that analyse the social media profiles of jurors to predict whether a specific juror responds better to emotional appeals or to appeals to facts, and whether they are likely to be hostile to your client. Lawyers will then try to get as many of ‘their type’ of juror onto the jury, to get a favorable result.



Statistical Fairness

In more and more applications, we seek to automate decisions, or at least to provide some algorithmic support to the human decision makers. We may unintentionally disadvantage certain groups of people as a result of bias in historical data. How can we detect bias in algorithmic decision making? Statistical fairness is often associated with machine learning algorithms: we want to make a prediction based on a dataset (samples) of past decisions, which are described through a number of protected and unprotected attributes.

To consider different definitions of statistical fairness, it’s useful to look at some standard measures of accuracy that are applied to machine learning algorithms. The counts of True Positives, True Negatives, False Positives, and False Negatives are normally used to assess the accuracy of the algorithm. Ideally, we would like the False Positive and False Negative rates to be as low as possible, so that at least the seen examples are mostly classified correctly.
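As a concrete illustration, here is a minimal Python sketch (the labels and predictions are made up) that counts these four quantities from a list of true outcomes and a list of classifier predictions:

```python
def confusion_counts(y_true, y_pred):
    """Count True/False Positives and Negatives for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

# Six past decisions (made up) and the classifier's predictions for them
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(confusion_counts(y_true, y_pred))  # (2, 1, 2, 1)
```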

The most basic approach to statistical fairness is called “fairness through unawareness”, that is, simply ignoring the protected attribute when we are training the algorithm. However, unprotected attributes may act as proxies, because they can be correlated with the protected attributes. Another problem with unawareness is that if you choose not to collect the protected attribute at all, you might not be able to prove your algorithm is unbiased, because you cannot demonstrate that it performs similarly for both subsets.
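A minimal sketch of fairness through unawareness, assuming a hypothetical pandas DataFrame in which a made-up ‘gender’ column is the protected attribute, might look like this:

```python
import pandas as pd

# Hypothetical applications data; 'gender' is the protected attribute
df = pd.DataFrame({
    "gender":   ["f", "m", "f", "m"],
    "postcode": ["A1", "B2", "A1", "C3"],   # may act as a proxy for the protected attribute
    "income":   [30, 45, 28, 50],
    "approved": [0, 1, 0, 1],
})

# "Unawareness": train only on the unprotected attributes
X = df.drop(columns=["gender", "approved"])
y = df["approved"]
# ...fit any classifier on X and y; note that 'postcode' can still leak the protected information
```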

This is, in fact, what people try to address with another metric called Demographic Parity, which suggests that the probability of every outcome should be the same, or in practice at least roughly similar, for each group: the protected and the unprotected ones.

Demographic Parity = ( True Positive + False Positive ) / Group Size
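Following the formula above, a small Python sketch (the group labels and predictions are invented for illustration) can compute the positive-prediction rate for each group so the groups can be compared:

```python
def demographic_parity(y_pred, groups):
    """Positive-prediction rate (TP + FP) / group size, for each group."""
    rates = {}
    for g in set(groups):
        members = [i for i, grp in enumerate(groups) if grp == g]
        positives = sum(y_pred[i] for i in members)   # predicted positive, correctly or not
        rates[g] = positives / len(members)
    return rates

y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity(y_pred, groups))  # group 'a': 0.75, group 'b': 0.25 -> far from parity
```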

A more advanced notion is called Equalized Odds. It requires that the probability of receiving a certain outcome (according to the classifier) is independent of the protected attribute, given the observed outcome in the data. For each group we therefore compute:

A = True Positive / (True Positive + False Negative)   (the true positive rate)
B = True Negative / (True Negative + False Positive)   (the true negative rate)
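As a rough sketch (again with an assumed data layout: binary labels, binary predictions, and one group label per sample), these per-group rates could be computed as follows:

```python
def equalized_odds_rates(y_true, y_pred, groups):
    """Per group: A = P(predicted positive | actually positive),
                  B = P(predicted negative | actually negative)."""
    rates = {}
    for g in set(groups):
        members = [i for i, grp in enumerate(groups) if grp == g]
        actual_pos = [i for i in members if y_true[i] == 1]
        actual_neg = [i for i in members if y_true[i] == 0]
        a = sum(y_pred[i] for i in actual_pos) / len(actual_pos) if actual_pos else float("nan")
        b = sum(1 - y_pred[i] for i in actual_neg) / len(actual_neg) if actual_neg else float("nan")
        rates[g] = (a, b)
    return rates

# Equalized odds asks for these (A, B) pairs to be roughly equal across groups.
```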

By comparing these quantities between protected and unprotected groups, we are trying to eliminate the impact of the protected attribute, looking at the impact of only the remaining variables.

We can also try to assess individual fairness, in other words: “Did similar people get similar outcomes?” This measure requires some similarity measure to be defined between individuals, which may or may not relate to the actual predictive accuracy. Also, this metric in theory requires us to compare all individuals with each other, which can be a complex and onerous computation in systems that use large data sets.
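A minimal sketch of such a check, assuming we already have some domain-specific similarity test (the `similar` function below is a placeholder, not a standard definition), is just the pairwise loop:

```python
def individual_fairness_violations(individuals, outcomes, similar):
    """Return pairs of individuals judged similar whose outcomes differ.

    `similar(x, y)` is a placeholder for whatever domain-specific similarity
    test we choose; the double loop over all pairs is exactly the onerous
    computation mentioned above.
    """
    violations = []
    n = len(individuals)
    for i in range(n):
        for j in range(i + 1, n):
            if similar(individuals[i], individuals[j]) and outcomes[i] != outcomes[j]:
                violations.append((i, j))
    return violations
```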

Another useful concept is called Counterfactual Fairness, which is, roughly speaking, based on the idea of hypothesizing about what would have happened to a certain group if we reassigned all its members to another group. Note that even a classifier that looks unbiased with regard to the protected attribute after performing this comparison may still be biased through proxy attributes.
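A very crude approximation of this idea is an attribute-flip check: re-run the classifier with only the protected attribute changed and see whether the prediction changes. The sketch below (the `predict` callable and the dictionary row layout are assumptions) only illustrates the intuition; proper counterfactual fairness additionally requires a causal model, so that proxy attributes change along with the protected one:

```python
import copy

def attribute_flip_check(predict, rows, protected_key, values=("a", "b")):
    """Crude check: does the prediction change when ONLY the protected attribute
    is flipped?  This is not full counterfactual fairness, which would also
    propagate the change through a causal model so that proxies change too."""
    affected = []
    for row in rows:
        row_a, row_b = copy.deepcopy(row), copy.deepcopy(row)
        row_a[protected_key], row_b[protected_key] = values
        if predict(row_a) != predict(row_b):
            affected.append(row)
    return affected
```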

In fact, there are around 20 such metrics that are commonly used and heavily debated in the area of machine learning. And unfortunately, they are mostly incompatible with each other, except in the most trivial cases.

Algorithm Economy

The platform economy stands for a situation where a small number of data billionaires collect and control huge amounts of citizen data, which is used for behaviour prediction. Platform companies are criticized for furthering their own financial and political interests, or even for exploiting workers and consumers.

As the use of data and AI is becoming ubiquitous, a world seems possible where algorithms could measure and predict the needs and contribution of every citizen. The algorithms controlling such an economy could also ensure environmental impact and existing social inequalities are addressed, and financial crises averted.

When you design an algorithm, you have to quantify how you will assign different people to different categories and how you will treat each of these categories, based on the rewards and punishments you are able to distribute among them. How do we make algorithms fair?

In a market based economy, the main ethical framework that is traditionally applied to such questions is that of consequentialist or utilitarian ethics. According to this framework, the moral value of the decision is judged by its outcome and we should seek to maximize the total benefit produced for society when choosing from different alternatives. Utilitarian ethics also assumes that people make rational choices and that they have free will and act autonomously.

People are biased. This means that the very data we’re using to train some of these AI algorithms might be biased itself. The designers’ choices are also baked into the mathematical models that underpin decisions the algorithms make. Intentionally or unintentionally, these algorithms will be biased, sometimes to deliberately give an advantage to one category of people or products, sometimes because they detect patterns in unintended ways that lead to bias.



Distributive Fairness

Economics and game theory assume individuals have different preferences and are self-interested: in other words, they are only interested in maximizing their own utility, so there may be conflicts of interest. There are therefore challenging ethical issues associated with distributing benefits using an algorithm, i.e. the distribution has to be fair.

Fairness considers the outcome of a mechanism that computes a global decision given everybody’s individual reports of their preferences. In other words, you elicit information about how much people value different outcomes and use a centralized mechanism to select a specific outcome. An example of such a mechanism is an auction. However, people might not know whether they can trust the mechanism that makes the decisions, so they might misreport their preferences to influence the outcome. You need to ensure people have an incentive to report their preferences accurately.
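As a small illustration of a mechanism with this property, here is a sketch of a sealed-bid second-price (Vickrey) auction, in which bidding your true value is a dominant strategy because the price the winner pays does not depend on their own bid (the bidder names and values below are made up):

```python
def second_price_auction(bids):
    """bids: dict mapping bidder name -> reported value.
    The highest bidder wins but pays only the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

print(second_price_auction({"alice": 10, "bob": 7, "carol": 4}))  # ('alice', 7)
```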

Ethical decisions on how to split resources fairly can take a number of different criteria into account.

Social Welfare

One key distributive fairness metric is social welfare, whereby a solution should maximize the sum of the utilities of all agents. It is certainly the most common measure of overall social efficiency, as it promotes a globally good outcome for the whole society. However it does not take variation in individual utilities into account and does not give any guarantees on how many people get a good result or a bad result. In very extreme cases, a small number of very happy individuals could make up for any arbitrarily high number of people who get a very poor outcome.
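A minimal sketch, using made-up utility numbers, shows both the measure and its blind spot:

```python
def social_welfare(utilities):
    """Utilitarian social welfare: the sum of individual utilities."""
    return sum(utilities)

# Two distributions with identical social welfare but very different spreads:
print(social_welfare([100, 100, 0, 0, 0, 0]))    # 200 -> two very happy people, four with nothing
print(social_welfare([34, 34, 33, 33, 33, 33]))  # 200 -> everybody gets a reasonable share
```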

Equity

An alternative criterion one can apply to address this shortcoming is equity, that is, looking at the difference between the outcomes for all individuals and trying to minimize this difference. Of course this metric suffers from the opposite problem: it does not take overall efficiency into account. In the extreme, we could choose not to give anybody anything, bringing the difference down to 0. A compromise is often achieved by looking at the product (multiplication) of utilities rather than the sum: the product tends to favor outcomes where more individual utilities are increased simultaneously.
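A small sketch with invented utilities illustrates both the equity gap and the product-of-utilities compromise (the product of utilities is often called Nash welfare):

```python
import math

def equity_gap(utilities):
    """Difference between the best-off and worst-off individual (smaller is fairer)."""
    return max(utilities) - min(utilities)

def nash_welfare(utilities):
    """Product of utilities: a common compromise between efficiency and equity."""
    return math.prod(utilities)

print(equity_gap([100, 100, 0, 0]), nash_welfare([100, 100, 0, 0]))  # 100 0
print(equity_gap([50, 50, 50, 50]), nash_welfare([50, 50, 50, 50]))  # 0 6250000
```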

Maximin

Another common criterion is maximin, that is, preferring the outcome that maximizes the outcome for the participant who is worst off. This doesn’t have to be the same participant across outcomes; we’re just looking at the person who is worst off in each case. As with equity, maximin does not make any guarantees as to how efficient the outcome is for society as a whole. But it DOES guarantee that it gives those worst off the highest possible outcome, when compared to all available options.
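A minimal sketch of the maximin choice over a couple of hypothetical outcomes:

```python
def maximin_choice(outcomes):
    """Pick the outcome whose worst-off participant is best off.
    `outcomes` maps an outcome name to the list of participant utilities."""
    return max(outcomes, key=lambda name: min(outcomes[name]))

outcomes = {
    "plan_a": [90, 80, 5],    # high total, but one participant is left with almost nothing
    "plan_b": [40, 40, 35],
}
print(maximin_choice(outcomes))  # 'plan_b'
```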

Pareto Efficiency

Pareto efficiency is another criterion that can be applied generically to all comparisons among different outcomes. It requires that we could not make anybody better off without making somebody else worse off by choosing any available alternative: we should improve some people’s standing compared to what we already have, if we can do so without affecting anybody else negatively. Unfortunately, this requires a pairwise comparison of all outcomes, and it is unclear which one to choose if there is more than one Pareto optimal outcome.
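A sketch of the pairwise comparison, again with invented outcomes, shows that several outcomes can be Pareto optimal at once:

```python
def pareto_dominates(a, b):
    """Outcome a dominates b if nobody is worse off and at least one person is better off."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_optimal(outcomes):
    """Pairwise comparison of all outcomes; several can be Pareto optimal at once."""
    return [name for name, u in outcomes.items()
            if not any(pareto_dominates(v, u) for other, v in outcomes.items() if other != name)]

outcomes = {"plan_a": [90, 80, 5], "plan_b": [40, 40, 35], "plan_c": [40, 30, 35]}
print(pareto_optimal(outcomes))  # ['plan_a', 'plan_b']  (plan_c is dominated by plan_b)
```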



My Certificate

For more on Data Ethics, please refer to the wonderful course here https://www.coursera.org/learn/data-ethics-ai-and-responsible-innovation


I am Kesler Zhu, thank you for visiting my website. Check out more course reviews at https://KZHU.ai

Don't forget to sign up for the newsletter, so you don't miss any chance to learn.

Or share what you've learned with friends!