
Ethics or quality of life?

Updated: Dec 20, 2022

Hubert Österle, University of St. Gallen, Switzerland, hubert.oesterle@unisg.ch



For decades, machine intelligence has changed companies and the economy. Now it is affecting our lives in significantly more direct ways and generating both hopes and fears. Ethical initiatives such as Digital Humanism seek to align machine intelligence with the quality of life (happiness and unhappiness) of all humans.

Abundance and fear dominate the discussion.

For highly developed societies at least, technology and capitalism have brought enormous material prosperity and satisfied needs such as food, security, and health, i.e. the needs of self-preservation and preservation of the species.


Fig. 1: Network of Needs


But the affluent society can do more than satisfy basic needs (yellow background in Fig. 1). The needs of selection (light-blue background) come to the fore (Österle, 2020, pp. 68-80) and drive human beings onto a treadmill in which, consciously or unconsciously, everyone is constantly working on their status, whether through clothing, holding office in a club, knowledge, musical skill, youthful fitness, or simply capital. An almost explosive growth in the literature on happiness research and ethics, together with an accompanying supply of lifestyle services such as happiness training, yoga, and wellness, is aimed at helping us gain as many positive feelings as possible from the satisfaction of all our needs and to avoid negative feelings.

At the same time, there is a growing fear of what is to come. Dystopias such as surveillance capitalism (Zuboff, 2019), the totalitarian surveillance state, the loss of humanity and traditional values, or the excessive demands placed on the individual distract from the urgent task of shaping the coming change.


Development requires ethical guidelines.

Phrases such as "for the benefit of humanity" have become a common element of corporate mission statements. But who actually believes in such grandiose statements? What has ethics, especially business ethics, as formulated by Max Weber 100 years ago (Weber, 1915), actually achieved? It is certainly helpful to ask what kind of interests guide ethics.


Companies and business leaders want to satisfy their stakeholders.

At the US Business Roundtable, nearly 200 CEOs of leading US companies signed a "fundamental commitment to all of our stakeholders"[1]. Many media articles have described it as an attempt to sugarcoat the social ills of digitalization with simple declarations of intent. Interestingly, the statement of these business representatives does not even mention the much more concrete international standard ISO 26000 on Corporate Social Responsibility (Schmiedeknecht & Wieland, 2015), adopted in 2010. Digitalization requires many corporate leaders to demonstrate, among other things, responsible handling of personal data. Some management consultancies have responded with data ethics offerings aimed primarily at protecting company ratings.


Investors are looking for returns through sustainability.

Investors seek additional financial performance through investments that meet environmental and social criteria as well as the requirements of good governance (ESG – environment, social, and governance). They want to identify the opportunities and risks of their investments at an early stage based on these criteria and thus increase the profitability of their investments. Rating agencies like MSCI[2] and inrate[3] evaluate listed companies according to ESG criteria for investors. In accordance with the recommendations of the OECD[4], politicians use the weight of the financial markets to achieve sustainable development.


Do-gooders misuse the ethics discussion.

Avoiding the dangers of digitalization and seizing the opportunities for the benefit of human beings is a task for all citizens. Everyone must consider how they use digital services and what they expect from companies and politicians, for example, what personal data they give to Facebook, and where politicians should protect them from abuse. The danger arises when the discussion is dominated by do-gooders, who often argue purely emotionally, usually represent a very narrow partial view, and use vocal debate to compensate for their lack of knowledge and thus influence politics. Typical "enemies" are the greed of shareholders, totalitarian manipulation in China, the taxation of foreign corporations, and the "zombification" of mobile phone users. Do-gooders altruistically stand up for the good of the community but demand sacrifices mostly from others. In many cases, their commitment is a search for recognition for their efforts and a striving for self-esteem, often described as a "meaningful life" or in similar terms.


Politics follows the need for ethical rules.

Politicians need votes or the trust of their constituents. So they pick up on the popular mood and translate it into pithy catchphrases. A good example is the European Union's announcement of the digital future of Europe[5], with populist values such as fairness, competitiveness, openness, democracy, and sustainability. Besides emphasizing fashionable topics such as artificial intelligence, the paper focuses on regulating digitalization, while it hardly presents any concepts on how Europe should keep pace with the USA and China and thus actively contribute to shaping digital services. The focus is on restricting entrepreneurial activity, not on exploiting potentials such as the Internet of Things (5G, sensor and actuator technology). The citizens addressed know these technologies little or not at all, and they have neither the time nor the motivation and background to understand the technologies and their consequences. It is therefore much easier to evoke the bogeymen mentioned above than to arouse enthusiasm for poorly understood technologies.

This is also confirmed by the discussion on using mobile phone users' location data to curb the spread of Covid-19. Such data has long been used, for example, for planning public transport, and is virtually negligible compared to the data voluntarily handed over to Google, Apple, or Facebook. Even classic personal data such as the traffic offenders' register in Flensburg, credit scores, and customer data in the retail sector allow far more dangerous misuse. Ethical values cultivated by do-gooders and attention-grabbing media hamper any serious discussion of how the rapidly growing collections of personal and factual data could help to make human coexistence healthier, less conflict-ridden, and more enjoyable[6], rather than concentrating on tightening criminal law.


Ethics wants quality of life for all.

Ethics is looking for rules that should bring the highest possible quality of life for everyone. If we accept that digitalization cannot be stopped and that it will bring about massive socio-cultural change, we need mechanisms, now more than ever, to guide this change for the benefit of humankind. But do ethics and the interests underlying it provide the tools? Two essential prerequisites are missing: First, ethics does not determine what actually constitutes quality of life. Second, there is a lack of procedures for objectively measuring quality of life.


 

A discipline called Life Engineering should start right there. It should develop a robust quality of life model based on the findings of psychology, neuroscience, consumer research, and other disciplines, and validate this model using the increasingly detailed and automatically collected personal and factual data. The network of needs can be a starting point if each of the needs, like health, is broken down into its components, such as age, pain, weight, strength, and sleep quality, and the causal relationships are statistically recorded.
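
To make this concrete, here is a minimal sketch of what such a decomposition might look like in code. The component names, the daily observations, and the use of simple Pearson correlations are purely illustrative assumptions for this example, not part of the network of needs itself.

# Illustrative sketch only: the need "health" broken into a few measurable
# components, with pairwise correlations estimated from hypothetical daily
# observations. Real causal modeling would require far richer data and methods.
from statistics import correlation  # Python 3.10+

# Hypothetical daily records (e.g. from a smartwatch and a diary):
# (steps, sleep_quality 0-10, pain 0-10)
records = [
    (11200, 8, 1),
    (3400, 5, 4),
    (7800, 7, 2),
    (1500, 4, 5),
    (9600, 8, 1),
    (6200, 6, 3),
]

steps = [r[0] for r in records]
sleep = [r[1] for r in records]
pain = [r[2] for r in records]

print("steps ~ sleep quality:", round(correlation(steps, sleep), 2))
print("steps ~ pain:         ", round(correlation(steps, pain), 2))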


Once the factors of quality of life are better understood, it will be possible to better assess the opportunities and risks of digital services. The sensors of a smartwatch can measure possible influencing factors on health, so that individualized correlations between physical activity and sleep behavior or heart rhythm disturbances can be recognized, and wearers can thus improve their health and well-being with simple measures, as the sketch below illustrates. Such concrete, statistically sound evaluations of digital services currently remain the exception. However, a quality of life model, even in a form as rudimentary as the network of needs outlined above, at least provides a framework for discussing and evaluating technical developments on the basis of arguments, as the example of Instagram shows.
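
As one illustration of how such an individualized pattern could be turned into a simple measure, the following toy sketch compares resting heart rates on active and sedentary days; the threshold, field names, and numbers are invented for the example.

# Toy illustration with invented numbers: compare resting heart rate on
# active vs. sedentary days and derive a simple, individualized hint.
days = [
    {"steps": 11200, "resting_hr": 58},
    {"steps": 2100, "resting_hr": 66},
    {"steps": 9400, "resting_hr": 60},
    {"steps": 1800, "resting_hr": 68},
    {"steps": 7600, "resting_hr": 61},
    {"steps": 2500, "resting_hr": 65},
]

ACTIVE_THRESHOLD = 7000  # invented cut-off for an "active" day

active = [d["resting_hr"] for d in days if d["steps"] >= ACTIVE_THRESHOLD]
sedentary = [d["resting_hr"] for d in days if d["steps"] < ACTIVE_THRESHOLD]

avg_active = sum(active) / len(active)
avg_sedentary = sum(sedentary) / len(sedentary)

if avg_active < avg_sedentary - 2:  # arbitrary margin of 2 bpm
    print("On active days your resting heart rate is "
          f"{avg_sedentary - avg_active:.0f} bpm lower - a short daily walk may help.")
else:
    print("No clear pattern yet - more data needed.")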


Ethics is based on values such as dignity, respect, trust, friendship, responsibility, transparency, and freedom. However, such values are only relevant to people if they meet their needs and thus trigger positive or negative feelings. What does the ethical value of trust mean for needs like security, power, or energy?


It very quickly becomes clear how far away we are from a quality of life model that combines behavior, perceptions, needs, feelings, and knowledge. However, looking at the tasks of ethics, it is hardly justifiable not to at least try what is feasible. Right now, we are leaving this development to the internet giants, which, like Google with its Knowledge Graph, try to better understand and model these connections, while these companies and their management are measured by economic success, not by human quality of life. It is therefore almost inevitable that they will have to steer customers toward the decisions that generate the most revenue.


Never before in the history of humankind have we had such comprehensive, automatically recorded datasets that allow statements about behavior and quality of life. The internet and sensors document our lives ever more completely, as Melanie Swan observed as early as 2012 under the banner of the "quantified self" (Swan, 2012). The instruments of machine learning and modeling in neural networks offer us the chance to recognize quality of life patterns and to make them effective, for the benefit of human beings, in digital assistants of all kinds, from shopping to nutrition. Never before have machines supported people so intensively in all areas of life through digital services. Never before has it been possible to give people such well-founded and well-targeted help and advice, and to guide them in a recognizable but subtle way. The thought of this frightens the pessimists and fills the utopians with joyful expectation.


With the methods of data analytics, health insurance companies evaluate the personal and factual data of their policyholders in order to better calculate individual risks. They adjust individual premiums in line with those risks and ultimately reduce claims costs for the same premium income. For some policyholders this leads to savings, but for those who are disadvantaged in terms of health, and in most cases financially worse off as well, it means higher payments. The redistribution of risk in the sense of solidarity is lost.
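
A toy calculation with invented figures may make this redistribution effect clearer: risk-based pricing leaves the total premium income unchanged but removes the cross-subsidy from healthy to less healthy policyholders.

# Toy example, invented numbers: risk-based premiums vs. a solidarity pool.
expected_claims = {"low_risk": 1200, "medium_risk": 2400, "high_risk": 6000}

# Solidarity pricing: every policyholder pays the average expected claim.
solidarity_premium = sum(expected_claims.values()) / len(expected_claims)

# Risk-based pricing: each group pays (roughly) its own expected claims.
for group, claims in expected_claims.items():
    delta = claims - solidarity_premium
    print(f"{group}: risk-based {claims}, solidarity {solidarity_premium:.0f}, change {delta:+.0f}")

# Total premium income is the same in both schemes; only the redistribution
# between healthy and less healthy policyholders disappears.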


If an insurance company succeeds in better understanding the influences on health and – what is even more difficult – in guiding the insured to health-promoting behavior through digital services, then this machine intelligence helps both the insured and the insurers.


Ethics needs Life Engineering.

Development cannot be stopped, but the direction can be influenced. We need a discipline called Life Engineering that translates the humanities concepts of traditional ethics and philosophy into design-oriented proposals, i.e. that pragmatically shapes technical, economic, and social development.


Only those who drive and lead development can influence it. The aversion to technology that can be felt in many ethical discussions has exactly the opposite effect to what it aims to achieve. It is therefore extremely welcome that scientists from the engineering and technical sciences and from the social sciences and humanities, for example in the DIGHUM initiative of Vienna University of Technology (TU Wien) or the Ethically Aligned Design[7] initiative of the IEEE (Institute of Electrical and Electronics Engineers), are coming together to formulate rules for machine intelligence. Even without an elaborated quality of life model, it is possible to avoid at least some clearly unwanted characteristics of digital services. This calls, among other things, for rules stipulating that people can access and check the data stored about them and approve its use, or that a machine decision must be justified. However, such rules run up against the limits of human cognitive abilities, i.e. the question of whether a layperson can even understand these connections within a reasonable time.


Apart from these obvious rules, which do not have to be derived from scientific studies, it would be helpful if ethics could be based on an operational quality of life model. It is positive that version 2 of the IEEE guidelines on Ethically Aligned Design, unlike the first version, attempts to do just that. It is based on approaches and metrics for well-being. Its recommendations on the different aspects of ethics for machine intelligence ultimately provide a comprehensive agenda for Life Engineering.


In order to ever be able to meet such requirements, a Life Engineering discipline needs the following, in addition to financial resources:

  • Access to the digital personal and factual data

  • Exchange of knowledge about behavior patterns and their effects on quality of life

  • Ability to influence the development of digital services

  • Political incentives for positive developments and prohibitions of negative developments

Life Engineering offers the chance to move ethics from the stage of religion to the stage of science, just as the Enlightenment did in the 18th century. That step brought about a human development that probably only a few people today would want to reverse.





Österle, H. (2020). Life Engineering: Machine Intelligence and Quality of Life. Cham: Springer. https://doi.org/10.1007/978-3-030-31482-8


Schmiedeknecht, M. H. & Wieland, J. (2015). ISO 26000, 7 Grundsätze, 6 Kernthemen. In Corporate Social Responsibility. Verantwortungsvolle Unternehmensführung in Theorie und Praxis. Berlin, Heidelberg: Springer Gabler.


Swan, M. (2012). Sensor Mania! The Internet of Things, Wearable Computing, Objective Metrics, and the Quantified Self 2.0. Journal of Sensor and Actuator Networks, 1(3), 217–253. https://doi.org/10.3390/jsan1030217


Weber, M. (1915). Die Wirtschaftsethik der Weltreligionen. Jazzybee Verlag.


Zuboff, S. (2019). The Age of Surveillance Capitalism. New York: PublicAffairs.

[1] https://www.businessroundtable.org/business-roundtable-redefines-the-purpose-of-a-corporation-to-promote-an-economy-that-serves-all-americans
[2] https://www.msci.com/documents/10199/123a2b2b-1395-4aa2-a121-ea14de6d708a
[3] https://www.inrate.com/index.cfm
[4] https://www.oecd.org/finance/Investment-Governance-Integration-ESG-Factors.pdf
[5] https://ec.europa.eu/commission/presscorner/detail/de/ip_20_273
[6] https://www.lifeengineering.ch/post/social-scoring-the-future-of-economy-and-society
[7] https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html





Read the full article featuring Rainer Alt, Andreas Göldi, Edy Portmann and Sarah Spiekermann here.

