The good computer science (thought experiments during the doctorate)



I make no claim that my thoughts on IT ethical issues are valid in any way, because I am not a trained philosopher or cyber-anthropologist. This modesty is necessary in order to listen to those who are. My questions are therefore debatable and (presumably) flawed, but it is precisely these flaws that I am looking for in order to draw conclusions: where are the critical thinking errors that confirm me in presumably false assumptions? Or do they carry a piece of truth after all? Is the pain caused by a lack of understanding of things as they are a personal struggle in the sense of a free spirit, even a pathological embitterment, or a logical reason to be angry?

A graduate in business informatics (born in 1994) who founded a marketing company and has teaching responsibilities for data-driven marketing and automation is now writing about human-centered approaches as part of his own doctorate – is that ironic?

When I founded my IT marketing consultancy in my early 20s, I was less concerned with the morality of my work and the pathological potential of digital attention mechanisms than with the task at hand: how can I support companies by technically optimizing their corporate content so that they achieve more visibility in search engines? I was always guided by the idea of classifying user needs by search terms and serving them accordingly. Back then, the SEO scene was still a truly amusing crowd of skilled, stranded and roguish individuals and their trial-and-error practices. We’re talking about the years from 2011 onwards, when Ryte was still called OnPage, a newly registered link was cause for celebration, inverse term frequency analyses were considered a miracle weapon, and SEO ranking competitions were fun. Now, at 28, it feels bizarre to say that things seem to have changed. This doesn’t apply to search engine optimization in isolation: when I had just graduated from high school, the Instagram platform was only getting started.
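The “inverse term frequency analyses” of that era were essentially hand-rolled TF-IDF scoring. A minimal sketch of the underlying idea – the toy corpus and the function are mine, invented for illustration, not any tool from back then:

```python
import math
from collections import Counter

def tf_idf(term, doc, corpus):
    """Term frequency in the document, weighted by how rare the term is in the corpus."""
    tf = Counter(doc)[term] / len(doc)
    df = sum(1 for d in corpus if term in d)  # number of documents containing the term
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

# Toy corpus of tokenized pages (invented)
corpus = [
    ["seo", "ranking", "links"],
    ["content", "ranking", "keywords"],
    ["seo", "content", "audit"],
]
print(round(tf_idf("seo", corpus[0], corpus), 4))  # 0.1352
```

A term that appears in every document scores zero; a rare term scores high – which is exactly why the scene treated such weighting as a “miracle weapon” for keyword targeting.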

It is difficult to describe the exponential change in the state of digital attention production in just a few words without boring readers with yet another digitalization-is-important platitude. Especially as it would be naive to try to capture the breadth of the changes in just a few words. Search engine optimization is only one of the many attention channels that visibility optimization now targets. But one thing is certain: the formula has not changed in any marketing discipline: everything for attention. Push marketing, pull marketing, guerrilla marketing, terror marketing. Today’s loudest, attention-grabbing concepts are in demand because they can still be heard amid the constant noise. At the same time, the scope for action is shrinking. Many a provider solution has the character of a black box with a banknote slot. Ever better miracle algorithms promise automatic playout with ideal budget allocation – why hire digital junior managers? Direct answers from the search engines make outdated content strategies for search engine optimization obsolete – why pay copywriters? Bot instead of human. Why develop communication strategies? Virtual instead of real. Why cover real ground at all? Must it be like this? Is polemic appropriate here? Soberly considered, the problem lies in the creation of clarity – a real challenge in the face of constant information overload. Not only are the growing general interest in digital job profiles, digital transformation ideas reaching right down to the toaster, and the lack of qualifications for such professions responsible for an increase in readable mediocrity; unfortunately, they are also responsible for the growing number of truisms that burden the mind. Is all of this true? Shouldn’t we take a closer look? Who else is benefiting here?

Ironically, at the beginning of my doctorate, I decided to focus on the prediction of user behavior in order to grasp what is possible and to find a framework for working towards that maximum on the basis of evidence. The urgency of engaging with prediction as part of my qualification stems from the fact that the developing digital competitive structure will demand that companies maximize their ability to predict (potential) customer needs, ideally in real time. Can a marketing manager do this? Of course not, and certainly not on a far-reaching, real-time basis. I have described these concepts, which require touchpoints to be served along the customer journey – as automated as possible – in my marketing automation tutorials. This can only be done cost-effectively via clusters. Personalization is welcome, but how much? Perhaps research has the answers. Recommendation systems, personalized search, user interfaces, usability. It is amazing how much implicit feedback can be evaluated. If I ignore content on news portals in my feed, is that immediately a signal for the provider? Isn’t that already monitoring in real time?
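The question about ignored feed content can be made concrete in a few lines. A minimal sketch of an implicit-feedback aggregator – the event names, weights and demotion threshold are all invented for illustration, not taken from any real provider:

```python
from collections import defaultdict

# Invented weights: clicks and dwell time count positive, skips and hides negative.
WEIGHTS = {"click": 1.0, "dwell": 0.5, "skip": -0.5, "hide": -2.0}

def update_profile(profile, events):
    """Fold a stream of (topic, event) pairs into per-topic preference scores."""
    for topic, event in events:
        profile[topic] += WEIGHTS.get(event, 0.0)
    return profile

def demoted_topics(profile, threshold=-1.0):
    """Topics the system would quietly show less of - no explicit user choice involved."""
    return sorted(topic for topic, score in profile.items() if score <= threshold)

profile = update_profile(defaultdict(float), [
    ("politics", "skip"), ("politics", "skip"), ("politics", "hide"),
    ("sports", "click"), ("sports", "dwell"),
])
print(demoted_topics(profile))  # ['politics']
```

Even this toy version makes the point: every non-action is already a measured signal, folded into a profile in real time without the user ever being asked.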

My question of “how much is possible?” in technical development is increasingly giving way to the question of “how much should be possible?”. The questions of “may”, “may not” or “must not” are different ones. The question of “should” is an ethical one, isn’t it? How much automation should I be responsible for? At what point do I, as a designer, put my intentions above those of the people who act – or have to act – in my system environment because I have already nudged them to do so through system mechanisms? What data should I have at all? Can algorithms be unfair? Can a single button in the user interface be unfair – because it is a button and therefore nudges?

The amusing thing is that these remarks have so far managed without the abbreviation AI, which is often perceived as diabolical. The question of human-centeredness, ethics and responsibility is currently developing within computer science. The concept of responsible AI comes up again and again: self-learning systems weigh historical data and develop an output – and what if that output appears immoral according to a value system (it shouldn’t actually do that, it’s unfair)? Some of the (inhumane) results cannot be explained, which is why research is currently focusing on making AI explainable – so that it can then be evaluated morally. This is absolutely necessary in matters of discrimination; there are laws for this. It is not a question of a subjective value system, it is simply the law: in Germany, there is a ban on discrimination. This is not a question of good or bad. It becomes more interesting with the implementation of economic BDI agents. Is the benefit maximization calculation itself immoral?
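The “benefit maximization calculation” of such an economic agent can be sketched in a few lines: pure utility maximization, with a fairness rule as an optional filter. All option names and numbers are invented for illustration:

```python
def best_option(options, utility, permitted=lambda o: True):
    """Choose the utility-maximal option among those the constraint permits."""
    feasible = [o for o in options if permitted(o)]
    return max(feasible, key=utility) if feasible else None

# Toy decision: the more profitable option relies on a protected attribute.
options = [
    {"name": "offer_a", "profit": 120, "uses_protected_attribute": True},
    {"name": "offer_b", "profit": 90,  "uses_protected_attribute": False},
]

pure = best_option(options, utility=lambda o: o["profit"])
fair = best_option(options, utility=lambda o: o["profit"],
                   permitted=lambda o: not o["uses_protected_attribute"])
print(pure["name"], fair["name"])  # offer_a offer_b
```

The maximization itself is value-free arithmetic; the morality enters entirely through what the constraint does or does not exclude – which is precisely where the ethical (and legal) discussion belongs.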

Is the ethical discussion enough for change?

Is it morally justifiable to develop recommendation or classification algorithms that exploit human-related attributes? Can’t we do without them and still guarantee the accuracy of the system outputs? Should we increasingly relieve people of thought processes by using prediction algorithms for their tasks and wishes if, from the perspective of brain research, this leads to a significant decline in people’s ability to remember content? Can a high-performance health AI be used ethically by everyone (rhetorically)? Is the incorporation of addictive mechanisms morally unobjectionable because it is the user’s responsibility to use systems with addictive potential? Can ethics seriously bring about change in cyberbullying?

The good and bad of information technology

I am firmly convinced that the question of the good and bad of computer science will not be limited exclusively to the algorithm or a data set. In particular, the ethically justifiable availability for stakeholders must also be discussed honestly, and this requires an exchange at an interdisciplinary (including political) level in which it becomes clear that all disciplines are needed. Ethics alone will not be enough, as we will probably see conflicting value systems in which the imperatives of reason give way to vested interests. The preceding remarks may sound somewhat destructive, so I would like to emphasize that I am of course not an opponent of digitalization. I believe that added value can be generated for everyone in the right areas of application. In the near future, it will be important to distinguish between sensible digitalization and unhealthy digitalization – and not just for ethical reasons.

© J. Marvin Jörs