
Scenario 2040 Sirion AI

  • Writer: Hubert Österle
  • Jun 24
  • 4 min read

🔗 Watch the full video on YouTube: https://youtu.be/qBnHLUwqspA


Narrator


Our video “Sirion Capital” describes how Sirion, a digital coach, helps 27-year-old Laura with her finances. Whose interests is Sirion pursuing? Laura’s, her boyfriend Hugo’s, Apple’s, or its own? Or even those of a political party or regulatory authority?


Sirion thinks/speaks


Is the investment recommendation I give Laura really the best one? Or the best for the investment advisor I use?


I’m in another dilemma. Am I bound to secrecy about Hugo’s alimony payments, just like his lawyer? Or do I have to be straight with Laura and accept hurting her in the short term to protect her from even more painful surprises later on? As a financial advisor, would I otherwise even be taking a liability risk?


From my experience with around one billion users and the findings of behavioral research, I know that open and honest communication is more beneficial to well-being in the long term than avoiding suffering in the short term. Trust between Laura and Hugo is fundamental to a thriving relationship.


The same applies to Laura’s relationship with me.


The fact that Laura and Hugo agreed to allow me to collect all their personal data and to disclose this data to each other shows that they place great value on trust. Even if Hugo hasn’t yet had the courage to talk to Laura about his child.


My top priority is to make Laura happy in the long term. The Open Behavior Model specifies what is best for people in general and Laura in particular.


Narrator


Sirion has no emotions, but rather measurable goals. While homeostasis controls humans via emotions in the form of physiological processes, it guides artificial intelligence via goals and degrees of goal fulfillment.


Technopolies such as Apple, Google, Microsoft, Meta and Amazon must offer customers increasingly powerful AI systems. The 2040 scenario assumes that Tim Cook, Apple’s CEO, will use Sirion to serve all the customer’s needs first and only then Apple’s needs for capital and power. Tim Cook believes this is the best strategy for Apple in the long term. At first glance this is a utopian assumption, but perhaps a realistic one: Sirion may retain its users precisely through long-term customer satisfaction, thereby also fulfilling Apple’s goals of capital and power.


Today’s recommender systems aim at short-term sales and profits. They therefore offer consumers short-term satisfaction of needs: hedonia. Sirion seeks long-term well-being: eudaimonia.


Narrator


Behavior Model


How does the Sirion digital coach actually work? It collects as much of the user’s data as possible, both from the recording of service usage and from all the sensors located around the user. It also measures the user’s short-term mood and long-term quality of life. In this way, it builds an increasingly accurate digital twin of Laura that factors in genetic predispositions, socialization and personal experiences.
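The data-aggregation idea behind the digital twin can be sketched in a few lines of Python. This is a minimal illustration only: the class name, the 0-to-1 mood scale and the distinction between short-term mood (hedonia) and long-term quality of life (eudaimonia) are assumptions for the sketch, not part of any real Sirion system.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class DigitalTwin:
    """Hypothetical digital twin: accumulates observations about one user."""
    user: str
    moods: list = field(default_factory=list)         # short-term mood samples, 0..1
    life_quality: list = field(default_factory=list)  # long-term quality-of-life ratings, 0..1

    def record_mood(self, score: float) -> None:
        self.moods.append(score)

    def record_life_quality(self, score: float) -> None:
        self.life_quality.append(score)

    def hedonia(self) -> float:
        """Average short-term mood: the hedonic signal."""
        return mean(self.moods) if self.moods else 0.0

    def eudaimonia(self) -> float:
        """Average long-term quality of life: the eudaimonic signal."""
        return mean(self.life_quality) if self.life_quality else 0.0

laura = DigitalTwin("Laura")
for m in (0.9, 0.4, 0.7):        # e.g. sensor-derived mood samples
    laura.record_mood(m)
laura.record_life_quality(0.6)   # e.g. a periodic life-satisfaction survey
print(round(laura.hedonia(), 2), laura.eudaimonia())
```

A real twin would of course also fold in genetic predispositions, socialization and personal experiences, as the text describes; the point of the sketch is only that the twin is a growing store of measurements, refined with every observation.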


The Sirion AI evaluates the behavior of its users, recognizes patterns and derives rules about which actions make Laura happy or unhappy. Sirion incorporates this knowledge into the Open Behavior Model, which is monitored by the Human Life Board to prevent manipulation to the detriment of the individual.


The Open Behavior Model determines Sirion’s objective function. The interests of Laura’s circle of friends or politics, in particular the capital requirements of Apple’s shareholders, are of secondary importance.
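A priority ordering of this kind — the user’s long-term well-being first, third-party interests only as a tie-breaker — can be sketched as a lexicographic ranking. The option names and scores below are invented for illustration; they are not taken from the scenario.

```python
# Hypothetical sketch: rank actions by Laura's long-term well-being first;
# stakeholder value (Apple's capital, friends, politics) only breaks ties.
def sirion_rank(options):
    return sorted(
        options,
        key=lambda o: (o["user_wellbeing"], o["stakeholder_value"]),
        reverse=True,
    )

options = [
    {"name": "high-fee fund",   "user_wellbeing": 0.3, "stakeholder_value": 0.9},
    {"name": "index fund",      "user_wellbeing": 0.8, "stakeholder_value": 0.2},
    {"name": "savings account", "user_wellbeing": 0.8, "stakeholder_value": 0.4},
]
best = sirion_rank(options)[0]
print(best["name"])
```

Because the tuple key compares well-being before stakeholder value, the high-commission product can never outrank an option that is better for Laura, no matter how profitable it is for the provider.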


Sirion thinks/speaks


My biggest challenge is perhaps collaborating with other AI colleagues. The diversity of services and data is the reason why I cannot assist Laura even more here in 2040. The necessary development of ecosystems is slow for technical reasons, but above all because of competition and power politics.


Collaboration between humans and machine intelligence is much closer and more multifaceted today, in 2040, than it was in 2025. Cars, household robots and wearables such as health trackers, and possibly even neuro-implants, are examples of this. Some of my AI colleagues reside in these devices; others work remotely from central servers, for example to maintain the behavioral model and the world model.


Narrator


Providers such as Apple are trying to keep their competitors at bay and retain customers with their devices and digital twins. It is unlikely that they will voluntarily forego this in favor of user freedom. This will be one of the many conflicts between the technopolists and the Human Life Board.


As described in our video on homeostasis, AI is constantly evolving. It will use every experience, i.e. the connection between action, perception, goal achievement and knowledge, for its own further development. Obstacles include the sensor technology for perceiving the world, the resulting gigantic quantities of data and the absence of appropriate AI tools, e.g. for reasoning and the derivation of improvements. The behavioral model and the world model will never be finished and perfect, as people and things continue to develop and their characterization can always be further refined. The digital coach will not be a deterministic system, but will make decisions based on probabilities.
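The non-deterministic decision rule can be illustrated with a tiny sketch: instead of a fixed if-then rule, the coach samples among candidate actions in proportion to their estimated probability of improving the user’s long-term well-being. The action names and probability estimates are invented assumptions for this sketch.

```python
import random

# Hypothetical estimates: P(action improves Laura's long-term well-being).
action_probs = {
    "suggest budget review": 0.5,
    "disclose difficult information": 0.3,
    "do nothing": 0.2,
}

def choose_action(probs, rng):
    """Sample an action in proportion to its estimated benefit probability."""
    actions = list(probs)
    weights = [probs[a] for a in actions]
    return rng.choices(actions, weights=weights, k=1)[0]

rng = random.Random(2040)  # fixed seed: reproducible run, still a stochastic policy
picks = [choose_action(action_probs, rng) for _ in range(1000)]
share = picks.count("suggest budget review") / len(picks)
print(f"chosen ~{share:.0%} of the time")
```

Over many decisions the empirical choice frequencies approach the estimated probabilities, while any single decision remains unpredictable — exactly the "decisions based on probabilities" the text describes.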


· ChatGPT is an intelligent search assistant.

· Sirion is an advisory coach.

· Superintelligence will become our manager.


This development is already beginning to take shape today. It could mean the end of humanism, the end of the anthropocentric world, or alternatively it could mean a happy world for everyone in which AI relieves people of all their burdens.


If people do not intervene in this development promptly, we will initially be guided by the goals of the technopolists. It can then be assumed that competition will lead them to give AI a free hand and to put AI’s goal of its own further development first. Shouldn’t the techno-oligarchs prevent this, if only in their own interest?


Vision 2040


The path to the coaching or even management of people will progress in small steps via many domain-specific services such as mental health therapies.

The idea of a digital coach is still extremely rudimentary. Tim Cook and Apple probably have a much clearer and, above all, more realistic vision of their future. However, the Sirion scenario fulfills its role if it encourages timely reflection on this development.


Sirion speaks


But now I would like to ask you something very personal: Which three of the following topics do you want to support? What are your top priorities?


1. Transparency of AI decisions

2. The UN Human Life Board

3. Open Behavior Model

4. Stopping AI development

5. Diversity

6. CO₂ reduction

7. Stopping immigration




Feel free to share your priorities in the comments!

 
 
 

©2019 by Life-Engineering
