Expectations for artificial intelligence are high, and it is set to enhance our quality of life in many ways. Its development, however, requires human involvement to keep it under control and to identify potential risks. Artificial intelligence does not surpass humans in creativity; it still needs human input. Nor can it make moral decisions the way humans can.

Sanoma has defined ethical principles for the use of artificial intelligence to ensure its responsible use and to minimise the risks associated with it. By following these principles, we aim to oversee the safe, appropriate and responsible use of artificial intelligence.

"Artificial intelligence is difficult to predict and, when you work with it, you cannot know how artificial intelligence itself will develop. Therefore, we have in place a set of principles to guide our operations. When using artificial intelligence, fundamental rights must be respected; for example its use must not discriminate against individuals," says Riikka Turunen, Group Director of Privacy and Compliance at Sanoma.

Sanoma has six ethical AI principles: fairness with an aim for positive impact, accountability by humans, explainability, transparency, risk and impact assessment, and oversight.

A broad range of ambassadors from both the media and learning businesses has been involved in developing the ethical principles. An external artificial intelligence expert also supported Sanoma’s work by reviewing the principles, particularly in anticipation of the EU AI Act. Based on the principles, Sanoma companies have also been given guidance on the compliant use of the generative artificial intelligence applications launched last year.

During 2024, AI ethics and compliance assessments will be integrated into the Privacy and Security by Design process. This process will ensure that the use of artificial intelligence complies with Sanoma’s ethical principles and the upcoming EU AI Act. It already makes it possible to take data protection and security into account as part of product development.

Sanoma uses artificial intelligence in many ways. The key is that people set the conditions for how artificial intelligence is used and for the decisions it makes.

"Similar ethical principles apply to the use of artificial intelligence as concern the processing of personal data, for example. Information security is of great importance, and the life cycle of algorithm development must be monitored," Turunen says.

For users of Sanoma’s services, the most visible applications of artificial intelligence are the summaries produced by Sanoma’s newspaper editorial teams and the recommendations made to readers.

In December 2023, Media Finland established an artificial intelligence team that focuses on the use of generative artificial intelligence in Helsingin Sanomat and Ilta-Sanomat.

The Learning business also encourages its units to consider how AI can support students in their learning and teachers in their teaching. The aim is to test, develop and implement new tools, and to make work more efficient. In digital development, for example, artificial intelligence is used to make code creation more efficient.

"Analytics and forecast models have been used in our business for a long time. What is new is that we are optimising artificial intelligence that learns and begins to create new better logic based on development by humans. If artificial intelligence has created the wrong types of routes or outcomes, the outcome needs to be corrected," Turunen says.

Artificial intelligence is always evaluated against its intended use. Those who use artificial intelligence must ensure that its benefits are realised in a controlled manner. Often, this means slowing down.

"Some people would prefer faster development and feel that regulation, such as the coming EU AI Act, is slowing down the work. Used correctly, artificial intelligence is a good and happy thing for society. However, development must remain under human control in order to successfully exploit opportunities and prevent potential negative impacts. Therefore, it is worthwhile spending time working with artificial intelligence," Turunen says.

Sanoma’s Ethical Artificial Intelligence (AI) Principles

  1. Fairness with Aim for Positive Impact: The use of AI in our products aims to reflect the values we operate by, such as Freedom of Speech and Creating a Positive Learning Impact. AI should be used in a fair manner, considering values such as human rights, privacy, and non-discrimination.
  2. Accountability by humans: People are always responsible for the decisions made by AI solutions that we use. Our teams are engaged throughout the entire lifecycle of algorithms: in the planning, development and maintenance of our own AI models and algorithms.
  3. Explainability: We aim to use AI whose reasoning can be understood by the people who are accountable for it, and we ensure that we can sufficiently explain the functionality of such AI systems.
  4. Transparency: We communicate transparently about our use of AI and how it impacts the end users of our products.
  5. Risk and Impact Assessment: We assess the planned and potential impacts of our technology on individuals and society at large. AI assessments are integrated into our product development process in line with privacy and security by design. We implement appropriate measures to ensure the accuracy, robustness, and security of our AI solutions and to mitigate identified risks.
  6. Oversight: We commit to regularly monitoring how we fulfil these principles in our AI operations. As AI is a fast-evolving field, we will evaluate and update these principles periodically to ensure they reflect the lessons learned from our experience.