GLOBAL ALLIANCE NEWS POST

Harnessing AI Responsibly: A Call for Ethical Innovation

The ethical use of AI can be described as an aspiration to ensure that AI technologies are developed and deployed responsibly, reflecting widely shared human values and societal norms. Multiple ethical concerns arise from the use of AI, such as the creation and spread of misleading and often harmful content (e.g., deepfakes), the deepening of biases embedded in the data used for AI training, and data privacy and security issues. Adding a sustainability component to this complex puzzle reveals how risky this technology can be when misused or overused. The environmental impact of AI is alarming: researchers state that training GPT-3 “emitted roughly 500 metric tons of carbon dioxide – the equivalent of driving a car from New York to San Francisco (4,139 km[1]) about 438 times[2]”. In just five years, Google’s carbon emissions have increased by roughly 50%, driven largely by AI energy consumption, and data-center energy demand is projected to roughly double by 2030 because of this technology.

Ethical means Responsible; Responsible means Sustainable

Last year in Venice, the Global Alliance for Public Relations and Communication Management brought to life the Venice Pledge[3] – a shared promise to integrate Responsible AI into the daily practice of public relations and communication. The mission of the Global Alliance for Artificial Intelligence is “to promote research, education, and development in AI, ensuring its ethical and beneficial application for society[4].” Multiple organizations worldwide are designing, shaping, and publishing codes of conduct, policies, and guidelines advising on the ethical use of AI.

“Ethical, Responsible, and Sustainable practices hinge on three questions: what, how, and why. While ‘what’ encompasses principles and codes, the real challenge lies in ‘how’ these commitments are enacted and ‘why’ they matter. Organizations that effectively communicate these aspects foster trust and empower informed employees as ambassadors of sustainable development,” says Daria Krupinska, Founder of Caverna, a boutique PR agency.

A practical framework in use

While numerous principles and guidelines provide valuable recommendations for the ethical use of AI, real change occurs only when these principles are implemented in a practical framework that companies and individuals can apply in their daily work. Simon Lucas et al. note that “such a framework should arguably be grounded in transparency, accountability, and public engagement, ensuring that the development and deployment of GAI (generative AI[5]) technologies maximize benefits while mitigating risks[6].”

A notable example comes from Merck KGaA, a science and technology company operating across healthcare, life science, and electronics, which instituted a set of ethical guidelines for generative AI “designed to align GAI applications, such as ChatGPT, with the company’s Code of Digital Ethics (CoDE), an ethics framework based on 20 action-guiding principles[7]”. Merck is proud of its protocols promoting the responsible use of AI, as well as of its custom-built internal AI solution, MyGPT, designed to meet the operational and ethical standards that support the company and its employees, along with high cybersecurity, data privacy, and governance criteria. Following this successful implementation of AI, Merck is now taking a closer look at the environmental aspects as well.

Similarly, Vivicta, a Nordic IT services and digital solutions provider, requires all employees to renew their commitment to responsible AI practices through an annual eLearning program on Ethical AI. This training supports the company’s unified AI governance model, which ensures that every AI system is developed and deployed responsibly, transparently, and with appropriate human oversight. Fully aligned with the EU AI Act, GDPR, and leading industry standards, Vivicta’s framework strengthens its commitment to ethical, secure, and scalable AI that benefits both customers and society.

Both examples illustrate how organizations can build a consistent culture of responsibility—where governance, skills, transparency, and continuous learning work together. It’s a path forward for the entire communication profession: by understanding the challenges and amplifying best practices, we can help shape how AI is adopted with integrity and societal trust.

The principles-to-practice gap

In communications, we say that AI isn’t about skipping our work; it removes friction so that we, as communication experts, can spend more time on judgment, nuance, and strategy.

AI is neither the enemy nor the ally. It is a tool that can be used to our benefit, but it can also be misused or overused, causing damage to ourselves, society, and the planet we all share. The ethical use of AI is about exactly that: being aware that everything comes at a price and that all our actions have consequences. Let’s keep that in mind and use the Ethics Month celebration as an occasion to deepen our dialogue about how we use AI.

Author: Roksana Obuchowska

Roksana Obuchowska is an experienced communications professional and an active contributor to the public relations community in Poland and Europe. A member of the Polish Public Relations Association (PSPR), she served on its Board from 2021 to 2025 and has been a member of its Supervisory Board since March 2025. Roksana represents PSPR on the European Council of the Global Alliance for Public Relations and Communication Management. In 2025 she was appointed a member of the Board of Arbitrators of the Public Relations Ethics Council – an independent advisory and decision-making body established by the Polish Public Relations Consultancies Association.

A speaker and mentor, she shares her expertise through programs and podcasts and advocates for neurodiversity by leading a global employee network at Merck, the company with which she is currently professionally affiliated.

[1] Author’s note

[2] The Carbon Footprint of AI, Climate Impact Partners

[3] Global Alliance Updates Responsible AI Guiding Principles for the PR and Communication Profession, Global Alliance

[4] About, Global Alliance for Artificial Intelligence (GAFAI)

[5] Author’s note

[6] Developing a framework for addressing ethical challenges in generative AI

[7] Developing a framework for addressing ethical challenges in generative AI
