Can We Trust AI? Key Insights from JournoTECH’s London Event on Privacy and Security
By Matin Animashuan
“Barely.” That was the frank response from a group of journalists when asked if they trust artificial intelligence in their profession. The exchange set the tone at JournoTECH’s AI 2025 event in London, which brought together journalists, academics, technologists, and civil society advocates to discuss one urgent question: Can we trust AI with our work?
The event was funded by SPRITE+, a programme that brings together people involved in research, practice, and policy with a focus on digital contexts. SPRITE+ is a consortium comprising the University of Manchester, Imperial College London, Lancaster University, Queen's University Belfast, and the University of Southampton, and is funded by UKRI EPSRC (UK Research and Innovation's Engineering and Physical Sciences Research Council).

Several speakers at the event warned that rapid adoption without caution risks eroding credibility.
Security and trust at the core
For Elfredah Kevin-Alerechi, founder of JournoTECH and organiser of the event, security must come first. She told attendees that journalists can trust AI, but only if they remain alert to risks.
“We understand as journalists that we have to secure our sources and data. Security was one of the main things I considered when building NewsAssist AI.”
The JournoTECH platform, NewsAssist AI, helps professionals transcribe and summarise large reports while keeping privacy and security front of mind.

The “machine trickster”
Joining from Germany, Cade Dhiem, Head of Research at the World Ethical Data Foundation, painted a vivid picture of AI as a "machine trickster". He compared it to an 18th-century automaton duck that appeared to eat and digest food but was, in reality, an elaborate illusion.
“Its green pellets, when inserted into authorship, can contaminate your work or defecate on the reputation of a masthead,” he warned.
Yet Cade did not dismiss AI outright. Instead, he urged journalists to “imprison the trickster and harness it”, offering rules such as never quoting AI directly, forcing it to reference sources, and using it only to strengthen rigour—not to seek truth.

Privacy and regulation gaps
Rebecca Bird, founder of BixBe Tech, stressed privacy concerns. She noted that Meta admitted in 2024 to training its models on public Facebook posts dating back to 2007, highlighting how little control users often have over their data.
“Confidentiality is sometimes not available on these platforms,” she cautioned, urging organisations to classify data carefully to avoid breaching GDPR and other regulations.

AI as a “false multiplier”
Pravin Prakash, from the Centre for the Study of Organised Hate, described AI as a “false multiplier” that amplifies misinformation within existing institutional weaknesses.
“Yes, it makes the problem worse—but mainly because of how it has been designed to source information,” he said, calling for stronger accountability from both governments and media houses.
A call for responsible use
Despite their differing perspectives, speakers circled back to a common theme: AI should not be rejected but used responsibly. Irresponsible use could worsen misinformation, damage public trust, and weaken democratic institutions.
As the discussion closed, one message stood out: AI is here to stay, but the responsibility lies with professionals—especially journalists—to use it with integrity, scepticism, and security at the core.
Since OpenAI released ChatGPT in November 2022, many industries have questioned this new technology’s trustworthiness and have hesitated to use it in their day-to-day operations. Journalism is a profession built on the concepts of trust and verifiable information. AI’s tendency to fabricate facts partly explains the industry’s initial hesitancy.
Nevertheless, media organisations are rapidly adopting AI, whether by using generative AI to write headlines or to draft breaking news. According to JournalismAI, 73% of media organisations believe AI provides new opportunities in journalism, and 85% of survey respondents said they used AI to complete tasks and summarise reports.