JournoTECH


Applications Open for Fully Funded Training on Responsible AI for Educators and Researchers: Financial Support Available

JournoTECH is pleased to announce that applications are now open to participate in our upcoming 2-day online training for educators and researchers globally.

Selected participants can receive support from a small inclusion fund to cover data or internet access, making the training accessible to everyone.

This initiative is funded by SPRITE+, a consortium that brings together people involved in research, practice, and policy focused on digital contexts. SPRITE+ comprises the University of Manchester, Imperial College London, Lancaster University, Queen’s University Belfast, and the University of Southampton, and is funded by UKRI EPSRC (UK Research and Innovation’s Engineering and Physical Sciences Research Council).

The training will focus on increasing the capacity of educators and researchers to integrate artificial intelligence responsibly into their teaching and research practices, with a strong emphasis on ethics, data privacy, and digital trust.

The training is expected to take place on December 1 and 2, 2025, and aims to equip global educators and researchers with the knowledge and practical skills to use AI responsibly, ethically, and securely in academic and research settings. The exact training times will be determined after participant selection to ensure a fair schedule for people in different time zones.

Participants will explore how AI tools can be used responsibly in education and research, with a focus on data privacy, digital trust, and ethical practices. The training will combine expert talks, interactive discussions, and practical demonstrations of AI tools that allow participants to perform both small- and large-scale analyses efficiently. Participants will also learn how to use industrial-grade AI tools without any coding, enabling fast, accurate, and responsible decision-making in their teaching and research workflows.

Trainers will include experts building responsible AI tools for researchers and educators, as well as specialists in data privacy, ethics, and digital trust who will share practical insights and real-world applications. The training will also help JournoTECH launch the Responsible AI Toolkit for Educators and Researchers, a practical resource developed to support teaching and research practices globally.

🧭 Who Should Apply
We welcome applications from:

  • Educators, lecturers, and academic staff at all levels (primary, secondary, or higher education)
  • Researchers and research support professionals, including PhD students
  • Practitioners and innovators exploring responsible and ethical use of AI in academic or educational contexts

💻 Event Format: Online (two days)
🗓️ Dates: December 1–2, 2025
🕐 Time: To be confirmed after participant selection to ensure a balanced schedule across global time zones

🚨 How to Apply
👉 Complete the application form here
🕓 Application Deadline: November 24, 2025
Only selected participants will be contacted.

For any questions or further information, please contact us at info@journotech.org.

Join us to build a community of educators and researchers committed to advancing responsible, ethical, and inclusive AI practices in education and research.

Can We Trust AI? Key Insights from JournoTECH’s London Event on Privacy and Security

By Matin Animashuan

“Barely.” That was the frank response from a group of journalists when asked if they trust artificial intelligence in their profession. The exchange set the tone at JournoTECH’s AI 2025 event in London, which brought together journalists, academics, technologists, and civil society advocates to discuss one urgent question: Can we trust AI with our work?

The event was funded by SPRITE+, which brings together people involved in research, practice, and policy with a focus on digital contexts. SPRITE+ is a consortium comprising the University of Manchester, Imperial College London, Lancaster University, Queen’s University Belfast, and the University of Southampton, and is funded by UKRI EPSRC (UK Research and Innovation’s Engineering and Physical Sciences Research Council).

Several speakers at the event warned that rapid adoption without caution risks eroding credibility.

Security and trust at the core

For Elfredah Kevin-Alerechi, founder of JournoTECH and organiser of the event, security must come first. She told attendees that journalists can trust AI, but only if they remain alert to risks.

“We understand as journalists that we have to secure our sources and data. Security was one of the main things I considered when building NewsAssist AI.”

The JournoTECH platform, NewsAssist AI, helps professionals transcribe and summarise large reports while keeping privacy and security front of mind.

The “machine trickster”

From Germany, Cade Dhiem, Head of Research at the World Ethical Data Foundation, painted a vivid picture of AI as a “machine trickster”. He compared it to an 18th-century automaton duck that seemed to eat and digest food but was, in reality, an elaborate illusion.

“Its green pellets, when inserted into authorship, can contaminate your work or defecate on the reputation of a masthead,” he warned.

Yet Cade did not dismiss AI outright. Instead, he urged journalists to “imprison the trickster and harness it”, offering rules such as never quoting AI directly, forcing it to reference sources, and using it only to strengthen rigour—not to seek truth.

Some participants at the event / Photo credit: Matin Animashuan for JournoTECH

Privacy and regulation gaps

Rebecca Bird, founder of BixBe Tech, stressed privacy concerns. She noted that Meta admitted in 2024 to training its models on public Facebook posts dating back to 2007, highlighting how little control users often have over their data.

“Confidentiality is sometimes not available on these platforms,” she cautioned, urging organisations to classify data carefully to avoid breaching GDPR and other regulations.

Pravin Prakash during his presentation / Photo credit: Matin Animashuan for JournoTECH

AI as a “false multiplier”

Pravin Prakash, from the Centre for the Study of Organised Hate, described AI as a “false multiplier” that amplifies misinformation within existing institutional weaknesses.

“Yes, it makes the problem worse—but mainly because of how it has been designed to source information,” he said, calling for stronger accountability from both governments and media houses.

A call for responsible use

Despite their differing perspectives, speakers circled back to a common theme: AI should not be rejected but used responsibly. Irresponsible use could worsen misinformation, damage public trust, and weaken democratic institutions.

As the discussion closed, one message stood out: AI is here to stay, but the responsibility lies with professionals—especially journalists—to use it with integrity, scepticism, and security at the core.

Since OpenAI released ChatGPT in November 2022, many industries have questioned the technology’s trustworthiness and hesitated to use it in their day-to-day operations. Journalism is a profession built on trust and verifiable information, and AI’s tendency to fabricate facts partly explains the industry’s initial hesitancy.

Nevertheless, media organisations are rapidly adopting AI, whether by using generative AI to create headlines or to draft breaking news. According to JournalismAI, 73% of media organisations believe AI provides new opportunities in journalism, and 85% of survey respondents said they used AI to complete tasks and summarise reports.