JournoTECH

Call for Applications: JournoTECH Innovation Forum

JournoTECH is pleased to invite applications for a two-week online Innovation Forum dedicated to designing a practical blueprint for ethical, secure, and trust-centred technology departments. The forum will help civic organisations understand the fundamental requirements for establishing or restructuring a technology department.

This initiative is funded by SPRITE+, a consortium that brings together people involved in research, practice, and policy focused on digital contexts. SPRITE+ comprises the University of Manchester, Imperial College London, Lancaster University, Queen’s University Belfast, and the University of Southampton and is funded by UKRI EPSRC (UK Research and Innovation’s Engineering and Physical Sciences Research Council).

The forum will co-create a modular organisational framework that acts as a guide for civic organisations building or restructuring their technology and innovation departments. The framework will include core components such as governance and accountability, security and risk, privacy and data protection, trust and ethics, digital infrastructure and tools, and innovation and product development.

While many frameworks focus solely on preventing risks, this forum will also explicitly address Resilience: the mechanisms and governance needed when things go wrong. We are seeking professionals to help us design the structures that allow civic institutions, NGOs, and tech startups to respond effectively to system failures while maintaining public trust.

Who We Are Looking For

We are seeking a diverse cohort of experts from Journalism, Academia, Civic Tech, Cybersecurity, Data Law, AI, and Policy Research. Whether you are a technologist building tools, a researcher studying digital ethics, or a policymaker shaping governance, your perspective is vital to creating a truly interdisciplinary framework.

The Commitment

To ensure the success of this co-creation process, participants must commit to a four-session online programme spread over two weeks.

  • Duration: Two weeks (two sessions per week).
  • Session Commitment: A maximum of 3 hours per session.
  • Total Time: Approximately 12 hours, including live sessions and collaborative group drafting.
  • Requirements: Attendance at all four sessions, active contribution to a modular group design, and participation in the final framework presentation.

Participant Support & Benefits

Selected participants will receive a £150 participation stipend, paid upon successful completion of the programme. This honorarium is intended to support your time, data/connectivity needs, and accessibility costs.

Beyond the stipend, you will have the opportunity to influence an international innovation framework, collaborate with global experts, and receive formal recognition for your contribution to the published TIPS (Trust, Identity, Privacy, and Security) Organisational Design Guide.

How to Apply

Please complete the application form here

  • Application Deadline: May 15, 2026 (11:59pm BST)
  • Notification of Selection: May 20, 2026
  • Start Date: June 2026 (Exact date will be communicated to selected participants)

If you have any urgent questions in the meantime, please contact us at info@journotech.org

Only successful applicants will be contacted.

The Digital Solution to Lagos’s Waste Crisis: Can Blockchain Clean Up the City?

By Matin Animashaun, Oluchukwu Nwabuikwu

AI-generated image for JournoTECH depicting Lagos residents lining up with their household waste.

Lagos, Nigeria’s bustling economic hub and second most populous state, faces a mountain of a problem—literally. The city’s waste management system is struggling to keep pace with its rapidly growing population, which expands by about 1.2 million people per year, as highlighted by the United Nations Environment Programme.

While the state government’s official figures suggest Lagos generates between 13,000 and 14,000 tonnes of waste daily (nearly 5 million tonnes a year), a 2025 analysis by two environmental experts revealed that only one-third of this is collected by the Lagos State Waste Management Authority (LAWMA). This is part of a larger national challenge: Nigeria collects less than 20% of the 32 million tonnes of solid waste it generates annually, significantly below the World Bank’s estimated average collection rate for sub-Saharan Africa of about 44%. Experts also expect countries like Nigeria to generate three times more waste by 2050.

The city’s waste challenge is compounded by two major issues. First, the problem of inaccurate data. The government acknowledged in 2018 that the amount of waste generated far exceeded the official figure of 13,000 tonnes per day, and this lack of clear data makes effective management difficult. Second, the issue of poor waste hierarchy. According to a 2022 research article by Kehinde Allen-Taylor, Lagos skips critical steps in the waste hierarchy like prevention and recycling and proceeds straight to disposal. This systemic inefficiency has led to a massive informal waste economy taking over where official services fail.

Waste hierarchy by the European Commission’s Waste Framework Directive

LAWMA and its private service providers (PSPs) have attempted solutions. In February 2022, the state launched the “Adopt A Bin” programme to enhance waste management. This initiative saw 40,000 standard waste bins (green for general, blue for recyclables) delivered to homes and businesses to promote sorting at the source. However, the bins were too expensive for many, and a major issue persists: PSPs are known to ignore low-income areas because these communities are less profitable, leading to “black spots” of uncollected waste.

The Blockchain Solution

Nigeria’s launch of the Nigerian Circular Economy Program (NCEP) in 2024 has led stakeholders to consider blockchain technology. While often associated with cryptocurrencies, a blockchain’s core function as a distributed, immutable ledger offers a powerful solution to Lagos’s problems. First, it offers accountability. All waste collection activities, including those by LAWMA officials and the 364 PSPs, would be recorded on a decentralised database, which would prevent private service providers from altering their data and hold them accountable for collecting waste in low-income areas. Second, it can provide accurate data. Its real-time tracking, similar to what the Berlin-based environmental technology company Cleanhub uses, could provide precise, auditable statistics on the volume and condition of waste collected. According to one report, Cleanhub had collected over 21 million kilograms of plastic waste as of October 2025. In Lagos, local companies like Motex Africa are already exploring this technology.
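The tamper-evidence property described above can be sketched in a few lines of Python. Each collection record stores a hash of the previous record, so quietly editing any past entry breaks every later link in the chain. This is a minimal, single-node illustration only: the field names and PSP identifier are hypothetical, and a real deployment would replicate the ledger across many independent nodes.

```python
import hashlib
import json


class CollectionLedger:
    """Append-only, hash-chained log of waste-collection records.

    Illustrates tamper evidence only; a real blockchain would also
    replicate this chain across many independent nodes.
    """

    GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

    def __init__(self):
        self.chain = []

    @staticmethod
    def _digest(body):
        # Canonical JSON so an identical record always hashes identically.
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def record(self, psp_id, area, tonnes):
        body = {
            "psp_id": psp_id,
            "area": area,
            "tonnes": tonnes,
            "prev_hash": self.chain[-1]["hash"] if self.chain else self.GENESIS,
        }
        entry = dict(body, hash=self._digest(body))
        self.chain.append(entry)
        return entry

    def verify(self):
        """Recompute every hash; returns False if any record was altered."""
        prev = self.GENESIS
        for entry in self.chain:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev or self._digest(body) != entry["hash"]:
                return False
            prev = entry["hash"]
        return True


ledger = CollectionLedger()
ledger.record("PSP-017", "Makoko", 3.4)
ledger.record("PSP-017", "Ajegunle", 2.1)
print(ledger.verify())            # True: chain is intact
ledger.chain[0]["tonnes"] = 9.9   # a provider inflating its figures
print(ledger.verify())            # False: tampering is detected
```

Because each entry’s hash covers the previous entry’s hash, falsifying one collection figure would require recomputing every subsequent record, which is exactly what a decentralised network of verifiers makes infeasible.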

Challenges to Viability

The success of blockchain is conditional on addressing several major hurdles. Cost and scalability are significant concerns. Integrating this technology is very expensive, which is a major factor as Lagos State struggles to meet its internally generated revenue target for 2025. Furthermore, scaling a blockchain network to a city of over 20 million people can be technologically challenging and slow. Regulatory gaps are also a problem, as there are currently no local laws to compel stakeholders like PSPs to fully comply with blockchain-based reporting. Finally, data privacy conflicts pose a threat. Blockchain’s immutability clashes with Nigeria’s data protection laws, specifically the consumer’s “right to erasure”, as residents may not want their waste data permanently recorded on a publicly accessible database.

Ultimately, strong political will, adequate funding, public education, and regulatory frameworks are key. Most importantly, any system must integrate the informal waste workers who sustain much of Lagos’s recycling economy. If Lagos can address these prerequisites, the transparency and real-time tracking of blockchain could transform the state’s waste management system.

JournoTECH Convenes Legal Professionals to Tackle the Future of AI Governance and Security

London, United Kingdom — Legal professionals from several countries recently came together for a specialised training programme designed to help lawyers better understand the risks and responsibilities that come with using artificial intelligence in legal practice.

The two-day online programme, “Interrogating AI in Legal Practice: Security, Privacy, and Accountability for Legal Professionals,” took place on March 3–4, 2026. It was organised by JournoTECH with funding support from SPRITE+, a consortium of five UK universities led by the University of Manchester.

Thirty-four legal professionals were selected to take part in the training. They came from different backgrounds and levels of experience, including lawyers with more than 20 years in practice as well as mid-career practitioners. The participants included senior advocates, human rights lawyers, and professionals working in both private and government establishments from Nigeria, Ghana, Sierra Leone, and Zambia.

Many participants said the training helped address an important gap in AI knowledge within the legal profession. They explained that most AI training programmes available today are very general and often do not focus on the ethical duties, legal procedures, and confidentiality issues that lawyers deal with every day.

Opening the programme, Elfredah Kevin-Alerechi, founder of JournoTECH, focused on connecting AI theory with real-life legal work. She introduced participants to specialised tools that can support legal workflows while emphasising that simply knowing about the technology is not enough.

Alerechi also spoke about some of the risks lawyers must consider when using AI systems, including bias, misinformation, and inaccurate results. She showed how platforms such as NewsAssist AI, Google NotebookLM, among others, can help legal professionals with tasks like transcription of depositions and analysing legal documents. At the same time, she reminded participants that technology should always be used carefully and with professional judgement.

Offering a wider perspective on responsibility in AI systems, Lizzie Coles-Kemp, Head of Information Security at Royal Holloway, University of London, spoke about what she described as a “sociotechnical” approach to responsibility. She explained that AI systems can sometimes create “responsibility gaps” when people rely too much on automated tools to make decisions.

According to Coles-Kemp, good AI governance is not only about deciding who is to blame when something goes wrong. It is also about building shared values, clear accountability, and transparency within organisations that use the technology.

The training also looked at the technical side of protecting sensitive legal information. Rebecca Bird, founder of BixBe Tech, cautioned lawyers against assuming that paid AI tools are automatically secure. She encouraged participants to treat every AI platform like a “stranger in the room” and to carefully check how their data is handled before sharing confidential information.

Meanwhile, Soribel Feliz, CEO of Personal Algorithms LLC, spoke about the growing problem of “Shadow AI.” This happens when employees use AI tools at work without approval from their organisation. She said law firms need clear policies and governance systems to ensure AI is used safely and responsibly.

Participants described the training as a valuable opportunity to receive practical guidance designed specifically for the legal profession. The sessions went beyond simple demonstrations of AI tools and explored deeper issues such as professional accountability, client confidentiality, and the full lifecycle of data privacy.

As the programme ended, organisers reminded participants that while AI can help automate many time-consuming tasks, the final responsibility for legal decisions always rests with the lawyer.

Participants said the training has equipped them with new strategies and frameworks that will help them guide their firms through the growing digital transformation of legal practice while maintaining strong ethical standards and public trust.

Call for Applications: Interrogating AI in Legal Practice – Global Online Training for Legal Professionals

JournoTECH is pleased to announce that applications are now open to participate in our upcoming 2-day online training for legal professionals globally.

This initiative is funded by SPRITE+, a consortium that brings together people involved in research, practice, and policy focused on digital contexts. SPRITE+ comprises the University of Manchester, Imperial College London, Lancaster University, Queen’s University Belfast, and the University of Southampton and is funded by UKRI EPSRC (UK Research and Innovation’s Engineering and Physical Sciences Research Council).

The training will focus on empowering legal practitioners to critically interrogate AI systems in legal practice, with a strong emphasis on security, privacy, accountability, and real-world legal risks. Participants will explore how AI systems are being deployed in legal environments, with a focus on security risks, data protection, accountability gaps, and governance challenges. The training will combine expert talks, interactive discussions, and practical demonstrations of AI-enabled legal tools that support tasks such as legal research, transcription, analysis, drafting support, and auditability. Participants will also learn how to apply informed decision-making in legal practice.

The training is expected to take place on March 3–4, 2026, and aims to equip legal professionals globally with the knowledge and practical skills to understand, question, and challenge AI systems used in legal and regulatory contexts.

The exact training times will be determined after participant selection to ensure a fair schedule for participants across different time zones.

Selected participants can receive support from a small inclusion fund to cover data or internet access, making the training accessible to everyone.

🧭 Who Should Apply

We welcome applications from:

  • Lawyers, legal practitioners, and legal advisors
  • Legal aid organisations and public interest lawyers
  • Compliance officers and regulatory professionals
  • Policy and governance professionals working with digital systems
  • Practitioners and innovators working at the intersection of law, technology, and digital rights

💻 Event Format: Online (two days)
🗓️ Dates: March 3–4, 2026
🕐 Time: To be confirmed after participant selection to ensure a balanced schedule across global time zones

🚨 How to Apply

👉 Complete the application form here
🕓 Application Deadline: February 26, 2026 (Selections are on a rolling basis)
Only selected participants will be contacted.

For any questions or further information, please contact us at info@journotech.org.

Join us to build a global community of legal professionals committed to strengthening security, privacy, accountability, and trust in AI systems used in legal practice.

Applications Open for Fully Funded Training on Responsible AI for Educators and Researchers: Financial Support Available

JournoTECH is pleased to announce that applications are now open to participate in our upcoming 2-day online training for educators and researchers globally.

Selected participants can receive support from a small inclusion fund to cover data or internet access, making the training accessible to everyone.

This initiative is funded by SPRITE+, a consortium that brings together people involved in research, practice, and policy focused on digital contexts. SPRITE+ comprises the University of Manchester, Imperial College London, Lancaster University, Queen’s University Belfast, and the University of Southampton, and is funded by UKRI EPSRC (UK Research and Innovation’s Engineering and Physical Sciences Research Council).

The training will focus on enhancing and increasing the capacity of educators and researchers to integrate Artificial Intelligence responsibly into their teaching and research practices, with a strong emphasis on ethics, data privacy, and digital trust.

The training is expected to take place on December 1 and 2, 2025, and aims to equip global educators and researchers with the knowledge and practical skills to use AI responsibly, ethically, and securely in academic and research settings. The exact training times will be determined after participant selection to ensure a fair schedule for people in different time zones.

Participants will explore how AI tools can be used responsibly in education and research, with a focus on data privacy, digital trust, and ethical practices. The training will combine expert talks, interactive discussions, and practical demonstrations of AI tools that allow participants to perform both short and large-scale analyses efficiently. Participants will also learn how to use industrial-grade AI tools without any coding, enabling fast, accurate, and responsible decision-making in their teaching and research workflows.

Trainers will include experts building responsible AI tools for researchers and educators, as well as specialists in data privacy, ethics, and digital trust who will share practical insights and real-world applications. The training will also help JournoTECH launch the Responsible AI Toolkit for Educators and Researchers, a practical resource developed to support teaching and research practices globally.

🧭 Who Should Apply
We welcome applications from:

  • Educators, lecturers, and academic staff at all levels (primary, secondary, or higher education)
  • Researchers and research support professionals, including PhD students
  • Practitioners and innovators exploring responsible and ethical use of AI in academic or educational contexts

💻 Event Format: Online (two days)
🗓️ Dates: December 1–2, 2025
🕐 Time: To be confirmed after participant selection to ensure a balanced schedule across global time zones

🚨 How to Apply
👉 Complete the application form here
🕓 Application Deadline: November 24, 2025
Only selected participants will be contacted.

For any questions or further information, please contact us at info@journotech.org.

Join us to build a community of educators and researchers committed to advancing responsible, ethical, and inclusive AI practices in education and research.

Can We Trust AI? Key Insights from JournoTECH’s London Event on Privacy and Security

By Matin Animashaun

“Barely.” That was the frank response from a group of journalists when asked if they trust artificial intelligence in their profession. The exchange set the tone at JournoTECH’s AI 2025 event in London, which brought together journalists, academics, technologists, and civil society advocates to discuss one urgent question: Can we trust AI with our work?

This event is funded by SPRITE+. SPRITE+ brings together people involved in research, practice, and policy with a focus on digital contexts. SPRITE+ is a consortium comprising the University of Manchester, Imperial College London, Lancaster University, Queen’s University Belfast, and the University of Southampton and is funded by UKRI EPSRC (UK Research and Innovation’s Engineering and Physical Sciences Research Council).

Several speakers at the event warned that rapid adoption without caution risks eroding credibility.

Security and trust at the core

For Elfredah Kevin-Alerechi, founder of JournoTECH and organiser of the event, security must come first. She told attendees that journalists can trust AI, but only if they remain alert to risks.

“We understand as journalists that we have to secure our sources and data. Security was one of the main things I considered when building NewsAssist AI.”

The JournoTECH platform, NewsAssist AI, helps professionals transcribe and summarise large reports while keeping privacy and security front of mind.

The “machine trickster”

From Germany, Cade Dhiem, Head of Research at the World Ethical Data Foundation, painted a vivid picture of AI as a “machine trickster”. He compared it to an 18th-century automaton duck that seemed to eat and digest food but was, in reality, an elaborate illusion.

“Its green pellets, when inserted into authorship, can contaminate your work or defecate on the reputation of a masthead,” he warned.

Yet Cade did not dismiss AI outright. Instead, he urged journalists to “imprison the trickster and harness it”, offering rules such as never quoting AI directly, forcing it to reference sources, and using it only to strengthen rigour—not to seek truth.

Some participants at the event / Photo credit: Matin Animashaun for JournoTECH

Privacy and regulation gaps

Rebecca Bird, founder of BixBe Tech, stressed privacy concerns. She noted that Meta admitted in 2024 to training its models on public Facebook posts dating back to 2007, highlighting how little control users often have over their data.

“Confidentiality is sometimes not available on these platforms,” she cautioned, urging organisations to classify data carefully to avoid breaching GDPR and other regulations.

Pravin Prakash during his presentation / Photo credit: Matin Animashaun for JournoTECH

AI as a “false multiplier”

Pravin Prakash, from the Centre for the Study of Organised Hate, described AI as a “false multiplier” that amplifies misinformation within existing institutional weaknesses.

“Yes, it makes the problem worse—but mainly because of how it has been designed to source information,” he said, calling for stronger accountability from both governments and media houses.

A call for responsible use

Despite their differing perspectives, speakers circled back to a common theme: AI should not be rejected but used responsibly. Irresponsible use could worsen misinformation, damage public trust, and weaken democratic institutions.

As the discussion closed, one message stood out: AI is here to stay, but the responsibility lies with professionals—especially journalists—to use it with integrity, scepticism, and security at the core.

Since OpenAI released ChatGPT in November 2022, many industries have questioned this new technology’s trustworthiness and have hesitated to use it in their day-to-day operations. Journalism is a profession built on the concepts of trust and verifiable information. AI’s tendency to fabricate facts partly explains the industry’s initial hesitancy.

Nevertheless, media organisations are rapidly adopting AI, whether by using generative AI to create headlines or to draft breaking news. According to JournalismAI, 73% of media organisations believe AI provides new opportunities in journalism, and 85% of survey respondents said they used AI to complete tasks and summarise reports.

NewsAssist AI Founder Elfredah Kevin-Alerechi Selected for INSEAD AI Venture Lab Program

By Rosemary Nwaobasi

Elfredah Kevin-Alerechi, founder and developer of NewsAssist AI, has been selected for the INSEAD AI Venture Lab Sprint Cohort, a programme that brings together founders from around the world who are building bold and innovative ideas. NewsAssist AI was chosen from over 1,000 applications, highlighting its growing global impact.

The INSEAD Founder Sprint is a prestigious global accelerator that provides startups with mentorship, access to investors, and strategic partnerships to help scale their ideas. Since its launch in May 2025, NewsAssist AI has been used in 117 countries, reflecting the global demand for tools that streamline workflows while preserving originality.

NewsAssist AI founder and developer Elfredah Kevin-Alerechi.

Over the next eight weeks, Kevin-Alerechi will collaborate with a diverse group of founders and learn from world-class mentors while building smarter solutions with AI at the core.

“This opportunity validates how timely and important our product is in today’s demanding world,” said Kevin-Alerechi. “What started as a solution to reduce the workload for journalists and newsrooms has grown beyond expectations. Today, academics, students, researchers, government agencies, and content creators also rely on NewsAssist AI.”

According to Kevin-Alerechi, acceptance into the INSEAD program represents more than recognition. It is an opportunity to refine the product and enhance its features for greater impact, deepen support for the growing community of both paid and free users, sharpen the fundraising strategy and investor readiness, learn from experienced founders who have raised millions and closed pre-seed rounds, and strengthen the pitch and roadmap for investors aligned with the mission.

NewsAssist AI is designed to revolutionise journalism and media practices through AI-powered tools for news creation, storytelling, and audience engagement. Originally built for journalists, it is now used across multiple sectors, including academics, legal practitioners, researchers, and content creators.

AI in Education: JournoTECH Trains Zimbabwean University Lecturers on Classroom Technology

Lecturers also learnt how to use NewsAssist AI to generate course content, grade and analyse students’ work, and help students learn how to write news stories.

Lecturers from two prominent Zimbabwean universities recently participated in a JournoTECH training programme on AI technology applications, focusing on various AI tools for classroom use, including NewsAssist AI.

The training, led by Elfredah Kevin-Alerechi, founder of NewsAssist AI, equipped lecturers from the Media and Journalism Department at the National University of Science and Technology and the Department of Languages, Media, and Communication Studies at Lupane State University with practical skills.

The programme began with participants sharing their prior experience with AI. Many had not previously used AI in their workflows; those with experience described how they had applied it.

Kevin-Alerechi emphasised the benefits and limitations of AI in education.  “While AI technology is good for the classroom,” she noted, “it has its disadvantages with excessive use. When using AI to do anything, it is very important that we still cross-check what AI has produced before publishing or using it in classrooms.”

She highlighted AI’s role in reducing lecturers’ workloads: “It can be used to create lesson notes, research, check plagiarism, and grade students; however, these tools are there to support you and not to replace lecturers in class activities.” 

The training included practical sessions on various AI tools, including NewsAssist AI, Gemini, Diffit AI, EdCafe AI, and Classpoint. Kevin-Alerechi distinguished NewsAssist AI by its reliance on content input to generate results, making it suitable for both teachers and students.

NewsAssist AI’s features—transcription, editing, summarisation, document analysis, and translation—were showcased as valuable classroom tools.  Lecturers learnt to use NewsAssist AI for transcribing and summarising lectures, analysing student projects, preparing course materials, supporting student research, and training students on real-world AI tools.

Regarding student applications, Kevin-Alerechi advised, “Students can use it to learn how to write reports, especially those just starting in media and struggling to write news stories, opinion pieces, or feature articles. However, when students are using it to learn, they shouldn’t dwell 100% on it but use it to learn and produce their original articles while comparing their work to NewsAssist AI’s output. With this pattern, they will learn fast in addition to what their lecturers have taught them. They can share their work with lecturers and mentors for improvement.”

The training concluded with practical sessions demonstrating NewsAssist AI’s capabilities for lecturers at both universities.