
Building Credibility in the Age of Misinformation with Paul Smith of Amnesty International
Paul Smith
In this episode, Paul Smith, Chief Information Officer of Amnesty International, shares how unstructured data powers global human rights advocacy. He explains how Amnesty verifies and organizes massive volumes of data, such as videos, testimonies and reports, while navigating the ethical and operational challenges of data use in high-stakes environments.
Key Takeaways:
(03:16) Unstructured data is key to documenting human rights abuses.
(06:43) Amnesty International verifies data using reverse image search, metadata and cross-referencing.
(12:34) SharePoint and Preservica help manage, archive and access multilingual data.
(16:35) AI adds speed and scale across Amnesty International’s research and evidence processes.
(20:21) Generative adversarial networks help mitigate AI risks like bias and censorship.
(24:18) Nonprofits need to guide people, from learning about the cause to becoming advocates.
(28:33) The future of advocacy depends on AI, crowdsourcing and protecting privacy.
(30:13) The flood of false information is an important area to watch out for in advocacy efforts.
Thanks for listening to the “Data Masters Podcast.” If you enjoyed this episode, be sure to subscribe so you never miss our latest discussions and insights into the ever-changing world of data.
Paul: [00:00:00] I think the challenges still exist around misinformation and disinformation, and that will, I think, only increase in volume, because it's just as easy to spread nefarious messages or twist the truth as it is for the truth to get out there.
Anthony: Welcome to another episode of Data Masters, where we explore the evolving world of data, technology and leadership. Today we're gonna dive into one of the most complex and critical challenges in the data space: unstructured data. Joining me is Paul Smith, the CIO at Amnesty International, an [00:01:00] organization that operates in over 150 countries, advocating for human rights worldwide.
Paul has spent over two decades leading digital transformation efforts across a variety of industries, from defense and logistics to direct marketing, and of course now the nonprofit sector. His expertise in structuring and analyzing vast amounts of information has been key to driving Amnesty International's data strategy.
In today's discussion, we'll explore why unstructured data is at the core of Amnesty's work and how they analyze documents, videos, images, and reports to build fact-based cases that hold global institutions accountable. We'll also discuss the role of AI and automation in processing massive amounts of information, the challenges of combating misinformation, and how Amnesty uses data to engage supporters and drive advocacy efforts.
This conversation is about more than [00:02:00] technology. It's really about truth and accountability and the power of data to shape global human rights. So let's jump into it. Welcome, Paul.
Paul: Thank you. Great to be here. I appreciate you having me on the show.
Anthony: So I think many listeners would maybe be surprised to have an episode of Data Masters focused on unstructured data that starts with Amnesty International; that isn't what one would expect. So why don't we talk a little bit about Amnesty's mission and why unstructured data fits so uniquely into Amnesty International's mission.
Paul: Yeah, happy to. That's a great question to start us off, I think. So for us, unstructured data in the context of human rights activism refers to any form of data that doesn't have a predefined structure or organization. That means that in our work, it can come from a variety of sources, in a variety of formats and languages, at any time [00:03:00] and with varying degrees of quality and completeness. It can say the same thing but from different perspectives and in different ways, due to different lenses and contexts. And that makes it all the more challenging to analyze and process compared to more structured data, which is more organized, like data in a spreadsheet or a database. So unstructured data for us typically includes things like textual data, which can come in the form of social media posts, blogs, transcripts of interviews and speeches, and other forms of written content. It can come in the form of multimedia data, such as video footage, audio recordings, photos, or other forms of visual or audio content, which can include recordings, for example, of protests, interviews, testimonies, or events of human rights violations. It can come in web content form, such as information found on blogs, forums, and news sites, where people want to discuss or share experiences related to human rights violations, discrimination, or advocacy efforts. And it can come in the form of reports and documents, including human rights reports, government publications, [00:04:00] NGO documents, and court records, which in themselves may not be standardized or uniform in their structure. So its importance to us is vast and varied. Firstly, gathering evidence: unstructured data is often used to collect firsthand accounts of human rights violations, like testimonies and evidence of violence and oppression. In real-time monitoring through social media and other sources, activists can monitor unfolding events, such as protests or conflicts, in real time.
In advocacy and awareness, activists can leverage unstructured data to inform the public and mobilize people for causes, using images to show the impact of human rights violations. And through pattern recognition: by analyzing large amounts of unstructured data, activists may be able to uncover patterns of abuse, systemic discrimination, or other injustice that would otherwise go unnoticed. But dealing with unstructured data comes with some significant challenges for organizations like us. Firstly, we've talked about the challenges in processing and analyzing that data, which can sometimes require sophisticated tools and techniques, and [00:05:00] also in data authenticity. With unstructured data, there is a risk, as you've said, Anthony, of misinformation and manipulated content.
So verifying authenticity of data is critical in what we do.
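The triage Paul describes, cataloguing items of varying kinds, sources, and languages before verification, can be sketched as a minimal schema. This is a hypothetical illustration, not Amnesty International's actual data model; all field names are invented for the example.

```python
from dataclasses import dataclass, field

# A minimal, hypothetical schema for cataloguing unstructured evidence.
# Field names are illustrative, not Amnesty International's real system.
@dataclass
class EvidenceItem:
    item_id: str
    kind: str                  # "text", "video", "audio", "image", "document"
    source: str                # where the item came from (URL, informant, archive)
    language: str = "und"      # ISO 639 code; "und" = undetermined
    collected_at: str = ""     # ISO 8601 timestamp of collection
    tags: list = field(default_factory=list)  # labels added during triage
    verified: bool = False     # flipped only after the verification workflow

def triage(items, kind):
    """Return items of one kind that still await verification."""
    return [i for i in items if i.kind == kind and not i.verified]

items = [
    EvidenceItem("e1", "video", "social media", language="ar", tags=["protest"]),
    EvidenceItem("e2", "text", "interview transcript", language="en", verified=True),
]
pending = triage(items, "video")
```

Even this small amount of structure (kind, source, language, verification status) is what makes later search, cross-referencing, and auditing possible.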
Anthony: That's why you guys are the perfect people to have on the show to talk about unstructured data, because it really is fraught. It's a really important mission, and it's a really difficult mission. And if I were to summarize the business of Amnesty International (business being a term used slightly loosely here), it's really about fact-based advocacy.

You're trying to use truth to advocate for organizations and people on a global basis. And in that context, you talked about two issues. One is scale, but let's talk about the first one first, which is credibility and understanding provenance, where data is coming from.

It feels like a similar problem to the one journalist organizations have. [00:06:00] Not only are you collecting large amounts of information, but you really want to make sure that information is credible, because you're going to use it in this really important advocacy effort. So talk a little bit about how you take a large quantity of data and understand the credibility of that data, the provenance of that data, especially at scale, because it feels like that makes the problem even harder.
Paul: Absolutely. And it's a great challenge to have, in a way, because as a research organization at our core, effective management of information throughout all touch points across the information management lifecycle is key for us in equal measure, whether that's from ingestion, to storage and protection, to search and retrieval, to compliance and preservation of evidence, right the way through that lifecycle. Handling vast amounts of unstructured data like documents, images, and videos can be challenging, but there are a few methods and technologies that can be used to ensure the credibility and reliability of the data. So, for example, in collection and organization, using things like centralized platforms to organize unstructured data, [00:07:00] making it easier to access. In tagging and metadata: by adding metadata that tells us information about the information, so to speak, we can really help search and verify the data. Within the verification of documents, in cross-referencing, we cross-check with other sources such as news reports, eyewitness testimonies, or official records to verify authenticity. Cataloguing sources of information to help us confirm the credibility of the document sources is vital. And using tools like optical character recognition to extract text from scanned documents and images makes it easier to analyze and search for validation against other sources. In the verification of images and videos, it can be helpful to use things like reverse image search: tools like Google's reverse image search or InVID can help us verify the origin of an image by tracing where it's been published or shared online. Geolocation and metadata analysis. Time and location verification. Forensic analysis, using forensic techniques to analyze pixels and so on in [00:08:00] photographs. Digital watermarking. And crowdsourcing and eyewitness testimony.
Using crowdsourcing platforms, we can bring in others to help us collect real-time data and verify the credibility of what we have, maybe at a speed and scale we could otherwise not achieve, through using those platforms for eyewitness verification. And you've touched on it already, really, but through AI and machine learning, we can start to use techniques such as natural language processing to analyze large volumes of text, image recognition algorithms that can analyze images and videos for signs of tampering, and automated fact-checking tools that can cross-reference statements or claims against reliable sources, known sources of high integrity for us, and flag inconsistencies. Then there's verification and collaboration with trusted partners, other partnerships and experts that can help us. That could be forensic analysis, for example, or specialist digital security firms that can help us verify the data, and networks we can use. And finally, in the area of transparency and [00:09:00] documentation.
Clear documentation itself helps us to structure information to verify its credibility, because within that we'll know how the data was collected, how it was verified, how it was processed. Having clear records of sources and methods and the steps we went through to improve credibility can allow others to audit our work and add to its integrity in perpetuity. And in dealing with disinformation, you touched on it: deepfakes and misinformation. As part of the verification process, we have to be vigilant against manipulated content. There are tools that can help in that, but I'll talk a little bit more about that later. And I think also in engaging with experts in this space, because it is a fast-moving area in terms of the technologies available, even to the layman, to you and me, where anyone with a credit card these days can access that stuff.
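One idea behind reverse-image-search tools like the ones Paul mentions is perceptual hashing: two visually similar images hash to nearly identical bit strings even after re-encoding. Below is a toy sketch of an average hash ("aHash") operating on small grayscale pixel grids; real tools resize full images first and use far more robust fingerprints, so treat this purely as an illustration of the matching idea.

```python
def average_hash(pixels):
    """Compute a simple average hash ('aHash') of a grayscale pixel grid.

    Each cell becomes 1 if it is brighter than the grid's mean, else 0.
    Real pipelines resize the image to e.g. 8x8 first; here the grid is
    assumed to be small already.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Count differing bits; a small distance suggests a near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[200, 200, 10, 10],
            [200, 200, 10, 10],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]
# A re-encoded copy: brightness shifted, structure intact.
recompressed = [[180, 190, 30, 20],
                [190, 180, 20, 30],
                [30, 20, 190, 180],
                [20, 30, 180, 190]]

distance = hamming(average_hash(original), average_hash(recompressed))
```

Because the hash captures structure rather than exact pixel values, the recompressed copy matches the original exactly here (distance 0), which is what lets a verifier trace where a given image has already appeared.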
Anthony: about deep fake thing in a moment. But I think the thing you're highlighting here,this idea of providing structure on top of the unstructured data. [00:10:00] So if all you had is a God forbid a shared drive with a whole bunch of videos and images in it, that is arguably So I Quite hard to engage with and use.
But everything you previously described is really just about applying structure to that so that you can then, as you say, link them, search them, match them, et cetera. And this is a point that I've often highlighted, which is this idea that there's no such thing as unstructured data. There's really just data that needs structure to be applied to it.

And something I think you actually highlighted at the beginning: this idea of structure becomes really the important part. Maybe share a little bit about the techniques, technologies, and, if you want to call them that, tricks of the trade for finding and managing that structure, because I suspect that would be very helpful to people as they embark on similar projects when they have large amounts of unstructured data.
Paul: Again, another great question. [00:11:00] All of the steps I mentioned previously can add structure to the data collected as we understand more about it and organize it. Whether that's about its source, its time, its location, named entities, sensitivity, or categorization, it can all help enrich our view of the data. And as I've said, we're a research organization at our core, so by naturally following the research process, we naturally add context and structure to our increased understanding of the data, because we triage it, we interpret it, we cross-check it, to enable our use of it. So, for example, in collection and triage, in classification, in the organization of the evidence, by source, perhaps by locale, perhaps by theme. And in our preliminary analysis and cleansing of the data to sort the wheat from the chaff, in the verification of the analysis, in the corroboration of the analysis and the management of what that tells us. These are all important things: by going through the natural research process, we understand more about what we have, we understand more about where we are comfortable with it and where we need more information around it. And also, [00:12:00] no project is the only project we've ever done, right? We've done some of these things in the past. Some of the systematic abuses that we fight against have been ongoing for decades, perhaps. So placing a point in time in the context of other projects also helps us to understand data, rather than in isolation. But you're right, we need to ensure integrity and referenceability over time. Whether that's for the atrocity of today, or a repeated atrocity tomorrow which forms a pattern over the last 30 or 40 years, or for recall for court proceedings at some point in the future, at the International Criminal Court, for example. So we can use a variety of tools within that.
I touched on some of the technologies earlier for looking at images and reverse searching and some of those sorts of things, and OCR. But for an organization like us, in managing information, knowledge management tools like SharePoint, for example, can help us through the entire information lifecycle and with the processes of moving it from one stage to the next. Tools like Preservica can help us with archiving, so that we can catalog things in history, preserve formats as formats change, and help [00:13:00] with retrievability, searchability, and accessibility, because it's also not all about accessing this information in English. It's multilingual too, and we often need to recall previous work and previous outputs as a citation in future work, where we need to show patterns over time. But in all that, as it's an information challenge, it brings information security into focus. So the confidentiality, integrity, and availability of that information is just as crucial to us in equal measure, to help us in what we do and those we deal with and support. So there are various information and cyber security technologies deployed to help there.
Whether that's through case management, through CRM tools, through data loss prevention. Things like sensitivity and retention labels help us through that lifecycle process as well. And also those tools that can detect and respond to cyber threats, whether that's proactive or reactive. So tools like Microsoft Sentinel and Defender can help in that space. [00:14:00]
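The retrievability and searchability Paul attributes to platforms like SharePoint and Preservica rests, at its simplest, on an inverted index: a map from terms to the documents containing them. The sketch below is a toy version with naive whitespace tokenization and invented document names; production systems add language analysis, stemming, and access control on top.

```python
from collections import defaultdict

def build_index(documents):
    """Build a tiny inverted index: token -> set of document ids.

    Stands in for the search layer platforms like SharePoint provide
    at scale; tokenization here is naive lowercased whitespace splitting.
    """
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, *terms):
    """Return ids of documents containing every query term."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

# Hypothetical document ids and snippets, purely for illustration.
docs = {
    "report-2021": "testimony of protest detention",
    "report-2023": "protest footage verified by metadata",
    "briefing-01": "court records on detention",
}
index = build_index(docs)
hits = search(index, "protest", "detention")
```

Intersecting the per-term sets is what lets a researcher recall prior outputs by topic, which is exactly the "citation in future work" use Paul describes.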
Anthony: So I hadn't considered that, but it's a really good point that the context of the data that you're handling probably doesn't change that much. It's generally the similar types of abuses and conflicts that you're tracking; even if the place changes, the context changes, the dates change, it's similar types of problems.

And so probably similar types of manipulation of that data as well, so it [00:15:00] becomes easier to detect. But it also speaks to, I think, an essential challenge, and opportunity, I suppose, that you have, which is just the sheer scale of the data that you're handling. Maybe it's worth sharing how much we're talking about here, but I suspect it's just a really large amount. And I'm also curious, in that context, because I think there's maybe a false belief that we could just turn this all over to AI. I'm sure you're using a lot of AI techniques, and you've mentioned a handful already, but how do you balance the use of AI in this context with wanting to ensure accuracy and reliability? Even a single fake video could destroy the credibility of the organization.

That's something you absolutely want to avoid. But obviously there's also a real volume problem: just so much data. So maybe talk a little bit about what the scale is here, because I suspect it's impressive, and then [00:16:00] how you balance the use of AI, which can help, with the human factor.
Paul: Again, a great question, and topical too. As you touched on earlier, we operate in over 150 countries and territories globally, and we're all aware these days of the volume of data and information created through digital channels. So you can extrapolate from that the amount of information that is potentially out there, where we need to search for needles in haystacks and separate the wheat from the chaff. It can be quite the challenge. And as I said earlier, we're an information-lifecycle-based organization. Wherever information exists in the process we follow, there's an opportunity to leverage AI and generative AI and other emerging technologies for efficiency and effectiveness, whether that's through collection and verification and so on. We're starting to explore some of those use cases, particularly within the research and evidence management space. And you touched on it earlier, I think, Anthony: AI can definitely add speed and scale on a level that's just not [00:17:00] possible with human resource, to each capability within that process.
Now that's a key factor for today's data-driven world, where some tasks would just be impossible without that, because we're dealing with such large quantities. And noise, I think, is also a key factor, deliberate or otherwise. So equally, AI is the only thing that can respond at the speed and scale of AI. And that's a critical factor in today's world, where things are being generated by AI, or where AI is used to augment or assist other things, maybe for nefarious purposes. So it's a critical factor in responding to incidents, leveraging that technology, which could be emerging events where time and relevance are key. The work to do outstrips that of a resource-constrained non-profit. So through that ability to operate at speed and scale, it can be a great help in, as you said, spotting that odd one out. Which is the stuff that's been seeded deliberately to throw us off track, to misinform or feed disinformation. And it helps us. So, if you have 2,000 videos that you're able to consume and analyze at once, [00:18:00] you'll spot the one that purports to be of a certain location at a certain time that just doesn't look and feel right, because it becomes a probabilistic exercise, as well as the metadata we can extract. But the challenge for organizations like us, which have very public opinions about these sorts of technologies, becomes more one of good, of should, of ethics, and of cost versus benefit. So to expand on that a little bit more: we care about climate, for example. So just because we can use these technologies, should we? Now, clearly they can help to fight for good reasons and human rights, but we have to square that circle. And for organizations like us, opportunities exist, I think, with varying degrees of impact, based on what I've learned directly and through conversation with others, that can be loosely categorized into three areas.
The first is through transformation of the organization's value chain. Those are the opportunities that exist to transform the way our organization is structured and how it delivers its mission, such as the way, for example, we carry out our core purpose of research and evidence management, the way we fundraise, the way we [00:19:00] engender impact and advocacy or education. The second level is through process automation: those opportunities which exist to transform or assist the business process through user journeys, automation, chatbots, search and retrieval, translation even, for example. And the third is through personal augmentation: the opportunities which assist individuals in a role to execute their own role, so that they're so much more effective and can 2x their productivity. They can be things as simple as digital assistants helping them to review, draft, and summarize content, or in accessibility and learning. And finally, there are two perspectives through which these technologies can help an organization like ours.
That's with the lenses of offensive and defensive. And what I mean by that: I term offensive those things which can help us move our strategy forward in the fight for human rights. So enablement, automation, equity, insight and analytics, those things that we care about that we can do better, faster, at a larger scale. And then you've also got the defensive perspective: how we can [00:20:00] protect the organization to ensure its resilience and business continuity. There are a lot of bad guys out there who would seek to disrupt what we do, because global reputations are important and money and lives are at stake. But within all those technologies, feedback loops are key, because you need to know if the result you had was good or bad, and you need to be able to fine-tune to help with that accuracy. So I think the rise of generative adversarial networks can play a key role in challenging the outputs of AI to ensure some of those known risks are mitigated, and those risks are bias, discrimination, underrepresentation, censorship, all the things that we care deeply about as a human rights organization. Because actually, governance at the speed of AI is also crucial in mitigating those.

Anthony: You're pointing out, I think, an interesting challenge. If I sort of play that back to you, what you're saying is: in a resource-constrained organization with a massive set of challenges that need to be handled, and large amounts of data [00:21:00] coming at you, and, not to overstate the case, but every person is carrying a fairly high-fidelity camera and audio recording equipment, the amount of content that's coming your way is immense. There's really no other way to do it than to leverage AI techniques to get a handle on this data. But it also raises, and you alluded to this, but I want to really drill in on it, the other side of that coin. We've sort of alluded to this a few times, but now's our opportunity to talk about it:
misinformation and deepfakes. So all of a sudden we could flip this around and say bad actors can use similar quantities of data to generate things that look very real but in fact are false, and mislead the public, but also potentially even mislead Amnesty International.

How do you think about that challenge, balancing these two things: both the [00:22:00] quantity of data, but also the deepfakes that I'm sure you've got many examples of in your data sets?
Paul: Yeah, it's the emerging challenge, really. As I said earlier, if you've got a credit card and access to the internet, you or I can access these tools today. It only takes a moment for a video to go viral, so that one video can be in 1,000 places, and then you have a real deluge of noise that you need to filter through. For an organization like us, to put it simply, credibility, and therefore accuracy and integrity, takes precedence over all. We have to report factually and objectively. It's more important to say the right things than to say something in haste. Not only is our reputation on the line as a trusted voice, in both current and future work, but so are those relying on us, or those seeking to discredit us, and potentially people's lives too.

So the stakes are high when it comes to these kinds of things for us. And like most things in life, there have to be limits. It's no different in research: when you have enough to establish something, you kind of have to make a judgment call on how much more time [00:23:00] and resource is invested before you're into the realms of diminishing returns. So that triage at the point of identification of evidence is really important. What do you dig into? What don't you need to dig into? And what questions do you ask before you know that's a trustworthy source? And of course, there are times when you need to balance all of that carefully,

and that will need to be assessed on a case-by-case basis around risk, opportunity, and benefit. How sure do you have to be before you're able to say something to respond to an incident, for example? These are the sorts of challenges we have between, I guess, time, cost, and quality.

That perennial triangle still exists in our space, too. The deepfake thing is a worrying thing. I've seen plenty of videos, even in the last week, of people appearing to be at an interview when they're not. And we wouldn't know. I could be talking to you and it could actually be a video; the individual could be off camera, talking through a character. So it's quite scary, isn't it?
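The triage judgment call Paul describes, deciding which items warrant deeper investigation, often starts with cheap consistency checks: does the claimed capture time and place agree with the file's embedded metadata? The sketch below is a toy heuristic with invented dictionary keys; it is emphatically not a deepfake detector, just an example of how inconsistencies get flagged for human review.

```python
from datetime import datetime, timedelta

def flag_inconsistent(claim, metadata, max_drift_hours=24):
    """Flag an item whose claimed capture details disagree with metadata.

    A toy triage heuristic, not a deepfake detector: real verification
    combines forensic pixel analysis, geolocation, and cross-referencing.
    Both inputs are plain dicts with hypothetical keys.
    """
    reasons = []
    if claim["location"] != metadata.get("location"):
        reasons.append("location mismatch")
    claimed = datetime.fromisoformat(claim["timestamp"])
    recorded = datetime.fromisoformat(metadata["timestamp"])
    if abs(claimed - recorded) > timedelta(hours=max_drift_hours):
        reasons.append("timestamp drift")
    return reasons

claim = {"location": "Khartoum", "timestamp": "2023-04-16T10:00:00"}
meta = {"location": "unknown", "timestamp": "2023-04-10T08:30:00"}
reasons = flag_inconsistent(claim, meta)
```

An item with no flags can move through the pipeline quickly; a flagged item gets the expensive forensic treatment, which is how limited researcher time stays focused on the cases that matter.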
Anthony: Yeah, it certainly changes the game and leads to some disturbing challenges and opportunities, and we'll talk about those in a second. Before we get there, and to where the future [00:24:00] is going: it seems odd to talk about Amnesty International as a business, but at some level there is a relationship that you have with your supporters, and the network of people for whom human rights is important and who are big supporters of the mission of Amnesty International.

How are you taking advantage of, or do you not take advantage of, the vast quantities of data and information that you're collecting in the context of engaging your supporters? Has that been helpful? And has it changed your advocacy strategy at all?
Paul: I think that's a really topical question, actually, and most non-profits, not just us, any charity probably, face the same challenge, and there's a lot I could cover in this space. But loosely, any non-profit with a cause has to firstly reach people. We need to make people aware of the cause, and you need to educate them.

And then maybe you'd like them as a supporter. And then maybe you'd like them to advocate for your cause, even if they can't support directly. So there's a journey you need to take people through. And firstly, it's about reach, right? How do we rise above the noise? Why should you care about human [00:25:00] rights over dogs or cats or whatever other cause may strike a chord with you? This sort of data plays quite a significant role in helping us to understand and engage with our audiences, supporters being one of them. And if you leverage it effectively, you can build stronger relationships, inform your advocacy strategies, and better align with the needs, interests, and sentiments of your audience. Not just for today, but for your audience of tomorrow, the next generation coming through, and why you should still be relevant to them. So, in understanding supporters' sentiments and preferences, for example, through social media listening, sentiment analysis, and audience segmentation. In personalization, around engagement and communication, through customized outreach to the demographic you'd like to speak to. In real-time engagement and mobilization, using real-time analysis and engagement in moments of crisis. In building trust through transparency, engaging through content sharing. In feedback loops, analyzing feedback from supporters, whether that's comments or emails or survey responses. And in advocacy strategy and marketing: [00:26:00] identifying the key issues and themes we should care about, amplifying our supporter voices, optimizing our content, enabling influencers and ambassadors, identifying who those people may be and giving them the tools to advocate for our cause. And it's also useful in mobilizing action and advocacy.

In call-to-action optimization, there should always be a call to action in any communication with a supporter. In identifying other potential donors and future volunteers, who else could help to grow our cause. In understanding the best time and the best way to campaign for a cause, and through what channel. And then in bringing all that back and monitoring the impact of all that activity, so you can track engagement and work to build long-term relationships. So there's quite a lot in there, and you have to tune what's most important to you, what you have the capacity to do something with, because just because you have the information doesn't mean it's actionable. So you have to really pick what's important, at what time, and how, but there's a lot of [00:27:00] opportunity there for sure.
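The sentiment analysis and audience segmentation Paul mentions can be sketched at its crudest as lexicon matching: score each supporter message by its positive and negative words, then bucket supporters for outreach. Real social-media listening uses trained NLP models; the word lists and threshold below are illustrative stand-ins.

```python
# A toy lexicon-based sentiment scorer for supporter feedback.
# Real social-media listening uses trained NLP models; these word
# lists are illustrative stand-ins, not a production lexicon.
POSITIVE = {"support", "inspiring", "hope", "thank", "donate"}
NEGATIVE = {"spam", "unsubscribe", "annoyed", "stop"}

def score_message(text):
    """Return a crude sentiment score: positive minus negative hits."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return len(tokens & POSITIVE) - len(tokens & NEGATIVE)

def segment(messages, threshold=1):
    """Split messages into engaged vs. at-risk buckets for outreach."""
    engaged = [m for m in messages if score_message(m) >= threshold]
    at_risk = [m for m in messages if score_message(m) < threshold]
    return engaged, at_risk

messages = [
    "Thank you, this campaign gives me hope!",
    "Please stop emailing me, unsubscribe.",
]
engaged, at_risk = segment(messages)
```

Even this coarse split supports the tailoring Paul describes: engaged supporters get a call to action, while at-risk contacts get lighter-touch communication before they churn.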
Anthony: I think many listeners might have heard the title for this episode, and maybe even the topic, and thought, well, I'm not in the unstructured content business, or I'm not in the advocacy business. But I think what you're pointing out is that every business fundamentally is an advocacy business. Whether you're advocating for buying a product or service, or advocating for a cause, fundamentally you're advocating for something. And the point you're making is that at the core of that is a whole new set of data that we could be leveraging in that marketing effort, in this context, unstructured data. So I think that is the link here between really every business and the business of human rights advocacy.

As we run out of time here, I wish you would cast your eye forward. We've talked a lot about the quantity of data that you're collecting, how [00:28:00] you apply and manage structure on that, and the risks associated with that, around deepfakes and making mistakes. But I'm curious where you see this all going.

Are we moving to a world where everything is a deepfake and everything's an opinion? How do you see this impacting the world of advocacy and investigative journalism? Predict the future for us. Tell us where this is going.
Paul: Well, I'm certainly happy to give a perspective. I think if there's one thing I've learned in the last 12 months is technology changing at a rate, you know, a rate that none of us can keep up with actually. But I think the future of unstructured data and human rights advocacy for me and investigative journalism, I think is going to be shaped by those technological advancements, but also by those evolving ethical and security concerns that I touched on earlier. So I think human rights organizations and journalists must continue to adapt to those new tools. we have to become agile, we have to become learners, as well as tackle the challenges of misinformation, data overload, which we touched on earlier, and privacy. So for us, privacy is equally [00:29:00] important, right, to protect those at risk, whistleblowers, high value supporters, and so on. So I think by developing better data verification practices, harnessing the power of AI where appropriate, Using crowdsourced data and fostering collaboration across digital platforms, we're better equipped to tackle those human rights abuses and build more effective real time advocacy strategies. But it will require ongoing vigilance to ensure that the benefits of unstructured data are realized without compromising safety. Privacy and integrity of the people and causes that they aim to support. So for organizations like us, there are specific opportunities in enhancing data protection, in processes and analysis with AI and machine learning, for improved advocacy, accuracy and efficiency, for better contextual understanding, for crowdsourcing that data collection and digital witnesses, for enhancing verification and deep fake detection. in real time advocacy and response in instant mobilization and personalization campaigns that we talked about earlier. But I think the [00:30:00] challenges still exist around misinformation and disinformation that will become, think, it'll increase in volume. 
Because it's just as easy to spread nefarious messages or twist the truth as it is for the truth to get out there. So I think the flood of false information is an important area to watch out for, as is bias in data. And I think we need to be careful, as you touched on earlier, around overload and information fatigue. Due to the sheer volume alone, it's almost a denial-of-service challenge, isn't it? And then, due to fatigue and desensitization, you get so used to seeing some things that they just don't strike a chord anymore. That's just as relevant for us as an organization as it is for our supporters, because they're hit with this stuff through social media every day. Then there are privacy and ethical concerns: there are ethical dilemmas in data collection, as I'm sure you're aware, as organizations rely more on unstructured data from social media and other platforms, and there are concerns around surveillance and censorship, because authoritarian regimes may increasingly use surveillance technologies to monitor and suppress the digital [00:31:00] activities of activists. Finally, to bring it to a close, I think we've got to be careful in all this that we don't become too over-dependent on tech, and that we think about accessibility.
So there are technological barriers that we still need to get over. We don't all start from the same place when it comes to accessibility in tech, whether that's our awareness, our capability or the processes and tools available to us. And so the digital divide, for me, is still really important.
Equitable access to things like the internet and digital tools still remains unequal at this point in time, particularly in conflict zones and marginalized communities. All of this can create challenges in the data we see and can use, and challenges in getting that data to the right people, for advocacy campaigns and to reach those who are affected the most. So that digital divide, that accessibility piece, that tech dependence, is the bit I think we need to look at.
Anthony: Now, I think that sounds like a great call to action. And I'll also add, as I alluded to before, there are a lot of lessons [00:32:00] here that are very relevant to every business. If anything, I'd say Amnesty International is at the cutting edge, the forefront, of this. It's a very fraught space: obviously, the challenges and mistakes that you make have an outsized impact, but the opportunities may be outsized as well, really making a difference in the world and really helping hundreds of millions of people. But there are a lot of lessons in that that really any business can use, again, for advocating for their mission and their strategy, and also for being aware of the risks and challenges associated with volume, variety and fakeness, and the impact that AI has on that. So there's a lot there that I think many people can take away and use. Paul, thank you so much for sharing your perspective on this, it's really helpful.
Paul: I hope it's of some use to some of your listeners. Not a problem at all. Thanks, Anthony. [00:33:00]