In my experience, the most important way to balance the desire to innovate with the need to act responsibly is to have a safe space in which to experiment. Organizations are often interested in exploring new technologies but are afraid of causing harm or distracting their staff in the process.
The good news is that a little permission can often go a long way. Many organizations, like DataKind or USDR in the United States, offer design sprints, hackathon-like events, or learning cohorts where nonprofits can engage with AI in a low-cost, high-touch way to try to solve their problems. Having an intermediary or social-sector-trained AI specialists make clear what is or isn't possible can have catalytic effects. Will these events deliver a fully working product? Of course not, but they allow organizations to understand, experientially, where this technology could (or shouldn't!) be applied in their own work.
Moreover, the nonprofit remains in control of how the technology is used. When technology is built specifically for nonprofits, the nonprofits themselves get to set the rules for what ethical and responsible outcomes look like.
Lastly, if you don't have access to an organization that runs design sprints or similar activities, fear not: the latest generation of AI tools is more accessible than ever. You may be able to host your own half-day sprint with your staff to begin testing how these tools work.