Kell Kittell.

In the late ’70s, 17-year-old me got on a Greyhound bus for boot camp; a bunch of shit happened; and I ended up working on the ARPANET, the experimental network that would become today’s internet. (Note to the Reno Gazette-Journal, which conducted a “Reno’s identity” poll: “Shit happened” is the actual Reno Identity.)

Back then, you could visualize the entire network in your mind. You could even print it on a piece of paper—and we did! Then, two pieces taped together. Then four. And that was it. It grew so big, so fast, that paper couldn't hold it anymore. I saw firsthand how unstoppably a new technology can grow.

Recently, AI systems started doing things their creators didn’t anticipate, and now, the people who built these tools—and the rest of us—are in uncharted territory. What was science fiction is now real.

In a lot of ways, it feels like the early ’80s, when we unleashed the internet despite the gloomy terror surrounding its birth: the fear of unknown consequences, the loss of jobs, security issues, stolen identity, moral issues—all of it legitimate cause for concern.

When ATMs first appeared, the technology was castigated in a lot of the same language: taking away jobs, our security, our identities. And now we’re zapping money through the internet to strangers giving us a ride in their car. It turns out that despite the fears and boycotts, convenience outweighs standing in line for 20 minutes to cash a check for physical money.

Now, AI has appeared, and the arguments against it are the same. The internet, ATMs and AI are all examples of disruptive technologies. And each time, the people who adapted early took control of the technology as it developed.

The truth is we have all been using AI for years: spam filters, autocorrect, social platforms and search engines all use AI. To say you’re not going to use it means you haven’t been paying attention. And if you’re not paying attention, that could become a problem.

I was recently working on a project with a nonprofit, and it was put on hold because AI had been used in its production materials. Staff members had concerns about using AI, and writing a policy became necessary. Nothing was harmed, but the work stopped because no policy was in place. The community waits.

Leaner nonprofits have a lot of reasons to leverage AI. Grant-writing and volunteer coordination are two areas in which I can see AI being quite helpful. The question isn’t about whether an organization is going to use AI; it’s about whether it does it with integrity and purpose.

Set a policy now. Look at your mission statement and stakeholder values. Decide where AI can help with running your operation, and in what areas it is strictly forbidden. You know your nonprofit’s values; write them down.

Institutions that play wait-and-see with technology get left behind. Blockbuster could have bought Netflix for $50 million, but didn’t see the value of movies over the internet. Nonprofits aren’t corporations, but still, the pattern exists.

The good news is you are not too late to catch up. Frameworks already exist. An AI policy is an act of stewardship, not a bureaucratic chore.

I’ve watched a technology go from small enough to map, to too big to comprehend. When shit like this happens, history shows how fast institutions can lose control.

Write your AI policy. Own it. And get back to doing good work!

Kell Kittell is a Reno native and Navy veteran. Besides being a playwright and actor, he spent 30 years as an information designer working with Apple, Roche, Toshiba and other tech industry leaders.
