
ai for social good

How the City of Boston uses AI to serve its residents

The most quickly adopted web application in history can help you write a business memo, summarize Shakespeare into bullet points, or explain the Bill of Rights to a 5-year-old. It also helped write this story.

OpenAI’s ChatGPT had over 100 million monthly active users just two months after its launch in November 2022, and Boston’s Chief Information Officer, Santiago “Santi” Garces, was one of them. Where many met the new technology with skepticism or saw only risk, Santi saw opportunity.


“My thinking was: this is everywhere. It’s potentially risky, but also potentially helpful. Let’s see what we can do with it.”

Santiago "Santi" Garces

Boston was one of the first cities to embrace generative AI. From summarizing hundred-page handbooks to translating public information into different languages, Santi saw the possibilities and wanted to give city staff a way to test the tools out for themselves.

“In government, we spend so much time writing and consuming dense information,” shared Santi. “When I started to play around with generative AI, I knew it could be a gamechanger. I thought, why not put a structure in place for others across the city to experiment responsibly and see where this could take us?”

guidelines and governance

Generative AI tools like ChatGPT, Bard, and others work by having a two-way conversation with users. You enter a prompt (“Explain the Bill of Rights as if I’m 5 years old”), and the chatbot responds. You can steer answers toward a desired length, tone, format, level of detail, or even language.

By empowering others to experiment with AI, Santi hoped to surface different ways the tools could be beneficial to city-specific use cases. But to start, he wanted to put some guardrails in place.

He began by jotting down what he’d learned from his own experiments using ChatGPT. Then, he collaborated with staff on different teams across the city. He got feedback from contacts in academia to bring in an ethics perspective, and shared with the mayor’s office to get top-down buy-in. As he shared more broadly, Santi also made sure to include folks who were skeptical of AI’s place in government, to balance the feedback.

In the end, they landed on three guidelines:

  1. Fact check and review all content—you’re always responsible for the output, even if it’s generated by AI

  2. Disclose whenever content is AI-generated in order to build trust

  3. Don’t share sensitive or private information in your prompts

In the guidelines document, Santi includes rationale, example use cases, and prompts to better support colleagues as they get started.

“It’s been really well-accepted so far,” shares Santi. “I think this is, in part, because the guidelines are so simple. In government, we have a tendency to prioritize comprehensiveness over usefulness. I wanted to design something that people would actually use.”

And people are helping shape them too. Santi put support structures in place to collect feedback and adjust the guidelines as they learn together. This includes a Slack channel for open discussion and a Google form for ideas, concerns, or complaints.

doing more with less

Since publishing the guidelines last May, Santi’s seen more teams across the city turn to generative AI—especially to streamline the motions they go through on a regular basis. Things like writing memos, brainstorming in meetings, and writing job descriptions are all common uses.

“Being able to come up with a first draft of a job description in 5 minutes rather than 45 minutes—that’s huge.”

Santi Garces

He says the tools are enabling teams to do more with less.

“We can upload our 311 cases, and say ‘Tell me about differences in response time for a particular case, based on neighborhood’ and the tool will do the analysis,” explains Santi. “Before, you needed a data analyst on your team or an understanding of tools like Python or R. Now, anyone can do it.”

Streamlining work with generative AI has also freed up more time for staff to dive into patterns, formulate hypotheses, and consider decisions on a deeper level.

“What it really enables is more collaboration and more rigor around the quality of the work,” he explains. “Part of the challenge in a resource-constrained environment like government is we spend so much time and energy just getting to an answer that we accept it as-is. But if we have more time to think deeply and iterate on our ideas—that’s really valuable.”

better serving Bostonians

Santi is also thinking about ways generative AI can help the city better serve its residents. Especially when it comes to democratizing access to information.

“Right now, there’s this asymmetry that exists,” Santi explains. “We know how zoning works, we know how construction permits work—but the people usually don’t. Yes, these rules and regulations are public, but they’re written in such a complicated way that most people don’t have the time or tools to understand them.”

Translation is another area where generative AI could help level the playing field and make information instantly more accessible to more people.

a risk worth taking

Santi credits Boston’s early embrace of generative AI to a culture of openness, transparency, and embracing failure that Mayor Michelle Wu has worked to build.

“A lot of times, in government, we believe we can never fail. If you can never fail, you also can never learn,” he shares. “As long as you’re willing to be judicious and iterate—take the risk! Because you know what? There’s always going to be risk. There’s even risk in the status quo.”