AI, the Social Sector, and the Future We Choose to Build

I recently spoke on a Health Leads webinar about AI and health equity. More than 350 people attended, representing healthcare, government, academia, philanthropy, and nonprofit organizations. These folks are working every day on some of the hardest problems in society: equitable access to healthcare, housing, food, benefits, and social services. What surprised me most during the conversation wasn’t the questions about AI. It was the level of fear, and the sense that AI was something happening to the social sector, rather than something we could help shape.

What I saw in the chatroom

During the webinar, I kept an eye on the chat, and it was one of the most active chatrooms I’ve ever seen on a public webinar.

What struck me wasn’t the skepticism about AI; skepticism and critique of AI systems are healthy. What surprised me was how much fear and anxiety there was.

People were (rightfully) worried about environmental impact, bias, corporate power, reliability, job loss, surveillance. All valid concerns. All important conversations to have. But the tone of many comments suggested something deeper, like a sense that AI was something happening to us, rather than something we could shape.

I live and work in a tech bubble in San Francisco, and even some folks on my own team have shared similar fears with me. But I didn’t fully appreciate the severity and breadth of those fears until I watched that chat scroll by for an hour.

After the webinar, I prompted Gemini to create a visual summary of the comments, which turned out to be pretty interesting; you can check it out above. Here are a few of the comments that stood out to me.

The fear of AI is not new

One person wrote, “I really wish that we could just stop the AI train. I think it’s incredibly destructive and could literally kill us all.”

It’s a familiar comment. Every major technological shift has produced similar reactions. When railroads were first introduced in the 1800s, doctors warned that the vibration of trains would damage the human body and cause serious medical problems (I actually referenced an 1862 Lancet article about this during the talk). When the internet emerged, many people believed it would destroy social relationships. When smartphones appeared, people worried about addiction and surveillance.

But in each case, the technology became core infrastructure. And society reorganized itself around it.

The same will probably be true for AI. The question has never really been whether a technology will exist. The question is who builds it, who controls it, and who benefits from it.

Fix the system, but people need help today

Another comment during the webinar said, “Instead of needing to rely on AI to navigate the system, we could just make the system easier to navigate and we’re choosing not to.”

I actually agree with that sentiment. Our current systems didn’t become complex by accident; they are the result of policy choices made over time. Affordability is a policy choice. Health equity is a policy choice. Administrative burden is a policy choice. We should absolutely work to make these systems easier to navigate and more equitable.

But that kind of change takes decades. It requires policy reform, funding changes, system redesign, and political will. In the meantime, millions of families are navigating fragmented systems right now. They are filling out the same forms over and over, waiting on hold, riding buses across town for appointments, and trying to piece together help from dozens of different organizations that don’t talk to each other.

Any tool that reduces paperwork, administrative burden, and time spent navigating bureaucracy can make a real difference in people’s lives today. That’s always been my stance, even back when I started One Degree in 2012.

The reliability debate

Another participant raised concerns about reliability, citing studies showing high error rates for some AI systems, and asked whether it was responsible to expose users to unreliable tools.

This is an important concern, but it also reflects a misunderstanding of how these tools should be used. I believe AI should not be making final eligibility decisions or replacing professional judgment, at least not until these systems reach a level of reliability that even humans rarely achieve. But AI can be very useful for moving and organizing data, summarizing documents, translating information, pre-filling forms, and reducing administrative work.

Social service and healthcare systems still have big barriers in the form of paperwork, intake forms, phone calls, follow-ups, and administrative friction. If AI can reduce that friction, it can free up staff time, reduce errors from manual data entry, and make systems easier for people to navigate.

AI is already here, whether we participate or not

Right now, while many of us are debating whether AI is good or bad, the technology is already being adopted at massive scale. People are using AI for legal advice, medical questions, benefits questions, housing questions, and emotional support, whether we think that’s a good idea or not. ChatGPT is already the largest single provider of mental health support in America!

The reality is that AI is not waiting for the social sector to be ready.

My organization, One Degree (1degree.org), builds technology that helps people access food, housing, healthcare, and public benefits. We started using AI not because it was trendy, but because we had very real problems, like figuring out how to interoperate between multiple data silos of community resource information that use different taxonomies.

What we’ve learned is that AI is just one tool in our toolbox.

Most of our platform is still built with traditional deterministic software: databases, rules engines, workflows, APIs. We use generative AI only where it actually solves hard problems. And a big part of our work now is simply learning: how to build with generative AI, how to push its limits, where it breaks, where it works well, and where it shouldn’t be used at all.

It’s important to understand these tools now, not because we want to use AI everywhere, but because we need to know how to harness it and apply it in ways that actually benefit people and move us toward more equitable systems.

AI is an infrastructure moment

If we are serious about equity, then we should care about who is building these tools, who controls the infrastructure, whose values are embedded in the systems, and who benefits from them.

Because if the social sector, public sector, and healthcare sector do not actively participate in building and shaping these tools, they will still be built — just by someone else, with different incentives.

This moment reminds me again of the railroads. The biggest impacts of railroads were not medical, as early critics feared. They were economic and social. Railroads connected some cities and bypassed others. Some towns grew rapidly, while others declined. Railroads raised huge questions about infrastructure investment, corporate power, and equity.

Technology, like rail systems, becomes infrastructure. And AI is in a similar kind of infrastructure moment.

What organizations should do right now

So what does that actually mean for organizations, clinicians, community health workers, and nonprofit leaders right now?

Most organizations do not need to build AI models. But every organization should be thinking carefully about how these tools will change their work, their workflows, and the people they serve. In many ways, AI tools are becoming like spreadsheets or shared documents: tools that not every organization builds, but that every organization eventually uses.

The most important place to start is not with the technology, but with the problems. Instead of asking, “How do we use AI?” a better question is, “Where are we struggling to make an impact? Where are people dropping off? Where are we wasting time? Where is paperwork slowing things down?” AI is pretty useful right now for very specific operational problems.

It is also important to be careful about what decisions we automate. Using AI to summarize notes or help fill out forms is very different from using AI to determine eligibility or generate risk scores. High-stakes decisions should still involve human judgment and accountability.

Another lesson many organizations are learning is that AI problems are often actually data problems. If your data is incomplete, inconsistent, or biased, AI will simply amplify those problems. Investing in better data systems, data governance, and data quality may ultimately be more important than investing in AI tools themselves.

Organizations also need to invest in their people’s understanding of AI. Staff need training not just on how to use AI tools, but on when not to trust them, how to verify outputs, how to protect sensitive information, and how to think critically about where these tools should and should not be used. Many organizations will need internal AI policies and principles, just as they developed policies for email, data security, and cloud software over the past two decades.

And perhaps most importantly, the communities we serve need to be involved in the design and testing of AI tools. Otherwise, we risk building systems that work well for institutions but not for the people who actually need them.

A message for funders: get off the sidelines

For funders, I have one message in this moment: get off the sidelines.

Technology systems always reflect the people who pay to build them. This is a major technology shift that is going to change healthcare, social services, and how people access benefits and support. If funders do not engage, the future of AI in healthcare and social services will be built primarily by large vendors and institutions, optimized for compliance, billing, and efficiency (think EHRs, HMIS, CLR systems).

We don’t just need small grants for experimental AI tools around the margins. We need investment in community-owned digital public infrastructure, data systems, governance, and shared platforms so equity is built into these systems from the beginning, not added later as an afterthought.

Don’t let anxiety turn into inaction

And for everyone else reading this (nonprofit leaders, healthcare providers, social workers, policymakers), here’s my advice for right now: don’t let anxiety turn into inaction. Start learning. Start experimenting. Start asking questions. You don’t need to become an AI engineer, but you do need to understand how these tools work and where they fit into your work.

Every major infrastructure shift reshapes society (railroads, electricity, highways, the internet), and AI is likely another one of those moments. The biggest questions are not technical. They are social, economic, and political. Who benefits? Who gets left out? Who controls the infrastructure? What systems are we reinforcing, and what systems are we changing?

AI is here whether we like it or not. The question is not whether we use it. The question is whether we help shape how it is used.

If we care about equity, we cannot only critique the future. We have to help build it.

One Degree Welcomes Technology Leader Erik Arnold

We’re excited to welcome Erik Arnold to One Degree. His lived experience, deep technical expertise, and long-standing commitment to strengthening the nonprofit sector bring a powerful combination of heart and discipline to our next chapter.

When you speak with Erik, one theme becomes clear: systems should work for the people they are meant to serve. That conviction is not abstract. It is personal.

Erik grew up as the youngest son of a single mother. His family relied on food stamps and the support of community-based organizations in their town. Those early experiences gave him a lasting appreciation for the safety net, not as policy, but as something real that families depend on. He understands what these services mean from the perspective of someone who benefited from them, and that perspective continues to shape how he thinks about technology, impact, and responsibility.

Erik entered the tech world in Seattle in the early 1990s and was, as he puts it, “lucky enough” to be part of a startup founded by Bill Gates. Over time, he became increasingly interested not only in building technology, but in improving how technology companies engage with the social sector.

In his view, traditional corporate social responsibility efforts (small donations, volunteer days, even one-off prototypes) can be well-intentioned but often fail to “move the needle.” In some cases, they even distract nonprofit teams that lack the resources to absorb short-term initiatives. What Erik advocated for instead was lasting engagement: partnerships grounded in ethical, sustainable models that help nonprofits succeed over the long term.

That perspective led him to Microsoft in 2017, where he helped found the Tech for Social Impact team within Microsoft Philanthropies. As CTO for Microsoft Philanthropies and a General Manager of the team, Erik helped grow an initiative supporting hundreds of thousands of nonprofits worldwide. He oversaw a billion-dollar software donation portfolio, led engineering teams dedicated to the sector, and shipped products designed specifically to meet nonprofit needs.

Since early 2025, Erik has worked independently with nonprofits, foundations, and mission-driven companies to help them navigate a shifting funding landscape and use digital technology more strategically.

A formative experience earlier in his nonprofit career sharpened his systems mindset even further. During a visit to PATH’s operations in Kenya, he saw firsthand the complexity of delivering services in resource-constrained environments. In a clinic supporting mothers and children with HIV, a doctor showed him a USAID form used to order medication. A new medication was available, but it wasn’t in the dropdown menu. The doctor asked a simple question: “How do I get it in the dropdown?”

Behind that question was a deeper issue. No one on the ground knew who built the system, how it connected to headquarters, or what would happen if the form were changed. For Erik, that moment crystallized a core insight: too often, technology in the social sector becomes fragmented, built around grant cycles or reporting requirements rather than around the lived experience of the individual receiving services.

“What we should be tracking is the individual and the services they’re getting, and whether they’re getting the right services.”

When systems are designed around anything else, the result can be an ecosystem of disconnected tools that burden frontline staff instead of empowering them.

That systems-thinking approach is central to what Erik is excited to build at One Degree: moving from a collection of semi-connected solutions toward a cohesive, scalable platform, one that is performant, compliant, easier to support, and built for real-world adoption.

Erik understands how to improve estimation and planning, how to create better visibility into workstreams, and how to shift teams from reactive execution to proactive strategy. Just as importantly, he bridges business and technology fluently. With “one foot in business and one foot in tech,” he translates across program leaders, engineers, funders, and partners.

He also brings a clear perspective on nonprofit innovation. In his experience, nonprofits are not resistant to change.

“It’s not the smarts. It’s not the desire. It’s that nonprofits don’t have the resources.”

Sustainable impact requires investing in people, systems, and infrastructure, not simply minimizing overhead.

Erik’s arrival strengthens our ability to build technology that is not only innovative, but sustainable, interoperable, and built to last.

Outside of work, Erik is a lifelong tabletop gamer who started playing Dungeons & Dragons in the early 1980s and even wrote a dice-rolling program on a TRS-80. He enjoys hiking and spending time outdoors in the Pacific Northwest, and he’s an avid cook who finds joy in preparing meals for friends and family.

Welcome, Erik. We’re glad you’re here.

Why California’s DxF and Interoperability Matter

For more than a decade, One Degree has been building technology to help people find and access life-changing services. Along the way, one challenge has remained stubbornly constant: systems don’t talk to each other. As a result, the burden of coordination falls on the people least equipped to carry it—help seekers and frontline staff—who are forced to re-enter the same information, navigate duplicative workflows, and bridge gaps between disconnected tools.

This is why interoperability matters. Not as an abstract technical ideal, but as a practical, human one.

Today, far too much time in social care is spent manually transferring information from one system to another—copying assessments, re-creating referrals, or following up by phone or email to confirm whether help was received. Every handoff introduces friction. Every delay risks someone falling through the cracks. True interoperability has the power to change that by enabling real-time, standards-based data exchange that meets people where they already are.

That belief sits at the heart of our participation in California’s Data Exchange Framework (DxF). Through the DxF Technical Assistance Grant, One Degree has been able to make focused, intentional investments in the infrastructure required to support real-time sharing of social drivers of health (SDOH) assessment and referral data. This kind of funding is critical: it recognizes that interoperability is not “extra,” but foundational public infrastructure that requires time, expertise, and coordination to get right.

As part of our December 2025 progress update, we shared our ongoing investments in API-based, DxF-aligned data exchange and reaffirmed our commitment to implementing nationally recognized standards, including the Gravity SDOH implementation guide. Just as importantly, we finalized a partnership we are genuinely excited about: a collaboration with Elimu Informatics Advisory Services.

Elimu is a rare kind of partner. Their team includes key contributors to the Gravity SDOH standards and deep, real-world experience running production-grade FHIR infrastructure. They understand not only how interoperability should work on paper, but how it actually functions in live systems with real providers, real workflows, and real constraints. Through this partnership, Elimu is supporting architecture design, implementation, and workflow development as we move toward production and training.

Together with our partners—including 211 Ventura—we are aligning on the specific data fields and workflows needed to enable meaningful, real-time exchange. The goal is not technology for technology’s sake, but practical interoperability that frees up time for frontline workers and reduces the administrative burden placed on people seeking help.

We see this work as part of a larger shift. Interoperability is how we move from fragmented, siloed systems to coordinated systems of care. The DxF grant made it possible to invest in this foundation, and our collaboration with Elimu gives us confidence that we’re building it the right way—grounded in standards, informed by practice, and focused on impact.

We’re excited about what comes next, and grateful to be building toward a future where sharing information is the easy part—so people can focus on what matters most: showing up for one another.

One Degree Welcomes Social Change Strategist and Innovator Leslie Kerns to the Board of Directors

We’re excited to welcome Leslie Kerns to One Degree’s Board of Directors. Her commitment to dignity, narrative change, and systems transformation reflects the heart of our mission.

When you speak with Leslie Kerns, one theme rises above all others: the belief that every person deserves the chance not merely to survive, but to thrive in this world. It is a philosophy shaped by her earliest memories, watching her mother, a single parent of six, pull her family out of poverty and into the middle class through determination, resourcefulness, and a deep sense of purpose. “We should do whatever it takes to flourish,” Leslie shared, reflecting on the guiding principle that has shaped every chapter of her life.

“That idea, that we’re meant to thrive, not just get by, has informed every career decision I’ve made.”

This commitment to meaningful impact propelled Leslie from an early career in law into public relations, nonprofit communications, advocacy, and ultimately social change consulting. She led communications, advocacy, and integrated campaigns at firms like M&R and partnered with organizations such as the Public Welfare Foundation, the MacArthur Foundation, the Vera Institute of Justice, and the FrameWorks Institute. With each career move, she grew more focused on whether her work created real, measurable improvement in people’s lives, continuously choosing to “do more,” as she puts it, to ensure her daily work translated into better outcomes for communities.

That intention led her to launch 1235 Strategies, a consultancy named after her childhood street address. Leslie leads the firm while assembling senior-level partners as needed, creating teams tailored to each client’s goals. “Strategy should reflect an organization’s goals and what it takes to achieve them, not the services or people a consulting firm happens to have under one roof,” she explained. 1235 Strategies brings top talent together to deliver communications, branding, narrative change, and advocacy strategies that help purpose-driven organizations accelerate impact.

This same ethos is what drew Leslie to One Degree. In her words, the alignment felt immediate and profound. One Degree’s mission, to make essential resources accessible so that families can build healthy, fulfilling lives, mirrors her own belief that systems should empower people to flourish. She also sees the organization’s approach as both innovative and deeply practical: “Technology helps so many of us in the middle class and above easily meet our daily needs, from finding housing to accessing healthcare to applying for jobs. There’s no reason that the same technology can’t be used to help lower-income families meet theirs.”

Leslie speaks candidly about the broken social safety net and the harmful assumptions embedded within it, such as the idea that lower-income individuals don’t engage with digital tools or won’t follow through if offered online pathways. “These assumptions aren’t intentional,” she said, “but they’re still harmful. They hold us back from designing systems in ways that respect people’s dignity and capabilities.” For her, One Degree is dismantling those assumptions by proving that modern, human-centered technology can and should be accessible to everyone.

On the board, Leslie hopes to contribute her experience as a strategist, communicator, and narrative architect, skills that bridge policy, partnerships, digital engagement, media, and social justice. She has helped organizations shift public sentiment, connect communities to benefits, and use narrative to advance structural change. She sees an opportunity to bring those tools to One Degree in a way that strengthens both the organization’s growth and its impact.

“If One Degree thrives, then more low-income families and communities can thrive.”

Outside of her professional life, Leslie brings a vibrant sense of creativity and joy. She is an avid practitioner of improv, a hobby she calls both playful and grounding. She performs with a studio in Los Angeles and describes the practice as an antidote to stress, a pathway to presence, and a reminder of the importance of play in a world that often demands constant seriousness. 

We are honored to welcome Leslie Kerns to One Degree’s Board of Directors. Her commitment to dignity, her belief in people’s potential to thrive, and her leadership in narrative, strategy, and impact will strengthen our work in powerful ways. As we continue building a more connected, compassionate, and technologically modern safety net, her voice and vision will help guide the path forward.

Welcome, Leslie. We are grateful to build this future with you.

One Degree Takes the Stage at Google.org Demo Day

This week, One Degree had the opportunity to present at Google.org’s Generative AI Accelerator Demo Day, marking a major milestone in our journey to transform how families access the social safety net.

Out of more than 3,000 applicants worldwide, One Degree was selected to join the 2025 Google.org Generative AI Accelerator, a two-year program supporting social impact organizations using AI to tackle some of the world’s most complex challenges. Demo Day represented the culmination of six months of intensive collaboration, experimentation, and product development alongside the Google.org team and an extraordinary cohort of peers.

At Google HQ, our team shared a vision for what digital public infrastructure for the social safety net can look like and unveiled two new AI-powered tools designed to make that vision real.

Reimagining Access to the Social Safety Net

For millions of families, getting help today feels like searching for a needle in a haystack. The social safety net is fragmented across nonprofits, public agencies, and healthcare systems that often lack the digital tools needed to coordinate effectively. Families are left navigating unreliable information, complex eligibility rules, paper forms, and inaccessible waitlists—often at moments of crisis.

At One Degree, we see this not as an inevitability, but as an opportunity to fundamentally rethink how people access help.

At Google.org Demo Day, we showcased two AI tools built through the Accelerator to tackle the biggest friction points in access and enrollment. Our AI Resource Navigator uses natural language processing to understand the way people actually ask for help and connects them to relevant, human-verified resources in seconds. Our AI Intake & Enrollment Tool simplifies paperwork by converting paper and PDF forms into digital experiences and pre-filling information when available, reducing enrollment time and administrative burden for frontline staff.

Together, these tools point toward a future where families can find and enroll in help in minutes, not months, and providers can spend less time on administration and more time supporting people.

Gratitude and What’s Next

This work would not be possible without philanthropy and partnership. We are deeply grateful to Google.org, the Accelerator team, our executive sponsors, and our Google squad members for believing in this vision and pushing us to build better.

Demo Day was not an endpoint, it was a beginning. As we move forward, we remain focused on building AI-powered infrastructure that helps families access the right resources faster and supports frontline providers in doing their best work.

Stay tuned as we continue shaping the future of social care together.

Discover the Impact We Made Together in 2024–2025, a Year of Courage, Collaboration, and Innovation

We are proud to share with you One Degree’s 2024–2025 Annual Impact Report, a reflection of the progress we made together in strengthening the social safety net during a year of immense challenges. 

This year, we saw firsthand the cracks in our system as families faced wildfires, immigration threats, and disruptions to public benefits. Yet we also saw hope. With partners ranging from LA County DHS and ACES-LA to new collaborators like Interface Children & Family Services, we demonstrated how trusted community networks and next-generation technology can work together to keep families from falling through the cracks.

We also took bold steps into the future by exploring how generative AI can transform access to essential resources and benefits. Our work this year laid the foundation for AI-powered tools designed to make finding and enrolling in resources even faster and better. 

Thank you for believing in this work and for standing with us as we build a more equitable, connected, and compassionate social safety net for all.

We encourage you to dive into the full report and see the impact your support made possible:
Read the Annual Impact Report

As our work expands to meet rising community needs, your support helps fuel the innovation and collaboration required to strengthen the safety net for everyone. Every donation counts!

Donate Now

We’re grateful to have you with us in this work.

With gratitude,
The One Degree Team

Why Use One Degree’s AI Assistant Instead of Just ChatGPT or Gemini? We Put Them to the Test

The world of generative AI, led by platforms like ChatGPT and Gemini, is changing how we think about information access. For nonprofit professionals dedicated to connecting people with vital resources, the question isn’t if AI will change our work, but how we can use it effectively and responsibly.

At One Degree, we are focused on developing a One Degree AI Assistant to be a trustworthy and powerful tool for resource search and access. As this is our first entry into generative AI, we conducted a series of user tests and interviews with our Community Feedback Group. This group comprises a diverse mix of help-seekers, direct service providers, and healthcare workers. Through this process, we’ve gained critical insights into what makes a specialized AI solution stand out from general-purpose leaders in the market.

The Problem to Be Solved

Our community members face persistent, systemic challenges when searching for help, including:

  • Wading through outdated and inaccurate resource information.
  • Wasting time on long eligibility processes only to find they don’t qualify.
  • Having to fill out the same paperwork repeatedly.

While our recent survey data show a gentle optimism toward AI, distinct fears persist: concerns about resource verification and accuracy (“Will AI just make up information?”), the loss of human empathy, the potential for algorithmic bias, and unethical data use. These risks are magnified for people facing vulnerable situations. It takes intentional and rigorous effort for AI-enabled solutions to have the community’s best interests at heart.

We sought comparison feedback on our in-development AI Assistant alongside general-purpose models like ChatGPT and Gemini. We did this through several rounds of user testing, comparison feedback, and in-depth remote interviews with our community. Here is what our participants found:

One Degree AI Assistant

The One Degree AI Assistant was described by one tester as: “a ChatGPT for resources for people.” The familiar chat interface was intuitive, and testers appreciated its focus.

  • Strength: Focus and Actionability. “I think that it gives you what you need and then some,” noted one participant. Participants found the information level “enough to take action on a resource but not too much to be overwhelming.” Its core value lies in being intentionally focused on verified, actionable community resources.
  • Weakness: Speed and Transparency. Compared to the high-speed, advanced back ends of the larger platforms, some testers noticed a slowness in our Assistant’s responses. One tester also noted missing the visual cues that platforms like ChatGPT use to show an “undergoing and thinking process,” such as real-time typing. Though subtle, these cues provide transparency and signal to people that “something is happening.”

ChatGPT and Gemini

Many research participants stated that they were familiar with ChatGPT, which made our Assistant easier to interpret initially. Even those who said they weren’t particularly tech-savvy shared that they had experience using generative AI platforms. However, when tested directly for resource access, the limitations of general AI became clear:

  • Strength: Thoroughness and Layout. Some participants appreciated the sheer volume of information these platforms provided. ChatGPT was also noted for using tables to organize results, making resources easier to compare and contrast. Tags with linked sources on some text also offered a layer of trust by stating the source of the information, including a hyperlink.
  • Weakness: Overwhelm and Unpredictability. A common critique was the sheer volume of text, which was seen as “long-winded and difficult to read.” One interviewee commented on Gemini: “It is visually difficult to process… It’s just a wall of text to me at this point.” Crucially, the information was often unpredictable. Sometimes contact numbers were present, other times they weren’t. Participants found themselves having to take an organization’s name from the AI response and “do more research just to find a phone number. It’s just a lot of waste of time.”

Traditional One Degree Search

Interestingly, One Degree’s traditional, non-generative search, with its card-based interface, received positive feedback for its readability and predictable format.

  • Strength: Readability and Consistency. Participants appreciated the structured format, which made addresses, phone numbers, and hours easy to locate. “I like the break up of different things… it just doesn’t look like a report I’m reading,” one person shared. The predictable layout eliminated the need to “hunt for information,” which matters when time is a precious commodity in urgent situations.

Key Takeaways

Our research confirms a crucial point for the nonprofit sector: even with the existing field of generative AI tools, there is still a clear need for specialized training and focus before AI can work effectively as a tool for social service navigation.

While general platforms like ChatGPT and Gemini offer global scale and flexibility across many topics, their data, user experience, and use cases are not optimized for social service navigation. Because of this, we are working to train a specialized AI Assistant to focus on:

  1. Trustworthiness: For our community, accuracy and relevance are critical, not just convenient. Standard AI models pose an unacceptable risk to vulnerable populations because they can hallucinate or provide out-of-date information. Even when an AI platform offers credible information, it must give people a way to verify its accuracy. The fact that it is often unclear where the dominant AI chatbots pull their data from, and how accurate that data is, significantly erodes their trustworthiness.
  2. Information experience: Throughout user testing and in-depth interviews, a clear preference emerged: participants preferred information to be succinct and well-structured. They don’t want to comb through a conversational chat or an in-depth analysis. They need specific details that allow them to take the next step toward accessing a resource.
  3. Value proposition: The One Degree AI Assistant is powered by One Degree’s curated, human-verified database of nonprofit services and public benefits. Every response is grounded in accurate, up-to-date, and community-reviewed data — not scraped from the chaotic sprawl of the internet. This means our AI doesn’t hallucinate, doesn’t guess, and doesn’t send people down dead ends. Instead, it surfaces only verified, useful, and available resources. That focus is what makes One Degree AI a tool our community can trust by default.

One participant who had used ChatGPT and Gemini to seek resources stated in an interview, “I haven’t been blown away by anything by AI bots. I haven’t been blown away because I could do the same thing with Google.” This highlights the challenge of offering a unique value proposition that existing tools like ChatGPT or Google Search don’t already address.

We are incredibly thankful for the guidance from our Community Feedback Group. Their stories and insights are steering the development of the One Degree AI Assistant to ensure it is not just another piece of technology, but serves as an essential tool that genuinely reduces the friction between a person and the help they need.

When They Make It Harder, We Make It Easier: Partnering with 211 Ventura to Simplify ECM Enrollment

AI, Governance, and Ownership: Reflections from the Rockefeller Amplification Event at the NationSwell Summit

Last week, I had the honor of sharing the One Degree | 1degree.org story at The Rockefeller Foundation’s Amplification event during the NationSwell Summit. As part of the U.S. Big Bets Fellowship, I got to share why I believe so deeply in the need to invest in nonprofits that are building digital public infrastructure and why we need community-rooted organizations, not just tech giants, creating the next generation of AI tools.

I’m grateful to the Rockefeller Foundation for giving me the mic. That one pitch led to thoughtful conversations with leaders from across the country (from Tulsa to New York) who are trying to reimagine how to serve people better in this day and age. It reminded me: this is how change starts, in rooms where ideas collide over lunch and dinner tables and connections spark new conversations.

NationSwell network in full effect

The NationSwell Summit itself was a great experience. I met Greg Behrman during NationSwell’s founding days, and seeing the community he and his team have nurtured was inspiring. This network has grown into a vibrant collective of do-gooders across sectors.

The backdrop of the Summit was complex and layered: an accelerating AI revolution on one side and a looming government shutdown on the other. And who bears the brunt of these seismic shifts? Vulnerable communities. People who rely on services like SNAP, who are already navigating a tangled web of bureaucracy, face the greatest risks. My pitch was, in many ways, about that very urgency: the need to streamline and strengthen access to safety net services now, using the best tools available.

AI, governance, and what was missing: ownership

One of the most powerful moments was a small-group dinner focused on AI and governance, attended by senior leaders at some amazing philanthropies and organizations: Omidyar Network, EqualAI, Cadence, Gates Foundation, Tulsa Innovation Labs, Rockefeller Foundation, Block, and others. The talk was mostly about guardrails, liability, trust, and literacy, which are all critical.

But something was missing: ownership.

Because right now, only a handful of companies own the future of AI. And if we’re not careful, governance becomes a spectator sport, where a few build, and the rest of us comment from the sidelines.

This is why I keep saying that we need community-based organizations and public interest technologists to jump into the AI waters. Start building. Start testing. Start understanding what this technology can and can’t do. We cannot afford to only theorize. We need to own something real, something useful, and something that reflects our values and serves our people.

The Builders will govern

Honestly, as I sat in that room surrounded by some of the most influential leaders in philanthropy and corporate social responsibility, I was struck by how much thoughtful conversation was happening, and how strong concerns and deep values were being surfaced. And yet, it also felt like we were one step removed. We weren’t quite in the room where AI was being built. We were in the room discussing what others are building.

Because of that, the conversations were largely about playing defense, not offense, when it comes to AI.

To be clear, philanthropy can play an important role in this moment: putting up guardrails, convening smart minds, and thinking carefully about long-term impacts. But when it comes to shaping the future of AI, only a handful of builders are playing offense, and by that I mean actively using this technology to shape the world we live in. We need to change that.

We need to equip more people (community-rooted, mission-driven builders) to play offense, too.

Philanthropy can play a role there too, but so far, only a handful of funders have made bold moves in this space. Many are still taking a wait-and-see approach or launching coalitions to study the problem rather than deploying funding toward the real implementation happening today.

At the end of the day, those shaping the future of AI will be the ones building it, and not just thinking or talking about it.

And that’s why it’s so important for philanthropy to deploy its resources now… to fund organizations that are already deeply embedded in communities and well-positioned to build technology and AI that is community-centered, community-owned, and grounded in equity.

Huge thanks to The Rockefeller Foundation for the opportunity to share One Degree’s work through the Big Bets Fellowship. And thanks to NationSwell for hosting such a thoughtful and energizing summit!

Fighting for America’s Social Safety Net: A White Paper

Over the next 10 years, the federal government will cut over $1.1 trillion in safety net funding. The pressure on state and local governments to respond will be extraordinary. To step up to the challenge, local public sector leaders will need to find better, more scalable, and more affordable ways to work with each other and with the social sector.

For the past twenty-five years, the national 211 network has collaborated with local communities as the system of record, helping tens of millions of Americans find and access life-changing benefits and services. This historically not-for-profit Information & Referral (I&R) ecosystem is being buffeted by political, commercial, and technological change. These changes once promised to modernize and transform how social services are coordinated and delivered, but that has not yet happened.

The steep cuts in federal funding will compound these challenges, leaving state and local governments without the time or money to continue tolerating a status quo that already leaves too many kids and families behind. This White Paper is addressed to local social service advocates, agencies, funders, providers, and policy makers unwilling to settle for a world that has not yet embraced the digital solutions we need to take care of each other.

Download the full white paper here: Technology for a Healthier World

Gen AI Mid-Point Presentation

It has been a busy fall at One Degree. On Tuesday, the team joined our Google.org Gen AI Accelerator colleagues for a mid-point presentation to demo progress to date. It is an extraordinary group of committed leaders grappling with the challenges of applying new technologies to old problems.

At One Degree, we have been focused on two workflows where AI can reduce friction for people seeking help and the caseworkers who support them. First, our new natural-language search prototype lets people ask for support the way they naturally speak (“food pantries in Oakland”) and receive verified, structured results pulled from One Degree’s resource database to improve connection rates.

Second, we’re kicking off AI-assisted intake: parsing organizations’ existing forms to generate secure digital versions and offering privacy-aware pre-fill so referrals arrive complete, cutting phone tag and drop-offs.
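The first workflow above, natural-language search grounded in a verified database, can be sketched as a simple pipeline: parse the free-text query into structured intent, then answer only from human-verified records. The sketch below is purely illustrative; the resource data, function names, and keyword-matching logic are assumptions for demonstration, not One Degree’s actual implementation.

```python
# Illustrative sketch (not One Degree's actual code or data) of a
# retrieval-grounded resource search: the query is parsed into
# structured intent, and answers come only from human-verified records.

VERIFIED_RESOURCES = [
    {"name": "Alameda County Community Food Bank", "category": "food",
     "city": "Oakland", "phone": "510-555-0100", "verified": True},
    {"name": "Oakland Housing Assistance Center", "category": "housing",
     "city": "Oakland", "phone": "510-555-0199", "verified": True},
    {"name": "SF-Marin Food Bank", "category": "food",
     "city": "San Francisco", "phone": "415-555-0123", "verified": True},
]

# Keyword lists standing in for real intent extraction; a production
# system would use an LLM or NLU model for this step.
CATEGORY_KEYWORDS = {
    "food": ["food", "pantry", "pantries", "meal"],
    "housing": ["housing", "shelter", "rent"],
}
KNOWN_CITIES = ["Oakland", "San Francisco"]

def parse_query(query: str) -> dict:
    """Extract a {category, city} intent from a natural-language query."""
    q = query.lower()
    intent = {}
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in q for k in keywords):
            intent["category"] = category
    for city in KNOWN_CITIES:
        if city.lower() in q:
            intent["city"] = city
    return intent

def search(query: str) -> list[dict]:
    """Return only verified records matching the parsed intent, so the
    assistant can never surface an unvetted or invented resource."""
    intent = parse_query(query)
    return [
        r for r in VERIFIED_RESOURCES
        if r["verified"]
        and r["category"] == intent.get("category")
        and r["city"] == intent.get("city")
    ]

# "food pantries in Oakland" matches only the verified Oakland food record.
results = search("food pantries in Oakland")
```

The key design choice is that the model never generates resource details itself; it only selects from the verified database, which is what keeps phone numbers and addresses structured and trustworthy.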

Early user research underscored four challenges we will be working on over the next few months: speed, trust and accuracy (human-verified results), privacy by design, and clear value and differentiation from familiar tools like Google Search and ChatGPT.

What’s next? The team is hardening evaluations, moving the AI search experience to production, and prototyping the intake flow so it plays nicely with today’s systems—and tomorrow’s agents—without burdening small nonprofits.

For a closer look click here: One Degree’s Gen AI Mid-Point Presentation

One Degree and AI: First Look at Our Community’s Perspective

Introduction

At One Degree, we are exploring the potential of AI to enhance our platform and better serve our community. To guide our efforts, we conducted a survey starting July 7th, 2025, to understand how our members (including help seekers, service providers, and health workers) feel about AI.

This survey was distributed to the One Degree Community Feedback Group, a group of active One Degree members who have volunteered to participate in various feedback activities to help improve our platform, including surveys, software testing, and interviews. The group is primarily composed of “help seekers,” those who search for community resources for their own needs, and “direct service providers,” those who connect their clients to resources. A number of participants even have experience in both categories.

The promise of generative AI is impressive, but it comes with potential risks. We are mindful of concerns about accuracy, bias, and the potential for misinformation, especially in a sector as critical as community resource navigation, which relies heavily on accurate and up-to-date information. This survey was designed to ground our work in the real-world experiences and concerns of our community, ensuring that any AI functionality we develop is done mindfully with a sense of responsibility to the people we serve.

Current Challenges in Accessing Resources

Our survey began by asking participants about the primary challenges they face when searching for and accessing resources. We received over 60 completed surveys, and the responses revealed a complex web of systemic and operational hurdles. Many participants noted the difficulty of navigating a fragmented system, where crucial information is often outdated or hard to find. 

Here is a collection of responses from our participants organized by theme:

  • Systemic and Operational Hurdles:
    • “Too many disconnected systems or organizations, which can make it hard to find.”
    • “Frequent changes in program availability or requirements.”
    • “Limited coordination between service providers.”
    • “Finding one that is actually able to help me/ finding one that hasn’t hit its limit of clients.”
  • Information and Navigation:
    • “Services are often not in area needed or not updated frequently enough.”
    • “Not knowing where to start or who to ask for help.”
  • Human and Process Gaps:
    • “A lack of follow-up or case management, where people are given information.”

These responses highlight a clear need for tools that simplify navigation and provide up-to-date, accurate information, addressing the core friction points that prevent people from getting the help they need. Given these challenges, we are carefully assessing how generative AI can support help seekers and professionals navigating a complicated, fragmented system. These problems have persisted for more than a decade, and our team is optimistic about the potential for generative AI-powered technology to create real shifts and impact in the sector.

AI Perceptions and Applications

When asked to define AI in their own words, participants provided a range of perspectives, from seeing it as a helpful tool to viewing it with skepticism. Responses included:

  • “It’s an alternative to a human customer service representative. AI provides information gathered from the web.”
  • “helped by a streamlined, computer powered artificial personality that after answering a few questions can adapt and understand your individual needs and goals.”
  • “Uses too much energy and is detrimental to the earth. If you say you care about connecting people to resources we cannot use AI as a go to.”
  • “at my company we are not allowed to use A.I in any form”

The last quote, in particular, suggests that institutional policies around AI are already beginning to take shape.

We then asked participants about potential AI features they would find most useful.

For the 39 participants who identified as help seekers, the top-selected features were:

  • App giving you personalized recommendations (26 votes)
  • Chat assistant to answer questions in real-time (14 votes)
  • AI-powered search (12 votes)

This data suggests a strong desire for personalized guidance, which could be particularly impactful for those navigating a complicated social services system with little prior knowledge.

For the 26 professional service providers, the highest-rated features were:

  • Notifications when services are available (12 votes)
  • Automatic client case summaries (7 votes)
  • Digitizing your organization’s intake forms (7 votes)

These results point to AI’s potential to streamline administrative tasks, freeing up valuable time for service providers to focus on client interaction and support.

To further understand how people might use an AI assistant, we presented a mockup and asked what questions they would ask. Responses were varied but often centered on specific, time-sensitive needs:

  • “Where can I find shelter tonight in my area?”
  • “Show me what services are available for seniors over the age of 55 within the 91331 zip code?”
  • “Help with sorting out medicare options.”
  • “When was the last time eligibility requirements/provider information was updated.”

The final response is particularly insightful, as it highlights a critical concern about AI’s ability to provide timely and accurate information. This is a challenge that is paramount in the social services sector, regardless of technology.

Concerns about AI

We asked participants to rate their general feelings about “AI-powered” apps on a scale of 0 to 5. The average rating was 3.39, indicating a mixture of optimism and caution. While a significant number of people rated AI highly, a notable portion of the community expressed hesitation.

To dig deeper, we asked about their specific concerns regarding AI in resource search and referrals. The responses made it clear that trust, accuracy, and ethics are top of mind. The most frequently cited concerns were:

  • Ensuring accurate responses (40 votes)
  • Ensuring data privacy (27 votes)
  • Transparency in how the AI works (19 votes)
  • Not automating tasks that still need a human (13 votes)
  • Adhering to ethical values (11 votes)

To get a better sense of the specific concerns survey respondents had, here are a few notable quotes:

  • “One concern I have about AI being used in resource search and referrals is the risk of providing outdated or inaccurate information.”
  • “AI may struggle to handle unique or urgent situations with the sensitivity and flexibility that a human advisor could provide.”
  • “AI might unintentionally reinforce biases in the data it’s trained on.”
  • “Environmental impacts; it directly affects communities of color all over the world.”
  • “people may become too reliant on AI and miss out on the human support and empathy that caseworkers or community advocates can provide.”

This feedback highlights a critical point: the concerns extend beyond just accuracy. Participants also expressed apprehension about AI’s ability to provide the human connection essential for trust and understanding, especially when dealing with diverse and complex individual circumstances. Using AI as a wholesale replacement for people could raise significant red flags in a sector that relies so heavily on empathy and personal relationships.

Including AI in Your Work

Based on our survey findings, here are three key takeaways for any organization developing AI tools for the social sector or for communities with low incomes:

1. Tailor AI to Your Specific Users

Our survey revealed that help seekers and professionals have overlapping but distinct needs. An effective AI tool must be designed to serve each of these use cases.

For help seekers, AI can offer personalized guidance to navigate the complex web of social services. They are looking for tools that simplify the search for eligible services and streamline intake processes.

For professionals, AI can be a powerful tool to improve administrative efficiency. Features like automatic case summaries or digitized intake forms can free up valuable time, allowing them to focus more on direct client interaction and support.

2. Prioritize Trust and Human Connection

Trust is fundamental in this work, and our survey respondents made it clear that trust in AI-powered platforms is not a given. Respondents emphasized the need for accurate information, especially during urgent situations. Unreliable AI responses can quickly lead to distrust. 

In addition, many expressed concern that AI lacks the nuance and sensitivity a human can provide. AI should not sideline the essential role of human connection and empathy. However, when used intentionally, AI can handle routine tasks, which can free people to focus on building relationships and providing deeper support.

3. Ground AI in Real-World Systemic Challenges

AI should not be a surface-level tech solution; it must address the deep-rooted, systemic issues our community faces. Survey participants consistently pointed to a fragmented and disconnected system as a primary obstacle. Major challenges include finding resources that match their specific needs, have current availability, and whose eligibility requirements they meet.

Therefore, AI development must be aimed at solving these underlying problems to be truly effective, rather than simply adding a new layer of technology.

Staying Connected

Thank you to everyone who completed our survey. We do this work to support you and our community, and building with you is how we earn our place as a trusted partner.

If you’re interested in collaborating with us or learning more about One Degree:

  • Join our Community Feedback Group
  • Visit our FAQs 

Reach out to us at help@1degree.org

From Stillness to Urgency: Reflections from the Rockefeller Fellowship at Bellagio

It’s hard to describe the kind of quiet you find at The Rockefeller Foundation‘s Bellagio Center in Lake Como, Italy. It’s not silence exactly. It’s more like a stillness. The hum of cicadas, the shimmer of Lake Como in the morning light, the rhythm of footsteps on stone paths. It’s the kind of quiet that lets big questions rise to the surface.

For a week this summer, I joined a group of extraordinary leaders at the Bellagio Center as part of the Rockefeller Big Bets Fellowship. The Bellagio Center has hosted Nobel laureates, heads of state, and social innovators from around the globe, and you can feel that legacy in every hallway and room. We moved through the week with a mix of reverence for those who came before us and resolve to tackle our own intractable challenges.

The Fellowship helps leaders refine their big bets, which are bold, systems-level ideas with the potential to drive transformative change. At One Degree our big bet is to build locally led, interoperable digital infrastructure that seamlessly connects people to social services and benefits, and to supercharge it with next-generation AI to leapfrog outdated systems.

Bellagio gave me the space to stress-test our big bet, see our work in a larger context, and expand my own sense of what’s possible. Here are the ideas and moments from Bellagio that changed how I see our work:

The Sky Just Got Higher

In our first Fellowship gathering in D.C., I sketched what I believed was my “sky scenario,” or the place where I dare to dream without limits. In my original future scenario, everyone has an AI assistant, and beneath it lies a robust public digital infrastructure for the social service ecosystem powered by One Degree. Social service AI agents will be able to talk to one another, instantly access accurate information, determine eligibility, handle enrollment and renewals, and guide people toward lasting economic opportunity. It’s a world where anyone who needs help can easily find, understand, and access the services and benefits they deserve, powered by AI, open infrastructure, and true interoperability.

Ambitious, right?

But at Bellagio, I had a realization: that wasn’t actually the sky. It was just our stretch goal. Our true “sky” is even higher: Universal Basic Services, where everyone’s basic needs are met and government works in deep partnership with local community-based organizations to make it happen. In that future, One Degree isn’t just building tools; we’re influencing policy through technology and data, and ensuring that in an AI-driven world, human dignity remains at the center.

When Coal Country Meets Silicon Valley

U.S. Fellows

One afternoon, as we were talking about “unlikely partnerships,” I found myself talking with two Fellows from Appalachia: Jacob Hannah from Coalfield Development Corporation in West Virginia and Colby Hall from Shaping Our Appalachian Region, Inc. (SOAR) in Kentucky.

Their work focuses on communities gutted by the collapse of the coal industry, towns where job losses rippled through every part of life. As they spoke, I heard echoes of something closer to home: the wave of layoffs hitting the tech industry, including my cousin and my brother (both software developers). AI is beginning to reshape the tech labor market the way automation and policy shifts reshaped the coal labor market.

It was a jarring parallel. Economic transitions don’t just “happen.” They can crush people and communities, or they can be managed with foresight and care. Appalachia’s strategies for resilience could hold lessons Silicon Valley urgently needs. That’s what struck me most: in California, we’re so accustomed to exporting our knowledge, goods, and services to the world, yet we rarely pause to consider how the wisdom of other communities could guide us.

This kind of cross-sector, cross-region learning is exactly what the Fellowship is designed to spark, and it’s helping me think about how our big bets must anticipate, and help manage, massive economic transitions.

Frustration in the Ecosystem Map

One Degree’s ecosystem map

We also did an exercise where we mapped the ecosystem around our work, and where the money, information, and power flow. I’ve done this exercise before, and the map looked painfully familiar. At the center are community members, surrounded by local governments, CBOs, and healthcare systems, each administering social services and benefits in their own silos. Around them sit One Degree, I&R hotlines, 211s, tech partners, and corporations that have moved into the space, also working in silos.

More than a decade in, the same gaps, silos, and slow-moving players remain. The social services sector often settles for incremental change because of funding limits, resource constraints, political challenges, gatekeeping, or simply because it feels safer. But incrementalism can’t meet the urgency of the need.

At that moment I realized: One Degree’s role has always been, and must continue to be, the disruptor, pushing the ecosystem forward when it would rather stand still. And our big bet is the tool to make that push both possible and sustainable.

Futuring, Not Forecasting

Some Megatrends cards

On another day, we pulled out Megatrend cards, which are futuristic prompts that describe large-scale forces already shaping the world, from technological shifts to demographic changes. They’re designed to spark imagination, helping you think beyond the immediate moment and consider how these trends might play out over decades. I loved reading through them. Too often, conversations about the future lean toward fear and loss, but these cards made space for optimism too. Among the trends we explored: AI woven into every aspect of life, growing mismatches between available jobs and workers’ skills, and the rapid urbanization of our planet.

From there, we envisioned our “preferred future” and worked backward, asking: What would need to be true in the next 5, 10, or 20 years to make this real? It reaffirmed something I believe deeply: the future isn’t something we simply adapt to. It’s something we shape. At One Degree, that means investing in AI development and design now, so that we’re not caught unprepared, but instead leading the creation of the future we want.

Economic Opportunity / Mobility / Inequality

Of course, since the Big Bets Fellowship is all about economic opportunity, we spent a lot of time unpacking what that really means. Too often, people use economic mobility, economic opportunity, and economic inequality interchangeably. But they’re not quite the same, and it’s not just about mobility—it’s about tackling inequality at its roots. And that requires shifting policy, particularly at the state level, where decisions on taxation, benefits, and resource allocation can directly shape people’s economic realities.

One example that stood out was New Mexico’s investment in its social safety net programs like childcare, healthcare, and income supports targeted to low- and middle-income families. By dedicating a greater share of tax revenue to these services, the state has seen measurable reductions in poverty rates in recent years.

That’s the power of policy aligned with people’s real lives. And it’s why hyperlocal work (like ours) must connect to long-term policy change. We can’t limit ourselves to day-to-day tasks and tactics; we need to think about how our big bet plugs into the bigger levers that can transform entire systems.

The Urgency of the Work Ahead

I left Bellagio feeling really optimistic about our future, especially being surrounded by the energy of other bold, committed social changemakers. I also left with more than just better and bigger ideas. I left with some more work to do: to sharpen One Degree’s big bet (in prep for the Rockefeller Foundation’s fall amplification event), bring more partners into these big bets, and show that bold, systemic change is not only necessary—it’s possible.

That stillness at Lake Como gave me the space to dream bigger. Back in the noise and urgency of daily life, that stillness has turned into resolve. The future won’t wait. Neither can we.

Huge gratitude to the incredible Rockefeller Foundation team Nathalia A. M. dos Santos (She, Her, Hers), Sarah Troup Geisenheimer, Danielle S. Goonan, IDEO’s John Won, Alex Gallafent, Bea Camacho, and Bellagio Center staff Nadia Gilardoni and many others, and to the brilliant U.S. Fellows Tiffany Terrell, Dion Dawson, Colby Hall, Catherine P. Wilson, Melissa Bukuru, Jennifer Hankins, Alexandre Imbot, Marina Zhavoronkova, Jacob Hannah, Paul Huberty (and we missed you, Gretchen Fauske), and amazing Asia-Pacific Fellows Aafreen Siddiqui Sherwani, Alexia Hilbertidou, Anusha Meher Bhargava, Bobuchi Ken-Opurum, Ph.D., Eshrat W., Gaurav Godhwani, Mustika Wijaya, Ristika Putri Istanti, Supatchaya “Ann” Techachoochert, PhD, Uttam Pudasaini, Yasser Naqvi, Yumi Son for creating the perfect space for bold ideas, deep conversations, and a little lakeside magic.

Big Bets and Bold Futures: Reflections from the Rockefeller Big Bets Fellowship

A few weeks ago, I was in DC with an inspiring group of leaders for a gathering of the Rockefeller Foundation’s U.S. Big Bets Fellowship. This four-month leadership program brings together changemakers across the country who are working to solve some of the world’s toughest challenges and who are ready to take their next bold leap.

This opportunity couldn’t have come at a better time. At One Degree, we’ve spent over a decade building technology that helps low-income families access life-changing resources, from food and housing to healthcare and legal support. What began as a radical vision to build a human-centered, tech-enabled safety net has evolved into critical infrastructure serving hundreds of thousands of people. But we’re now at a moment of transition, not just as an organization, but as a sector.

Demand for social services is rising. The systems people rely on are still deeply fractured. And while innovation is accelerating, it’s not always aligned with the realities and needs of communities. These are the kinds of challenges the Fellowship is designed to address. Not with small, incremental improvements, but with transformational thinking and bold action.

What is a Big Bet?

The Rockefeller Foundation defines a Big Bet as an ambitious commitment to tackle systemic problems, such as poverty or inequality, at scale. The Fellowship brings together leaders who are at inflection points in their work and are ready to reimagine what’s possible and make meaningful, lasting change.

Over the next four months, I will be part of a cohort of social impact leaders from across nonprofits, philanthropy, organizing, health, and more, all navigating the space between where we’ve been and what’s next. The Fellowship gives us room to reflect, clarify our vision, and grow into the kind of leadership this moment demands.

How We’re Putting Big Bets into Action

For us, this isn’t just a leadership development opportunity. It’s a strategic launchpad that is giving us space to sharpen our focus. We know the I&R (Information & Referral) ecosystem is fractured. We’ve seen firsthand how difficult it is for families to find help when they need it most. We’ve also seen how for-profit SDoH (Social Determinants of Health) tech solutions have fallen short because they fail to center trust, collaboration, and lived experience.

So we’re asking: What’s the big bet that can truly transform how people access support?

Well, we’ve got a couple of big bets on our minds. First, we want to demonstrate that nonprofits can power regional, interoperable infrastructure for coordinated access to social services. We want to shift the frame from platform competition to ecosystem collaboration. Second, we’re building next-generation AI tools to leapfrog outdated systems and transform how people find and access support at scale. Our bet is that governments, nonprofits, healthcare providers, and tech platforms will all need trusted, underlying infrastructure to operate in an AI-powered (and agent-powered) world. This Fellowship gives us the space to test, refine, and position these ideas for transformational investment.

What’s Next?

These Big Bets aren’t abstract. They’re already in motion. And we’re using this Fellowship to double down: to clarify our vision, test our hypotheses, and unlock the kinds of transformational partnerships and investments that this moment demands.

We’re deeply grateful to the Rockefeller Foundation for believing in the potential of bold ideas and investing in the leaders who are bringing them to life.

If you’re also thinking about what your “big bet” might be, I’d love to connect. We’re all being called to lead differently, and we don’t have to do it alone.

Inside Google.org’s Gen AI Accelerator: What I learned about the future of Social Impact x Product Development

Last week, I joined 19 other visionary teams in London for the Google.org Gen AI Accelerator. Immersed in the heart of Google and surrounded by brilliant minds from DeepMind (Google’s AI research lab) and across the Googlesphere, it felt like stepping into the future. The energy was electric, and the pace of learning was like drinking from a firehose.

This experience has fundamentally shifted my perspective on AI and our overall work at One Degree | 1degree.org. Initially, I thought we’d make incremental improvements to our existing chatbot prototype. But after a week of learning, testing, and dreaming, we realized we have the chance to do something much more ambitious.

We’re not just building an incrementally better tool; we see the opportunity to position One Degree at the forefront of how AI can transform access to social services. Since our founding 13 years ago, One Degree has always been about using the most innovative, cutting-edge technology to make an impact in communities, and today, AI represents the next frontier in that mission.

But here’s the reality: AI won’t fix the root causes of bureaucracy and inequity in our social safety net that prevent people from getting the help they need. Those are policy problems, not tech glitches. The systems we’re trying to improve were never designed for ease, dignity, or equity. AI won’t magically make them just.

And yet, I believe AI can help show us what’s possible. If done right, AI can help untangle some of the structural complexities — coordination, interoperability, delivery — that have long been too costly or labor-intensive for governments or private companies. But it must be grounded in community needs, equity, and deep humility.

That’s the path we’re on. The prototype we’re building won’t solve everything. But it will be a leap: a visible, working example of what’s possible when we aim not just to automate, but to transform.

Here are some of the most thought-provoking lessons I’m still processing:

Gen AI = Green Banana.

Zack Akil, a machine learning engineer at DeepMind, described Gen AI as a green banana: it’s not fully ripe, but it will be soon. Models are improving at an astonishing pace. If a model doesn’t work exactly how you want today, it might just need a few more weeks. This calls for patience and iteration.

The product development lifecycle is being reimagined.

One of the most striking shifts is how Gen AI is changing the product development process. Instead of building full end-to-end systems, teams are now slicing development into smaller, more modular experiments: testing prompts, swapping models, integrating APIs, and layering in user feedback, often in parallel. This enables faster iteration and learning cycles. It’s less about writing all the code from scratch, and more about configuring tools, running rapid tests, and stitching together components that already exist. The speed and flexibility are game-changing, but they also require a product discipline with even more emphasis on evaluation and testing.
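
To make the modular workflow concrete, here is a minimal sketch (not One Degree's actual code) of what "slicing development into smaller experiments" can look like in practice: prompt templates and models are treated as swappable components and every pairing is run side by side for evaluation. The stub functions stand in for real LLM API calls, and all names here are illustrative.

```python
from typing import Callable, Dict, List

# Swappable prompt templates: one experiment axis.
PROMPTS = {
    "terse": "List benefit programs for: {need}",
    "guided": "You help families find services. Suggest programs for: {need}",
}

# Stub "models" standing in for hosted LLM calls: the other axis.
def stub_model_a(prompt: str) -> str:
    return f"[model-a] {prompt}"

def stub_model_b(prompt: str) -> str:
    return f"[model-b] {prompt}"

def run_experiments(
    need: str,
    prompts: Dict[str, str],
    models: Dict[str, Callable[[str], str]],
) -> List[dict]:
    """Run every (prompt, model) pairing and record the outputs
    so they can be compared in a later evaluation step."""
    results = []
    for p_name, template in prompts.items():
        for m_name, model in models.items():
            output = model(template.format(need=need))
            results.append({"prompt": p_name, "model": m_name, "output": output})
    return results

results = run_experiments(
    "food assistance",
    PROMPTS,
    {"model-a": stub_model_a, "model-b": stub_model_b},
)
for r in results:
    print(r["prompt"], r["model"])
```

The point of the structure is that swapping a model, rewriting a prompt, or adding an evaluation metric each touches only one small component, which is what makes the rapid, parallel testing cycles possible.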

Agent-based systems are coming.

AI agents (autonomous tools that can complete tasks or even communicate with other agents) are on the horizon. These agents could one day help navigate complex systems like healthcare or housing. Imagine an AI that not only helps someone find the right benefit program, but also coordinates between multiple service providers on their behalf. We’re not quite there yet, but it’s a glimpse of what might come. And it’s pushing us to ask: what infrastructure does the social sector need to build now to ensure we’re ready to harness this power?
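
The coordination idea above can be sketched in a few lines. This is a hypothetical illustration, not a real agent framework: each "agent" completes its task and passes shared context to the next, the way a benefits-finding agent might hand off to a scheduling agent acting on someone's behalf.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Referral:
    """Shared context that agents read from and write to."""
    need: str
    steps: List[str] = field(default_factory=list)

class BenefitsAgent:
    """Finds a matching program (hard-coded here for illustration)."""
    def handle(self, referral: Referral) -> Referral:
        referral.steps.append(f"matched '{referral.need}' to a benefit program")
        return referral

class SchedulingAgent:
    """Coordinates with a service provider on the person's behalf."""
    def handle(self, referral: Referral) -> Referral:
        referral.steps.append("booked an intake appointment with the provider")
        return referral

# Agents chained into a pipeline: each completes its task,
# then hands the shared context to the next agent.
pipeline = [BenefitsAgent(), SchedulingAgent()]
referral = Referral(need="housing support")
for agent in pipeline:
    referral = agent.handle(referral)
print(referral.steps)
```

In a real agent system, each `handle` step would be driven by a model deciding what to do next rather than fixed logic, but the infrastructure question is the same: agents need trusted, structured data to hand off to one another.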

Responsible AI is still a moving target.

We spent time learning frameworks for building AI responsibly, like weighing harm/benefit tradeoffs, establishing thresholds for acceptable and unacceptable behavior, and examining vulnerabilities (especially for historically excluded groups). And yet, there are still so many unknowns. Even common questions, like how models handle personally identifiable information (PII) or protected health information (PHI), how much hallucination is acceptable, or when to disclose AI involvement, don’t have consistent answers. The ethical terrain is evolving, and we’ll need to build our compass as we go. We’re starting with a strong framework, but we know this journey will require ongoing reflection, transparency, and adaptation.
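
One concrete practice behind the PII question is redacting obvious identifiers before text is ever sent to a model or written to logs. The sketch below is illustrative only, not a complete PII solution (real systems use far more robust detection), but it shows the shape of the guardrail:

```python
import re

# Regex patterns for a few obvious identifier types (illustrative,
# US-centric, and deliberately simple -- not production-grade detection).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

msg = "Reach me at jane@example.com or 415-555-0100."
print(redact(msg))  # prints: Reach me at [EMAIL] or [PHONE].
```

Even a thin layer like this changes the risk profile of what reaches a model, which is why redaction-before-inference shows up so often in responsible AI checklists.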

If a human can do it, AI eventually will…?

One of the most mind-expanding provocations we heard was that any cognitive task a human can do today, a gen AI model will likely be able to do in the future (if not now). This got my wheels turning: What rote, repetitive, manual tasks are we doing, and how might we use AI to help us reclaim that time and energy for deeper impact?

We learned so much about the potential and challenges of generative AI. And our whole team is excited to leverage this technology to create meaningful impact. We came away not just with ideas, but with a prototype in motion, a community of peers, and massive energy. Stay tuned, because this is just the beginning of an incredible journey!

I want to send a HUGE thank you to all the incredible Googlers who made the Google.org Gen AI Accelerator an inspiring launchpad and transformative learning experience for our team at One Degree. We’re coming away energized, with new tools, bold ideas, and a clearer path for how we can use AI to expand access to life-changing services.

THANK YOU to the amazing Google.org team: Rowan Barnett Leslie Yeh Jen Carter Aaron Ogle Gabriel Doss Titobi Williams Dan Peterson Shiri Sivan Daley Gruen Amy Tang Omar M. And our Google squad leads: Sanjana Sandeep and Akshara Majjiga.

Incredible speakers and mentors who pushed our thinking: Lyndsay Yerbic Doruk Caner Zack Akil Onajite Emerhor A.Mahdy Abdelaziz Archana Gupta Shifali Mudumba Michael Munn Lucrezia Noli Antonia Gawel Jani Cortesini Adam Connors Christopher Patnoe James Svensson Catherine Wah Umesh Telang Kimoon Kim Tirthankar Bose Ryan Burnell Ilia Udalov Emad Nadim Chak Yan Yeung

And thanks to all the amazing peers from organizations around the world — like Nava (Genevieve Gaudet Martelle Esposito, MS, MPH Foad Green Sundar Venugopalan), Amrita Mahale, Paritii (Shmona Simpson Lanre Akintujoye) — and to my One Degree | 1degree.org teammates, Steffi Brock-Wilson and Craig Summerill, who joined us and pushed my thinking every step of the way.