Introduction
At One Degree, we are exploring the potential of AI to enhance our platform and better serve our community. To guide our efforts, we conducted a survey starting July 7th, 2025, to understand how our members (including help seekers, service providers, and health workers) feel about AI.
This survey was distributed to the One Degree Community Feedback Group, a group of active One Degree members who have volunteered to participate in various feedback activities to help improve our platform, including surveys, software testing, and interviews. The group is primarily composed of “help seekers,” those who search for community resources for their own needs, and “direct service providers,” those who connect their clients to resources. Some participants have experience in both categories.
The promise of generative AI is impressive, but it comes with potential risks. We are mindful of concerns about accuracy, bias, and the potential for misinformation, especially in a sector as critical as community resource navigation, which relies heavily on accurate and up-to-date information. This survey was designed to ground our work in the real-world experiences and concerns of our community, ensuring that any AI functionality we develop is done mindfully with a sense of responsibility to the people we serve.
Current Challenges in Accessing Resources
Our survey began by asking participants about the primary challenges they face when searching for and accessing resources. We received over 60 completed surveys, and the responses revealed a complex web of systemic and operational hurdles. Many participants noted the difficulty of navigating a fragmented system, where crucial information is often outdated or hard to find.
Here is a collection of responses from our participants organized by theme:
- Systemic and Operational Hurdles:
- “Too many disconnected systems or organizations, which can make it hard to find.”
- “Frequent changes in program availability or requirements.”
- “Limited coordination between service providers.”
- “Finding one that is actually able to help me/ finding one that hasn’t hit its limit of clients.”
- Information and Navigation:
- “Services are often not in area needed or not updated frequently enough.”
- “Not knowing where to start or who to ask for help.”
- Human and Process Gaps:
- “A lack of follow-up or case management, where people are given information.”
These responses highlight a clear need for tools that can simplify navigation and provide up-to-date, accurate information. Such tools would address the core friction points that prevent people from getting the help they need. Because of these challenges, we are carefully assessing how generative AI can support help seekers and professionals in a complicated, fragmented system. These problems have persisted for more than a decade, and our team is optimistic about the potential for generative AI-powered technology to create real shifts and impact in the sector.
AI Perceptions and Applications
When asked to define AI in their own words, participants provided a range of perspectives, from seeing it as a helpful tool to viewing it with skepticism. Responses included:
- “It’s an alternative to a human customer service representative. AI provides information gathered from the web.”
- “helped by a streamlined, computer powered artificial personality that after answering a few questions can adapt and understand your individual needs and goals.”
- “Uses too much energy and is detrimental to the earth. If you say you care about connecting people to resources we cannot use AI as a go to.”
- “at my company we are not allowed to use A.I in any form”
The last quote, in particular, suggests that institutional policies around AI are already beginning to take shape.
We then asked participants about potential AI features they would find most useful.
For the 39 participants who identified as help seekers, the top-selected features were:
- App giving you personalized recommendations (26 votes)
- Chat assistant to answer questions in real-time (14 votes)
- AI-powered search (12 votes)
This data suggests a strong desire for personalized guidance, which could be particularly impactful for those navigating a complicated social services system with little prior knowledge.
For the 26 professional service providers, the highest-rated features were:
- Notifications when services are available (12 votes)
- Automatic client case summaries (7 votes)
- Digitizing your organization’s intake forms (7 votes)
These results point to AI’s potential to streamline administrative tasks, freeing up valuable time for service providers to focus on client interaction and support.
To further understand how people might use an AI assistant, we presented a mockup and asked what questions they would ask. Responses were varied but often centered on specific, time-sensitive needs:
- “Where can I find shelter tonight in my area?”
- “Show me what services are available for seniors over the age of 55 within the 91331 zip code?”
- “Help with sorting out medicare options.”
- “When was the last time eligibility requirements/provider information was updated.”
The final response is particularly insightful, as it highlights a critical concern about AI’s ability to provide timely and accurate information. This is a paramount challenge in the social services sector, with or without AI.
Concerns about AI
We asked participants to rate their general feelings about “AI-powered” apps on a scale of 0 to 5. The average rating was 3.39, indicating a mixture of optimism and caution. While a significant number of people rated AI highly, a notable portion of the community expressed hesitation.
To dig deeper, we asked about their specific concerns regarding AI in resource search and referrals. The responses made it clear that trust, accuracy, and ethics are top of mind. The most frequently cited concerns were:
- Ensuring accurate responses (40 votes)
- Ensuring data privacy (27 votes)
- Transparency in how the AI works (19 votes)
- Not automating tasks that still need a human (13 votes)
- Adhering to ethical values (11 votes)
To get a better sense of the specific concerns survey respondents had, here are a few notable quotes:
- “One concern I have about AI being used in resource search and referrals is the risk of providing outdated or inaccurate information.”
- “AI may struggle to handle unique or urgent situations with the sensitivity and flexibility that a human advisor could provide.”
- “AI might unintentionally reinforce biases in the data it’s trained on.”
- “Environmental impacts; it directly affects communities of color all over the world.”
- “people may become too reliant on AI and miss out on the human support and empathy that caseworkers or community advocates can provide.”
This feedback highlights a critical point: the concerns extend beyond just accuracy. Participants also expressed apprehension about AI’s ability to provide the human connection essential for trust and understanding, especially when dealing with diverse and complex individual circumstances. Using AI as a wholesale replacement for people could raise significant red flags in a sector that relies so heavily on empathy and personal relationships.
Including AI in Your Work
Based on our survey findings, here are three key takeaways for any organization developing AI tools for the social sector or for communities with low incomes:
1. Tailor AI to Your Specific Users
Our survey revealed that help seekers and professionals have overlapping but distinct needs. An effective AI tool must be designed to serve these distinct use cases.
For help seekers, AI can offer personalized guidance to navigate the complex web of social services. They are looking for tools that simplify the search for eligible services and streamline intake processes.
For professionals, AI can be a powerful tool to improve administrative efficiency. Features like automatic case summaries or digitized intake forms can free up valuable time, allowing them to focus more on direct client interaction and support.
2. Prioritize Trust and Human Connection
Trust is fundamental in this work, and our survey respondents made it clear that trust in AI-powered platforms is not a given. Respondents emphasized the need for accurate information, especially during urgent situations. Unreliable AI responses can quickly lead to distrust.
In addition, many expressed concern that AI lacks the nuance and sensitivity a human can provide. AI should not sideline the essential role of human connection and empathy. However, when used intentionally, AI can handle routine tasks, which can free people to focus on building relationships and providing deeper support.
3. Ground AI in Real-World Systemic Challenges
AI should not be a surface-level tech solution; it must address the deep-rooted, systemic issues our community faces. Survey participants consistently pointed to a fragmented and disconnected system as a primary obstacle. Major challenges include finding resources that match a person’s specific needs, that have current availability, and whose eligibility requirements the person meets. To be truly effective, AI development must aim at solving these underlying problems rather than simply adding a new layer of technology.
Staying Connected
Thank you to everyone who completed our survey. We do this work to support you and our community, and we know we must build alongside you to earn our place as a trusted partner.
If you’re interested in collaborating with us or learning more about One Degree:
- Join our Community Feedback Group
- Visit our FAQs
- Reach out to us at help@1degree.org