Ready, Set, AI: From Groundwork to Guidelines for a Policy That Works

This blog explores when it’s the right time to begin developing an Artificial Intelligence (AI) policy — and what foundational steps should come first to avoid common missteps and ensure the policy meets your organization’s needs. As part of its ongoing work to support informed and ethical policy development, the Kansas Health Institute (KHI) offers a practical starting point for organizations considering the use of AI — or even questioning whether they should explore it at all. The piece invites readers to reflect on key questions, such as what problem you’re trying to solve — or what process you’re hoping to enhance or transform — and whether AI is truly the right solution. With a strong foundation, organizations can move beyond the hype and create thoughtful policies that balance innovation with responsibility.
The contents of this blog expand upon the concepts discussed in the evidence-informed tool Developing AI Policies for Public Health Organizations: A Template and Guidance, developed by KHI in partnership with Health Resources in Action and Wichita State University’s Community Engagement Institute and supported by the Public Health Infrastructure Grant from the Centers for Disease Control and Prevention. The tool is designed to guide public health organizations in developing their own AI policies.
The Role of Artificial Intelligence in Public Health Transformation
AI holds transformative potential for the public health sector. From improving public health communications and enhancing predictive modeling and disease surveillance to streamlining administrative operations, AI can help organizations build capacity, respond more efficiently to complex challenges, and remain innovative in addressing the needs of the communities they serve. Used thoughtfully, AI tools can enhance, not replace, the critical judgment and expertise of public health professionals by allowing public health organizations to do more with existing resources. However, AI is not a one-size-fits-all solution. As with any powerful technology, its value lies not only in what it can do, but in the intentions and context behind how it is used.
Before the Policy: Laying the Groundwork for AI Readiness in Your Organization
Finding the Right Momentum to Begin
Knowing when to begin developing an AI policy can be challenging, and many organizations are still figuring out when that moment arrives. Certain signs may indicate the time is right, such as growing organizational interest in exploring AI and increasing questions about how to approach it within your organizational context. These early indicators might include colleagues asking what AI tools are permitted, how they should (or shouldn’t) be used, and who is responsible if an error occurs due to AI-generated output. Questions like these often signal that your organization may be ready, or even overdue, for a structured conversation about AI.
First Steps Toward an AI Policy:
✅ Determine the rationale for using AI.
✅ Assess AI literacy levels across the organization.
✅ Confirm your organization’s AI use cases.
✅ Establish an AI working group and define key roles.
✅ Develop guiding principles and scope questions to help bridge inspiration to action.
Don’t Skip the Foundation
It can be tempting to dive straight into drafting an AI policy, but doing so without foundational work could lead your organization down the wrong path and result in a policy that does not fit its needs. One of the most important building blocks of a meaningful AI policy is understanding what problems your organization is trying to solve by using AI in the first place, what processes you hope to enhance or transform, and whether AI is truly the right solution. Take a step back and ask:
- What problems are we trying to solve?
- What aspects of our work are we trying to transform or enhance?
- What approaches have we already tried, and what were their limitations?
- Based on the needs we’ve identified, what tools or strategies could help us move forward?
- Where might AI be a good fit — and where might it not be?
Clear answers to these questions will help shape a policy that is grounded in purpose, not one developed simply in reaction to AI hype.
Building AI Literacy: The Other Essential Ingredient
The second key milestone is building AI literacy. Without a shared understanding of what AI is, how it works and the types of tasks it can support, it can be difficult to fully develop a policy that aligns with your organization’s needs, values and goals. This need for shared understanding leads to an important question that many organizations are grappling with: what should come first — policy or literacy?
Policy or Literacy First? Understanding the Trade-Off
There’s no clear consensus in the field. Policy provides guardrails for safe and ethical AI use, but developing effective policy requires an understanding of AI’s capabilities and potential uses. At the same time, building that understanding often depends on hands-on experience with AI — which, without policy in place, can raise ethical or operational risks. This creates a tension: waiting for literacy can delay needed safeguards, but jumping into policy without experience can result in guidelines that are either too vague or overly rigid.
Low-Risk Exploration as a Launch Point
One practical way to resolve this tension is to create opportunities for staff to explore AI through low-risk tasks — defined by your organization, and in compliance with relevant legal, regulatory and ethical requirements — while the more detailed AI policy is still being developed. Typically, low-risk tasks are those that don’t involve sensitive data or critical decisions. These early learning experiences help staff build familiarity with AI and begin shaping the direction of your organization’s policy.
To make this more tangible, consider how this approach can look in a public health setting. For example, low-risk tasks might include using AI tools to summarize meeting notes, condense content for slides, identify action items from meeting transcripts, draft routine emails or generate ideas for community outreach campaigns. These everyday applications offer a safe environment to explore AI’s potential while also fostering a shared understanding of its limitations and potential uses.
Track Use, Reflect, and Refine
As your team experiments, it’s also valuable to track which tools are being used, for what purposes and what lessons emerge — these observations can help refine your policy so it reflects actual use cases and organizational needs.
Once the Need for an AI Policy is Confirmed, What’s Next?
Once your organization has established a rationale for using AI, begun building AI literacy and confirmed the need for an AI policy, the next step is to turn that intention into a clear and actionable policy. This involves assembling a team, defining policy scope and making sure that your policy supports your organization’s mission, vision, existing policies and the realities of day-to-day operations.
First, Build Your AI Team
Building an effective AI policy starts with bringing people with diverse expertise to the table. The depth and complexity of your policy should align with your organization’s unique needs, the potential risks associated with your AI use, and the resources you have available, including staff capacity. While engaging a wide range of expertise is ideal, don’t let limited capacity hold you back. You can begin developing a thoughtful, values-driven AI policy using the people and insights already within your organization, along with available resources such as Developing AI Policies for Public Health Organizations: A Template and Guidance. Start with who you have and build from there.
Who To Include — and Why It Matters
One practical way to begin is by forming a cross-functional AI working group. This team doesn’t need to be large or perfectly staffed from the start; it just needs to bring together key perspectives that reflect your operations, goals and values. Ideally, it should include individuals with technical expertise to evaluate feasibility and risks, legal and compliance professionals to ensure alignment with laws and regulations, organizational leadership, operations staff who understand how AI may affect day-to-day workflows, and someone who works directly with the communities and other stakeholders your organization serves. If you don’t have all the necessary expertise in-house, consider tapping into peer organizations, external partners or trusted advisors to fill the gaps. Including individuals from the communities your organization serves can also provide valuable insights and help shape your AI policy.
Let Use Inform the Policy — But Leave Room To Grow
The makeup of your team, and the degree of complexity of your policy, should reflect how your organization is currently using or planning to use AI. If your organization is likely to use only publicly available generative AI tools such as Copilot or ChatGPT for low-risk tasks, a foundational, principles-based policy may be sufficient. But if you’re considering integrating AI into high-stakes uses, such as decisions that affect access to services, eligibility screening or other automated decision making, you’ll need more detailed guidance, oversight structures and accountability mechanisms, such as requiring human review to validate AI outputs. Regardless of where you’re starting, begin with what’s practical for your context and grow your approach as your use of AI evolves.
Next, Define the Policy’s Scope, Purpose and Principles
Once your team has been assembled, you can focus on the structural details of your policy. Before writing specific provisions, it’s important to determine:
- What is the purpose of the policy?
- Who should the policy apply to? Does it apply to all staff, certain departments, roles, contractors or vendors?
- What key principles (e.g., workforce sustainability) should be included in the policy?
At this stage, begin identifying which roles and responsibilities will be needed for policy oversight and enforcement. This helps clarify expectations and ensures accountability is built into the policy’s design from the outset.
Example of Key Principle
Workforce Sustainability: The use of AI tools will be focused on the primary goal of augmenting, rather than replacing, human workers.
Why Not to Name Specific AI Tools in Your Policy
AI policies should avoid naming or regulating specific tools. There are several reasons for this approach. First, the pace of technological change means that tool-specific regulations can quickly become outdated. Second, many tools are built on similar underlying technologies, making it difficult to draw clear boundaries between them. What appears as a distinct tool today may simply be a variation of another — or evolve into something entirely new tomorrow. A more sustainable approach focuses on guiding principles, intended uses and risk levels to remain flexible and relevant over time.
Developing Policy Provisions — Key Topics to Consider
Typically, sections in an AI policy focus on addressing potential risks associated with AI use while also maximizing its benefits. Some key topics to consider including in your policy are:
- Data privacy
- Bias mitigation
- Human oversight
- Transparency
- Community engagement
- Capacity building
- Authorship and copyright
- Environmental impacts
Balancing Guidance With Guardrails
As you proceed with deciding how to build provisions within each topic, keep in mind that policy provisions should offer practical guidance while also setting boundaries to prevent harm and promote the responsible, values-aligned use of AI. For example, if you are drafting a provision related to data privacy while using generative AI tools like ChatGPT or Copilot, your policy might include language such as:
“[Organization] prohibits users from transmitting personal, sensitive or protected information to generative AI tools. Any data shared with AI systems must be suitable for public release and must comply with all applicable legal and regulatory frameworks. Additionally, users must follow organizational data security protocols to prevent unauthorized access, leaks or breaches related to AI use.”1
Navigating Constraints When Drafting AI Provisions
Finally, it’s crucial to keep your policy practical and grounded in reality. Provisions should be realistic to implement within your organization’s current capacity, needs and structure — and should also reflect what is technically feasible given the current state of AI.
For example, provisions requiring full human comprehension of the processes AI uses to generate output — often referred to as the explainability of AI — should be crafted with a realistic understanding of what is technically and operationally feasible.
Many advanced AI models are often described as “black boxes” because their internal processes are not easily understood — even by the developers themselves. This complexity makes full explainability challenging — and sometimes unattainable — for many systems.
Before developing an AI policy, and during the early stages of identifying use cases AI might support, organizations should assess which applications require full explainability, particularly those tied to high-impact decisions in areas such as hiring, eligibility for services or resource allocation. In lower-risk use cases, such as administrative streamlining, explainability may be desirable but not essential. If the policy includes provisions around explainability, they can reflect this nuance by emphasizing proportionality based on risk.
This issue connects to a broader debate in AI governance: Should the focus be on making all AI systems explainable, or on ensuring they are trustworthy and aligned with ethical standards, even if not fully explainable?
Conclusion
Developing an AI policy doesn’t need to be overwhelming — especially if you start with the right mindset. Focus on guiding principles and risk levels rather than specific tools, and take time to build AI literacy.
Most importantly, keep your policy practical, flexible and rooted in your organization’s day-to-day reality. Build your team, start where you are, learn as you go, and refine your approach as both your use of AI and the field itself continue to evolve. A valuable resource in this process is Developing AI Policies for Public Health Organizations: A Template and Guidance, which offers adaptable language and a structured framework to help you get started and stay aligned with best practices.
As part of that refinement, build in a plan for policy maintenance and review. AI technologies and use cases change rapidly, so your policy should include a clear process — such as an annual review cycle or a standing committee — to revisit and revise provisions regularly, ensuring continued relevance and accountability.
Endnotes
1. Adapted for Developing AI Policies for Public Health Organizations: A Template and Guidance from the Mayor’s Office for the City of Baltimore, Maryland. (March 20, 2024). Mayoral Executive Order Establishing Principles and Policy Governing Use of Generative Artificial Intelligence.
While this blog references the resource Developing AI Policies for Public Health Organizations: A Template and Guidance, which is supported by funds made available from the Centers for Disease Control and Prevention (CDC) of the U.S. Department of Health and Human Services (HHS), this blog itself was not supported by CDC or HHS funding. The contents are solely those of the author(s) and do not necessarily represent the official views of, nor an endorsement by, CDC/HHS or the U.S. Government.
About Kansas Health Institute
The Kansas Health Institute supports effective policymaking through nonpartisan research, education and engagement. KHI believes evidence-based information, objective analysis and civil dialogue enable policy leaders to be champions for a healthier Kansas. Established in 1995 with a multiyear grant from the Kansas Health Foundation, KHI is a nonprofit, nonpartisan educational organization based in Topeka.