The State of Indiana has adopted an enterprise-level policy governing the use of Artificial Intelligence (AI) within state government. The State of Indiana AI Policy is issued and monitored by the Office of the Chief Data Officer (OCDO), in cooperation with the Chief Privacy Officer (CPO) and the Management Performance Hub (MPH).
While AI offers significant potential to enhance government services for Hoosiers, it also presents unique challenges that require careful management. State leadership and employees must understand the benefits and risks of AI, including bias, privacy, and security. The OCDO acknowledges both the opportunities and risks associated with AI use in state government and draws on the National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF) to balance innovation with responsibility, establishing guidelines that enable the efficient and ethical use of AI by state agencies while protecting individuals and communities from potential negative impacts.
To complement the AI Policy, the OCDO also issued the State Agency Artificial Intelligence Systems Standard, which outlines the rationale behind the AI Readiness Assessment process required for the implementation or any use of AI by a state agency. The standard requires the submission of a Readiness Assessment Questionnaire prior to the implementation or use of an AI tool or system. Readiness Assessment submissions are reviewed by the MPH AI Review Team, under the leadership of the CPO, to ensure ethical and privacy standards are considered prior to deployment.
- Note: If an agency is already utilizing an AI system or tool, it should submit an AI Readiness Assessment Questionnaire as soon as the system or tool is identified. Contact ResponsibleData@mph.in.gov with any questions or concerns.
Once reviewed and approved, the Readiness Assessment submission results in a State of Indiana AI Policy Exception Grant by the CPO, allowing the submitting agency to use the AI tool or system. It is the responsibility of the submitting agency to provide follow-up assessments either annually or after substantial changes occur to the AI system/tool or its use case (whichever comes first).
AI Readiness Assessment Submissions
An agency should submit an OCDO AI Readiness Assessment Questionnaire if any of the following are true: (1) AI is already in use, (2) a program or system upgrade includes AI tools that the agency intends to utilize, or (3) a new system, program, or tool that includes AI will be implemented (this includes in-house developments, commercial/“off-the-shelf” procurements, and custom third-party developed solutions/programs).
Submit an AI Readiness Assessment Questionnaire
An Out of Scope Affirmation Form should be completed if any of the following are true: (1) an update or upgrade of an existing program/system includes AI tools that the agency does not intend to utilize, or (2) a new program/system includes AI tools that the agency does not intend to utilize.
- Note: If an agency submits an Out of Scope Affirmation Form, it does not need to submit an AI Readiness Assessment Questionnaire.
Frequently Asked Questions
- What is allowed by the State of Indiana Artificial Intelligence (AI) Policy?
- State employees may use AI systems that have been assessed and approved by the Office of the Chief Data Officer (OCDO).
- The use of any AI system(s) not approved pursuant to the AI Policy is expressly prohibited. Approvals for use are administered via AI Policy Exceptions granted by the OCDO, in conjunction with the Chief Privacy Officer, through the AI Readiness Assessment process.
- The policy applies to all AI Systems, including those that are open source, developed by a State Agency, developed by a third party, purchased as commercial off-the-shelf solutions, and combinations thereof.
- Are there any exceptions to the AI policy? How does an AI system get approved by the OCDO?
- Exceptions to the policy may be granted as a result of the AI Readiness Assessment process prescribed by the AI Policy. This process includes a risk assessment and review by the Chief Privacy Officer and a team of subject matter experts within the Management Performance Hub (referred to as the MPH AI Team).
- Exceptions are reevaluated annually in a manner prescribed by the OCDO and CPO.
- What role does the Office of the Chief Data Officer play in AI implementation? / Who do I contact with AI-related questions?
- The OCDO issues and monitors the State of Indiana AI Policy and, in cooperation with the Chief Privacy Officer and the MPH AI Team, reviews AI Readiness Assessment submissions and grants AI Policy Exceptions. For AI-related questions, contact ResponsibleData@mph.in.gov.
- What is considered an AI System under the State of Indiana AI policy?
Under this policy and in accordance with the NIST AI Risk Management Framework, an AI System is defined as an engineered or machine-based system that can generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments for a given set of objectives. AI Systems are designed to operate with varying levels of autonomy.
- What should I do if I'm unsure whether a system qualifies as AI under this policy?
If you're unsure whether a system qualifies as AI under this policy, you should consult with your designated Agency Privacy Officer (APO) or the OCDO via ResponsibleData@mph.in.gov. They can provide guidance on whether the system falls under the policy's definition of an AI System and what steps need to be taken.
- Who reviews and approves AI implementation proposals? / What is the AI Readiness Assessment Process for a Policy Exception?
- AI Readiness Assessment Questionnaires are reviewed by the MPH AI Team, which includes reviewers from Legal & Privacy, Data Science, Data Governance, and Enterprise Solutions teams. Each reviewer considers the objectives, efficacy, and risks of the proposed AI Implementation Activities from their professional perspective.
- The agency wishing to utilize an AI system must submit an AI Readiness Questionnaire for consideration by the MPH AI Team. The Questionnaire provides information for an initial triage of risk level, which assists in prioritizing the submissions. Please see the AI Readiness Assessment Process Flow diagram. Most low- and moderate-risk submissions are automatically granted an exception, but the MPH AI Team reserves the right to apply more scrutiny at any time. High-risk submissions are fully reviewed by the team.
- What if a system has AI features or capabilities that I do not intend to use?
If an agency does not intend to utilize the AI features or capabilities of a specific system or software, it should complete an Out of Scope Affirmation form. This form serves as documentation and fulfills the requirement, set by IOT’s Software Authorization Request, to consult with the OCDO/Management Performance Hub (MPH).
- Note: Completing this form and submitting it to IOT indicates that the agency is willing to take necessary measures to disable or block the AI features in the method(s) prescribed by IOT.
- Who is responsible for ensuring compliance with the AI policy in my agency?
The Agency Privacy Officer is responsible for ensuring compliance with the AI policy in your agency. This individual is designated under the State of Indiana Policy: Information Privacy and is responsible for submitting readiness and maturity assessment documentation.
- What happens if there's a violation of the AI policy?
For state employees, a violation may lead to removal from relevant AI Implementation Activities and a review of their access; it can also constitute employee misconduct. For external partners, violations may result in removal from AI Implementation Activities and a review of their access, potentially leading to contractual remedies. State agencies that violate the policy may face termination of relevant AI Implementation Activities or other remedial actions as determined by the OCDO or IOT.
- What is the NIST AI Risk Management Framework and why is it important?
The NIST AI Risk Management Framework (AI RMF) is a widely used framework adopted by the State of Indiana for managing risks associated with AI systems. It is important because it provides a structured approach to identifying, assessing, and mitigating risks related to AI implementation. The framework emphasizes responsible AI practices and helps organizations think critically about the context and potential impacts of AI systems, enhancing trustworthiness and cultivating public trust.
- How does this policy affect AI systems we've already implemented?
The policy applies to all AI Systems, including those that have already been implemented. For existing systems, the agency should submit an AI Readiness Questionnaire and receive an AI Policy Exception in order to be in compliance.
- Do we need to provide notice to individuals when using an AI system?
Yes, notice must be provided to individuals when using some AI systems. The policy requires a 'just-in-time' (JIT) notice to adequately inform users of processes associated with their interaction with the AI system. This notice should be posted or presented at the point of information collection or interaction, making it clear that AI is involved and detailing relevant data processing that might otherwise be unclear. These notices must be reviewed and approved by the State Chief Privacy Officer.
- How often are AI Policy Exceptions reviewed?
AI Policy Exceptions must be reviewed when significant changes are made to the system, or annually, whichever occurs first. Significant changes include new policies or procedures affecting the AI system, merging with other systems, changes in stakeholder management or ownership, modifications to accessibility or processing, alterations to the character of information in the system, or significant modifications in the content or scope of AI system outputs. It is the responsibility of the agency using the AI system to identify and report significant changes. MPH initiates the annual review process, which includes a self-report form for the agency using the AI system to complete.
- How does OCDO/MPH classify the risk level of an AI system?
AI systems are classified into three risk levels: High-Risk, Moderate-Risk, and Low-Risk. The classification is based on the system's potential impact and scope of application.
- High-Risk systems generally have broad-context applicability or may impact fundamental rights, safety, or critical sectors.
- “Broad-context” means a system that operates across diverse domains and/or impacts individuals in unpredictable ways.
- Moderate-Risk systems typically have narrow-context applicability or are deployed in less sensitive contexts.
- “Narrow-context” means a system that operates in a specific domain and/or conducts well-defined tasks with limited scope.
- Low-Risk systems have very limited ability to cause harm and the risks are easily mitigated.
- What documentation is required for the AI Readiness Questionnaire?
Required documentation for the AI Readiness Questionnaire includes (but is not limited to): executed contract(s) for the requested system(s) and/or system Terms & Conditions, data flow documentation/diagram(s), and executed Data Sharing Agreement(s) (if applicable).
- If an AI tool is accessible (i.e., not blocked by IOT), am I allowed to use it? Are we allowed to use integrated AI tools pushed through in updates by vendors to existing systems (e.g., Microsoft Copilot)?
- No. The only AI tools that are allowable for use by state employees are those covered under AI Policy Exceptions granted via the AI Readiness Questionnaire process. If you wish to use the AI tool, you should work with your Agency Privacy Officer to submit an AI Readiness Questionnaire to have an AI Policy Exception granted for the AI tool.
- No. If a vendor pushes an update that includes an AI tool, the agency should report it to IOT immediately.
- Is there a list of approved AI tools available?
At present, a “whitelist” (list of approved tools) is not available. However, MPH and IOT are working jointly to create one for State of Indiana agency reference.
- If an AI tool has been “whitelisted” by IOT, is an AI assessment still required prior to utilization?
This is situational, as data sources and data use can vary from use case to use case, even for the same AI tool.
- Is an AI assessment only required for tools/projects involving generative AI, or does traditional AI also require an AI Assessment?
Any type of AI use, tool, or system is required to be reviewed through the OCDO AI Readiness Questionnaire process, and use is only permitted once an AI Policy Exception is granted.
- What is MPH looking for when reviewing the AI assessments?
The OCDO/MPH AI Review Team looks at the use case for the AI tool/system, including the type of data to be used as input and the scope of use. Risk is assessed based on the responses to the AI Readiness Questionnaire. See “How does OCDO/MPH classify the risk level of an AI system?” above.
- Can anyone submit an AI assessment?
It is at the discretion of the agency as to who is able to submit the AI Readiness Questionnaire. The Questionnaire requires the name and email address of the Agency Privacy Officer (APO), as they are responsible for ensuring the agency is compliant with the State of Indiana AI Policy, and a copy of the submission is sent to the email address provided. MPH has provided a roster of designated APOs for reference by those completing the Questionnaire.
- When should an AI assessment be submitted?
An AI Readiness Questionnaire should be submitted as soon as an AI system/tool is identified as a resource to be used by the agency. Ideally, this occurs prior to procurement and installation of the tool (IOT will not approve a Software Authorization Request for systems that include AI without either an AI Policy Exception or an Out of Scope Affirmation form). However, if AI is identified in a system/tool that is already in use, it is important to submit a Questionnaire as soon as possible.
- If our agency is collaborating with another agency on an AI project – which agency should complete the assessment submission?
The agency completing the AI work should be the one to submit the AI Readiness Questionnaire. Please always feel free to reach out to ResponsibleData@mph.in.gov with any questions or for assistance in determining the responsible party for the AI Readiness Review.
- Will agency leadership of the submitting agency be looped in when MPH receives an AI assessment submission?
(See also the question above: “Can anyone submit an AI assessment?”)
While we cannot individually contact the agency leadership of every submitting agency, MPH has put various controls in place to facilitate communication.
- The Questionnaire asks the submitter to affirm that their agency’s leadership is aware and supportive of the use of the requested tool/ system.
- The submitter must provide the name and email address of the Agency Privacy Officer (APO), and a copy of the submission is sent to the provided email address. MPH provides a link to a roster of APOs as a reference for those completing the form.
State of Indiana Resources
- State of Indiana Policy: Information Privacy
- State of Indiana Standard: Agency Privacy Officer Job Description
- List of Agency Privacy Officers
To view all policies, standards, procedures and guidance released from the OCDO, visit the Indiana OCDO Policies page.