AI Governance Resources

What’s Happening?

  • We are planning a 2-month discovery period to understand AI’s role in our operations, analytics, productivity, and decision-making.
  • After the discovery period, general LLM access (e.g., ChatGPT, Copilot) and domain-specific AI tooling will be paused.
  • The pause is expected to take effect after the 2-month grace period, during which we encourage collaboration, self-reporting, and discovery; what we learn will help us prioritize which AI tooling continues to be used.

Why the Pause?

  • By pausing LLMs, we aim to assess their impact, address any unintended consequences, and ensure responsible usage.
  • We will review current usage to address any ethical, security, and bias-related concerns.
  • AI tools are powerful, but their complexity and potential risks require careful consideration.

Strategic Objectives

  • Protect and better serve our tribal citizens, associates, and intellectual property.
  • Maintain
  • Identify mission-critical verticals, with the goal of de-risking those first.
  • Understand how AI is being used in those mission-critical verticals and its impact on strategic tribal goals.
  • Educate and train associates in their respective domains.
  • Serve as an archetype and leader to other tribes.

What Does This Mean for You?

  • Help us understand how AI plays a role in your work by telling us which AI tools you use today.
  • If you do not tell us which AI tools you use today, there is a risk that you will lose access to them after the grace period.
  • Be prepared for a temporary halt in LLM usage after the grace period ends.
  • We recommend reviewing the FAQs first; if you still have questions, contact your manager or the Help Desk (submit an Itapela ticket, email [email protected], or call 580-642-4357).

Frequently Asked Questions (FAQs)

Why is LLM usage being paused?

To evaluate its impact and ensure responsible use, while addressing ethical, security, and bias concerns.

What should I do during the grace period?

Continue using AI tools while anonymizing sensitive data and self-reporting any issues.
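As a rough illustration only (this is not an approved tool or an official procedure, and the patterns and the redact_text helper below are hypothetical examples), the sketch shows one simple way obvious identifiers could be masked before text is shared with an external AI tool:

    import re

    # Illustrative sketch: mask a few common identifier patterns before text is
    # shared with an external AI tool. These patterns are examples only and will
    # not catch every kind of sensitive data.
    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

    def redact_text(text: str) -> str:
        """Replace detected email addresses and phone numbers with placeholders."""
        text = EMAIL_RE.sub("[EMAIL]", text)
        text = PHONE_RE.sub("[PHONE]", text)
        return text

    print(redact_text("Contact Jane at jane.doe@example.com or 405-555-0123."))
    # -> Contact Jane at [EMAIL] or [PHONE].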

How long will the pause last?

The duration will be communicated after the grace period, with regular updates on progress.

Does the pause apply to standard search engines?

No, but report any anomalies, as search results may still be AI-influenced.

What happens to AI projects that are not reported during the grace period?

Unreported projects may be paused, allowing time for analysis and addressing risks. However, priority will be given to multi-user applications, both existing and new, when appropriate.

What if my team relies on AI for critical tasks?

Critical tasks will be evaluated on a case-by-case basis, with exceptions made if necessary.

How will we be kept informed during the pause?

Regular updates will be provided via email and internal communications.

Will training on responsible AI use be available?

Training and resources are being developed to promote responsible AI use.

Which policies cover AI and software use?

Refer to TW IT 304 Acceptable Use of Information Systems Policy for guidelines on AI and software use.

Who should I contact if I have questions?

Reach out to your manager or the IT Help Desk.

How will AI projects be evaluated for continued use?

Projects will be assessed based on their impact, alignment with business goals, and ethical considerations.

Will AI use be allowed to resume after the pause?

Yes, but with close monitoring to ensure responsible practices.

What happens to existing AI vendor contracts?

Vendor contracts will be reviewed during the pause to align with our goals.

How does this affect our AI roadmap?

The roadmap will be adjusted based on insights from the discovery period.

Guiding Principles

When evaluating AI systems for responsible usage, we focus on six key principles to ensure ethical, secure, and reliable deployment across the organization. These criteria guide us in maintaining accountability and trust in our AI initiatives:

  1. Privacy – Safeguarding individual data privacy is paramount. We evaluate whether privacy obligations are fully understood and adhered to in AI applications.
  2. Fairness and Bias Detection – Ensuring that data and models are free from bias is crucial. We assess whether AI models represent diverse users fairly within the testing data and outputs.
  3. Explainability and Transparency – Clear explanations of AI decision-making are vital. We measure how well the model’s behavior can be communicated in simple, non-technical terms.
  4. Safety and Security – Robust systems that prioritize safety are a must. We consider unintended consequences and the overall security of the AI in our evaluations.
  5. Validity and Reliability – Consistent performance is critical. We ensure that the data and AI model are monitored for accuracy and effectiveness over time.
  6. Accountability – AI systems must be governed with clear responsibility. We ensure a risk assessment is conducted and that any decisions made by AI are traceable to accountable parties.

These guiding principles form the backbone of our approach to responsible AI, helping us navigate the complex landscape while upholding the values of security, transparency, and ethical integrity.

Glossary of Terms

Artificial Intelligence (AI)

Technology that enables computers to perform tasks like decision-making, language processing, and visual recognition.

Bias

Unintended prejudice in AI, often due to biased data or flawed algorithms.

Chain-of-Thought (CoT) Prompting

A prompting method that guides the AI through explicit, intermediate reasoning steps before it gives a final answer, helping it stay coherent and on track.
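For illustration only (the wording below is a made-up example, not taken from any internal tool), the same question can be asked with and without an explicit chain of thought:

    # Illustrative only: the same request phrased two ways.
    plain_prompt = "How many 30-minute sessions fit into a 4-hour workshop?"

    # The chain-of-thought version spells out the intermediate reasoning steps.
    cot_prompt = (
        "How many 30-minute sessions fit into a 4-hour workshop? "
        "Think it through step by step: first convert the hours to minutes, "
        "then divide by the session length, then state the final answer."
    )

    print(cot_prompt)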

Generative AI

AI that creates new content like text, images, or music based on existing data.

Hallucination

When an AI system generates information that is not based on reality or input data, often due to inadequate training.

Large Language Model (LLM)

AI models trained on vast amounts of text to understand and generate human-like language (e.g., virtual assistants).

Machine Learning (ML)

A subset of AI where systems improve through experience and data without explicit programming.

Prompt

Input or command given to an AI system to generate a response or complete a task.

Token

A unit of text (e.g., a word or part of a word) used by AI models to process language.
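As a small illustration (this sketch assumes the open-source tiktoken library and one particular encoding; different models split text differently):

    import tiktoken  # assumed to be installed; any tokenizer shows the same idea

    enc = tiktoken.get_encoding("cl100k_base")  # one common encoding; models differ
    ids = enc.encode("Responsible AI governance")
    print(ids)                                  # a short list of integer token IDs
    print([enc.decode([i]) for i in ids])       # the text piece each ID stands for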
