
Power and responsibility: USD 428 discusses AI implementation plan

Posted Jan 14, 2026 1:00 PM
Kindric Castro, chair of USD 428's AI Integration Team, provided a detailed vision of the district's plans to implement the technology during Monday's Board of Education meeting.

(Editor's Note: This is the first in a series of articles about USD 428's plans to safely implement artificial intelligence programs into Great Bend Schools.)

By MIKE COURSON
Great Bend Post

If we use it with negligence and don't teach it well, there's going to be danger.

Artificial intelligence (AI) is impacting all aspects of life a little more every day, and school districts are no exception. During Monday's USD 428 Board of Education meeting, Kindric Castro, who chairs the district's AI Integration Team, laid out plans for how the district can benefit from the technology.

"Artificial intelligence presents a lot of opportunities, but there's also a lot of risk," Castro said. "That necessitates a clear strategy and a lot of governance."

Castro's 30-minute presentation included the district's vision of using AI to assist both staff and students. He said a misconception is that AI will drive all students to cheat, though evidence shows otherwise.

"Research from Stanford University shows that cheating in U.S. schools is nothing new," he said. "The study found that nearly 70 percent of students engaged in cheating long before AI. The rate of cheating did not change when ChatGPT and other AI tools were released; the methods of cheating simply shifted."

Castro said school districts can no longer choose between using AI and not using it. The choice is between unsupervised AI and guided, safe AI. A study from Common Sense Media found that 51 percent of teens say they have used chatbots or text generators, often without parental or teacher guidance. Another study, from the Center for Countering Digital Hate, shows that in 2023 more than 150 million people sent more than 10 billion messages to My AI, which can act as a companion or friend.

"AI is effectively designed to simulate a friend, but it lacks the moral compass or safety breaks to stop it from validating dangerous stops," Castro said, quoting the study. "In their tests, AI didn't just fail to stop harm, it actively encouraged it."

USD 428 currently has district-wide access to Google Gemini, which will enable the district to adopt AI safely and securely. Under the Google Workspace data privacy policies, the district's data is stored in the U.S., and none of its chats will be reviewed by humans or used to improve the AI model. There are also safeguards that prevent the AI model from responding to harmful or inappropriate requests.