Pennsylvania to pilot OpenAI’s ChatGPT in local government

Pennsylvania has signed up for a ChatGPT Enterprise plan, allowing the commonwealth’s government employees to use OpenAI’s generative artificial intelligence to complete day-to-day tasks, or so Governor Josh Shapiro hopes.

“Pennsylvania is the first state in the nation to pilot ChatGPT Enterprise for its workforce,” OpenAI boss Sam Altman said. “Our collaboration with Governor Shapiro and the Pennsylvania team will provide valuable insights into how AI tools can responsibly enhance state services.”

Staff working in Pennsylvania’s Office of Administration (OA) will test how the multimodal AI chatbot improves or impedes their work as part of a pilot study. The experiment is said to be the first approved use of ChatGPT by US state government employees, and will assess whether the tool can be used safely and securely, and whether it boosts productivity and operations… or not. Remember, this thing hallucinates and will confidently make stuff up.

Shapiro’s office has launched an AI Governing Board that has consulted experts to figure out how to use the technology responsibly. 

“Generative AI is here and impacting our daily lives already – and my Administration is taking a proactive approach to harness the power of its benefits while mitigating its potential risks,” Gov Shapiro said this week. 

“By establishing a generative AI Governing Board within my administration and partnering with universities that are national leaders in developing and deploying AI, we have already leaned into innovation to ensure our Commonwealth approaches generative AI use responsibly and ethically to capitalize on opportunity.” 

Tools like ChatGPT can generate text and images from an input description, helping knowledge workers draft emails, create presentations, or analyze reports. Government departments across America, at least, are interested in test driving content-making machine-learning tools, though officials seem concerned the technology could expose sensitive information.

Last year, the United States’ Space Force forbade employees from using generative AI models. The military org’s chief technology and innovation officer, Lisa Costa, said the technology poses “data aggregation risks.” Any secret info ingested by the software could potentially be used to train future models, depending on the setup, which could then regurgitate military information to others, she claimed.

The ban is temporary, however, and may be lifted in the future as the US Department of Defense figures out how to deploy the technology safely and securely. Deputy Secretary of Defense Kathleen Hicks launched Task Force Lima, a group led by the Pentagon’s Chief Digital and Artificial Intelligence Office, to investigate how military agencies can integrate generative AI capabilities internally and mitigate national security risks.

Under President Biden’s “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government” executive order, federal government agencies have released information on how they use AI in non-classified and non-sensitive applications.

A few of these sound like they may fall under generative AI, such as the simulated X-ray images used by US Customs and Border Protection to train algorithms to detect drugs and other illicit items in luggage, or NASA’s ImageLabeler, described as a “web-based collaborative machine learning training data generation tool.” ®
