
US Space Force halts ChatGPT-like tool use because of security concerns: Report


At least 500 Space Force personnel have been affected, as reported by the former chief software officer of the department.

The United States Space Force has implemented a temporary ban on the use of generative artificial intelligence (AI) tools by its personnel while on duty. The measure is aimed at safeguarding government data.

Members of the Space Force have received notifications that they are not permitted to use web-based generative AI tools for creating text, images, or other media unless granted specific approval. This information is derived from an October 12 report by Bloomberg, referencing a memorandum addressed to the Guardian Workforce (Space Force members) on September 29.

In the memorandum, Lisa Costa, Space Force’s deputy chief of space operations for technology and innovation, acknowledged that generative AI has the potential to revolutionize the workforce and enhance the speed at which Guardians operate. However, she also raised concerns about current cybersecurity and data handling standards, emphasizing the need for more responsible adoption of AI and large language models (LLMs).

The decision has already affected at least 500 individuals in the Space Force, the branch of the U.S. Armed Forces dedicated to safeguarding U.S. and allied interests in space, who were using a generative AI platform called “Ask Sage.” This information comes from Bloomberg, citing remarks from Nick Chaillan, the former chief software officer for the United States Air Force and Space Force.

Chaillan criticized the Space Force’s decision, writing, “Clearly, this is going to put us years behind China,” in a September email to Costa and other senior defense officials.

He added, “It’s a very short-sighted decision.” Chaillan pointed out that the U.S. Central Intelligence Agency and its departments have developed their own generative AI tools that meet data security standards.

Concerns have grown in recent months that LLMs could inadvertently expose private information to the public. In March, Italy temporarily blocked the AI chatbot ChatGPT over suspected data privacy breaches, only to reverse the decision about a month later.

Notably, tech giants such as Apple, Amazon, and Samsung have also taken measures to either ban or restrict their employees from using AI tools similar to ChatGPT in their workplace.
