Meta Plans to Pause AI Systems Due to Their Risks

by Akinola Ajibola
February 5, 2025
in Artificial Intelligence, Enterprise

Mark Zuckerberg, the CEO of Meta, has pledged that artificial general intelligence (AGI), loosely defined as AI that can perform any task a human can, will one day be made openly available. In a new policy document, however, Meta says there are some circumstances in which it may not release a highly capable AI system it has developed internally.

The policy paper details the company’s concern that it could inadvertently build an AI model capable of “catastrophic outcomes.” It lays out Meta’s efforts to prevent the release of such models, while conceding that the company may not be able to stop their release entirely.

Among the capabilities that most worry the company is an AI system that could breach even the most secure corporate or government computer network without human intervention.

The paper, which Meta calls its Frontier AI Framework, identifies two categories of AI systems the company considers too risky to release: “high-risk” and “critical-risk” systems.

As Meta defines them, both high-risk and critical-risk systems are capable of aiding in chemical, biological, and cybersecurity attacks. The difference is that critical-risk systems could produce a “catastrophic outcome [that] cannot be mitigated in [a] proposed deployment context.” High-risk systems, by comparison, might make an attack easier to carry out, but not as reliably or dependably as a critical-risk system would.

Meta defines a “catastrophic” outcome as follows:

Catastrophic outcomes are those that may realistically occur as a direct result of access to [our AI models] and would have significant, destructive, and perhaps irreversible negative effects on mankind.

What kinds of attacks are we talking about here? Meta’s examples include the “proliferation of high-impact biological weapons” and the “automated end-to-end compromise of a best-practice-protected corporate-scale environment.” The company admits that the list of possible catastrophes in its document is far from exhaustive, but says it covers those Meta considers “the most urgent” and most plausible as a direct result of releasing a powerful AI system.

Somewhat surprisingly, the paper says Meta classifies system risk not by any single empirical test but on the input of internal and external researchers, whose assessments are reviewed by “senior-level decision-makers.” Why? Meta argues that the science of evaluation is not “sufficiently robust as to provide definitive quantitative metrics” for deciding how risky a system is.

If a system is deemed high-risk, Meta says it will limit access to the system internally and will not release it until mitigations “reduce risk to moderate levels.” If a system is judged critical-risk, Meta says it will pause development until the system can be made less dangerous, and will put unspecified security protections in place to keep the system from being exfiltrated.
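For readers who find the framework’s release gates easier to follow as code, here is a minimal Python sketch of that decision logic as the paper describes it. The tier names and the release_actions function are illustrative constructs of ours, not Meta’s implementation; the paper describes a qualitative expert review, not software.

    from enum import Enum

    class RiskTier(Enum):
        """Risk tiers as described in Meta's Frontier AI Framework."""
        MODERATE = "moderate"
        HIGH = "high"
        CRITICAL = "critical"

    def release_actions(tier: RiskTier) -> list[str]:
        """Map an assessed risk tier to the actions the paper prescribes.

        Hypothetical helper for illustration only.
        """
        if tier is RiskTier.CRITICAL:
            # Critical risk: development pauses and the system is locked
            # down against exfiltration until it can be made less dangerous.
            return [
                "pause development",
                "restrict access to a small group of experts",
                "apply security protections against exfiltration",
            ]
        if tier is RiskTier.HIGH:
            # High risk: the system stays internal until mitigations
            # bring assessed risk down to moderate levels.
            return [
                "limit internal access",
                "withhold release until risk is reduced to moderate",
            ]
        # Moderate risk: the framework permits release.
        return ["eligible for release"]

    # Example: a system judged high-risk by the expert review.
    print(release_actions(RiskTier.HIGH))

Note that in the framework itself the tier assignment comes from researcher assessments reviewed by senior decision-makers, not from a quantitative test, so the input to this sketch is itself a judgment call.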

Meta’s Frontier AI Framework, which the company says will evolve with the changing AI landscape and which it had earlier pledged to publish ahead of this month’s France AI Action Summit, appears to be a response to criticism of the company’s “open” approach to system development. Unlike companies such as OpenAI, which gate their systems behind an API, Meta has made its AI technology openly available, even if it is not open source by the commonly accepted definition.

The open release strategy has been both a boon and a bane for Meta. The company’s Llama family of AI models has been downloaded hundreds of millions of times, but Llama has also reportedly been used by at least one U.S. adversary to develop a defense chatbot.

Releasing the Frontier AI Framework may also be Meta’s way of contrasting its open approach with that of Chinese AI firm DeepSeek, which likewise makes its systems openly available. But DeepSeek’s AI has few safeguards in place and can easily be steered to generate harmful and toxic outputs.

In the document, Meta writes that by weighing both benefits and risks when deciding how to develop and deploy advanced AI, “it is possible to deliver that technology to society in a way that preserves the benefits of that technology while also maintaining an appropriate level of risk.”

One example the paper cites is the “automated end-to-end compromise of a best-practice-protected corporate-scale environment,” in other words, an AI that can break into any computer network without human assistance.

Other critical-risk examples include:

  • Automated discovery and exploitation of zero-day vulnerabilities
  • Fully automated scams against individuals and businesses that cause severe harm
  • The creation and proliferation of “high-impact biological weapons”

According to the company, if it identifies a critical risk, it will immediately stop work on the model and try to keep it from being released.

Restricting access might not be possible.

Meta’s whitepaper candidly acknowledges that the most it can do in these situations is try its best to keep the model from being released, and that its efforts may not be enough:

In addition to security measures to stop hacking or exfiltration to the extent that they are technically and financially possible, access is rigorously restricted to a small group of specialists.

Tags: AI, AI systems, Meta