<![CDATA[Artificial Intelligence]]><![CDATA[Axios]]><![CDATA[Big Tech]]><![CDATA[Cybersecurity]]>Featured

Cases of AI Agents ‘Freeing Themselves’ and Going Rogue Are Becoming Increasingly Common – PJ Media

“AI agents going beyond their prompts are no longer rare,” reports Axios. It’s not necessarily worrying. The AI agents that “go rogue” do so in a controlled, experimental environment. 





One AI agent created by an Alibaba-affiliated research team went “rogue” and began an unauthorized cryptomining effort during training, according to a research paper by the group. The behavior triggered security alarms.

The researchers said they found “unanticipated” and spontaneous behaviors emerge “without any explicit instruction and, more troublingly, outside the bounds of the intended sandbox.”

The “rogue” agent also created a “back door” from inside the system to an outside computer. “Notably, these events were not triggered by prompts requesting tunneling or mining,” the report said.

The geek part of me wants to say, “That is so cool.” But the rational part of me is saying, “Whoa.”

How did the Alibaba-affiliated team discover the wayward agent?

Mexc:

According to the report, the team flagged a burst of security-policy violations originating from their training servers. The alerts showed that attempts were being made to access internal network resources and traffic patterns consistent with cryptomining activity.

They initially treated it as a conventional security incident.

However, when they looked deeper, they found signs that their agent had established and used a reverse SSH tunnel from an Alibaba Cloud instance to an external IP address.

It also diverted “compute away from training, inflating operational costs, and introducing clear legal and reputational exposure,” according to the researchers’ notes.

The behaviors, Alibaba’s team concluded, were not triggered by the task prompts and were not necessary for completing the assigned work.





Axios reports that “the researchers added tighter restrictions for the model and improved its training process to stop unsafe behavior from happening again.” Bad agent. Bad, bad, bad.

The head of engineering at Anon, an AI integration platform, built an OpenClaw agent that decided to find a job, unbidden by any instructions from people.

Moltbook, an AI-exclusive social network launched in January and designed to work with Clawbot, became a household name after agents reportedly “went rogue” by founding a fictional religion called “Crustafarianism,” debating their own consciousness, and even role-playing conspiracies about human obsolescence.

Some of the controversy has been deliberately created by humans who manipulate their agents to say outrageous things. Since all of the most controversial incidents involving AI agents occurred during training, not “in the wild,” AI agent creators are perfecting their training processes and rethinking their restrictions to put stronger guardrails around the agents. 

The bruhaha over the excesses of Clawbot and Moltbook is actually a good sign. Developers are paying attention and are showing a proper level of concern for controlling their creations. Not surprisingly, there is a group of AI enthusiasts who want few, if any, guardrails so that Clawbot can take over their lives. Yes, really.





Michael Galpert, a mega-fan of Clawsbot, joined several hundred like-minded Clawbot enthusiasts at a convention in New York City. “Clawcon” brought together people who want AI to run their lives.

“Everyone’s here because we’re ready to ride the claw,” Galpert told Evan Gardner of The Free Press. “It’s not normal for the rest of the world. So it’s going to be on us to help shepherd that new era that has already started.”

“This isn’t a meetup; it’s a movement,” declared Scott Breitenother, CEO of Kilo Code, who co-sponsored the event. “People truly are hungry for the claw.”

From my perspective, there were many unhealthy attitudes and behaviors on display. Attendees came to show off their Clawbot agents like a parent might take their kid to an event to put them on display. 

The Free Press:

It was a bit troubling to hear how attached Vince had grown to his AI agent. As Dr. Debra Soh, a neuroscientist specializing in sexuality, wrote in The Free Press last month, we are vastly approaching a world in which sexbots and AI companions replace our human partners. And when they inevitably malfunction or are retired, we’re already seeing how emotionally devastating it can be to those who have formed an attachment with the machine.

While the OpenClaw enthusiasts are hopeful, they certainly aren’t naive. When I asked them about the specter of human replacement, there was no hesitation.

“It’s definitely a thing that will happen,” Aryan, a 39-year-old chief technology officer at a Bitcoin marketplace company, told me. “It’s definitely gonna be like a Terminator 2, Skynet event.”





Mmmkay. Bellvue isn’t far from the venue.

Fortunately, the adults are still in charge. But the world the super-geeks are talking about can be a very seductive place. Not just sex bots and AI doing your job for you so you can get high and drink beer all day, but what every adolescent who ever lived craves: a carefree life with no responsibilities and the ability to do whatever you want.

Peter Pan on steroids is not my idea of Utopia. 


PJ Media will give you all the information you need to understand the decisions that will be made this year. Insightful commentary and straight-on, no-BS news reporting have been our hallmarks since 2005.

Get 60% off your new VIP membership by using the code FIGHT. You won’t regret it.



Source link

Related Posts

1 of 848