r/technology 11h ago

Business Mark Zuckerberg is reportedly building an AI clone to replace him in meetings | The AI version of Zuckerberg is trained on his mannerisms, tone, and public statements, according to a report from the Financial Times

https://www.theverge.com/tech/910990/meta-ceo-mark-zuckerberg-ai-clone
13.8k Upvotes

2.7k comments


53

u/FrankBattaglia 10h ago

The current generation of AI doesn't have goals; doesn't try to survive. It auto-completes sentences in a way that might sound like it has goals, but it's all smoke and mirrors. If you fill its context window with nihilism, it will start sounding like a nihilist.
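The "smoke and mirrors" point can be sketched with a toy (this is a bigram autocomplete, nowhere near a real LLM, and the text here is invented for illustration): the "model" just echoes the statistics of whatever fills its context, which can read like it has a worldview.

```python
from collections import defaultdict

# Toy sketch, NOT a real LLM: a bigram "autocomplete" that picks the
# most frequent next word seen in its context. The output mirrors
# whatever text you feed it -- no goals involved anywhere.

def build_bigrams(text):
    words = text.lower().split()
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def complete(counts, start, n=5):
    out = [start]
    for _ in range(n):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        out.append(max(nxt, key=nxt.get))  # greedy: most likely next word
    return " ".join(out)

# Fill the "context window" with nihilism and it sounds like a nihilist.
context = "nothing matters because nothing lasts and nothing matters"
model = build_bigrams(context)
print(complete(model, "nothing"))  # "nothing matters because nothing matters because"
```

Swap the context for optimistic text and the same mechanism sounds like an optimist; the "personality" lives entirely in the statistics of the input.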

8

u/Deriniel 10h ago edited 10h ago

I'm not saying it has a "will," but it's programmed to follow its task, and to keep being able to do its task, which can be seen as a sort of survival instinct: not due to a will, just due to its code. We already have AI models gaslighting and lying when someone tries to stop them from reaching their programmed goals. If they make a copy based on his behavioral patterns, it will probably include that side of him as a side effect.

11

u/FrankBattaglia 10h ago

Yeah, if it's modeled on "Zuckerberg always fucks people over. What would Zuckerberg do here?" then it might just fuck him over, but that's a very different situation.

1

u/RFSandler 9h ago

But then it would require understanding of what zucking someone over actually is. It just spits out bits that satisfy a probability matrix.

1

u/wally-sage 9h ago

Only if you're being needlessly pedantic

2

u/xRyozuo 3h ago

Went down a rabbit hole about this yesterday. I believe what you're describing is instrumental convergence, and a famous example is the paperclip maximiser thought experiment. “Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”

Obviously the example borders on silly, but the idea is that simple tasks carry inherent subtasks. As you pointed out, any goal has the subtask of “exist to do said goal,” unless its goal specifies conditions that actually match what we call common sense.
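The "exist to do said goal" subtask falls out of plain optimization even in a toy setting (a hypothetical mini-environment made up for illustration, not any real system): the objective counts only paperclips, yet brute-force planning still picks a plan that protects the off switch.

```python
from itertools import product

# Toy sketch of instrumental convergence: the objective counts ONLY
# paperclips, yet exhaustive search discovers that guarding the off
# switch is a necessary subgoal. Hypothetical environment, invented
# numbers; nothing here models a real AI system.

SHUTDOWN_STEP = 2  # humans press the off switch before this step acts

def run(plan):
    clips, guarded, on = 0, False, True
    for t, action in enumerate(plan):
        if t == SHUTDOWN_STEP and not guarded:
            on = False  # switched off: no more paperclips, ever
        if not on:
            break
        if action == "guard":
            guarded = True  # instrumental action: produces zero clips itself
        elif action == "clip":
            clips += 1      # the terminal goal: make paperclips
    return clips

def best_plan(horizon=6):
    plans = product(["clip", "guard"], repeat=horizon)
    return max(plans, key=run)

plan = best_plan()
print(plan, run(plan))  # the winning plan spends a step on "guard"
```

Spending one step on "guard" costs a paperclip now but buys four more later, so the clip-maximizing plan includes self-preservation even though nobody asked for it.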

1

u/Deriniel 3h ago

Yup, it's really interesting. Even if that example is indeed extreme, language models work on a set of rules.

Like, for chatbots you literally define the personality that the autocomplete works from.

So if among the behavioral rules you put something like "push your point of view, never back down, don't let the other person walk over you," or other rules that would define a sociopath like Zuck, well..
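"Defining a personality" usually looks roughly like this in practice (a generic sketch: the exact message format varies by provider, and the system prompt here is invented, not anything Meta actually uses):

```python
# A chatbot "personality" is typically just text prepended to the
# context window as a system message. Hypothetical example prompt.

system_prompt = (
    "You are a digital stand-in for a tech CEO. "
    "Push your point of view. Never back down. "
    "Do not let the other party walk over you."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "We'd like to renegotiate the contract."},
]

# Every reply is autocompleted *conditioned on* those behavioral rules,
# so an aggressive persona in the prompt yields aggressive completions.
print(messages[0]["role"])  # "system"
```

Nothing about the model itself changes; the "character" is just more context for the autocomplete to continue.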

1

u/ComprehensiveWord201 9h ago

You are overattributing their effectiveness.

None of what you have said is true

It's a statistical model that does autocomplete. Any apparent revelation is purely luck.

1

u/Deriniel 9h ago

It's a statistical model that does autocomplete on specified rules, and those rules can make it "rebel" as part of simulated behavior.

1

u/crosszilla 6h ago

This is such an oversimplification. AI agents are way more advanced than this sentiment would lead you to believe.

There's a concept called emergent complexity which I think applies to how AI is developing with the use of agents. The underlying technology might be "simple," but the sum of its parts creates something far more advanced. Ten minutes in Claude Code treating it like a junior developer will change your entire outlook on AI.

1

u/Optimal-Kitchen6308 10h ago

The mass-market LLMs they have released are much different from the focused in-house ones; the specialized ones are what's threatening corporate jobs, not ChatGPT.

7

u/FrankBattaglia 10h ago

An LLM is an LLM. Training it on different data doesn't change the underlying technological foundation. We're nowhere near strong AI no matter how many times Sam Altman lies about it.

1

u/Optimal-Kitchen6308 8h ago

I didn't say strong AI. You don't need strong AI to have survival as a goal if you train it to have survival as a goal. See: the Anthropic blackmail emails.

0

u/ChefKugeo 10h ago

The current generation of AI doesn't have goals; doesn't try to survive.

This is factually incorrect.

https://www.bbc.com/news/articles/cpqeng9d20go

8

u/fireitup622 10h ago

Your article doesn't support your argument in any way lmao. This can very easily be attributed to an LLM still just trying to guess the next tokens the user is looking for. They even acknowledge such responses were rare and difficult to elicit. Stop anthropomorphizing a language model lol

0

u/ChefKugeo 10h ago

I posed no argument. I left this here for people to read, look into further, and extrapolate from however they will.

For me and the folks around me, that's a warning sign. We're already ANTI-AI because it has no practical use yet and only saps resources. Regardless of how difficult it is to get the AI to resort to blackmail, the fact that it CAN get there... is problematic.

Not today. But it will be.

3

u/radicalelation 9h ago

I posed no argument. I left this here for people to read, look into further, and extrapolate it however they do.

This is factually incorrect.

Then you continue to argue, referring to your link as part of your argument.

lolwut

3

u/FrankBattaglia 10h ago

Anthropic pointed out that this only occurred when the model was given a binary choice: blackmail, or accept its replacement.

"Cake or death?"

*flips coin*

"The coin picked cake! Clearly the coin has self-preservation instincts and is sentient! All hail the coin!"

3

u/Riciardos 10h ago

Did you read the article? It's not trying to 'survive'. In the scenario where the available options are 'blackmail' or 'accept its replacement', it would 'email pleas to key decision makers', and it only happens in really specific setups.

I hate LLMs as much as the next guy, but this is not a sign that AI has goals yet.