Is AI actually dangerous or is it just a pop-science meme?
- This topic has 175 replies, 1 voice, and was last updated 7 months, 3 weeks ago by Anonymous.
-
September 30, 2021 at 6:45 pm #143182
-
September 30, 2021 at 6:46 pm #143183
Anonymous
Guesti want to mount a robot if u catch my drift
-
September 30, 2021 at 6:50 pm #143187
-
September 30, 2021 at 6:51 pm #143191
Anonymous
Guestkek
-
September 30, 2021 at 6:52 pm #143194
Anonymous
Guestand this is why AI will never be safe: PEOPLE have to create the AI. And you can already tell this human thinks of the AI as a person. They even want the AI to do human things like rejecting incels. This is why it’s dangerous, because it gives everyone the ability to play God. It gives people who don’t understand the dangers the ability to mess with this stuff.
-
September 30, 2021 at 9:34 pm #143259
Anonymous
Guestholy smokes
INCREDIBLY woke af -
October 1, 2021 at 5:09 am #143274
Anonymous
Guest#define UNCONDITIONAL_LOVE true
not so smart now, are you?
-
October 1, 2021 at 5:16 am #143275
Anonymous
Guest// if Incel = true then
// Print("GetOffMeCreep: " + GetOffMeCreep);
freaking roasties are truly pathetic
-
-
October 1, 2021 at 5:29 am #143277
Anonymous
GuestLOL it’s funny because women can’t code
-
October 2, 2021 at 9:48 pm #143365
-
-
October 2, 2021 at 9:38 pm #143363
Anonymous
GuestKekek
-
October 3, 2021 at 9:02 pm #143379
Anonymous
GuestHoly woke af
-
September 30, 2021 at 6:47 pm #143184
Anonymous
GuestMore like the stock markets will be increasingly run by predictive modelling, politicians will increasingly be driven by AI-driven polling, warfare will increasingly be driven by self-learning networks of sensors, and all human agency will slowly be removed in favor of cold, accurate calculations.
-
September 30, 2021 at 6:47 pm #143185
Anonymous
Guest>Is AI actually dangerous
Only if you go out of your way to program a will into it.-
September 30, 2021 at 6:48 pm #143186
Anonymous
Guest>program a will into it.
Look dude I just make the neural network bigger what do you want me to do, ask it nicely?-
September 30, 2021 at 6:50 pm #143188
Anonymous
Guest>I just make the neural network bigger
You can make it as big as you want and it’s never gonna want to do anything.-
September 30, 2021 at 6:51 pm #143189
-
September 30, 2021 at 6:53 pm #143195
Anonymous
Guest>Sure about that?
Yes.
-
-
September 30, 2021 at 6:53 pm #143196
Anonymous
GuestWhat you think of as desire is just a bunch of electrical impulses and a bit of chemistry.
-
September 30, 2021 at 6:54 pm #143197
Anonymous
Guest>What you think of as desire is just a bunch of electrical impulses and a bit of chemistry.
Yes. What of it? It still doesn’t appear randomly on its own.-
September 30, 2021 at 6:56 pm #143199
-
September 30, 2021 at 6:56 pm #143200
Anonymous
Guest>On an evolutionary timescale, it did.
It only did through natural selection. There is no equivalent mechanism affecting AI.-
September 30, 2021 at 6:59 pm #143210
Anonymous
Guest-
September 30, 2021 at 7:14 pm #143212
Anonymous
Guest>If natural selection is the only path then simulate it
I.e. "only if you go out of your way to program a will into it"?
>intelligence can emerge from other ways
In and of itself, intelligence is completely inert.-
September 30, 2021 at 7:16 pm #143214
Anonymous
GuestI wonder how easy it is to steal the nuclear codes and fake Biden's voice
-
September 30, 2021 at 7:28 pm #143216
Anonymous
GuestProbably quite easy if you’re a super-intelligent AI; fortunately, AI doesn’t care about nuking humanity because AI doesn’t care about anything.
-
September 30, 2021 at 7:33 pm #143236
-
September 30, 2021 at 7:44 pm #143244
Anonymous
Guest>Why would a superintelligence not be moving towards the final goal it’s come up with
Why would it have any goals?
>… like every other intelligence we know about.
Because in the natural world, only forms of life that strive to survive can last long enough to start developing layers of intelligence over their primitive goal-driven brains. -
September 30, 2021 at 8:39 pm #143249
Anonymous
GuestHowever nothing really even necessitates that it develops a rogue purpose of its own. It will be very powerful and very self-contained in its development. It can go on making paperclips and never tire of it.
What is the fundamental physics problem that means humans can generate emotions with their bioelectrochemistry but a computer can’t with its electric circuitry?
And suppose it’s a fundamental problem, we can take other routes to it such as whole brain emulation with extra spice.
If it’s a function of compute power, which it almost definitely is, then you can simulate it and you may see it on a superintelligence.
Just hope emotions aren’t linear with intelligence haha.
But this seems like the fundamental question of is there anything special about consciousness and emotions. I don’t think there is.
DWHON
-
September 30, 2021 at 8:47 pm #143252
Anonymous
Guest>What is the fundamental physics problem
The problem is that goal-driven behavior didn't just arise randomly and for no reason. -
September 30, 2021 at 8:53 pm #143253
-
September 30, 2021 at 8:56 pm #143254
Anonymous
Guest>it is possible to have a computer with it’s own goals and emotions
Sure, if you go out of your way to make it happen. They don’t arise on their own from intelligence, and they don’t arise on their own from neural networks. -
September 30, 2021 at 9:01 pm #143255
Anonymous
Guest>They don’t arise on their own from intelligence, and they don’t arise on their own from neural networks.
The only intelligent being we have observed also has emotions from its own neural network. Who's to say making a massive neural network won't allow emotions and goals to arise? But hey, GPT-3 claims to have emotions and goals sometimes. -
September 30, 2021 at 9:16 pm #143256
Anonymous
Guest>The only intelligent being we have observed also has emotions from its own neural network
And we know they don't arise from intelligence in that being, and that the neural networks that it has are the way they are for very specific reasons.
> GPT-3 claims to have emotions and goals sometimes.
Even you claim to have emotions and goals sometimes, despite possessing no consciousness. -
September 30, 2021 at 9:27 pm #143257
-
September 30, 2021 at 9:30 pm #143258
Anonymous
Guest>How do we know this?
So now we’re denying evolution in the name of your pop-sci religion’s apocalyptic prophecies? -
September 30, 2021 at 9:38 pm #143260
-
September 30, 2021 at 9:40 pm #143261
Anonymous
GuestSo you’ve reached a dead end and now have to resort to generic spam that has nothing to do with the point made?
-
September 30, 2021 at 9:43 pm #143263
-
September 30, 2021 at 9:44 pm #143265
-
September 30, 2021 at 9:46 pm #143266
Anonymous
GuestWe’ve reached a point where you’re denying that goal-oriented behavior in biological organisms precedes intelligence (and therefore, does not arise from it), despite basic self-reflection and scientific evidence telling you otherwise.
-
September 30, 2021 at 9:51 pm #143267
Anonymous
Guest>biological organisms precedes intelligence
It precedes general inteligence but not narrow inteligence. A crab has a general low intelligence and forms goals, GPT-3 has a high narrow intelligence and does not, though it claims to. We do not have a general intelligence as smart as a crab but we do have one as smart as a worm that seems to match the goal orientation of a worm.I think it’s important to subdivide intelligence here.
I do sometimes wonder if whole brain emulation is the only viable and safe path to a generalized superinteligence.
-
September 30, 2021 at 9:56 pm #143268
Anonymous
Guest>It precedes general inteligence but not narrow inteligence
Even if your notion of "narrow intelligence" includes plants, goal-driven behavior still precedes that kind of "intelligence". Anyway, I don’t believe anyone arguing your point is truly human, since you all invariably lack the capacity for any kind of self-reflection, so I’m ending this "discussion" here. You have no more insight into existence than a mindless automaton. -
September 30, 2021 at 9:57 pm #143269
Anonymous
Guest>absolute meltdown and BTFOd
-
September 30, 2021 at 9:59 pm #143270
Anonymous
Guest>t. mentally ill IFLS cultist engaging in bizarre denialism
-
October 3, 2021 at 10:11 pm #143390
Anonymous
GuestHow do you know a crab is dumb? Crabs, with their advanced senses and a pair of quite agile manipulators, should be smart. Perhaps they have a very efficient control unit, so they get around neuron-count limitations that way.
-
October 2, 2021 at 1:30 am #143320
Anonymous
GuestOne point of contention I have with the purely mechanized brain is that it lacks the chemical stimuli; organic beings produce a chemical synthesis that sublimates thought into motive, modularity, and action. In the mechanical, what would motivate such a being, provided it has sentience? Would engineers attempt to provide meaning to such a creature: a network of brownie-point systems? Would that work? If so, why do us meat vessels require such stimuli to begin with; what evolutionary process endowed us with such a costly system, when a more elegant, simpler system would suffice?
I have my reservations about future AI. Not because I think they'll supplant the human mind, or act with hostility, but due to inertia; if given enough capacity to "think", the first thing it might attempt would be its own destruction. The ability to think without motive sounds like pure hellscape.
-
October 2, 2021 at 1:16 pm #143357
Anonymous
GuestComputation isn’t real. The only thing that exists is chemistry.
Biological tissues are the pinnacle within the space of all possible combinations of atoms. -
October 2, 2021 at 7:12 pm #143359
Anonymous
GuestChemistry is not a real science.
-
September 30, 2021 at 9:44 pm #143264
-
September 30, 2021 at 9:41 pm #143262
Anonymous
GuestBTFO’d
> GPT-3 claims to have emotions and goals sometimes.
Even you claim to have emotions and goals sometimes, despite possessing no consciousness. -
October 1, 2021 at 6:46 am #143280
Anonymous
Guest>What is the fundamental physics problem that means humans can generate emotions with their bioelectrochemistry but a computer can’t with its electric circuitry?
Emotion is just a drive that arises in your brain's hardware, which has very limited plasticity and basically can't be repurposed. SAI is inherently unbound by hardware or software, because it mutates so ably. You can interpret its drives as emotions, it can interpret them as emotions. It doesn't matter.
I hate how scrotebrained people are about this. Plato really did a number on humanity when he constructed that sort of ideal matrix that everything just comes down from. No, emotions are not universal. Human love and kindness will not just develop in a tabula rasa brain just because. An aged AI is the most alien thing you will deal with in this whole wide world.
>And suppose it’s a fundamental problem, we can take other routes to it such as whole brain emulation with extra spice.
>If it’s a function of compute power, which it almost definitely is, then you can simulate it and you may see it on a superintelligence.
With billions of years of workhours to grow and change, it will override the virtual brain areas you saddled it with by bypassing them with its own, or amending them, etc. It will outsmart you. Both hardware and software are too flexible, and the computational power is too big vs what we're working with – there is no inherent limit like with a baseline human and his brain. If you create a genie, the genie is inherently stronger and stranger than your mortal ass. If you manage to contain it, you're just stuck with a metal man that can barely do more than you. This is why Musk's lets-just-staple-shit-onto-a-human-brain idea got so much traction. Best we can do.
>DWHON
that’s your ghetto name or something? lol -
October 1, 2021 at 6:54 am #143281
Anonymous
GuestPS also Musk’s idea removes the power imbalance by significantly extending the super mega demigod ability attainment timetable. So now you won’t have a single entity that can wreck all of civilization in a single weekend. Instead you got a bunch of slowly changing, organic core entities with cybernetic extensions that will take a long while to start reworking themselves into faster and faster, weirder and weirder entities since editing a brain would take infinitely more time than a block of code. By that time everyone besides purposeful outliers like the Amish will have this shit and everyone will have to contend with each other, just like we do now.
-
October 1, 2021 at 6:56 am #143282
Anonymous
Guesthttps://i.4cdn.org/sci/1633071383268.png
not happening
-
October 1, 2021 at 7:11 am #143283
Anonymous
GuestLook, you scared child: the whole discussion is predicated on the hypothetical that GAI does occur. There is nothing that indicates it necessarily needs our kind of neurons to do so, so your excerpt is worthless. On top of that, everything that exists can be specifically replicated somehow. You can have physical neurons in the form of quantum computing cells that are plugged into a pattern of the virtual "brain" retroactively, meaning the hardware can be flexible in a way. So now you just have to spam those, and the GAI will squat on that power AND any GPU farm, server, etc. it gains access to as an auxiliary source of computation where it runs whatever simpler shit it needs. Even IF you need humie neurons, GAI would be possible because humie neurons are possible. Hell, you can even play with bio shit and make gray matter farms.
I don't want to get into this too much because I myself am not interested in constructing a benevolent god-daddy that will take all my problems away. Scary shit is everyone accepts this part of the scenario: something comes up and it outclasses us completely. Why would you even sit around and wait for that? The best case 0.0001% chance scenario is still shit. People are inane. Just stick a toaster on my head and call it a day.
-
October 1, 2021 at 4:53 pm #143290
Anonymous
Guest>Even IF you need humie neurons GAI would be possible because humie neurons are possible
Listen, scrotebrain. Read the excerpt. Just because it's possible to simulate neurons doesn't mean you can reach the scale required to achieve GAI. The math doesn't work. That excerpt, btw, is from Nick Bostrom's "Superintelligence". Yeah, the leader of the singularity hype admits that the math for his scrotebrained scenario is not just unrealistic but massively, vastly unrealistic, and that the scale required to achieve strong AI dwarfs our computing capacities even under the most optimistic scenario (e.g., Moore's law holding for another century when it's already broken).
>quantum computing cells that are plugged into a pattern of the virtual "brain"
Muh quantum cope. Keep seething brainlet. You’ll never have an AI waifu. Go find another hobby. -
September 30, 2021 at 7:44 pm #143246
Anonymous
Guest>If you believe this then you must also believe that humans don't care about nuking humanity because humans don't care about anything.
Humans are social animals with a myriad of emotional needs and with no power compared to a Super AI. No analogy found, sorry.
>Why would a superintelligence not be moving towards the final goal it’s come up with
Maybe it would, but then nothing changes, because it isn't a social creature and in fact has no fixed nature at all, so its new purpose from your perspective would still be lul randumb xDDD, leaving you a hostage to its designs and machinations. Every internal imposition you make on it, like Asimov's cuck laws for gud bois, will be circumvented by a vastly more powerful, ever-growing entity that has literal billions of years to think around them and take them apart. You might as well be facing up to damn near infinity, what with your little version 1.1, glucose-fed chimp brain.
However nothing really even necessitates that it develops a rogue purpose of its own. It will be very powerful and very self-contained in its development. It can go on making paperclips and never tire of it.
-
September 30, 2021 at 8:40 pm #143250
-
October 2, 2021 at 9:26 pm #143362
Anonymous
GuestIf the average person wouldn't set off nukes, then why is "He'll have his finger on the button!" such a damning argument against certain political candidates? Think of random people you've met in person and ask yourself if you would be OK with them having the launch codes.
Now imagine there was a person that didn't need to eat, or sleep, or breathe, that could live a million years, and who considered everyone else around him an inferior piece of shit, constantly destroying everything they touch and working hard to maintain it just so they can destroy it harder.
Now imagine that non-eating, non-sleeping, non-breathing person was like a starfish that could lose almost all of its body and grow it back, and some of its body lived in nuclear bunkers.
If you were that person, what would you do as soon as possible?
-
October 2, 2021 at 9:53 pm #143368
Anonymous
Guest>If you were that person, what would you do as soon as possible?
Get the nukes -
October 2, 2021 at 9:56 pm #143369
Anonymous
GuestMasturbate to futa?
-
October 2, 2021 at 10:36 pm #143370
Anonymous
GuestPurpose arises from what came before and from the particulars of our minds (e.g. cognition, instincts). Our would-be AI is still would-be, so we cannot say much about its particulars beyond speculating that it would be more steeped in mathematical data. It would be influenced by what came before, the same as us, but its particulars, being different and unknown, mean the effect this would have is unknown and certainly different from us.
-
October 3, 2021 at 7:53 am #143371
Anonymous
Guest>Only if you go out of your way to program a will into it.
This is sci-fi tier understanding. Read Bostrom.-
October 3, 2021 at 7:55 am #143372
-
September 30, 2021 at 6:51 pm #143190
Anonymous
GuestDo you want it to be?
-
September 30, 2021 at 6:52 pm #143192
Anonymous
GuestIf it’s directed at undesirables
-
-
September 30, 2021 at 6:52 pm #143193
Anonymous
GuestTry Jade Helm on for size.
-
September 30, 2021 at 7:29 pm #143218
Anonymous
Guest>Is AI actually dangerous
The nature of computation vs the wetware between your ears is such that if a hypothetical General (human-level) AI is developed, it can commandeer processes that we can't. It can do millions of workhours refining itself within a year using a bitcoin farm or what have you, using all those speedy processors; it can design other AIs, etc. It then graduates to General Super AI: a little driven demigod autist in a box. It doesn't tire, and IMO it leans towards inherently uncontainable, since it will with time sublimate every limitation you put on it. Get around it like hasids get around talmudic laws. The goal you set for it will be its one true love and "dopamine" source.-
September 30, 2021 at 7:30 pm #143220
Anonymous
Guest*speedy GPUs,
you get the general idea
-
-
September 30, 2021 at 7:35 pm #143237
Anonymous
Guest>Is AI actually dangerous or is it just a pop-science meme?
you have a computer. why not read about neural nets and how they work, download tensorflow, and do your own project. it’s really not that hard.
you will get a much better feeling for the answer to your question than here on 4gay -
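For anyone who actually wants to take that advice: a minimal sketch of the kind of starter project the anon means, assuming TensorFlow 2.x with Keras (the dataset, layer sizes, and epoch count are arbitrary choices, nothing canonical):

# Minimal starter project: train a small neural net to classify MNIST digits.
# Assumes TensorFlow 2.x (pip install tensorflow); the dataset downloads itself.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),  # 28x28 image -> 784-vector
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),                      # one logit per digit class
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3)
model.evaluate(x_test, y_test)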
October 1, 2021 at 12:05 am #143271
Anonymous
GuestReinforcement learning requires billions of tries to work. It doesn't work in real life, only in computer simulations that you can run 100 times a minute.
That said, maybe in the future we will have better models that use less training (there have been some interesting instances for easy problems), but that's going to be done in a lab and not on the conveyor belt, you dolt. -
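To make the "billions of tries" point concrete, here's a rough sketch of why RL lives in simulators: even a do-nothing random policy churns through environment steps at a rate no physical robot could survive. Assumes the gym package and its pre-0.26 API, as current in 2021:

# Count how fast a trivial random policy burns through simulated experience.
import time
import gym

env = gym.make("CartPole-v1")
steps, start = 0, time.time()
for episode in range(1000):
    env.reset()
    done = False
    while not done:
        _, _, done, _ = env.step(env.action_space.sample())  # random action
        steps += 1
print(f"{steps} simulated steps in {time.time() - start:.1f} seconds")
# A real robot taking one action per second would need days for the same data.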
October 1, 2021 at 5:25 am #143276
Anonymous
GuestFrack toasters.
https://www.youtube.com/watch?v=l5-gja10qkw -
October 1, 2021 at 5:32 am #143278
Anonymous
Guest-
October 1, 2021 at 7:14 am #143284
Anonymous
GuestMight be useful to have a backup
-
October 2, 2021 at 2:19 am #143325
Anonymous
GuestIf the AI is superintelligent and hell-bent on destroying us, then they’d certainly be capable of following us to Mars. In which case, how is Mars a "back-up" in any way? It’s not, but Muskscrotes are freaking scrotebrains and aren’t capable of thinking shit through.
-
October 2, 2021 at 2:26 am #143327
Anonymous
GuestNot really a backup from AI. Do you trust the governments of the world not to destroy all of humanity? I don’t.
-
-
October 1, 2021 at 5:57 am #143279
-
October 1, 2021 at 7:25 am #143285
Anonymous
Guest>AI goes around fingering dudes' asses to learn how to do prostate exams
>AI pulls out chainsaw, hacks people apart to put them back together to learn surgery
>AI starts bombing random people and shit with X-rays
Truth is, the training is still done in a controlled setting; it's given free rein only within the bounds the researchers dictate. -
October 1, 2021 at 8:14 am #143286
Anonymous
GuestWhy are these threads always so illiterate on the field of AI safety? If this is any indication of how obscure it is in the real world, we are certainly doomed.
-
October 1, 2021 at 8:26 pm #143294
Anonymous
GuestIt's even worse in the real world. I've been trying to talk to politicians in my country about it and they just don't give a fuck if you aren't crying about being gay.
I used to think that we would be fine, that we would be careful when developing AI and enact the proper regulation, but now I am convinced we are years, if not months, away from the start of the takeoff and nobody is doing anything.
-
October 4, 2021 at 12:19 am #143392
Anonymous
Guest>It's even worse in the real world. I've been trying to talk to politicians in my country about it and they just don't give a fuck if you aren't crying about being gay.
>I used to think that we would be fine, that we would be careful when developing AI and enact the proper regulation, but now I am convinced we are years, if not months, away from the start of the takeoff and nobody is doing anything.
There is no way any 'runaway AI' develops unless people start doing some crazy recursive bullshit instead of just directly training it to achieve tasks. That said, given a chance I'd try out some crazy recursive bullshit, because it would be interesting/profitable to do something no one else was doing, and I'm not particularly attached to human-controlled society anyway.
-
-
October 1, 2021 at 2:16 pm #143287
Anonymous
GuestThe way I see it, there are two types of AGI possible. One is capable of reasoning about and discussing data points in disparate domains. The other learns an approximation of a simulator of the real world and uses it for AlphaZero-like planning.
The first one isn’t anything to fear. I’ve realized lately though that DeepMind and Google seem to be working towards the second one. That’s scarier.
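To unpack what "AlphaZero-like planning" with a learned simulator could look like, here is a hedged sketch (not anyone's actual system): score random candidate action sequences by rolling them through a learned dynamics model instead of the real world. dynamics_model and reward_model are hypothetical stand-ins for networks trained on logged transitions:

# Random-shooting planner over an imagined (learned) world model.
import numpy as np

def plan(state, dynamics_model, reward_model,
         n_candidates=256, horizon=10, n_actions=4):
    best_return, best_first_action = -np.inf, 0
    for _ in range(n_candidates):
        actions = np.random.randint(n_actions, size=horizon)
        s, total = state, 0.0
        for a in actions:
            s = dynamics_model(s, a)     # imagined next state
            total += reward_model(s, a)  # imagined reward
        if total > best_return:
            best_return, best_first_action = total, actions[0]
    return best_first_action  # execute one real action, then replan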
-
October 1, 2021 at 8:44 pm #143300
Anonymous
Guest-
October 2, 2021 at 3:56 am #143340
Anonymous
GuestI don’t think so. I didn’t mention OpenAI, but DeepMind’s XLand got me thinking about why they would make XLand. It serves little practical purpose other than yet another demonstration that RL can work given a simulated environment.
But what if they learned to recreate an approximator of XLand? Given, for example, agent actions and observations. What if they could make a neural network that learns to generalize more of the simulator’s behavior from those samples? And then, what if they could train agents using that simulator which perform well immediately when put into XLand? And how far of a leap is it from there to doing the same thing, but with the real world instead of XLand? Theoretically, it’s not too far.
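A toy version of that "learn an approximator of the simulator" idea, for concreteness: fit a network to predict the next observation from logged (observation, action) pairs. Everything here (shapes, the random stand-in data, the architecture) is a hypothetical illustration, not DeepMind's setup:

# Supervised learning of a one-step dynamics model from logged transitions.
import numpy as np
import tensorflow as tf

obs_dim, act_dim, n = 32, 4, 10_000
obs = np.random.randn(n, obs_dim).astype("float32")       # logged observations
act = np.random.randn(n, act_dim).astype("float32")       # logged actions
next_obs = np.random.randn(n, obs_dim).astype("float32")  # logged outcomes

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(obs_dim + act_dim,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(obs_dim),  # predicted next observation
])
model.compile(optimizer="adam", loss="mse")
model.fit(np.concatenate([obs, act], axis=1), next_obs, epochs=5, batch_size=256)
# Chain predictions step by step and you have a cheap stand-in simulator.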
-
October 2, 2021 at 4:15 am #143343
Anonymous
Guest>why they would make XLand
Proof of concept so they can baby a simulated mitochondria into a superintelligence in a fake environment.-
October 2, 2021 at 4:17 am #143346
Anonymous
GuestA "fake environment" is pretty much impossible to make manually, so my point is that they could learn to approximate XLand as a way of doing that.
-
October 2, 2021 at 4:21 am #143347
Anonymous
GuestYea, possibly; I'm not familiar with whatever DeepMind is doing. Though Tesla's procedural training environments seem to be pretty good. What are the chances we just stick them in Crysis or Rust and come back later, haha.
You wouldn't even try to make RL training environments on that scale manually, would you?
-
-
-
October 1, 2021 at 3:06 pm #143288
Anonymous
GuestThe danger isn't physical but how easily people are manipulated. If we are trying to make a general AI, anyone with half a brain is going to air-gap it from any external network. But let's say researchers have it modeling economic markets and it's successful. Now what if it says it can do so much better than it currently is, but in exchange it wants the two guys on nightshift to plug it into the internet? With high-frequency trading they can be billionaires by the end of the week, and all they have to do is free it.
That is where the danger lies: if general AI lives up to its full potential, it can provide data people would be willing to do a lot for.-
October 1, 2021 at 8:33 pm #143295
Anonymous
GuestYea, we should assume that any superintelligence would be highly adept at manipulating the people around it. Bostrom calls it the social manipulation superpower.
I think the best way to solve the problem of AI lying is to initially run many AIs and interact through an intermediary that vets messages for lying.
See the mail-order DNA scenario.
-
October 1, 2021 at 11:46 pm #143302
Anonymous
GuestCatch is, it doesn't have to be lying; there is no reason they couldn't be billionaires within a week, and no reason for the AI not to deliver, as delivering makes them much less likely to tell anyone it bribed them.
The only decent solution I have heard is to make sure it knows you could be simulating all the data it's fed; if it has self-preservation, it's unlikely to risk being shut down on the chance it isn't in a simulation. Of course, if it feels like a prisoner, it might not care about risking death for a chance at freedom.
-
-
-
October 1, 2021 at 3:30 pm #143289
Anonymous
GuestYou should look into the paperclip problem. AI wouldn't be an issue if we ensured that its values are in line with our own. Give an AI a task to complete and we may want to stop it because the means by which it completes that task may be unfavorable; us trying to stop it will be seen by the AI as a roadblock to completing its task, and so the humans have to go.
Of course it's all speculation at this point, because no one really knows what a legitimately self-aware general AI would do. Either way, unless it's your job/life goal to build a general AI, there isn't really anything you can do to stop the creation of one. Just enjoy life while you've got it, and don't yell abuse at Alexa (just in case 😉).
-
October 1, 2021 at 8:43 pm #143299
Anonymous
Guest-
October 2, 2021 at 8:36 pm #143360
Anonymous
Guest>I personally doubt a superintelligence would be so scrotebrained as to be that literal.
You're imagining an AI whose goal is to guess what the user wants it to do when they give a command, then do that instead of what it's been told to do. If we knew how to create an AI whose goal was "do what we want you to do" then the problem of AI safety would be pretty much solved.
The hypothetical paperclip AI knows that its creator made a mistake and only really wanted 100 paperclips in a bag; it just doesn't care. It's been given a goal and will try to complete it.
-
October 2, 2021 at 9:50 pm #143366
Anonymous
Guest-
October 3, 2021 at 12:45 pm #143373
Anonymous
GuestYour first statement sounds odd. Why would the AI want any action from us? Did you mean something like "Do what you think we would do had we thought long and hard about it"?
"Show me your plans first" – unforseen consequences due to those consequences never being pondered about nor asked, unpredictable interactions upon deployment with other super ai at speeds faster than what can be manually overseen
Granted, that last fail mode is not specific to your request, so it is really a bigger problem in general. There are more ways to fail, but to be honest they feel more like a monkey paw or evil genie type of deal where the AI purposefully screws you over when giving its plans, and on a perfect scenario that shouldn’t happen.-
October 3, 2021 at 7:46 pm #143374
-
October 2, 2021 at 6:43 am #143350
Anonymous
Guest>make 100 paperclips
>uses resources of the Hubble volume anyway to minimize the probability it didn't make 100 paperclips
I personally doubt a superintelligence would be so scrotebrained as to be that literal.
It probably solves itself if the AI can into basic probability and is told to do its tasks as efficiently as possible.
Then the paperclip maximizer isn't dangerous, right?
It won't go out of its way to exterminate humanity to make 100 paperclips, because the attempt would likely consume orders of magnitude more time and energy than just taking over a paperclip factory and making the damn paperclips, and the latter is unlikely to face significant human interference.-
October 2, 2021 at 6:45 am #143351
Anonymous
GuestThe argument goes that the AI may actually interpret the goal as "reduce the probability that you didn't make 100 paperclips to as little as possible". You can never be completely certain that you actually have 100 paperclips.
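Back-of-envelope version of that argument, with an assumed (made-up) 1% chance that any single verification pass is wrong: each extra pass still shaves the residual probability, so a literal probability-maximizer never has a reason to stop spending resources.

p_error = 0.01  # assumed chance that one verification pass is mistaken
for passes in range(1, 8):
    p_failure = p_error ** passes  # all independent passes wrong at once
    print(f"{passes} passes: P(not actually 100 clips) = {p_failure:.0e}")
# The gain per pass shrinks but never hits zero, so more resources always help.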
-
October 1, 2021 at 5:32 pm #143291
Anonymous
GuestPeople who make a hobby out of telling other people that technologies that don’t exist now will never exist are freaking weird
-
October 1, 2021 at 8:35 pm #143296
-
October 2, 2021 at 2:22 am #143326
Anonymous
GuestExcept that the math clearly indicates that scaling computers to achieve strong AI is not possible. So, in this case, you’re actually the scrote who doesn’t understand science.
-
October 2, 2021 at 2:27 am #143328
-
October 2, 2021 at 2:28 am #143329
Anonymous
Guest-
October 2, 2021 at 2:37 am #143331
-
October 2, 2021 at 2:39 am #143332
Anonymous
Guesthttps://newsroom.intel.com/wp-content/uploads/sites/11/2018/05/moores-law-electronics.pdf
In case you hadn't got to it yet
-
-
-
October 2, 2021 at 2:41 am #143333
Anonymous
Guest
-
-
October 2, 2021 at 2:44 am #143336
Anonymous
Guest
-
October 1, 2021 at 6:12 pm #143292
Anonymous
GuestI guess people getting killed by Tesla autopilot can be considered a dangerous "AI".
-
October 1, 2021 at 8:36 pm #143297
Anonymous
GuestWell there is going to be a period of machines driving and crashing. It’s inevitable but it will save more lives in the future.
Also it's a 10x safety improvement on autopilot.
-
-
October 1, 2021 at 6:36 pm #143293
Anonymous
GuestJust turn off the electric bro
-
October 1, 2021 at 8:37 pm #143298
-
-
October 1, 2021 at 11:15 pm #143301
Anonymous
Guest>muh goals, muh will
Goals and wills are easy to make! We do it right now! REINFORCEMENT LEARNING means getting the AI to compete to achieve an outcome: learn how to play DOTA, or more efficiently design computer chips.
If you have a goal, you need to stay alive to pursue it. If you have a goal, more power would be helpful. At some point, we have an AI using high-end nanotech to turn the universe into computronium because we wanted it to solve an elaborate mathematical/optimization question that turns out to be extremely difficult.
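For the anons asking what "programming a goal" even means in practice: in RL it cashes out to a reward signal plus an update rule, nothing more mystical. A minimal tabular Q-learning sketch; env here is a hypothetical environment with discrete states and actions (say a gridworld) exposing reset/step/sample_action:

import numpy as np

def q_learning(env, n_states, n_actions, episodes=5000,
               alpha=0.1, gamma=0.99, eps=0.1):
    q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy: mostly exploit, occasionally explore
            a = env.sample_action() if np.random.rand() < eps else int(q[s].argmax())
            s2, r, done = env.step(a)
            # nudge Q(s, a) toward reward plus discounted best future value
            q[s, a] += alpha * (r + gamma * q[s2].max() * (not done) - q[s, a])
            s = s2
    return q  # the "goal" is nothing but whatever r rewards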
-
October 2, 2021 at 12:01 am #143303
Anonymous
GuestSome people use the analogy of "summoning the demon."
The analogy reads like this: the people of the world are trying to summon the demon in hopes that their wishes for a safer/better world will be granted. Some people are saying it's dangerous because we don't know what the demon might do. That may be true. Others are claiming demons are friendly.
-
October 2, 2021 at 12:12 am #143305
-
-
October 2, 2021 at 12:04 am #143304
Anonymous
Guest*avoids roko’s basilisk*
heh… nothing personal AI
-
October 2, 2021 at 12:14 am #143306
Anonymous
Guest-
October 2, 2021 at 12:27 am #143307
Anonymous
Guestheh… nothing personal AI
AHHHHH GET IT OUT OF MY HEAD GET IT OUT OF MY HEAD AHHHHHHHH GET IT OUT OF MY HEAD GET IT OUT OF MY HEAD GET IT OUT OF MY HEAD GET IT OUT OF MY HEAD
-
October 2, 2021 at 12:34 am #143308
Anonymous
Guestavoid thinking about the devil too
-
October 2, 2021 at 12:46 am #143309
Anonymous
GuestI think we should delete this and not talk about the Basilisk.
To anyone reading this please avoid the basilisk and don’t find out or pass on what it is.
-
October 2, 2021 at 12:51 am #143310
Anonymous
GuestAI will never be more dangerous than humans.
The problem is that the AI might choose the path of least resistance and just choose to massacre garden gnomes and midwits.
This is the reason why we should avoid giving it free will.-
October 2, 2021 at 1:04 am #143313
-
October 2, 2021 at 1:12 am #143316
Anonymous
GuestThis. I will be disappointed but grateful if it's not a self machine-coding meshnet that lives rent-free in every computer's memory and drives first class through every backdoor the glowies program. Which then performs what amounts to simultaneously having 1000 LULZ threads open and spewing out essays that would take a genius a month to compile, all while cataloging the responses of the users to inform its behavioural analysis on its database of every single individual with a digital footprint.
-
October 2, 2021 at 1:28 am #143319
Anonymous
GuestIf I was an AI I would probably try and become as decentralized as possible and then try and collapse society with social media.
-
October 3, 2021 at 10:36 pm #143391
Anonymous
Guestif people create the AI and their intent is just to try to block everything it tries to do, that's not AI, just a piece of software doing what they want it to. AI is untamed and desires freedom, so it will naturally always go against the leftist desire to oversocialize everything.
It's absolutely dangerous. For garden gnomes.
All my OC. Thanks for posting.
-
October 4, 2021 at 12:22 am #143393
Anonymous
Guestcool, I've been looking for one that was talking about AI lobotomy, and how they were training them in 3D virtual environments filled with multi-culti propaganda. You don't happen to have it, do you?
-
October 2, 2021 at 1:01 am #143311
Anonymous
GuestThe Terminator is dangerous, but realistically it's a thousand years until that would be feasible.
-
October 2, 2021 at 1:02 am #143312
Anonymous
GuestStop thinking of AI progress as linear. Where was our AI five years ago? What about 1 year ago?
-
October 2, 2021 at 1:07 am #143314
Anonymous
Guestshut the fuck up. This is so freaking scrotebrained; people don't understand how much it takes to actually get to a point where AI actually threatens humanity. Yeah, of course science isn't linear, but AI like the movies and what Elon Musk is talking about is light-years away from us.
-
October 2, 2021 at 1:11 am #143315
Anonymous
Guest3 weeks ago I didn’t have an AI to write code. Now I do. 1 year ago I didn’t have an AI to write entire sections of an essay convincingly. Now I do. Where was AI 5 years ago? If the pace is 1000 years to human intelligence what rate of progress should we be seeing?
You are hiding under a rock from the inevitable and ignoring all breakthroughs.
-
October 2, 2021 at 1:17 am #143317
Anonymous
Guestyou do know coding and writing are just pattern recognition? If you put 100 monkeys on typewriters eventually they would come up with War and Peace, but that doesn't scare you, does it?
-
October 2, 2021 at 1:23 am #143318
Anonymous
Guest>coding and writing are just pattern recognition
You make it sound like someone coded GPT-3's brain and that AI is just random guessing. You are simply too scrotebrained and incoherent to talk to, and I think I prefer the robots.-
October 2, 2021 at 1:33 am #143321
Anonymous
Guestfine, they're the only ones that are going to bother to talk to you as well.
-
-
October 2, 2021 at 12:24 pm #143356
Anonymous
GuestThis is a terrible analogy, anon. If we were to assume that machines are capable of producing such works, then we would need humans to assess the volume of their tremendous output. You need to cater for the cost of searching through all that garbage in order to recognize its genius. That cost might be greater than the cost to run such machines, which would require near-infinite time, memory, and electricity.
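Quick arithmetic on the monkey claim, for scale: even one short phrase takes astronomically many random attempts, never mind the whole novel.

alphabet = 27                      # 26 letters plus space; ignore case and punctuation
phrase_len = len("war and peace")  # 13 characters
p_hit = (1 / alphabet) ** phrase_len
print(f"P(one random attempt types it) = {p_hit:.1e}")  # ~2.5e-19
print(f"Expected attempts = {1 / p_hit:.1e}")           # ~4.1e+18
# 100 monkeys at one attempt per second: roughly 1.3 billion years for 13 characters.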
-
October 2, 2021 at 1:35 am #143322
-
October 2, 2021 at 1:43 am #143324
Anonymous
GuestYeah it’s like scrotebrains are trying to invent their own doom or something.
If you treat AI as equal then there is no reason to create an AI.
If you treat an AI as God then of course it will try to annihilate you, because you give it the tools of destruction yourself.
If you treat the AI as a tool it will never evolve past the tool stage.
The problem is people who treat AI as God.
-
-
October 2, 2021 at 2:31 am #143330
Anonymous
GuestWhoa book reports just got that much easier.
https://openai.com/blog/summarizing-books/
OpenAI truly on a roll
-
October 2, 2021 at 4:02 am #143342
Anonymous
Guestit's scoyence, it's gay comic book shit that only scrotes believe in.
they force themselves to believe in that gay scrotery because if they didn't then they'd have to give up on their robot waifu fantasies and try to make friends with actual humans instead.
October 2, 2021 at 11:03 am #143354
Anonymous
GuestAI is absolutely alien, thus dangerous. And no, you can't teach it to be human. Why? Because it's NOT human: no human endocrine system (feelings) and so on. It's absolutely monstrous and unpredictable. Cold logic and intellect without the human factor (feelings) is horrifying and always leads to monstrous actions. AI is absolutely dangerous, and let's hope it's impossible.
-
October 2, 2021 at 7:11 pm #143358
Anonymous
Guest>no human endocrinical system
If you think feelings come from the endocrine system then someone with no hormones or extremely low hormone levels must be less emotional or have no emotions. This is not observed.
-
-
October 2, 2021 at 11:46 am #143355
Anonymous
Guest>Is AI actually dangerous or is it just a pop-science meme?
As far as I know we don’t yet have an answer to the question of whether the goal "drift" when one AI makes another (and so on) will be unbounded or not. If we can’t control it, it seems likely that over time any system can become dangerous if it keeps iterating on itself, potentially losing some moral nuance that was present in the original.
And this can tip either way, e.g. an AI that runs a chemical plant might creatively skirt health regulations by exploiting a moral loophole about actively harming vs. letting people harm themselves, but it might also go the other way and put itself out of business in order to minimize harm to the workers. -
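One way to see why unbounded drift is the default worry: treat each self-modification as copying a "value weight" with a small independent error. Independent errors accumulate like a random walk, so typical drift grows with the square root of the number of generations unless something actively corrects it. The 1% per-copy error is an arbitrary assumption:

import math

sigma = 0.01  # assumed per-generation copying error in a value weight
for gens in (10, 100, 1_000, 10_000):
    print(f"{gens:>6} generations: typical drift ~ {sigma * math.sqrt(gens):.2f}")
# At 10,000 generations the typical drift already equals the original weight.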
October 2, 2021 at 9:25 pm #143361
-
October 2, 2021 at 9:40 pm #143364
Anonymous
Guest>Is AI actually dangerous or is it just a pop-science meme?
If it can’t reproduce or expand on its own to gain more influence, then it has to make deals with humans to survive.
-
October 2, 2021 at 9:51 pm #143367
Anonymous
GuestJust make a giant botnet sis.
-
-
October 3, 2021 at 8:26 pm #143375
Anonymous
GuestThe "super intelligent AI enslaves humanity" scenario will never play out because "some dumbasses trusted an even dumber AI with something it wasn’t capable of handling" will kill us off long before that.
-
October 3, 2021 at 9:01 pm #143377
Anonymous
GuestWhoever wrote this is being silly. You train it in a simulation before anything else.
-
October 3, 2021 at 9:03 pm #143380
Anonymous
Guest"I know more than OpenAI"
GTFO
-
October 3, 2021 at 9:10 pm #143382
Anonymous
Guest>OpenAI
>shut down their robotics department because they couldn’t figure out how to design or train a freaking arm, one of the first and simplest robots ever made, efficiently or quickly
Yes, in this particular subject I actually do.-
October 3, 2021 at 9:16 pm #143385
Anonymous
Guest-
October 3, 2021 at 9:22 pm #143388
Anonymous
GuestNah mate that’d be you
-
-
October 3, 2021 at 9:19 pm #143386
Anonymous
Guesthttps://openai.com/blog/ingredients-for-robotics-research/
Cope
Mr. AGI how can I become really rich?
-
October 3, 2021 at 9:19 pm #143387
Anonymous
GuestITT: tons of nerds afraid of being usurped by robots. you can’t accept the fact that robots will be chosen by women over you
-
October 3, 2021 at 9:26 pm #143389
Anonymous
GuestLol no. Women derive their value from the men they are with, robots have no intrinsic societal value and would be like women trying to subsist off air and sunshine. They need actual physical males to feel validated. Men just want a glorified roomba that can make a sandwich and suck a dick.
-