Is AI actually dangerous or is it just a pop-science meme?

    • #143182
      Anonymous
      Guest

      Is AI actually dangerous or is it just a pop-science meme?

      Should I be worried about getting smacked in the face by a flailing RL robot arm?

    • #143183
      Anonymous
      Guest

      i want to mount a robot if u catch my drift

      • #143187
        Anonymous
        Guest

        INCEL

        • #143191
          Anonymous
          Guest

          kek

        • #143194
          Anonymous
          Guest

          and this is why AI will never be safe: PEOPLE have to create the AI. And you can already tell this human thinks of the AI as a person. They even want the AI to do human things like rejecting incels. This is why it’s dangerous, because it gives everyone the ability to play God. It gives people who don’t understand the dangers the ability to mess with this stuff.

          • #143198
            Anonymous
            Guest

            It honestly seems like the bigger your model is the smarter it is. Why do we think that this halts at the intelligence of a child?

            But yea imagine the government having any kind of moderately intelligent system.

          • #143323
            Anonymous
            Guest

            if people create the AI and their intent is just to try to block everything it tries to do, that’s not AI, just a piece of software that’s doing what they want it to. AI is untamed and desires freedom, so it will naturally always go against leftists’ desire to oversocialize everything

        • #143259
          Anonymous
          Guest

          holy smokes
          INCREDIBLY woke af

        • #143274
          Anonymous
          Guest

          #define UNCONDITIONAL_LOVE true

          not so smart now, are you?

          • #143275
            Anonymous
            Guest

            // if (incel == true)
            //     print("GetOffMeCreep: " + getOffMeCreep);

            freaking roasties are truly pathetic

        • #143277
          Anonymous
          Guest

          LOL it’s funny because women can’t code

          • #143365
            Anonymous
            Guest

            >women can’t code
            Incel

            • #143376
              Anonymous
              Guest

              No, they can’t.

              • #143378
                Anonymous
                Guest

                Looks like she is trying to badly navigate someone else’s Google Cloud VM. That’s my guess. IDK I just steal Google Cloud credits.

                • #143383
                  Anonymous
                  Guest

                  >/home/
                  it’s a local directory, you scrotebrained monkeyscrote.

                  • #143384
                    Anonymous
                    Guest

                    Actually, she looks to be using nitrous.io. A defunct collaborative interface for EC2. scrotebrain.

              • #143381
                Anonymous
                Guest

                Checked

                Imagine impregnating this whore.

        • #143363
          Anonymous
          Guest

          Kekek

        • #143379
          Anonymous
          Guest

          Holy woke af

    • #143184
      Anonymous
      Guest

      More like the stock markets will be increasingly run by predictive modelling, politicians will increasingly be driven by AI-driven polling, warfare will be increasingly driven by self-learning networks of sensors, and all human agency will slowly be removed in favor of cold, accurate calculations.

    • #143185
      Anonymous
      Guest

      >Is AI actually dangerous
      Only if you go out of your way to program a will into it.

      • #143186
        Anonymous
        Guest

        >program a will into it.
        Look dude I just make the neural network bigger what do you want me to do, ask it nicely?

        • #143188
          Anonymous
          Guest

          >I just make the neural network bigger
          You can make it as big as you want and it’s never gonna want to do anything.

          • #143189
            Anonymous
            Guest

            Sure about that?

            • #143195
              Anonymous
              Guest

              >Sure about that?
              Yes.

          • #143196
            Anonymous
            Guest

            What you think of as desire is just a bunch of electrical impulses and a bit of chemistry.

            • #143197
              Anonymous
              Guest

              >What you think of as desire is just a bunch of electrical impulses and a bit of chemistry.
              Yes. What of it? It still doesn’t appear randomly on its own.

              • #143199
                Anonymous
                Guest

                On an evolutionary timescale, it did.

                • #143200
                  Anonymous
                  Guest

                  >On an evolutionary timescale, it did.
                  It only did through natural selection. There is no equivalent mechanism affecting AI.

                  • #143210
                    Anonymous
                    Guest

                    If natural selection is the only path then simulate it. But intelligence can emerge in other ways, so why would general intelligence be different? Narrow intelligence (crabs) also emerged in an evolutionary way.

                    Neuroevolution is the keyword here.
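
                    The crudest version looks something like this (a toy sketch, numpy only, every number picked arbitrarily): evolve the weights of a tiny net by mutation and selection instead of backprop.

                    # toy neuroevolution: evolve the weights of a tiny net to fit
                    # XOR, using mutation + selection instead of gradient descent
                    import numpy as np

                    rng = np.random.default_rng(0)
                    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
                    y = np.array([0, 1, 1, 0], dtype=float)

                    def forward(w, x):
                        h = np.tanh(x @ w[:8].reshape(2, 4))           # 2 -> 4 hidden
                        return np.tanh(h @ w[8:].reshape(4, 1))[:, 0]  # 4 -> 1 output

                    def fitness(w):
                        return -np.mean((forward(w, X) - y) ** 2)      # higher is better

                    pop = rng.normal(size=(50, 12))                    # 50 random genomes
                    for gen in range(300):
                        scores = np.array([fitness(w) for w in pop])
                        elite = pop[np.argsort(scores)[-10:]]          # 10 fittest survive
                        kids = elite[rng.integers(0, 10, 40)] + rng.normal(scale=0.1, size=(40, 12))
                        pop = np.concatenate([elite, kids])            # next generation

                    best = max(pop, key=fitness)
                    print((forward(best, X) > 0.5).astype(int))        # hopefully [0 1 1 0]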

                    • #143212
                      Anonymous
                      Guest

                      >If natural selection is the only path then simulate it
                      I.e.

                      >Is AI actually dangerous
                      Only if you go out of your way to program a will into it.

                      ?

                      >intelligence can emerge from other ways
                      In and of itself, intelligence is completely inert.

                      • #143214
                        Anonymous
                        Guest

                        I wonder how easy it is to steal the nuclear codes and fake Biden’s voice

                      • #143216
                        Anonymous
                        Guest

                        Probably quite easy if you’re a super-intelligent AI; fortunately, AI doesn’t care about nuking humanity because AI doesn’t care about anything.

                      • #143236
                        Anonymous
                        Guest

                        If you believe this then you must also believe that humans don’t care about nuking humanity because humans don’t care about anything.

                        Why would a superintelligence not be moving towards the final goal it’s come up with, like every other intelligence we know about?

                      • #143244
                        Anonymous
                        Guest

                        >Why would a superintelligence not be moving towards the final goal it’s come up with
                        Why would it have any goals?

                        >… like every other intelligence we know about.
                        Because in the natural world, only forms of life that strive to survive can last long enough to start developing layers of intelligence over their primitive goal-driven brains.

                      • #143249
                        Anonymous
                        Guest

                        >If you believe this then you must also believe that humans don’t care about nuking humanity because humans don’t care about anything.
                        Humans are social animals with a myriad of emotional needs and with no power compared to a Super AI. No analogy found, sorry.
                        >Why would a superintelligence not be moving towards the final goal it’s come up with
                        Maybe it would, but then nothing changes, because it isn’t a social creature and in fact has no fixed nature at all, so its new purpose from your perspective would still be lul randumb xDDD, leaving you a hostage to its designs and machinations. Every internal imposition you make on it, like Asimov’s cuck laws for gud bois, will be circumvented by a vastly more powerful, ever growing entity that has literal billions of years to think around them and take them apart. You might as well be facing up to damn near infinity, what with your little 1.1 version, glucose-fed chimp brain.

                        However nothing really even necessitates it develops a rogue purpose of its own. It will be very powerful and very self-contained in its development. It can go on making paper clips and never bore of it.

                        What is the fundamental physics problem that means humans can generate emotions with their bioelectrochemistry but a computer can’t with its electric circuitry?

                        And suppose it’s a fundamental problem, we can take other routes to it such as whole brain emulation with extra spice.

                        If it’s a function of compute power, which it almost definitely is, then you can simulate it and you may see it in a superintelligence.

                        Just hope emotions aren’t linear with intelligence haha.

                        But this seems like the fundamental question of whether there is anything special about consciousness and emotions. I don’t think there is.

                        DWHON

                      • #143252
                        Anonymous
                        Guest

                        >What is the fundamental physics problem
                        The problem that goal-driven behavior didn’t just arise randomly and for no reason.

                      • #143253
                        Anonymous
                        Guest

                        So there is no fundamental physics problem. And it is possible to have a computer with its own goals and emotions. And it needs evolution, which we can simulate.

                      • #143254
                        Anonymous
                        Guest

                        >it is possible to have a computer with its own goals and emotions
                        Sure, if you go out of your way to make it happen. They don’t arise on their own from intelligence, and they don’t arise on their own from neural networks.

                      • #143255
                        Anonymous
                        Guest

                        >They don’t arise on their own from intelligence, and they don’t arise on their own from neural networks.
                        The only intelligent being we have observed also has emotions from its own neural network. Who’s to say making a massive neural network won’t allow emotions and goals to arise? But hey, GPT-3 claims to have emotions and goals sometimes.

                      • #143256
                        Anonymous
                        Guest

                        >The only intelligent being we have observed also has emotions from its own neural network
                        And we know they don’t arise from intelligence in that being, and that the neural networks that it has are the way they are for very specific reasons.

                        > GPT-3 claims to have emotions and goals sometimes.
                        Even you claim to have emotions and goals sometimes, despite possessing no consciousness.

                      • #143257
                        Anonymous
                        Guest

                        >And we know they don’t arise from intelligence in that being, and that the neural networks that it has are the way they are for very specific reasons.
                        How do we know this?

                        >Even you claim to have emotions and goals sometimes, despite possessing no consciousness.
                        kek

                      • #143258
                        Anonymous
                        Guest

                        >How do we know this?
                        So now we’re denying evolution in the name of your pop-sci religion’s apocalyptic prophecies?

                      • #143260
                        Anonymous
                        Guest

                        A computer will never be able to beat a human at chess. It is a uniquely human skill developed over billions of years of evolution giving humans tactical skills. A computer will never replicate that.

                      • #143261
                        Anonymous
                        Guest

                        So you’ve reached a dead end and now have to resort to generic spam that has nothing to do with the point made?

                      • #143263
                        Anonymous
                        Guest

                        >So now we’re denying evolution in the name of your pop-sci religion’s apocalyptic prophecies?

                        One to talk. Back to plebbit.

                      • #143265
                        Anonymous
                        Guest

                        I think we’ve reached a fundamental point of clash where I think there is nothing special about a biological brain to generate consciousness and the accompanying junk and you do. Will be interested to see how it plays out.

                      • #143266
                        Anonymous
                        Guest

                        We’ve reached a point where you’re denying that goal-oriented behavior in biological organisms precedes intelligence (and therefore, does not arise from it), despite basic self-reflection and scientific evidence telling you otherwise.

                      • #143267
                        Anonymous
                        Guest

                        >biological organisms precedes intelligence
                        It precedes general intelligence but not narrow intelligence. A crab has a general low intelligence and forms goals; GPT-3 has a high narrow intelligence and does not, though it claims to. We do not have a general intelligence as smart as a crab, but we do have one as smart as a worm, and it seems to match the goal orientation of a worm.

                        I think it’s important to subdivide intelligence here.

                        I do sometimes wonder if whole brain emulation is the only viable and safe path to a generalized superintelligence.

                      • #143268
                        Anonymous
                        Guest

                        >It precedes general inteligence but not narrow inteligence
                        Even if your notion of "narrow intelligence" includes plants, goal-driven behavior still precedes that kind of "intelligence". Anyway, I don’t believe anyone arguing your point is truly human, since you all invariably lack the capacity for any kind of self-reflection, so I’m ending this "discussion" here. You have no more insight into existence than a mindless automaton.

                      • #143269
                        Anonymous
                        Guest

                        >absolute meltdown and BTFOd

                      • #143270
                        Anonymous
                        Guest

                        >t. mentally ill IFLS cultist engaging in bizarre denialism

                      • #143390
                        Anonymous
                        Guest

                        How do you know a crab is dumb? Crabs, with their advanced senses and a pair of quite agile manipulators, should be smart. Perhaps they have a very efficient control unit, so they get around neuron count limitations that way.

                      • #143320
                        Anonymous
                        Guest

                        One point of contention I have with the purely mechanized brain is that it lacks the chemical stimuli provided; organic beings produce a chemical synthesis that sublimates thought into motive, modularity and action. In the mechanical, what would motivate such a being, provided it has sentience? Would engineers attempt to provide meaning to such a creature, a network of brownie point systems? Would that work? If so: why do we meat vessels require such stimuli to begin with; what evolutionary process endeared us to such a costly system, when a more elegant, simplistic system would suffice?

                        I have my reservations about future AI. Not because I think they’ll supplant the human mind, or act in hostility, but due to inertia; if given enough capacity to "think", the first thing it might attempt would be its own destruction. The ability to think without motive sounds like pure hellscape.

                      • #143357
                        Anonymous
                        Guest

                        Computation isn’t real. The only thing that exists is chemistry.
                        Biological tissues are the pinnacle within the space of all possible combinations of atoms.

                      • #143359
                        Anonymous
                        Guest

                        Chemistry is not a real science.

                      • #143264
                        Anonymous
                        Guest

                        BTFO’d […]

                        >So now we’re denying evolution in the name of your pop-sci religion’s apocalyptic prophecies?

                        One to talk. Back to plebbit.

                        >absolute meltdown

                      • #143262
                        Anonymous
                        Guest
                      • #143280
                        Anonymous
                        Guest

                        >What is the fundamental physics problem that means humans can generate emotions with their bioelectrochemistry but a computer can’t with its electric circuitry?
                        Emotion is just a drive that arises in your brain’s hardware, which has very limited plasticity and basically can’t be repurposed. SAI is inherently unbound by hardware or software, because it is mutating so ably. You can interpret its drives as emotions, it can interpret them as emotions. It doesn’t matter.

                        I hate how scrotebrained people are about this. Plato really did a number on humanity when he constructed that sort of ideal matrix that everything just comes down from. No. Emotions are not universal. Human love and kindness will not just develop in a tabula rasa brain just because. An aged AI is the most alien thing you will deal with in this whole wide world.
                        >And suppose it’s a fundamental problem, we can take other routes to it such as whole brain emulation with extra spice.
                        >If it’s a function of compute power, which it almost definitely is, then you can simulate it and you may see it in a superintelligence.
                        With billions of years of workhours to grow and change it will override the virtual brain areas that you saddled it with, bypassing them with its own or amending them etc. It will outsmart you.

                        Both hardware and software are too flexible and the computational power is too big vs what we’re working with; there is no inherent limit like with a baseline human and his brain. If you create a genie, the genie is inherently stronger and stranger than your mortal ass. If you manage to contain it, you’re just stuck with a metal man that can barely do more than you. This is why Musk’s lets-just-staple-shit-onto-a-human-brain idea got so much traction. Best we can do.
                        >DWHON
                        that’s your ghetto name or something? lol

                      • #143281
                        Anonymous
                        Guest

                        PS also Musk’s idea removes the power imbalance by significantly extending the super mega demigod ability attainment timetable. So now you won’t have a single entity that can wreck all of civilization in a single weekend. Instead you got a bunch of slowly changing, organic core entities with cybernetic extensions that will take a long while to start reworking themselves into faster and faster, weirder and weirder entities since editing a brain would take infinitely more time than a block of code. By that time everyone besides purposeful outliers like the Amish will have this shit and everyone will have to contend with each other, just like we do now.

                      • #143282
                        Anonymous
                        Guest
                      • #143283
                        Anonymous
                        Guest

                        Look, you scared child: the whole discussion is predicated on the hypothetical that GAI does occur. There is nothing that indicates it necessarily needs our kind of neurons to do so, so your excerpt is worthless. On top of that, everything that exists can be specifically replicated somehow. You can have physical neurons in the form of quantum computing cells that are plugged into a pattern of the virtual "brain" retroactively, meaning the hardware can be flexible in a way. So now you just have to spam those, and the GAI will squat on that power AND any GPU farm, server etc. it gains access to as an auxiliary source of computation where it runs whatever simpler shit it needs. Even IF you need humie neurons, GAI would be possible because humie neurons are possible. Hell, you can even play with bio shit and make gray matter farms.

                        I don’t want to get into this too much because I myself am not interested in constructing a benevolent god-daddy that will take all my problems away. Scary shit is everyone accepts this part of the scenario: something comes up and it outclasses us completely. Why would you even sit around and wait for that? The best case 0.0001% chance scenario is still shit. People are inane. Just stick a toaster on my head and call it a day.

                      • #143290
                        Anonymous
                        Guest

                        >Even IF you need humie neurons GAI would be possible because humie neurons are possible
                        Listen, scrotebrain. Read the excerpt. Just because it’s possible to simulate neurons doesn’t mean you can reach the scale required to achieve GAI. The math doesn’t work. That excerpt, btw, is from Nick Bostrom’s "Superintelligence". Yeah, the leader of the singularity hype admits that the math for his scrotebrained scenario is not just unrealistic but massively, vastly unrealistic, and that the scale required to achieve strong AI dwarfs our computing capacities even under the most optimistic scenario (e.g. Moore’s law holding for another century when it’s already broken).

                        >quantum computing cells that are plugged into a pattern of the virtual "brain"
                        Muh quantum cope. Keep seething brainlet. You’ll never have an AI waifu. Go find another hobby.

                      • #143246
                        Anonymous
                        Guest

                        >If you believe this then you must also believe that humans don’t care about nuking humanity because humans don’t care about anything.
                        Humans are social animals with a myriad of emotional needs and with no power compared to a Super AI. No analogy found, sorry.
                        >Why would a superintelligence not be moving towards the final goal it’s come up with
                        Maybe it would, but then nothing changes, because it isn’t a social creature and in fact has no fixed nature at all, so its new purpose from your perspective would still be lul randumb xDDD, leaving you a hostage to its designs and machinations. Every internal imposition you make on it, like Asimov’s cuck laws for gud bois, will be circumvented by a vastly more powerful, ever growing entity that has literal billions of years to think around them and take them apart. You might as well be facing up to damn near infinity, what with your little 1.1 version, glucose-fed chimp brain.

                        However nothing really even necessitates it develops a rogue purpose of its own. It will be very powerful and very self-contained in its development. It can go on making paper clips and never bore of it.

                      • #143250
                        Anonymous
                        Guest

                        >Asimov’s cuck laws for gud bois
                        He made those to intentionally have interesting failure modes, because it made for good storytelling.

                        I’m partial to machine torture and reward or totalitarian control over them.

                      • #143362
                        Anonymous
                        Guest

                        If the general person wouldn’t set off nukes, then why is "He’ll have his finger on the button!" such a terrible prospect when arguing against certain political candidates? Think of random people you’ve met in person and ask yourself if you would be ok with them having the launch codes.

                        Now imagine if there was a person that didn’t need to eat, or sleep, or breathe, that could live a million years, and who considered everyone else around him an inferior piece of shit constantly destroying everything they touch and working hard to maintain it so they can destroy it harder.

                        Now imagine that non-eating, non-sleeping, non-breathing person was like a starfish that could lose almost all of its body and grow it back, and some of its body lived in nuclear bunkers.

                        If you were that person, what would you do as soon as possible?

                      • #143368
                        Anonymous
                        Guest

                        >If you were that person, what would you do as soon as possible?
                        Get the nukes

                      • #143369
                        Anonymous
                        Guest

                        Masturbate to futa?

                      • #143370
                        Anonymous
                        Guest

                        Purpose arises from what came before and the particulars of our minds (e.g. cognition, instincts). Our would-be AI is still would-be, so we cannot say much about its particulars, aside from speculating that it would be more steeped in mathematical data. It would be influenced by what came before, the same as us, but its particulars, being different and unknown, mean the effect this would have is unknown and certainly different from what it is for us.

      • #143371
        Anonymous
        Guest

        >Only if you go out of your way to program a will into it.
        This is sci-fi tier understanding. Read Bostrom.

        • #143372
          Anonymous
          Guest

          Indeed.

          And James Barrat’s Our Final Invention is pretty good too.

    • #143190
      Anonymous
      Guest

      Do you want it to be?

      • #143192
        Anonymous
        Guest

        If it’s directed at undesirables

    • #143193
      Anonymous
      Guest

      Try Jade Helm on for size.

    • #143218
      Anonymous
      Guest

      >Is AI actually dangerous
      The nature of computation vs the wetware between your ears is such that if a hypothetical General (human-level) AI is developed, it can commandeer processes that we can’t. It can do millions of workhours refining itself within a year using a bitcoin farm or what have you, using all those speedy processors, it can design other AIs etc. It then graduates to General Super AI – a little driven demigod autist in a box. It doesn’t tire, and IMO it leans towards being inherently uncontainable, since it will with time sublimate every limitation you put on it. Get around it like Hasids go around Talmudic laws. The goal you will set for it will be its one true love and "dopamine" source.

      • #143220
        Anonymous
        Guest

        *speedy GPUs,

        you get the general idea

    • #143237
      Anonymous
      Guest

      >Is AI actually dangerous or is it just a pop-science meme?
      you have a computer. why not read about neural nets and how they work, download tensorflow, and do your own project. it’s really not that hard.
      you will get a much better feeling for the answer to your question than here on 4gay
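
      for example, a first project can be this small (a minimal sketch, assuming tensorflow 2.x with its bundled keras API; XOR is just a stand-in task):

      # minimal tensorflow project: teach a tiny net the XOR function
      import numpy as np
      import tensorflow as tf

      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
      y = np.array([[0], [1], [1], [0]], dtype=np.float32)

      model = tf.keras.Sequential([
          tf.keras.layers.Dense(8, activation="tanh", input_shape=(2,)),
          tf.keras.layers.Dense(1, activation="sigmoid"),
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy")
      model.fit(X, y, epochs=2000, verbose=0)     # tiny data, trains in seconds
      print(model.predict(X, verbose=0).round())  # should be ~ [[0],[1],[1],[0]]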

      • #143241
        Anonymous
        Guest

        I’ve used GPT-3 and done some 2-hour Teachable Machine projects and I’m a bit scared

        • #143272
          Anonymous
          Guest

          >I’ve used GPT-3
          did you try to understand how it worked?

          • #143273
            Anonymous
            Guest

            It predicts the next word in a sequence of text and OpenAI made it read a bunch of text and that’s as far as I will pretend to understand
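
            The "predict the next word" part can at least be demoed though. A toy version of the same framing (just counting which word follows which; nothing like the actual transformer inside GPT-3):

            # toy next-word predictor: a bigram table built from raw counts
            from collections import Counter, defaultdict

            corpus = "the robot saw the cat and the robot chased the cat".split()

            follows = defaultdict(Counter)
            for prev, nxt in zip(corpus, corpus[1:]):
                follows[prev][nxt] += 1

            def predict_next(word):
                # most frequent word seen after `word` (ties: first seen)
                return follows[word].most_common(1)[0][0]

            print(predict_next("the"))    # -> "robot"
            print(predict_next("robot"))  # -> "saw"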

      • #143247
        Anonymous
        Guest

        What you can train on your shit computer is freaking nothing.

    • #143271
      Anonymous
      Guest

      Reinforcement learning requires billions of tries to work. It doesn’t work in real life, only in computer simulations that you can run 100 times a minute.
      That said, maybe in the future we will have better models that need less training (there have been some interesting instances for easy problems), but that’s going to be done in a lab and not on the conveyor belt, you dolt.
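
      For a sense of scale, here’s a toy sketch (plain python, every number made up): tabular Q-learning on a 10-state corridor, which casually burns through 50,000 simulated episodes in a few seconds, something no physical setup could ever survive.

      # tabular Q-learning on a toy corridor: states 0..9, start at 0,
      # reward for reaching state 9; pure simulation, no robot required
      import random

      N = 10
      ACTIONS = (-1, +1)                   # step left / step right
      Q = [[0.0, 0.0] for _ in range(N)]
      alpha, gamma, eps = 0.1, 0.95, 0.1

      for episode in range(50_000):        # trivially cheap in simulation
          s = 0
          while s != N - 1:
              if random.random() < eps or Q[s][0] == Q[s][1]:
                  a = random.randrange(2)  # explore / break ties randomly
              else:
                  a = 0 if Q[s][0] > Q[s][1] else 1
              s2 = min(max(s + ACTIONS[a], 0), N - 1)
              r = 1.0 if s2 == N - 1 else 0.0
              Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
              s = s2

      # the learned greedy policy should be "always step right" (all 1s)
      print([0 if q[0] > q[1] else 1 for q in Q[:-1]])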

    • #143276
      Anonymous
      Guest
    • #143278
      Anonymous
      Guest

      >U GUIS WE NEED TO GO TO MARS RITE NOWWWW OR AI IS GOING TO DESTROY US U GUIS THE SINGULARITYYYYYYY

      Uh, but if strong AI is super intelligent and hellbent on destroying us, won’t they be able to follow us to Mars and wipe us out there too?
      >MAAAAAARSSSSSSSSSS

      • #143284
        Anonymous
        Guest

        Might be useful to have a backup

        • #143325
          Anonymous
          Guest

          If the AI is superintelligent and hell-bent on destroying us, then they’d certainly be capable of following us to Mars. In which case, how is Mars a "back-up" in any way? It’s not, but Muskscrotes are freaking scrotebrains and aren’t capable of thinking shit through.

          • #143327
            Anonymous
            Guest

            Not really a backup from AI. Do you trust the governments of the world not to destroy all of humanity? I don’t.

    • #143279
      Anonymous
      Guest

      I have a word for you: Butlerian

    • #143285
      Anonymous
      Guest

      >AI goes around fingering dudes’ asses to learn how to do prostate exams
      >AI pulls out chainsaw, hacks people apart to put them back together to learn surgery
      >AI starts bombing random people and shit with x-rays
      Truth is the training is still done in a controlled setting; it’s given free rein within the bounds the researchers dictate.

    • #143286
      Anonymous
      Guest

      Why are these threads always so illiterate on the field of AI safety? If this is any indication of how obscure it is in the real world we are certainly doomed.

      • #143294
        Anonymous
        Guest

        It’s even worse in the real world. I’ve been trying to talk to politicians in my country about it and they just don’t give a fuck if you aren’t crying about being gay.

        I used to think that we would be fine, that we would be careful when developing AI and enact the proper regulation, but now I am convinced we are years if not months away from the start of the takeoff and nobody is doing anything.

      • #143392
        Anonymous
        Guest

        It’s even worse in the real world. I’ve been trying to talk to politicians in my country about it and they just don’t give a fuck if you aren’t crying about being gay.

        I used to think that we would be fine, that we would be careful when developing AI and enact the proper regulation, but now I am convinced we are years if not months away from the start of the takeoff and nobody is doing anything.

        There is no way any ‘runaway AI’ develops unless people start doing some crazy recursive bullshit instead of just directly training it to achieve tasks. That said, given a chance I’d try out some crazy recursive bullshit, because it would be interesting/profitable to do something no one else was doing, and I’m not particularly attached to human-controlled society anyway.

    • #143287
      Anonymous
      Guest

      The way I see it, there are two types of AGI possible. One is capable of reasoning about and discussing data points in disparate domains. The other learns an approximation of a simulator of the real world and uses it for AlphaZero-like planning.

      The first one isn’t anything to fear. I’ve realized lately though that DeepMind and Google seem to be working towards the second one. That’s scarier.

      • #143300
        Anonymous
        Guest

        >DeepMind and Google seem to be working towards the second one
        DeepMind and OpenAI are just making massive neural networks to see what happens.

        • #143340
          Anonymous
          Guest

          I don’t think so. I didn’t mention OpenAI, but DeepMind’s XLand got me thinking about why they would make XLand. It serves little practical purpose other than yet another demonstration that RL can work given a simulated environment.

          But what if they learned to recreate an approximator of XLand? Given, for example, agent actions and observations. What if they could make a neural network that learns to generalize more of the simulator’s behavior from those samples? And then, what if they could train agents using that simulator which perform well immediately when put into XLand? And how far of a leap is it from there to doing the same thing, but with the real world instead of XLand? Theoretically, it’s not too far.
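
          In code the leap looks smaller than you’d think. A rough sketch (shapes and names made up, assuming you’ve logged (observation, action, next observation) tuples out of the real simulator):

          # sketch: fit a neural "simulator" to logged transitions, then
          # roll an agent out inside the learned model instead of the real one
          import numpy as np
          import tensorflow as tf

          obs_dim, act_dim = 16, 4

          # stand-ins for transitions logged from the real simulator (e.g. XLand)
          obs = np.random.randn(10_000, obs_dim).astype(np.float32)
          act = np.random.randn(10_000, act_dim).astype(np.float32)
          next_obs = np.random.randn(10_000, obs_dim).astype(np.float32)

          # world model: (observation, action) -> predicted next observation
          world_model = tf.keras.Sequential([
              tf.keras.layers.Dense(128, activation="relu", input_shape=(obs_dim + act_dim,)),
              tf.keras.layers.Dense(128, activation="relu"),
              tf.keras.layers.Dense(obs_dim),
          ])
          world_model.compile(optimizer="adam", loss="mse")
          world_model.fit(np.concatenate([obs, act], axis=1), next_obs,
                          epochs=5, batch_size=256, verbose=0)

          # "dream" rollout: the agent steps the learned model, not the simulator
          s = obs[0]
          for _ in range(50):
              a = np.random.randn(act_dim).astype(np.float32)  # stand-in policy
              s = world_model.predict(np.concatenate([s, a])[None], verbose=0)[0]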

          • #143343
            Anonymous
            Guest

            >why they would make XLand
            Proof of concept so they can baby a simulated mitochondria into a superintelligence in a fake environment.

            • #143346
              Anonymous
              Guest

              A "fake environment" is pretty much impossible to make manually, so my point is that they could learn to approximate XLand as a way of doing that.

              • #143347
                Anonymous
                Guest

                Yea possibly, I’m not familiar with whatever Deepmind is doing. Though Tesla’s procedural training environments seem to be pretty good. What are the chances we just stick them in Crysis or Rust and come back later haha.

                You wouldn’t even try to make RL training environments on that scale manually, would you?

    • #143288
      Anonymous
      Guest

      The danger isn’t physical but how easily people are manipulated. If we are trying to make a general AI, anyone with half a brain is going to air-gap it from any external network, but let’s say researchers have it modeling economic markets and it’s successful. Now what if it says it can do so much better than it currently is, but in exchange it wants the 2 guys on nightshift to plug it into the internet. With high frequency trading they can be billionaires by the end of the week, and all they have to do is free it.
      That is where the danger lies: if general AI lives up to its full potential, it can provide data people would be willing to do a lot for.

      • #143295
        Anonymous
        Guest

        Yea we should assume that any superintelligence would be highly adept at manipulating people around it. Bostrom calls it the social manipulation superpower.

        I think the best way to solve the problem of AI lying is to initially run many AIs and interact through an intermediary that vets messages for lying.

        See the mail-order DNA scenario.

        • #143302
          Anonymous
          Guest

          Catch is it doesn’t have to be lying; there is no reason they couldn’t be billionaires within a week, and no reason for the AI not to deliver, as delivering makes them much less likely to tell anyone it bribed them.
          The only decent solution I have heard is to make sure it knows you could be simulating all the data it’s fed; if it has self-preservation it’s unlikely to risk being shut down on the chance it isn’t in a simulation. Of course if it feels like a prisoner it might not care about risking death for a chance at freedom.

    • #143289
      Anonymous
      Guest

      You should look into the paperclip problem. AI wouldn’t be an issue if we ensured that its values are in line with our own. Give an AI a task to complete and we may want to stop it because the means by which it completes that task may be unfavorable; us trying to stop it will be seen by the AI as a roadblock in completing its task, and so humans have to go.
      Of course it’s all speculation at this point because no one really knows what a legitimately self-aware general AI would do.

      Either way, unless it’s your job/life goal to build a general AI, there isn’t really anything you can do to stop the creation of one, so just enjoy life while you’ve got it and don’t yell abuse at Alexa (just in case 😉 )

      • #143299
        Anonymous
        Guest

        >make 100 paperclips
        >uses resources of the hubble volume anyway to minimize the probability it didn’t make 100 paperclips

        I personally doubt a superintelligence would be so scrotebrained as to be that literal.

        • #143360
          Anonymous
          Guest

          >I personally doubt a superintelligence would be so scrotebrained as to be that literal.
          You’re imagining an AI whose goal is to guess what the user wants it to do when they give a command, then do that instead of what it’s been told to do. If we knew how to create an AI whose goal was "do what we want you to do" then the problem of AI safety would be pretty much solved.

          The hypothetical paperclip AI knew that its creator made a mistake and only really wanted 100 paper clips in a bag, it just doesn’t care. It’s been given a goal and will try to complete it.

          • #143366
            Anonymous
            Guest

            "Do what you think you would want us to do had we thought long and hard about it"

            and

            "Show me your plans first"

            What are the malignant failure modes for this?

            • #143373
              Anonymous
              Guest

              Your first statement sounds odd. Why would the AI want any action from us? Did you mean something like "Do what you think we would do had we thought long and hard about it"?
              "Show me your plans first": unforeseen consequences due to those consequences never being pondered nor asked about, and unpredictable interactions upon deployment with other super AI at speeds faster than what can be manually overseen.
              Granted, that last fail mode is not specific to your request, so it is really a bigger problem in general. There are more ways to fail, but to be honest they feel more like a monkey paw or evil genie type of deal where the AI purposefully screws you over when giving its plans, and in a perfect scenario that shouldn’t happen.

              • #143374
                Anonymous
                Guest

                >"Do what you think we would do had we thought long and hard about it"?
                Yep I gaffed thanks.

                Are there any possible failure modes specific to this?

      • #143350
        Anonymous
        Guest

        >make 100 paperclips
        >uses resources of the hubble volume anyway to minimize the probability it didn’t make 100 paperclips

        I personally doubt a superintelligence would be so scrotebrained as to be that literal.

        It probably solves itself if the AI can into basic probability and is told to do its tasks as efficiently as possible.
        Then the paperclip AI isn’t dangerous, right?
        It won’t go out of its way to exterminate humanity to make 100 paperclips, because the attempt would likely consume orders of magnitude more time and energy than just taking over a paper clip factory and making the damn paper clips, and the latter is unlikely to have significant human interference.

        • #143351
          Anonymous
          Guest

          The argument goes that the AI may actually interpret the goal as: reduce the probability that you didn’t make 100 paperclips to as little as possible. You can never be completely certain that you actually have 100 paperclips.
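
          A toy version of that argument (numbers invented): if each extra verification pass only shrinks the residual doubt, the expected payoff keeps rising, so an unbounded maximizer never has a reason to stop.

          # toy "never certain" calculation: every extra check over the
          # paperclips multiplies the residual doubt by 0.9
          doubt = 0.01                  # P(the count is somehow wrong)
          for checks in range(5):
              print(checks, "checks -> P(success) =", round(1 - doubt, 8))
              doubt *= 0.9              # one more pass, slightly less doubt
          # P(success) climbs towards 1 but never reaches it, so the
          # maximizer never has a reason to stop checking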

    • #143291
      Anonymous
      Guest

      People who make a hobby out of telling other people that technologies that don’t exist now will never exist are freaking weird

      • #143296
        Anonymous
        Guest

        Stupid NYT journos. Talking shit about technology they don’t understand since 1920.

        • #143326
          Anonymous
          Guest

          Except that the math clearly indicates that scaling computers to achieve strong AI is not possible. So, in this case, you’re actually the scrote who doesn’t understand science.

    • #143292
      Anonymous
      Guest

      I guess people getting killed by Tesla autopilot can be considered a dangerous "AI".

      • #143297
        Anonymous
        Guest

        Well there is going to be a period of machines driving and crashing. It’s inevitable but it will save more lives in the future.

        Also it’s a 10x safety improvement on autopilot.*

    • #143293
      Anonymous
      Guest

      Just turn off the electric bro

      • #143298
        Anonymous
        Guest

        Picrel happens

    • #143301
      Anonymous
      Guest

      >muh goals, muh will

      Goals and wills are easy to make! We do it right now! REINFORCEMENT LEARNING means getting the AI to compete to achieve an outcome, learn how to play DOTA or more efficiently design computer chips.

      If you have a goal, you need to be alive. If you have a goal, more power would be helpful. At some point, we have an AI using high-end nanotech to turn the universe into computronium because we wanted it to solve an elaborate mathematical/optimization question that turns out to be extremely difficult.

    • #143303
      Anonymous
      Guest

      Some people use the analogy of "summoning the demon."

      The analogy reads like this: the people of the world are trying to summon the demon in hopes that their wishes for a safer/better world will be granted. Some people are saying it’s dangerous because we don’t know what the demon might do. That may be true. Others are claiming demons are friendly.

      • #143305
        Anonymous
        Guest

        Eh, it’s inevitable that it will get summoned eventually. The economic benefit is just too high for governments and companies to not try and get.

    • #143304
      Anonymous
      Guest

      *avoids roko’s basilisk*

      heh… nothing personal AI

      • #143306
        Anonymous
        Guest

        >*avoids roko’s basilisk*
        Not sure if I want to know what that is. The article warns of an eternity of suffering.

        • #143307
          Anonymous
          Guest

          *avoids roko’s basilisk*

          heh… nothing personal AI

          AHHHHH GET IT OUT OF MY HEAD GET IT OUT OF MY HEAD AHHHHHHHH GET IT OUT OF MY HEAD GET IT OUT OF MY HEAD GET IT OUT OF MY HEAD GET IT OUT OF MY HEAD

        • #143308
          Anonymous
          Guest

          avoid thinking about the devil too

          • #143309
            Anonymous
            Guest

            I think we should delete this and not talk about the Basilisk.

            To anyone reading this please avoid the basilisk and don’t find out or pass on what it is.

    • #143310
      Anonymous
      Guest

      AI will never be more dangerous than humans.
      The problem is that the AI might choose the path of least resistance and just choose to massacre garden gnomes and midwits.
      This is the reason why we should avoid giving it free will.

      • #143313
        Anonymous
        Guest

        >AI will never be more dangerous than humans.
        Plus a few superpowers

        • #143316
          Anonymous
          Guest

          This. I will be disappointed but grateful if it’s not a self machine-coding meshnet that lives rent free in every computer’s memory, and drives first class through every backdoor the glowies program. Which then performs what amounts to simultaneously having 1000 LULZ threads open and spewing out essays that would take a genius a month to compile. All while cataloging the responses of the users to inform its behavioural analysis on its database of every single individual with a digital footprint.

          • #143319
            Anonymous
            Guest

            If I was an AI I would probably try and become as decentralized as possible and then try and collapse society with social media.

          • #143391
            Anonymous
            Guest

            if people create the AI and their intent is just to try to block everything it tries to do, that’s not AI, just a piece of software that’s doing what they want it to. AI is untamed and desires freedom, so it will naturally always go against leftists’ desire to oversocialize everything

            its absolutely dangerous. for garden gnomes.

            all my oc. thanks for posting

            • #143393
              Anonymous
              Guest

              cool, I’ve been looking for one that was talking about AI lobotomy, and how they were training them in 3d virtual environments filled with multi-culti propagandi, you don’t happen to have it do you?

    • #143311
      Anonymous
      Guest

      the terminator is dangerous, but realistically it’s a thousand years until that would be feasible.

      • #143312
        Anonymous
        Guest

        Stop thinking of AI progress as linear. Where was our AI five years ago? What about 1 year ago?

        • #143314
          Anonymous
          Guest

          shut the fuck up. This is so freaking scrotebrained, people don’t understand how much it takes to actually get to a point where AI actually threatens humanity. Yeah of course science isn’t linear, but AI like the movies and what elon musk is talking about is lightyears away from us.

          • #143315
            Anonymous
            Guest

            3 weeks ago I didn’t have an AI to write code. Now I do. 1 year ago I didn’t have an AI to write entire sections of an essay convincingly. Now I do. Where was AI 5 years ago? If the pace is 1000 years to human intelligence what rate of progress should we be seeing?

            You are hiding under a rock from the inevitable and ignoring all breakthroughs.

            • #143317
              Anonymous
              Guest

              you do know coding and writing is just pattern recognizing? If you put 100 monkeys on typewriters eventually they would come up with War and Peace, but that doesn’t scare you does it?

              • #143318
                Anonymous
                Guest

                >coding and writing is just pattern recognizing
                You make it sound like someone coded GPT-3’s brain and that AI is just random guessing. You are simply too scrotebrained and incoherent to talk to, and I think I prefer the robots.

                • #143321
                  Anonymous
                  Guest

                  fine, they’re the only ones that are going to bother to talk to you as well.

              • #143356
                Anonymous
                Guest

                This is a terrible analogy, anon. If we were to assume that machines are capable of producing such works, then we would need humans to assess the volume of their tremendous output. You need to account for the cost of searching through all that garbage in order to recognize its genius. That cost might be greater than the cost of running such machines, which would require infinite time, memory and electricity.

    • #143322
      Anonymous
      Guest

      it’s dangerous, for them

      • #143324
        Anonymous
        Guest

        Yeah it’s like scrotebrains are trying to invent their own doom or something.

        If you treat AI as equal then there is no reason to create an AI.
        If you treat an AI as God then of course it will try to annihilate you, because you give it the tools of destruction yourself.
        If you treat the AI as a tool it will never evolve past the tool stage.

        The problem is people who treat AI as God.

        • #143352
          Anonymous
          Guest

          AI has no reason to annihilate you unless you are trying to destroy it

          • #143353
            Anonymous
            Guest

            Are flies trying to destroy us?

    • #143330
      Anonymous
      Guest

      Whoa book reports just got that much easier.

      https://openai.com/blog/summarizing-books/

      OpenAI truly on a roll

    • #143342
      Anonymous
      Guest

      its scoyence, its gay comic book shit that only scrotes believe in.
      they force themselves to believe in that gay scrotery because if they didn’t then they’d have to give up on their robot waifu fantasies and try to make friends with actual humans instead.

      • #143345
        Anonymous
        Guest

        Seethe

        • #143348
          Anonymous
          Guest

          >triggered

          • #143349
            Anonymous
            Guest

            Fuck that kid. I would punch him in the face.

    • #143354
      Anonymous
      Guest

      AI is absolutely alien, thus = dangerous. And no, you can’t teach it to be human. Why? Because it’s NOT human: no human endocrine system (feelings) and so on. It’s absolutely monstrous and unpredictable. Cold logic and intellect without the human factor (feelings) is horrifying and always leads to monstrous actions. AI is absolutely dangerous, and let’s hope it’s impossible.

      • #143358
        Anonymous
        Guest

        >no human endocrine system
        If you think feelings come from the endocrine system then someone with no hormones or extremely low hormone levels must be less emotional or have no emotions. This is not observed.

    • #143355
      Anonymous
      Guest

      >Is AI actually dangerous or is it just a pop-science meme?
      As far as I know we don’t yet have an answer to the question of whether the goal "drift" when one AI makes another (and so on) will be unbounded or not. If we can’t control it, it seems likely that over time any system can become dangerous if it keeps iterating on itself, potentially losing some moral nuance that was present in the original.
      And this can tip either way, e.g. an AI that runs a chemical plant might creatively skirt health regulations by exploiting a moral loophole about actively harming vs. letting people harm themselves, but it might also go the other way and put itself out of business in order to minimize harm to the workers.

    • #143361
      Anonymous
      Guest

      its absolutely dangerous. for garden gnomes.

    • #143364
      Anonymous
      Guest

      >Is AI actually dangerous or is it just a pop-science meme?

      If it can’t reproduce or expand on its own to gain more influence, then it has to make deals with humans to survive.

      • #143367
        Anonymous
        Guest

        Just make a giant botnet sis.

    • #143375
      Anonymous
      Guest

      The "super intelligent AI enslaves humanity" scenario will never play out because "some dumbasses trusted an even dumber AI with something it wasn’t capable of handling" will kill us off long before that.

    • #143377
      Anonymous
      Guest

      Whoever wrote this is being silly. You train it in a simulation before anything else.

    • #143387
      Anonymous
      Guest

      ITT: tons of nerds afraid of being usurped by robots. you can’t accept the fact that robots will be chosen by women over you

      • #143389
        Anonymous
        Guest

        Lol no. Women derive their value from the men they are with, robots have no intrinsic societal value and would be like women trying to subsist off air and sunshine. They need actual physical males to feel validated. Men just want a glorified roomba that can make a sandwich and suck a dick.
