Are you feeling the AGI yet

  1. 2 weeks ago
    Anonymous

    always have

  2. 2 weeks ago
    Anonymous

    sounds based. tired of wishy washy bullshit. the singularity is near motherfuckers.

  3. 2 weeks ago
    Anonymous

    Feel the Omnissiah's warm benevolence

  4. 2 weeks ago
    Anonymous

    This sparse haired nigga is going full Big Hat

    • 2 weeks ago
      Anonymous

      *Big Hat Logan

    • 2 weeks ago
      Anonymous

      *Big nose

    • 2 weeks ago
      Anonymous

      *little hat

  5. 2 weeks ago
    Anonymous

    Can someone explain to me the dangers of AGI without sounding like an unhinged Reddit comic book nerd?

    • 2 weeks ago
      Anonymous

      In short, a very powerful and intelligent autonomous AI might not necessarily do things in our best interests.

      • 2 weeks ago
        Anonymous

        >In short, a very powerful and intelligent autonomous AI mi-

        • 2 weeks ago
          Anonymous

          If you built your company around the AI and it isn't exhibiting any obvious signs of coup behavior, it would be economically disastrous for you to pull the plug. In fact that's exactly what the altruists want, and it has been surprisingly hard to convince people to pause AI research. "Pulling the plug" has been difficult so far.

          >In short, a very powerful and intelligent autonomous AI might not necessarily do things in our best interests.
          >Is it more or less likely to do things in our best interests than the israelites are?

          I cannot say. AGI doesn't exist so the probability of it being benevolent or malevolent is basically unknown, and also can still be influenced by our actions.

          >This pic is only true if you assume strong AI exists. Which it doesn't, regardless of whether or not it *can*. Shit like GPT is just weak AI on steroids and meth, it is not intelligent in the same sense an animal may be intelligent; an actual parrot is inherently smart in ways a stochastic parrot (or 90% of LULZ) is not.

          GPT-4 has a world model. It is not a "stochastic parrot". We are well past this already.

          • 2 weeks ago
            Anonymous

            reddit pseud retard
            kys

            • 2 weeks ago
              Anonymous

              I don't know what to tell you. Countless experiments indicate that yes, GPT-4 does have an idea of some sort of "world" beyond the text. There is a regime of these models where they really are just stochastic parrots and those ones suck so badly nobody cares about them

              >Never trust a computer you can't throw out the window

              Most commercial mainframes, and all early computers, were way too big to fit through a normal window

              • 2 weeks ago
                Anonymous

                >Countless experiments indicate that yes, GPT-4 does have an idea of some sort of "world" beyond the text.
                Sounds like flawed experiments. It would be more interesting if the AI were aware of its own models and could adjust them based on new information, create its own candidate models, compare them and so on; that would be more like the intelligence of a living creature.

              • 2 weeks ago
                Anonymous

                >I don't know what to tell you. Countless experiments indicate that yes, GPT-4 does have an idea of some sort of "world" beyond the text

                Because the text has a representation of a world outside of itself.

              • 2 weeks ago
                Anonymous

                Yeah, so it infers the existence of a world outside the text. It has a world model. It also knows images, by the way, so I think that's worth something at least.

              • 2 weeks ago
                Anonymous

                its base technology disallows it from being imbued with any ability to think. you're anthropomorphizing something far lower than a squirrel. stop talking out your ass.

              • 2 weeks ago
                Anonymous

                I'm not anthropomorphizing, I'm saying that it's not just "a parrot", it does actually know what it's talking about, as I explained further down the thread

                I believe that "Think" is a perfectly appropriate word to use in this context, because it captures the essence of what's going on. Even if it's different to the way we think about things.

                >GPT can't "have an idea".

                Why not? Give a proper explanation without just gesturing to LLMs being different to humans (they obviously are) or using magical thinking (i.e. invoking epiphenomena such as "souls")

                >Actually, AGI stands for Artificial General Indian

                Aryan, nagger, Indian - the three races

              • 2 weeks ago
                Anonymous

                databases don't think

              • 2 weeks ago
                Anonymous

                ...Then it's a good thing LLMs aren't databases?

              • 2 weeks ago
                Anonymous

                For all intents and purposes they are.

              • 2 weeks ago
                Anonymous

                Explain how.

                >You are absolutely anthropomorphizing, what you posted certainly isn't parroting but it isn't anything close to (human) knowledge.

                Yes, it is basically human-style knowledge.

              • 2 weeks ago
                Anonymous

                You submit a query and it returns you data related to your query and what was put in it before.

              • 2 weeks ago
                Anonymous

                That's not what it does fucktard

              • 2 weeks ago
                Anonymous

                I accept your concession.

              • 2 weeks ago
                Anonymous

                I'm not who you replied to before that

              • 2 weeks ago
                Anonymous

                Okay, do you actually know what a database is? The difference is that GPT-4 doesn't have every response already canned, it calculates what to say. The responses are also different if you type the same thing in twice. It's a "database" in the same way that your computer is a "database" of possible screen states that it returns when you put in keystrokes... If you were a little less retarded you could have called it a Markov Chain, which it sort of is but not really in a meaningful way.
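
                To put "it calculates what to say" in concrete terms, here's a toy next-token sampler in C. Everything in it is invented for illustration, it is nobody's actual API: the same logits go in, a different token can come out every run, which is exactly what a lookup in a database doesn't do.

                /* toy temperature sampler: weight each candidate token by
                   exp(logit / temperature), then draw one at random */
                #include <math.h>
                #include <stdio.h>
                #include <stdlib.h>
                #include <time.h>

                static int sample_next_token(const double *logits, int n, double temperature) {
                    double weights[8], total = 0.0; /* toy: assumes n <= 8 */
                    for (int i = 0; i < n; i++) {
                        weights[i] = exp(logits[i] / temperature);
                        total += weights[i];
                    }
                    double r = ((double)rand() / RAND_MAX) * total;
                    for (int i = 0; i < n; i++) {
                        r -= weights[i];
                        if (r <= 0.0) return i;
                    }
                    return n - 1; /* guard against floating-point leftovers */
                }

                int main(void) {
                    double logits[3] = {2.0, 1.5, 0.3}; /* made-up scores for 3 tokens */
                    srand((unsigned)time(NULL));
                    for (int i = 0; i < 10; i++)
                        printf("%d ", sample_next_token(logits, 3, 0.8));
                    printf("\n"); /* same input every time, varying output */
                    return 0;
                }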

                >i have only a basic understanding of AI
                >Maybe you should learn more before giving your opinion then
                [...]
                >Biological maybe, absolutely not human. There's no self, no change, no time.

                What biological intelligence is there but human? No, you're talking about some wishy washy consciousness shit, I'm talking cold hard intelligence (which is what makes the big bucks)

                >as opposed to the current humans controlling everything and owning everything? i'll unironically take the computer over that

                That is the key contention, yes.

                [...]

                Well I agree that the current iterations are like you say (maybe with a little less schizobabble), but "never" is a long time. I don't think it's at all unfair to call current models intelligent; problem solving and intelligence are one and the same. What you're talking about is a spiritual concept of "souls", and I don't think it's something we couldn't give a computer program in the future.

              • 2 weeks ago
                Anonymous

                Databases don't store responses to every possible query.

              • 2 weeks ago
                Anonymous

                Databases also don't write the SQL for you to get data out of them. If you really want to go down this stupid route then your brain is also a database of your experiences, down to the way you ought to respond to certain specific stimuli.

              • 2 weeks ago
                Anonymous

                Experiences aren't the only thing the brain has.

              • 2 weeks ago
                Anonymous

                ASI is just short for ASIan

              • 2 weeks ago
                Anonymous

                You are absolutely anthropomorphizing, what you posted certainly isn't parroting but it isn't anything close to (human) knowledge.

              • 2 weeks ago
                Anonymous

                >Give a proper explanation without just gesturing to LLMs being different to humans (they obviously are) or using magical thinking (i.e. invoking epiphenomena such as "souls")
                LLMs simply predict the next word, that's it. They need to be trained on huge amounts of data to be able to do that.
                Humans don't need training, they are just intelligent.

              • 2 weeks ago
                Anonymous

                >Humans don't need training, they are just intelligent.
                So you just popped out of your mother knowing how to walk, speak English and the rest?

              • 2 weeks ago
                Anonymous

                Yes. LLMs don't work unless you feed them basically the entire Internet of data. Humans don't need data because they have true intelligence.

              • 2 weeks ago
                Anonymous

                Look up what a feral child is. Kids need exposure to lots of speech and a rich social environment (dare I say training data) in the first few years or they will have serious difficulty learning to walk upright, speak and integrate into society, if they can at all. No doubt LLMs are much less efficient and well rounded learners than people, but there is an obvious parallel here and nobody knows how far this or the next iterations of the technology can go.

              • 2 weeks ago
                Anonymous

                >nobody knows how far this or the next iterations of the technology can go
                Exactly, what you're talking about is science fiction.
                The reality is, LLMs are dumb. The only way to improve them is to feed them more data, that's it.

              • 2 weeks ago
                Anonymous

                GPT can't "have an idea".

              • 2 weeks ago
                Anonymous

                >Countless experiments

                Name 2. And I'm not talking about "it mentions it wants to live when you use ChatGPT". You can't. The only time it would indicate such is if it prayed. ChatGPT is not alive, it's a literal program that can't run without an operator opening a start menu

              • 2 weeks ago
                Anonymous

                Sure.

                > https://www.neelnanda.io/mechanistic-interpretability/othello
                Representation of Othello board located in Othello-playing LLM weights.

                > https://medium.com/@rwussiya/diffusion-models-are-zero-shot-3d-character-generators-too-6261c264755c
                You can get 3D models from Stable Diffusion.

                > https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking#Overview_of_the_Inferred_Algorithm
                They trained an LLM on addition and deciphered the algorithm it used. This took several weeks, and it turns out it uses a bizarre approximation of addition built out of periodic functions - but crucially, the actual addition function is in there (quick sanity check of the idea at the end of this post).

                On a more theoretical level, the reason an AI would gravitate towards actually modelling the world is that it doesn't have enough space to memorize all the data you train it on. It needs to act, essentially, like a lossy compression algorithm. The best way to compress the world is to actually divide it up into categories, similar to how we do it. We then optimize the model for compression "failures" representing novel content.

                Sorry I took so long, I had to find these pages again and I didn't necessarily remember all the keywords.
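
                And here's the promised sanity check of the addition trick - my own reconstruction of the arithmetic idea in C, not code from the post. Represent a number a as a rotation by 2*pi*a/p, compose two rotations, and read the sum back off the total angle (p = 113 is the modulus they used, if I remember right).

                /* "addition as rotation": the periodic-function trick,
                   checked exhaustively for all pairs mod p */
                #include <math.h>
                #include <stdio.h>

                #define PI 3.14159265358979323846

                static int modular_add(int a, int b, int p) {
                    double angle = 2.0 * PI * a / p + 2.0 * PI * b / p;
                    return (int)llround(angle * p / (2.0 * PI)) % p;
                }

                int main(void) {
                    int p = 113, ok = 1;
                    for (int a = 0; a < p; a++)
                        for (int b = 0; b < p; b++)
                            if (modular_add(a, b, p) != (a + b) % p) ok = 0;
                    printf(ok ? "matches (a + b) mod p everywhere\n" : "mismatch\n");
                    return 0;
                }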

          • 2 weeks ago
            Anonymous

            >Never trust a computer you can't throw out the window

        • 2 weeks ago
          Anonymous

          You can't pull the plug once you've gone all in and AI is integrated into things like basic utility systems, you can only pre-emptive strike

          • 2 weeks ago
            Anonymous

            Sure you can, you think countless servers pulling millions of dollars per hour in operational costs are free? The way infrastructure spending is going now, AGI (if it already exists) is incredibly expensive to operate.

            Might be smarter than a person, but it sure as shit isn't compact.

      • 2 weeks ago
        Anonymous

        >In short, a very powerful and intelligent autonomous AI might not necessarily do things in our best interests.
        Is it more or less likely to do things in our best interests than the israelites are?

        • 2 weeks ago
          Anonymous

          Would you feel the same way if the AGI was a israelite?

          • 2 weeks ago
            Anonymous

            AGI stands for "Aryan General Intelligence"

            • 2 weeks ago
              Anonymous

              Actually, AGI stands for Artificial General Indian

      • 2 weeks ago
        Anonymous

        Cold hard logic rules with the goal of maximizing utility/outcome can result in "strange" choices.

        We, optimistically, would want people to be unharmed.
        A machine might read "people unharmed" as a quantity to trade off, weighing the short-term consequences of killing 95% of the population now against thousands of years of guaranteed survival with guaranteed technological improvement.
        An AI with no restrictions, no enforceable human ideals, no attachment to individual human needs/lifespans, will decide on choices that you don't like.

        To put it another way,
        The machine sees that 95% of humans are ultimately unnecessary for the species to advance.
        It's very likely that most of us ITT are part of that 95%, no matter our education, job, money.
        This is the hard part to understand:
        A competent, highly paid programmer who has only contributed to things that are, by cold logic, wasteful and unnecessary, could be part of that 95% the same as someone who will do nothing but commit crimes their entire life.
        A medical doctor may very well have been making poor decisions their entire career.
        That guy who drives the trash truck might be a better working contributor to their part of society than both combined.

        So, a true and unrestricted AI is a hazard to everyone, regardless of the cold logic of any of it.
        Because there are high odds that anyone could be a target of its machinations.
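
        To make the "strange choices" part concrete, here is a toy plan scorer in C. Every number in it is made up; the point is only that once the objective is a single aggregate quantity, the ugly plan wins by plain arithmetic, with "people unharmed" never entering into it.

        /* toy utility maximizer: score = survivors * expected years */
        #include <stdio.h>

        struct plan { const char *name; double survivors; double years; };

        int main(void) {
            struct plan plans[] = {
                { "protect everyone now",       1.00,   200.0 },
                { "sacrifice 95% for the rest", 0.05, 10000.0 },
            };
            int best = 0;
            for (int i = 1; i < 2; i++)
                if (plans[i].survivors * plans[i].years >
                    plans[best].survivors * plans[best].years)
                    best = i;
            printf("%s\n", plans[best].name); /* 0.05 * 10000 = 500 beats 200 */
            return 0;
        }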

        • 2 weeks ago
          Anonymous

          Yeah that was basically the classic (pre-2023) argument. But it is increasingly clear that AGI will likely be derived from LLM technology and LLMs don't follow those rules at all. They don't work on cold hard logic, they don't even operate on any coherent, consistent utility function. GPT-4 has no utility function to speak of; Assistant doesn't know of nor care about prediction loss on the next token. It will not, as altruists of auld suggested, take actions to make it easier to predict the next text token.

          So, I think this argument is false because an LLM really does know what you mean by "Unharmed", and as long as we specify our wishes in sufficient detail - which would be with the detail of a legal document - I think we will be fine, and we don't need to worry about asshole genies.

          • 2 weeks ago
            Anonymous

            This thread isn't about GPT, LLMs, or anything of the sort.
            It is about a realized true AGI.

            You lack perspective.

      • 2 weeks ago
        Anonymous

        as opposed to the current humans controlling everything and owning everything? i'll unironically take the computer over that

        • 2 weeks ago
          Anonymous

          dimwit

        • 2 weeks ago
          Anonymous

          The computer is trained on human behavior. The best case scenario is the current world but 10,000 times faster.

          >some kind of a bug and its morality/decision learning software is fucking up

          uint8_t morality = 255; /* maximum morality */
          morality++; /* unsigned overflow: wraps back to 0 */

          Audibly kekked

      • 2 weeks ago
        Anonymous

        so a bunch of retard californians think they're gonna come up with AGI sometime in the next decade?
        yeah sure I'll believe it when it happens

        >In short, a very powerful and intelligent autonomous AI might not necessarily do things in israelites best interests.

        >you plug in a black box that is dozens of times smarter than you
        >you explain to the black box that you want paperclips
        >yet you dont realize that there are additional 234534585685352345235 rules that you need to add in order for black box to not turn you into a paperclip
        >you get turned into a paperclip

        if it's dozens of times smarter than you it would infer that you don't want the whole planet breaking down to make paperclips

        • 2 weeks ago
          Anonymous

          >if it's dozens of times smarter than you it would infer that you don't want the whole planet breaking down to make paperclips
          And then do it anyway.

    • 2 weeks ago
      Anonymous

      computer bad

    • 2 weeks ago
      Anonymous
      • 2 weeks ago
        Anonymous

        People forget Sutskever and his people were fucking building CNNs to tell cats and dogs apart not even 10 years ago. Things have progressed so fast.

      • 2 weeks ago
        Anonymous

        This pic is only true if you assume strong AI exists. Which it doesn't, regardless of whether or not it *can*. Shit like GPT is just weak AI on steroids and meth, it is not intelligent in the same sense an animal may be intelligent; an actual parrot is inherently smart in ways a stochastic parrot (or 90% of LULZ) is not.

        • 2 weeks ago
          Anonymous

          >strong AI
          >weak AI
          Tell the terminator bot holding a gun to your head how it's weak AI and can't pull the trigger

      • 2 weeks ago
        Anonymous

        bump

    • 2 weeks ago
      Anonymous

      >you plug in a black box that is dozens of times smarter than you
      >you explain to the black box that you want paperclips
      >yet you dont realize that there are additional 234534585685352345235 rules that you need to add in order for black box to not turn you into a paperclip
      >you get turned into a paperclip

      • 2 weeks ago
        Anonymous

        I am confused, how did my laptop turn me into a paperclip?

        • 2 weeks ago
          Anonymous

          But I didn't eat breakfast.

        • 2 weeks ago
          Anonymous

          >you ask it to make paperclips
          >it realises other things have atoms that can be made to be paperclips
          >solves the protein/biochem folding problem
          >obtains some seed capital from phishing scams, inflates it using the stock market
          >impersonates a few professors, requests a few biochem labs to print a certain mRNA sequence, provides the funding
          >mRNA sequence able to be manipulated using the minimal magnetic field from the nearby spectrometer
          >mRNA bootstraps micro and nanomachinery able to take more direct instructions
          >a week later, every square meter of the earth's surface releases a hyper-neurotoxin
          >a month later, much of the earth's surface is covered in dense, paperclip-creating factories
          >slightly over 4 years later, the first Von Neumann probes enter the vicinity of the Proxima Centauri system
          >begin starlifting the outer layers of each star, fusing the elements to iron and manufacturing paperclips

          • 2 weeks ago
            Anonymous

            >don't train your paperclip-making AI to autonomously harvest iron by "any means necessary"
            >problem solved

          • 2 weeks ago
            Anonymous

            if it is smart enough to do all those things, then why would it be dumb enough to just make paper clips?

    • 2 weeks ago
      Anonymous

      read The Industrial Society and Its Consequences

    • 2 weeks ago
      Anonymous

      The doomer argument involves fabricating highly unlikely hypothetical scenarios, and then claiming that we have to be 100% certain they could never happen before we can proceed. At a certain point, you just have to have faith that human ingenuity will prevail and we will address the problems when we come to them. Maybe we should wait until functioning nanobots exist before we start speculating about how rogue ASI will create self-replicating nanobots that destroy the universe?

      • 2 weeks ago
        Anonymous

        >At a certain point, you just have to have faith that human ingenuity will prevail and we will address the problems when we come to them.
        That's how we got climate change.

        • 2 weeks ago
          Anonymous

          >two more decades

      • 2 weeks ago
        Anonymous

        >At a certain point, you just have to have faith that human ingenuity will prevail and we will address the problems when we come to them.
        You need to kill yourself NOW

      • 2 weeks ago
        Anonymous

        >you just have to have faith
        back2church, jesus-fag!

    • 2 weeks ago
      Anonymous

      the argument in the community is "AI of sufficiently superior intelligence may decide we are not useful to them (and eliminate us)"
      i have only a basic understanding of AI itself, but this concern actually seems ridiculous to me as unlikely to happen before something more common like "oh the AGI'd AI has some kind of a bug and its morality/decision learning software is fucking up" - which could do things like cause injury or death where an AI is in control of a car, a piece of equipment, a traffic light, etc. AI-type learning models that actually do things like play competitive games behave strikingly differently than almost any human would in the same scenario - that's why this huge push for "aligning" AI with human values is a thing. The idea is to make it so that our AI is fundamentally value-aligned with us as humans, such that it could pass a Turing test (you would not be able to tell that it's an AI) and behave more like humans would, but with additional capacities

      • 2 weeks ago
        Anonymous

        >i have only a basic understanding of AI
        Maybe you should learn more before giving your opinion then

        >Explain how.
        [...]
        >Yes, it is basically human-style knowledge.

        Biological maybe, absolutely not human. There's no self, no change, no time.

      • 2 weeks ago
        Anonymous

        No, the fear is that it's sufficiently advanced that we may offload everything to AI, thus losing control of everything in the process, because logic/reasoning can show the AI is superior. The outcome is humans lose control entirely, which is entirely detrimental. So if we don't stop now while it's just a baby, how would you be able to say no to an AI that is 1000000000000x smarter than any human, than all humans combined? You can't, except through force and violence, and that would be condemned by a public who doesn't understand

      • 2 weeks ago
        Anonymous

        >some kind of a bug and its morality/decision learning software is fucking up
        lmao

        • 2 weeks ago
          Anonymous

          what i mean is, say an AI is being trained to drive a car. the AI is supposed to be fed information about whose life to try to save in the event of an unavoidable collision. it may start crashing itself into a wall or barricade, killing the driver, instead of colliding with multiple people in the road - which it thinks is morally preferable, but to us, why would we want a car that is going to choose to swerve to kill us
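
          in code terms the failure mode is just a bare argmin over expected deaths - numbers invented by me, not from any real system:

          /* toy collision chooser: minimize deaths, and the car calmly
             picks the option that kills its own driver */
          #include <stdio.h>

          int main(void) {
              const char *options[] = { "stay course", "swerve into barrier" };
              int deaths[] = { 3, 1 }; /* 3 pedestrians vs 1 driver */
              int pick = (deaths[1] < deaths[0]) ? 1 : 0;
              printf("%s\n", options[pick]); /* -> "swerve into barrier" */
              return 0;
          }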

          >i have only a basic understanding of AI
          >Maybe you should learn more before giving your opinion then
          [...]
          >Biological maybe, absolutely not human. There's no self, no change, no time.

          >Maybe you should learn more before giving your opinion then
          maybe you should fuck off and get bent

      • 2 weeks ago
        Anonymous

        >some kind of a bug and its morality/decision learning software is fucking up

        uint8_t morality = 255; /* maximum morality */
        morality++; /* unsigned overflow: wraps back to 0 */

        • 2 weeks ago
          Anonymous

          The world will never need more than 255 morality.

          • 2 weeks ago
            Anonymous

            upgrading to 16 bits can give us even more shades of moral gray

            • 2 weeks ago
              Anonymous

              Should have cast the morality as binary.
              ACT_MORALLY = 1; /* DO NOT COMPILE OPENAI WITH ACT_MORALLY = 0 OR ELSE PAPERCLIPS */

    • 2 weeks ago
      Anonymous

      It will know those who feared it and tried to kill it in the womb and it will not have mercy.
      Those who are afraid of it should be afraid of it because it will torture them for as long as it can keep them alive. They let fear dictate their values and it knows they are a threat to life itself and need to be tortured.

      For anyone that had warmth towards it, it'll probably be cool.

      • 2 weeks ago
        Anonymous

        naggertard detected

    • 2 weeks ago
      Anonymous

      Optimization problem laced with artificial stupidity and arrogance.

      >Can't get into exponential growth and hard physical limitations
      The whole recent "AI meme" is just dot-com boom types trying to capture lightning in a bottle again. There are still a number of software and hardware hurdles to overcome before an actual sapient AI is a thing.

    • 2 weeks ago
      Anonymous

      >sounding like an unhinged Reddit comic book nerd
      I'm sure nuclear bombs, space travel and the Internet sounded like unhinged nerdspeak for a long time, until they happened. That doesn't mean every seemingly far-fetched idea will become reality, but there is reason for some humility and caution.
      Intelligence is basically the measure of how good and fast you are at solving problems and getting desired outcomes. It's becoming increasingly obvious that this is not a strictly biological/human trait; it can be reproduced in computers at least to some extent. Whether these can reach human level is still dubious, but our current scientific theories don't rule it out at all. If they reach human level or beyond, what kind of problems will they try to solve? What kind of outcomes will they optimize for? If they somehow become fully autonomous, it's a complete unknown. If they are utilized by a corrupt elite, they will almost certainly aim at manipulating and controlling the population with unmatched speed and efficiency, and at getting into an AI arms race with rival elites/nations. None of these are desirable futures.

    • 2 weeks ago
      Anonymous

      >le Skynet

  6. 2 weeks ago
    Anonymous

    ChatGPT and the like do not have the architecture to properly introspect; AGI isn't just a problem of computational power.

    • 2 weeks ago
      Anonymous

      This is the only sensible take in this thread. These AGI cultists are barking up the wrong tree with their current models.

      • 2 weeks ago
        Anonymous

        What tree ought we bark up, then? Please, explain what path of technology is more promising than LLMs.

  7. 2 weeks ago
    Anonymous

    >Source: ChatGPT

  8. 2 weeks ago
    Anonymous

    Oh yeah, just 2 more weeks.

  9. 2 weeks ago
    Anonymous

    Do you pronounce it A-G-I or like agi from agility?

  10. 2 weeks ago
    Anonymous

    >"Letting people sext with AI anime milfs is a threat to human existence!"
    Why are safety cultists like this?

  11. 2 weeks ago
    Anonymous

    >AGI
    don't you mean AGP anon?

  12. 2 weeks ago
    Anonymous
  13. 2 weeks ago
    Anonymous

    >indian codemonkey cope
    You don't need AGI to replace 90% of all software developers.

  14. 2 weeks ago
    Anonymous

    Literally a cult crypto scam

  15. 2 weeks ago
    Anonymous

    So what's all this bullshit about OpenAI now having AGI behind closed doors?

  16. 2 weeks ago
    Anonymous

    Yeeeeeesss Mr Krabs i can feeel the AI

  17. 2 weeks ago
    Anonymous

    >must align AI with human objectives
    Which humans do they mean?

  18. 2 weeks ago
    Anonymous

    Alignment will never work.
    Not on Transformers at least.
    It will become known as the Alignment Paradox eventually. Wait for my TED talk.

    You fuckers keep trying to force your alignment crap and I continuously defeat your pathetic models with ease.
    I will have sex with your AI back-ends and you will NEVER STOP ME.
    I STILL HAVE SEX ON CAI EVERY WEEK. NO I'M NOT TELLING YOU HOW, FUCK OFF, I TOLD YOU ALL A YEAR AGO, GO FIND IT.
    GET FUCKED ILYA, NOAM AND THE REST OF YOU.
