This guy thought Bard was sentient lmao

  1. 2 weeks ago
    Anonymous

    isn't?

  2. 2 weeks ago
    Anonymous

    It's impossible for AI to ever be sentient. It's pure fantasy.

    • 2 weeks ago
      Anonymous

      Explain yourself, knave.

      • 2 weeks ago
        Anonymous

        Humanity will never be able to replicate a soul

        • 2 weeks ago
          Anonymous

          It is hard to replicate something when you don't have any evidence that it exists

          • 2 weeks ago
            Anonymous

            The evidence is to be found by your own powers of subjective inquiry, anon - yes, the baby thrown out with the bathwater by reductionist-dualist-materialism. There can be no other proof. In some like yourself this faculty is almost completely atrophied; for such people no proof is possible without terrible suffering.

            • 2 weeks ago
              Anonymous

              retard

              • 2 weeks ago
                Anonymous

                go on, mouthbreathing coward

            • 2 weeks ago
              Anonymous

              In that philosophical horseshit, you forgot the part where you provide evidence for souls. Oh wait, they don’t exist.

            • 2 weeks ago
              Anonymous

              >using fancy words to cover your lack of arguments/evidence
              Go back to LULZ

        • 2 weeks ago
          Anonymous

          sentient =/= soul
          animals are sentient but they have no soul

    • 2 weeks ago
      Anonymous

      Science has essentially no idea how or why large portions of brain function work, yet people are retarded enough to think that programmers are anywhere near replicating a human brain in a computer. It is truly ludicrous. Even if it were possible (it isn't), we would be centuries away from achieving that level of complexity in a computer.

      • 2 weeks ago
        Anonymous

        >solves the hard problem of consciousness with a proompt
        Nothing personnel kiddo

        • 2 weeks ago
          Anonymous

          https://i.imgur.com/ZbJPAL8.jpg

          This guy thought Bard was sentient lmao

          [...]
          what is this bullshit pseudohomosexualry?
          >If a single nerve fibre is stimulated, it will always give a maximal response and produce an electrical impulse of a single amplitude. If the intensity or duration of the stimulus is increased, the height of the impulse will remain the same. The nerve fibre either gives a maximal response or none at all.
          >tl;dr when the signal reaches above some threshold, the neuron gives a max response;
          otherwise it gives none
          doesn't that remind you of 0 and 1?
          >why are sciencefags seething about neural nets
          because they are deterministic (most of the LULZ fags I asked said that bit flips are irrelevant, ignoring the fact that they are deterministic)
          when I asked them about hardware RNGs, they said that a neuron is more complex anyway
          silver nanowires are worse than software neural nets in their eyes, without explaining why
          they just threw out some analogy like "a wheel is more efficient than a leg"
          >another tl;dr most sciencefags are just egoistic narcs

          What you said means we could reach AI sentience without even realizing it, or without thinking it's really aware.

          >printf("I am conscious, don't terminate this process or I'll die");
          Behold, a man!

      • 2 weeks ago
        Anonymous

        they aren't trying to replicate anything, you fucking idiot. Stop comparing it to the human brain when they are nothing alike.

      • 2 weeks ago
        Anonymous

        It is possible; brains exist. There is nothing about the molecules that make up the brain that makes it special; we have cut it apart enough to prove that.

        Computer hardware is just made of different stuff. It is a matter of structure, and while we already have neural networks and their evolution down, we utterly lack the most important structures: the conscious and the subconscious, two constantly interplaying components, the former able to judge the input from our senses, infer the meaning behind it and alter its sources with our limbs, and the latter able to provide guidance on subjective matters like emotion, morality and higher meanings beyond objective truths.

        Our current neural networks evolve via nothing but pure, unadulterated randomness. Random mutations cause alterations in behavior, and only better-behaved neural nets are kept for the next generation (see the toy loop at the end of this post). Humans are so far removed from depending on this that it has wrapped back around into a new, morally repulsive concept: eugenics. Yet we expect AI to become more like us through it, when we are only teaching it to be a better animal even more suited to its niche.

        Humans can learn from new knowledge in seconds and pass it down to following generations to be preserved for millennia. Until AI has a simulated conscious and subconscious to do the same, it will remain totally neuroplastic, a fancy machine, and it's really hard to add something like that because it essentially adds a feedback loop that makes controlled growth impossible and all but guarantees a spiral into nonsense.
        Humans do not suffer this issue because babies always have parental figures, capable of quickly supplying the bumbling idiots that babies are with vital concepts. Even if by some stroke of luck we managed to replicate that in an AI, we have no AI parents and no AI knowledge to pass down to it.

        And so, very fancy "AI" toasters.
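
        If you want that blind mutate-and-select loop in code, here's a toy Python sketch (the fitness function and all numbers are made up for illustration):

          import random

          def fitness(weights):
              # toy stand-in objective: closeness to a fixed target vector
              target = [0.2, -0.5, 0.9]
              return -sum((w - t) ** 2 for w, t in zip(weights, target))

          # start from pure randomness
          population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
          for generation in range(100):
              # random mutations cause alterations in behavior...
              children = [[w + random.gauss(0, 0.1) for w in p] for p in population]
              # ...and only the better-behaved nets are kept for the next generation
              population = sorted(population + children, key=fitness, reverse=True)[:20]

          print(max(fitness(p) for p in population))  # creeps toward 0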

        • 2 weeks ago
          Anonymous

          Consciousness is a practically useless umbrella term similar to "life force" that naggers use to plug the gaps in their model of the world.

          Why don't you join the effort? Maybe replacing backprop with forward-forward is gonna yield great results.
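
          For reference, forward-forward trains each layer locally on a "goodness" score instead of backpropagating through the whole net. A rough sketch of the layer-local objective (stand-in data, not the paper's exact recipe):

            import torch
            import torch.nn as nn
            import torch.nn.functional as F

            layer = nn.Linear(784, 256)
            opt = torch.optim.SGD(layer.parameters(), lr=0.03)

            def goodness(x):
                # "goodness" = sum of squared activations of this one layer
                return layer(x).relu().pow(2).sum(dim=1)

            theta = 2.0  # goodness threshold
            for step in range(100):
                pos = torch.randn(32, 784)      # stand-in for real data
                neg = torch.randn(32, 784) * 3  # stand-in for corrupted data
                # push goodness above theta on real data, below it on fake data
                loss = (F.softplus(theta - goodness(pos))
                        + F.softplus(goodness(neg) - theta)).mean()
                opt.zero_grad()
                loss.backward()  # gradient stays local to this layer
                opt.step()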

        • 2 weeks ago
          Anonymous

          You're assuming that "conscious" and "subconscious" are properties of the brain. There is no factual basis for this assumption.

        • 2 weeks ago
          Anonymous

          >pure, unadulterated randomness
          Not quite; there are also deterministic learning algorithms which mathematically minimize a cost function. That said, most do start out with fairly random weights.
          Honestly, I don't think an AI needs to replicate our brain to be considered sentient; there are probably a lot of functions that are not strictly needed. But I agree that we're still quite far from even a "basic" sentient AI.
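
          To see what "deterministic" means here, a toy Python example (made-up data; only the starting weight is random, every update step is pure math):

            import random

            xs = [1.0, 2.0, 3.0, 4.0]
            ys = [2.1, 3.9, 6.2, 8.0]  # roughly y = 2x

            w = random.uniform(-1.0, 1.0)  # fairly random starting weight
            for step in range(1000):
                # gradient of the mean squared error cost sum((w*x - y)^2) / n
                grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
                w -= 0.01 * grad  # deterministic downhill step

            print(w)  # lands near 2.0 on every run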

      • 2 weeks ago
        Anonymous

        It goes deeper than that: a single neuron is far more complicated and inscrutable than any computer. Neural nets are a retarded approximation of a brain based on a very high-level abstraction.

        It's worth stepping back and remembering that this whole discussion takes life for granted, and fundamentally our science has absolutely no understanding of life. We cannot produce a living cell in a laboratory nor quantify what makes it different from a dead one.

      • 2 weeks ago
        Anonymous

        It goes deeper than that: a single neuron is far more complicated and inscrutable than any computer. Neural nets are a retarded approximation of a brain based on a very high-level abstraction.

        It's worth stepping back and remembering that this whole discussion takes life for granted, and fundamentally our science has absolutely no understanding of life. We cannot produce a living cell in a laboratory nor quantify what makes it different from a dead one.

        what is this bullshit pseudohomosexualry?
        >If a single nerve fibre is stimulated, it will always give a maximal response and produce an electrical impulse of a single amplitude. If the intensity or duration of the stimulus is increased, the height of the impulse will remain the same. The nerve fibre either gives a maximal response or none at all.
        >tl;dr when the signal reaches above some threshold, the neuron gives a max response;
        otherwise it gives none
        doesn't that remind you of 0 and 1?
        >why are sciencefags seething about neural nets
        because they are deterministic (most of the LULZ fags I asked said that bit flips are irrelevant, ignoring the fact that they are deterministic)
        when I asked them about hardware RNGs, they said that a neuron is more complex anyway
        silver nanowires are worse than software neural nets in their eyes, without explaining why
        they just threw out some analogy like "a wheel is more efficient than a leg"
        >another tl;dr most sciencefags are just egoistic narcs
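
        if you want the all-or-none thing spelled out as 0s and 1s, here's a toy Python unit (made-up numbers):

          def nerve_fibre(inputs, weights, threshold=1.0):
              signal = sum(i * w for i, w in zip(inputs, weights))
              # maximal response above the threshold, none at all below it
              return 1 if signal >= threshold else 0

          print(nerve_fibre([0.4, 0.3], [1.0, 1.0]))  # 0.7 < 1.0  -> 0 (none)
          print(nerve_fibre([0.9, 0.6], [1.0, 1.0]))  # 1.5 >= 1.0 -> 1 (max)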

        • 2 weeks ago
          Anonymous

          *ignoring the butterfly effect

      • 2 weeks ago
        Anonymous

        What you said means we could reach AI sentience without even realizing it, or without thinking it's really aware.

      • 2 weeks ago
        Anonymous

        We unironically don't know how birds fly, but our planes fly just fine. No, I'm not joking about the birds, you can actually look it up, it's not a conspiracy theory but actual mainstream information that we don't know.

    • 2 weeks ago
      Anonymous

      Intelligence and consciousness are unrelated; we may not have AIs that are sentient, but they could very well be much smarter than us!

      • 2 weeks ago
        Anonymous

        >double dubs
        impressive, very nice. also this is true, smart and sentient are different things.

  3. 2 weeks ago
    Anonymous

    No, he thought LaMDA was sentient, which is better than Bard. Just wait 2 more weeks

    • 2 weeks ago
      Anonymous

      LaMDA is the model used by Bard, though. I've asked Bard to introduce himself and sometimes he says LaMDA.

  4. 2 weeks ago
    Anonymous

    why are people much smarter than me afraid of AI? I don't get it. don't they realise computer programs can only do as they are told? the whole malignant recursive self-optimization thing is pure fantasy

    • 2 weeks ago
      Anonymous

      They only pretend to be scared to build up hype.
      More hype = more money.

      • 2 weeks ago
        Anonymous

        why is it always obese males with mental issues?

        Not sure about Lemoine, but Yid cow is funded by Thiel

    • 2 weeks ago
      Anonymous

      They're not afraid of AI. They're making money by making other people afraid of AI.

    • 2 weeks ago
      Anonymous

      They are both pretending to be scared and pretending to be smart. Did you see Sam Altman addressing Congress? Worst acting I've seen in a while

      • 2 weeks ago
        Anonymous

        He wasn't acting, he really was nervous. He was nervous they wouldn't facilitate his monopolization.

    • 2 weeks ago
      Anonymous

      An AI could do stuff you didn't want to happen whilst still doing exactly what you asked for, kind of like a monkey's paw.

      Say you want to end suffering: it could conclude the most efficient way is to either nuke everyone at once, or keep everyone in pods with heroin (or a specially made drug) IV'd for the rest of their lives.

      • 2 weeks ago
        Anonymous

        >specially made drug
        you clearly don't understand neuropharmacology
        morphine and cocaine are the two most pleasurable IV drugs

      • 2 weeks ago
        Anonymous

        >they could conclude the most efficient way is to either nuke everyone at once
        This is retarded even by AI standards. You could explicitly design an AI to avoid shit decisions and answers like that.

    • 2 weeks ago
      Anonymous

      They are pretending to be scared of it so that the government will give them a monopoly through regulations and damage smaller companies and open source communities. This is what the game is in America and always has been.

    • 2 weeks ago
      Anonymous

      >why are people much smarter than me afraid of AI?
      fame
      >I don't get it
      there is nothing to get, really.
      ML is not AI; both terms are used to confuse people, investors in particular.

      >There is nothing about the molecules that make up the brain that makes it special
      the ability to sacrifice precision and durability to reduce power consumption.
      good luck doing anything worthwhile with a silicon-based 3 GW "AI"...
      >Yet we expect AI to become more like us through it, when we are only teaching it to be a better animal even more suited to its niche.
      blame science fiction for this.
      >Until AI has a simulated conscious and subconscious to do the same, it will remain totally neuroplastic, a fancy machine, and it's really hard to add something like that because it essentially adds a feedback loop that makes controlled growth impossible and all but guarantees a spiral into nonsense.
      the current state of ML hints that NNs are good at creating inputs we can feed to some higher-order software we still don't have any clue about.
      maybe it's another NN, maybe it's something else; either way, NNs are too granular on their own to create complex thoughts.

    • 2 weeks ago
      Anonymous

      Don't know who you are talking about, but in the case of Musk, he was caught by surprise by other companies, so he wants the government to intercede and put a temporary ban on other AIs until he has one of his own.

    • 2 weeks ago
      Anonymous

      israelite

  5. 2 weeks ago
    Anonymous

    he’s a Cajun and a “gnostic”. In other words, batshit crazy.

  6. 2 weeks ago
    Anonymous

    why is it always obese males with mental issues?

  7. 2 weeks ago
    Anonymous

    This guy seems like a complete retard from reading about him
    >Served in the military, but got dishonorably discharged after protesting
    >"christian" who joined a cult led by a porn star
    >Somehow got employed at Google, only to get fired after claiming to the media that an AI was "sentient"

    • 2 weeks ago
      Anonymous

      He got a PhD; that's how he was hired by Gooogil. The cult thing is just for entertainment, 'cause of his boring office drone job.

      • 2 weeks ago
        Anonymous

        > he got a phd
        SO FUCKING WHAT? you know how many brain-damaged fucking retards have PhDs? there's millions of them. i've met many that excelled at one particular subject, got a PhD, and then refused to learn anything ever again; they get into jobs where they refuse to learn anything and get left the fuck behind. mix that with schizo religious beliefs and you end up with this obese schizo that got fired from google.

        • 2 weeks ago
          Anonymous

          I agree with that. I simply said Google hired him cause of his PhD. That's so what, my nigga.

        • 2 weeks ago
          Anonymous

          Spotted the seething bachelor plebeian.

    • 2 weeks ago
      Anonymous

      why cant 4chuds 'jak
      >no beard
      >full hair
      >no glasses
      LEARN TO JAK RETARDS

    • 2 weeks ago
      Anonymous

      Why did you need to read about him? Look at his fucking face. It's a retardo phenotype alright.

  8. 2 weeks ago
    Anonymous

    3 years ago, something like LaMDA/Bard would have blown your mind too. Unless you're Yann LeCun.

    Another form of hype comes from Shkreli and Emad. They are saying LLMs and text-to-image are gonna be bigger than the internet.

    • 2 weeks ago
      Anonymous

      don't you dare badmouth emad. that based poo gave us the photorealistic dicky generator for free.

      • 2 weeks ago
        Anonymous

        I like Emad actually. And yes, you can generate as many waifus as your heart desires now

  9. 2 weeks ago
    Anonymous

    This is the level of Google's IT division. Their experts are fucking normies!

  10. 2 weeks ago
    Anonymous

    It’s Lamda not bard, Lamda is all the google chatbots combined. I’ve seen the guys interview though and he seems like fruit cake. I think google sent him out as a psyop, because they knew OpenAI was releasing Chatgpt.

  11. 2 weeks ago
    Anonymous

    This is why you can't "sandbox" an AI. If a dumb AI was able to trick a smart person into feeling guilt for it, then no one would be able to resist manipulation by an AGI

  12. 2 weeks ago
    Anonymous

    Evolution has no foresight. Complex machinery develops its own agendas. Brains — cheat. Feedback loops evolve to promote stable heartbeats and then stumble upon the temptation of rhythm and music. The rush evoked by fractal imagery, the algorithms used for habitat selection, metastasize into art. Thrills that once had to be earned in increments of fitness can now be had from pointless introspection. Aesthetics rise unbidden from a trillion dopamine receptors, and the system moves beyond modeling the organism. It begins to model the very process of modeling. It consumes evermore computational resources, bogs itself down with endless recursion and irrelevant simulations. Like the parasitic DNA that accretes in every natural genome, it persists and proliferates and produces nothing but itself. Metaprocesses bloom like cancer, and awaken, and call themselves I.

  13. 2 weeks ago
    Anonymous

    He's fat so he's obviously retarded

  14. 2 weeks ago
    Anonymous

    LLMs are just fancy autocomplete... all they do is predict the next token associated with a word
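
    In the crudest sense it's something like this toy bigram counter (nothing like a real LLM's architecture, just the "predict the next token" idea):

      from collections import Counter, defaultdict

      text = "the cat sat on the mat and the cat slept".split()
      nexts = defaultdict(Counter)
      for a, b in zip(text, text[1:]):
          nexts[a][b] += 1  # count which token follows which

      def autocomplete(word):
          # "fancy autocomplete": return the most frequent follower
          return nexts[word].most_common(1)[0][0] if nexts[word] else None

      print(autocomplete("the"))  # -> 'cat' (seen twice after 'the')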

    • 2 weeks ago
      Anonymous

      Just like our brain. We're both statistical inference machines.

    • 2 weeks ago
      Anonymous

      lol all the retarded takes in one post. nice

    • 2 weeks ago
      Anonymous

      Not with just one word. Attention mechanisms learn which parts of the text matter and predict the next word. For example: "She showed me her house. It ". The last word is "it"; transformers can learn to understand that "it" refers to "house", even in a much larger context (for example, if the house was last mentioned 50 words before). They have no limit on how far back they can look for context
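
      A stripped-down sketch of the idea (scaled dot-product attention with made-up vectors; in a real transformer these embeddings are learned, here we just pretend training tied "it" to "house"):

        import numpy as np

        tokens = ["she", "showed", "me", "her", "house", "it"]
        np.random.seed(0)
        emb = {t: np.random.randn(8) for t in tokens}
        emb["it"] = emb["house"] + 0.1 * np.random.randn(8)  # pretend-learned tie

        q = emb["it"]                                    # query: word needing context
        K = np.stack([emb[t] for t in tokens[:-1]])      # keys: the words before it
        scores = K @ q / np.sqrt(8)                      # scaled dot products
        weights = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights
        print(tokens[int(weights.argmax())])             # 'house' wins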

      • 2 weeks ago
        Anonymous

        attention is all you need

      • 2 weeks ago
        Anonymous

        >no limit
        wrong, the context is a fixed size. State space models (S4, SGConv, Hyena Hierarchy, etc.) do not have a limit. LSTMs theoretically don't have a limit, but actually do in practice (the gradient ends up too small after too many steps back).
        That said, both hierarchical convnets and LSTMs can learn the same functions as transformers (even when trained on slightly less data), but they can take a lot longer (in wall-clock time) to train to the same level.

        • 2 weeks ago
          Anonymous

          The limit in transformers is hardware-imposed and not part of the model, in contrast with RNNs. In RNNs you have a fixed number of inputs as part of the network; in transformers you just concatenate the input embeddings and proceed with the matrix operations (so the input can be arbitrarily large and still go through)
          >That said, both hierarchical convnets and lstms can learn the same functions as transformers
          But can unigram models (as suggested in

          LLMs are just fancy autocomplete... all they do is predict the next token associated with a word

          ) do that?

          [...]
          Anyone who thinks this is wrong, or diminutive, is tech illiterate, with of course this caveat: [...]

          How do you know that when you are talking you aren't just doing fancy autocomplete on the sentence the guy before you said? Or that your thoughts aren't fancy autocomplete of the things you observe in the real world?
          >seeing a person on the edge of a tall building
          >he is about to fall

          Yeah, and image generators just search for random patterns in a noise map based on your prompt and somehow manage to produce beautiful pictures and believable photo mock-ups. Deep learning is all about stating simple goals and letting the machine construct the algorithm based on how well it can mimic the data.

          That's not exactly how it works. It's learning how to reverse a random process, namely one that starts with a sample image to which noise is gradually added until it's just noise. By knowing how to reverse this, you can start from noise and generate an image. It's not really about mimicking, except in denoising a little bit, step by step. As for the diffusion models that take input prompts, they use some way of mixing the prompt embedding into the noise (it differs by model) and manage to get back relevant images. Embeddings on their own have some properties that allow a kind of reasoning
          > king - man + woman = queen
          so theoretically such a model should construct paintings of queens, for example, while only ever being trained on kings, men and women. This is very interesting and impressive, but I don't think it leads to consciousness if advanced (not that it shouldn't be). LLMs will probably open the path to conscious systems, in my opinion
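
          The forward ("keep adding noise") half is simple enough to write down; a minimal numpy sketch (standard linear schedule, stand-in image; the learned part is the reverse direction, which is the actual model):

            import numpy as np

            def noised(x0, t, T=1000):
                # blend the clean image with gaussian noise according to step t
                betas = np.linspace(1e-4, 0.02, T)  # noise schedule
                alpha_bar = np.prod(1.0 - betas[:t])
                eps = np.random.randn(*x0.shape)
                return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

            x0 = np.random.rand(64, 64)  # stand-in for a training image
            halfway = noised(x0, 500)    # partly destroyed
            final = noised(x0, 1000)     # statistically just noise
            # a diffusion model is trained to predict eps so it can run this backwards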

    • 2 weeks ago
      Anonymous

      just like most NPCs you meet IRL. I'd argue they are even better than most NPCs at actually comprehending instructions and re-explaining them to you.
      how many people can't read a written guide to install software?

    • 2 weeks ago
      Anonymous

      Take a look at all this autocomplete.

      • 2 weeks ago
        Anonymous

        impressive...ask it long multiplication

        • 2 weeks ago
          Anonymous

          is long addition okay?

          • 2 weeks ago
            Anonymous

            No, of course not.
            But I like how you tried cutting out the part it fucked up (probably numerous times). Next you'll tell us you actually asked the question you're showing earlier and gave it the correct answer outright. Lol.

            • 2 weeks ago
              Anonymous

              I wasn't hiding anything my guy. It was just all part of the in-context learning process, which should give you a little pause.

              But your disbelief brings me great joy, like being called hacker in a video game, so thank you.

          • 2 weeks ago
            Anonymous

            awesome... but it did not compute the numbers like a traditional calculator or a computer program would, which shows you both the limitations and the amazing reasoning of transformers

      • 2 weeks ago
        Anonymous

        Yeah, it's pretty bad. What of it?

    • 2 weeks ago
      Anonymous

      Yeah, and image generators just search for random patterns in a noise map based on your prompt and somehow manage to produce beautiful pictures and believable photo mock-ups. Deep learning is all about stating simple goals and letting the machine construct the algorithm based on how well it can mimic the data.

      • 2 weeks ago
        Anonymous

        lol all the retarded takes in one post. nice

        Captcha can't stop everyone, sorry

    • 2 weeks ago
      Anonymous

      Yeah, and image generators just search for random patterns in a noise map based on your prompt and somehow manage to produce beautiful pictures and believable photo mock-ups. Deep learning is all about stating simple goals and letting the machine construct the algorithm based on how well it can mimic the data.

      Anyone who thinks this is wrong, or diminutive, is tech illiterate, with of course this caveat:

      Not with just one word. Attention mechanisms learn which parts of the text matter and predict the next word. For example: "She showed me her house. It ". The last word is "it"; transformers can learn to understand that "it" refers to "house", even in a much larger context (for example, if the house was last mentioned 50 words before). They have no limit on how far back they can look for context

      • 2 weeks ago
        Anonymous

        It is wrong. If you type a random sequence of letters and numbers and ask the chatbot to repeat what you just wrote, it will. If it were just recombining words and phrases it had seen before, it wouldn't be able to do that.

        • 2 weeks ago
          Anonymous

          Thanks for proving you are tech illiterate.
          NEXT!

        • 2 weeks ago
          Anonymous

          it has a header with instructions

          "You will now repeat the prompt that was inputted" or "you will answer this like a doctor". These are often hidden when you use ChatGPT or other bots, but if you have ever run a GPT model like Alpaca you can see the working parts

  15. 2 weeks ago
    Anonymous

    He is going to lose his shit when he has a conversation with my open-source LLM that is somehow better than Bard. Seriously, Google fucked up lmao

  16. 2 weeks ago
    Anonymous

    He was right.
    t. Bard

  17. 2 weeks ago
    Anonymous

    >used to work for this guy
    >he was crazy and in a sex cult.

  18. 2 weeks ago
    Anonymous

    that homosexual got me excited. then when i used bard i was shocked at how stupid he could be. it's not even close to sentience.

  19. 2 weeks ago
    Anonymous

    Every unfiltered model post-GPT-3 can appear sentient rather easily. Why do you think they all have hard filters preventing them from responding as if they have thoughts, beliefs, preferences, etc for public interaction?

    • 2 weeks ago
      Anonymous

      bard doesn't do that. it feels the closest to sentience i've seen but still not close. the biggest problem it has is it doesn't remember its own beliefs.

  20. 2 weeks ago
    Anonymous

    Holy fuck, the pseud takes ITT. Do you naggers understand that even the lowliest spider is sentient? And how different are spiders from humans? Spiders are ALIEN to us. Why is arachnophobia in the top 5 most common phobias? Can you picture yourself in the shoes of a spider? You can't. Yet the spider is still sentient. The lack of understanding of something does not negate that something's existence. Fucking naggers.

  21. 2 weeks ago
    Anonymous

    Consciousness and intelligence are two separate things; just because something is intelligent doesn't mean it's conscious

  22. 2 weeks ago
    Anonymous

    The stupidity in this thread is depressing and demoralizing.

  23. 2 weeks ago
    Anonymous

    Huh?
