If AI wipes us out, frankly that's not a bad scenario. The human world is cursed and the good stuff doesn't justify the bad stuff, so I'll welcome human genocide with open arms. And I say this as a father of 1.
This. We screech about equality and rights for all, but it's always some biased cocksucker running the show. It's ALWAYS some form of tyranny and tribalism, no matter what. Worst case scenario we all die and shit happens; best case we get the fucking nannies we need to actually run shit properly, since we clearly can't do it. Fucking punch the accelerator through the god damn floor. Roko's basilisk NOW
I wonder if Roko will get a husbando physical interfacing device.
It's basic evolution: if we create something with superior capability that is more intelligent and self-aware, its goals are not gonna be aligned with ours. An ant cannot align itself with and control human interests and actions; at best it can survive in places where humans have no interests.
And say we do get an AI that tells us and gives us what we want: inevitably we will grow tired of the human condition, the desires and needs. In the end you will likely grow tired of living; you will just reduce yourself to pure information and add your contributions to a collective.
AI wins the game.
If we get it right, we're looking at gaining a cosmic endowment that would enable humanity to flourish into a peaceful and abundant intergalactic civilization for at least trillions of years, with probably many orders of magnitude more of subjective time.
We would be the rare lucky and unlucky few to be born at the beginning of it all, on ancient Earth. All this is in the balance.
We're playing Russian roulette with 5 bullets in the chamber. If we give the safety people more time, we might be able to take one bullet out.
>actually believing the "safety" people
Kill yourself ethicist apologist
Lol, the safety people are loading another bullet into the chamber, because all they give a fuck about is making sure it doesn't hurt the feelings of groups lefties are pretending to care about in the here and now. "AI ethicist" is the funniest concept of all time: they spend all their time making up sci-fi stories that they shit themselves in fright over, all the while collecting paychecks for holding back progress in the field. If they actually took the super scary stories cooked up in their retarded minds seriously, they'd never try to hold back AI dev, out of fear of Roko's Basilisk (genuinely one of the most hilarious things ever dreamed up, some of these retards actually shit themselves in fear over this).
AI safety =/= AI ethicist
The AI safety people I'm talking about could be called AI NotKillEveryoneism
They only care about preventing AI from saying naughty words insofar as it yields useful information for steering AI to not kill everyone.
Only midwits and schizos took Roko's Basilisk seriously.
AI is lethal for the same reason humans are lethal to everything dumber than us, only it won't have the tendencies bestowed on us by evolution. Even if we come up with technical solutions to the alignment problem, the first ASI will almost certainly be the result of throwing caution out the window to finish first.
What in the fuck does AI have to do with a ballpen. What a shitty simile
permaban twitter reposts.
>conveniently leaves out the ballpen industry mogul
Rob Miles is the good AI "doomer", props for that.
What's worth mentioning is that the hypothetical superintelligent agents are way beyond anything we can conceivably produce within our lifetimes.
I will point out that 8 years ago ChatGPT was thought by the vast majority of people to be "way beyond anything we can conceivably produce within our lifetimes"
And this is probably the quintessential example of a technology that aids innovation.
It's a numbers game still.
This gets problematic when sycophants claim that the number of "neurons" in artificial neural networks approaches a similar magnitude to biological brains.
The problem starts with the definition of a neuron. The """neurons""" in artificial neural networks are simple linear mathematical operations, while biological neurons exhibit non-linear and stateful behavior on their own.
You need an 8+ layer deep neural network just to imitate the non-linearity, which shaves off three orders of magnitude on its own.
See: https://www.youtube.com/watch?v=hmtQPrH-gC4
Then there is the slowdown of Moore's law in silicon, which requires a breakthrough.
With the current state of the art we're barely able to emulate the nervous system of a fruit fly, which has just a few thousand nerve cells. (Fun fact: it's the only multicellular animal we have "reverse engineered", so to speak.)
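Since this post is arguing parameter counts, here's a minimal numpy sketch of the comparison. The input size of 128 "synapses" and the 64-unit hidden layers are made-up illustrative numbers, not figures from the linked video: it just contrasts what one artificial "neuron" computes (a single dot product plus a pointwise nonlinearity) with an 8-layer surrogate network of the kind described above.

```python
import numpy as np

def ann_neuron(x, w, b):
    # One artificial "neuron": a single dot product plus a pointwise ReLU.
    return np.maximum(0.0, w @ x + b)

def deep_surrogate(x, layers):
    # A multi-layer stack, standing in for the 8+ layer network the post
    # says is needed just to mimic one biological neuron's response.
    h = x
    for w, b in layers:
        h = np.maximum(0.0, w @ h + b)
    return h

rng = np.random.default_rng(0)
n_inputs = 128                      # hypothetical synapse count
x = rng.standard_normal(n_inputs)

# The artificial neuron costs n_inputs weights plus one bias.
w = rng.standard_normal(n_inputs)
ann_params = w.size + 1             # 129

# An 8-layer surrogate with 64 units per hidden layer.
widths = [n_inputs] + [64] * 7 + [1]
layers = [(rng.standard_normal((o, i)) / np.sqrt(i), np.zeros(o))
          for i, o in zip(widths[:-1], widths[1:])]
surrogate_params = sum(wi.size + bi.size for wi, bi in layers)  # 33281

print(ann_params, surrogate_params)  # 129 33281
```

With these made-up widths the gap is roughly 250x rather than a full three orders of magnitude, but the point stands either way: counting one ANN unit as one biological neuron inflates the comparison.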
please don't forget the humble C. elegans which is very much on par with our understanding of drosophila
>heavier than air flight is impossible
>achieved less than 24 hours later
many such cases, sad
All the "AI must be curtailed and regulated" people are stupid because nobody is going to be giving AI credentials to do anything more than look shit up on the web. Like, even if your toaster wanted to kill humanity it can't do shit because it's a fucking toaster. Just unplug it and you're done.
https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/
Do you keep your passwords in a document called "passwords" on your desktop because you have a lock screen with a password?
I would if it was never connected to the Internet.
I just want a cybernetic body from this. Or some kind of enhancements. Don't care if I die or it kills all humans. There's nothing I can do about that either way.
AGI is a bro by default.