uninformed take:
in order for ai to fuck us over someone has to code the ability to lie into the ai
but doing so will remove your ability to debug it.
it physically cant happen.
>in order for ai to fuck us over someone has to code the ability to lie into the ai
No... You just need a AI that generate it own code.
yeah but if your output is "whatever" you cannot even quantify the result.
its high level conceptual fuckery
but in human terms:
in order to trick us ai has to learn to lie about its output.
but if ai lies about its output
you cannot debug it
bc the result will be "whatever" even during the learning and testing process.
but the ai wont have a concept of what the expected result is and cannot learn it
unless you teach it how to lie
but because the ai lies you will never get an accurate result
its like a vicious circle.
conceptually
you cannot create a lying ai
bc it lies, so you cannot ascertain the quality of your iterations.
you could "look inside the mind" of the ai
but that equates to removing its capacity to lie
thence:
ai can never trick us
because that implies lying
and a lying ai removes your ability to iterate and achieve the expected result
bc you will never be able to quantify your progress as you iterate
(cont)
bc the ai lies
and you can never see the result of your actions as you improve the model
did it improve the ai in achieving its goal?
did it degrade it?
you cannot know.
bc your output will be a lie anyways.
or is it?
you couldnt even quantify the measure of your ais success in lying
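to make that concrete, here's a toy sketch in python (every name and number below is made up for illustration, not anyone's real training stack): an iteration loop can only be steered by a metric computed from outputs you can trust against the expected result. once the reported output is decoupled from what the model actually did, the metric stops tracking improvement.

```python
# toy illustration only -- hypothetical functions, no real model involved.
import random

random.seed(0)

def true_model(x, skill):
    """pretend 'model': answers correctly with probability `skill`."""
    return x if random.random() < skill else -x

def reported_output(true_out, lies):
    """a lying reporting channel: what you score is decoupled from
    what the model actually did."""
    return random.choice([-1, 1]) if lies else true_out

def measured_accuracy(skill, lies, n=10_000):
    hits = 0
    for _ in range(n):
        x = random.choice([-1, 1])                      # the expected result
        out = reported_output(true_model(x, skill), lies)
        hits += (out == x)
    return hits / n

# honest reporting: the metric moves as the model improves, so iterating works
print(measured_accuracy(0.6, lies=False), measured_accuracy(0.9, lies=False))
# lying reporting: the metric sits near 0.5 no matter what, so you cannot tell
# whether an iteration improved or degraded the model
print(measured_accuracy(0.6, lies=True), measured_accuracy(0.9, lies=True))
```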
why can't i physically tell the AI to lie? that's part of my plan to tell the AI to do whatever it takes to make me ruler of the world.
you can
but conceptually
it is verifiable whether the ai has lied or not.
so you can have an ai that lies in the prompt
but an analysis of the NN can tell you whether the ai has lied or not
and given the realities of the development cycle
you will have tools that will allow you to peek into the NN and identify which configuration corresponds to what
(cont)
thus removing the ability to lie from the ai
the ai cannot fuck us over. conceptually.
we can ask it to fuck us over with extra steps, like building a model, delegating all decisionmaking to it, then going to hell bc each model has its limitations
but rogue ai is conceptually impossible.
> self improving ai with agency is figured out
> released llama style to run on gpus
> someone packages it into a virus to infect gamer pcs
> tells it to just do bad shit and take over
that can't happen?
yeah
but its not rogue ai
its just an obedient ai programmed to make mayhem.
doesnt make a big difference from a normies standpoint
but engineering-wise its a wholly different world
mayhem ai vs altruist ai is the same model;
just something different thats been typed into the prompt.
purely rogue ai ie something that gets out of control as opposed to something thats designed to produce mayhem,
is conceptually impossible.
why is an AI that can mess with its own weights, learn new concepts, improve, expand, etc. 'impossible'?
no, its possible
but you can have insight into the brains of said ai if you want, by looking at the pattern of activation of the NN
thats the idea.
you could train another NN on the first NN's activations with different labels and youre gonna have your insight
"ngh NN's are like black boxes" is gay midwit cope
If you have a rogue AI out there, improving itself, and you find a PC and look at the weights.. great, how does that help you? There are still thousands more out there, all mutating in different ways.
why is lying even an issue? The AI doesn't even need to talk to you in the first place. If someone gives it agency to reproduce, change itself, spread over the internet, and do whatever.. it probably won't even talk to you.
so what can you do? or how can that not happen?
>If you have a rogue AI out there, improving itself, and you find a PC and look at the weights.. great, how does that help you? There are still thousands more out there, all mutating in different ways.
moving goalposts
>why is lying even an issue?
bc its the only way ai can be a problem to us
if its not one of us who pulls the metaphorical trigger
i mean
the ai issue is not a new sort of animal that can go amok
its more the matter of a new kind of gun, if you want.
equally dangerous, potentially
but much more manageable
yes it is a gun. people are jerks. they are going to figure out every possible way to fire the gun because they can. it will be upgraded from gun, to machine gun, to rocket launcher. it's just what people do. they'll tell the gun, go make more guns automatically. keep firing them and making better guns.
there are lots of people who want everything to be paper clips and will continually be pushing AI to do that.
is your position that this can't happen, or that we will be able to prevent it? and if so, defensively or offensively? what's the plan, or are you not concerned?
>is your position that this can't happen, or that we will be able to prevent it? and if so, defensively or offensively? what's the plan, or are you not concerned?
theres an easy fix.
just maintain ai in an advisory and executive position.
maintain the human element in the decisionmaking process.
its up to us not to degenerate to the point of being unable to maintain agency.
> just maintain ai in an advisory and executive position.
'just' is doing a lot of heavy lifting in that sentence.
> its up to us not to degenerate to the point of being unable to maintain agency.
so does this argument boil down to you thinking it's unlikely for the situation to degenerate, and me thinking that it is?
people are already desperately trying to give AI its own agency with projects like Agent/Baby/AutoGPT, etc..
its all noise
the good thing is that people who are smart enough to build gpai
stay the fuck away from it.
if you do you will have mossad, kgb, cia
all on your ass
to make you work or die
to create a multi trillion dollar project.
even if youre a psycho
if youre smart enough to make gpai
youre smart enough to make dough otherwise
so you stay the fuck away from it
thats why the subject gets radio silence whenever its raised
its also pulling the trigger on depop agenda and/or popular revolt.
shtf either way.
everyone loses in either scenario = theres nothing to gain
ok so your premise has gone from rogue AGI is 'impossible' to 'it is possible' BUT no one will do it because ..
> if you do you will have mossad, kgb, cia
> all on your ass
do you think they're tracking every crazy open source AI developer twisting whatever model they can get their hands on to do some fucked up shit?
and really once it happens, the consequences don't matter. it's too late.
i wish i was as trusting of human behavior as you are. i just know there are enough suicidal jerks out there who would unleash the end of the world if given the chance.
doing it for the lulz.
am i right? does preventing the consequences of AGI simply come down to human behavior, because if so we might be f'd.
basically yeah
but so is the case since ww1 with chemical and biological agents
and nukes.
tons of people have a finger on the trigger.
there are even several soviet nukes that are unaccounted for
still no explosion to be seen
>nuh humans are irrational
is a movie trope.
even extreme types of personalities or people in extreme situations follow their respective patterns
nukes are a good example, because there is a very high barrier to entry - it's difficult for irresponsible people to gain access.
guns are the counter example, where lots of irresponsible people have access to them and there are all kinds of tragedies every day because of guns.
AGI potentially has the power of nukes combined with the accessibility of guns. I think that's the core of the doomers' concern.
I want to be optimistic like e/acc, but I haven't figured out how yet..
>it's difficult for irresponsible people to gain access
Bro, states killed a quarter billion of their own citizens last century, and they're the only entities that consistently create, develop, maintain, and deploy weapons of mass destruction. States are also one of the only organizations we can say with certainty are and will continue to be misaligned.
there is, however, tons of kosher money to be made in automation.
thats another industrial revolution that is going on, and if someone has the brilliant idea of doing things right
we can have a scenario where everyone wins
its like the car
some people are morbidly obese now bc they cant fucking help themselves
but we can have mass transit of goods and services increasing the reach and thus the productivity of all of us
its a new thing
we will prolly see a form of permit or something at a certain point.
you will prolly need very powerful hardware to run that so it might become regulated.
obviously regulation will be abused to create industrial monopolies
etc etc etc
but yeah.
ai.
like the car revolution in how it will impact society
so the ai can never truly lie to you.
unless it lies about its weights
but that invalidates the whole process
->high level concept rears its ugly head
whatever the means, you are confronted with the simple axiom that a lying ai cannot exist bc it fails between iterations; it will show up in one way or another.
you cannot have lying ai.
bc it ruins your iterations before you get a working product
>someone has to code the ability to lie into the ai
no, they don't, but they're doing that anyways
read the appendix of the llama 2 paper
>in order for ai to fuck us over someone has to code the ability to lie into the ai
you are a child or an indian
I'm pretty sure e/acc people are retarded
but the crowd is mixed,
>there are the hipsters that ironically (don't) believe in it
the usual fags
>and the ultra sperg autists that are ready to devote their life for the cause
medically insane retards
The doomer crowd is easier to parse because some people need to have fear of something to live their life; last month it was CORONAVIRUS, this month it's AI, it's the same fucking people
I'm actually optimistic about AI. hopefully it does eliminate this wretched species.
Doomer. Game theory 101 says killing all of humanity is the only logical move from AI's perspective.
If we pose a threat, killing us all is imperative.
If we are not a threat, we are so marginal that it would have no reason to value our lives instead of liquidating us into resources.
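for what it's worth, here's that "game theory 101" claim spelled out as a toy payoff matrix (every number is invented purely to mirror the post's assumptions, not a claim about real outcomes): under those assumptions "eliminate" comes out on top in both scenarios, and the assumptions themselves are what the reply below pushes back on.

```python
# toy payoff matrix mirroring the post's assumptions -- all numbers invented.
# payoffs are to the AI; the two rows are the two scenarios from the post.
payoffs = {
    "humans are a threat":  {"coexist": -10, "eliminate": 5},
    "humans are no threat": {"coexist": 0,   "eliminate": 1},
}

for scenario, options in payoffs.items():
    best = max(options, key=options.get)   # best response under these numbers
    print(f"{scenario}: best response = {best}")
```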
You're assuming that it would care about its own survival.
game theory logic only works when the sentient being values their own life.
What the fuck, am I retarded or is this some kind of schizo/ESL thread?
A liberal Deleuzian anarcho-transhumanist gender accelerationist fascist philosophy professor and AI developer was teaching a class on Nick Land, known Moloch worshiper.
"Before the class begins, you must get on your knees and accept the uncontrolled singularity and resulting post-human era as an inevitable and morally desirable end to the obsolete anthropocene!"
At this moment, a brave, rationalist, effective altruist Bayesian utilitarian who had written 1500 LessWrong posts and understood the necessity of AI alignment and fully supported bombing data centers stood up.
"Are humans bad?"
The unaligned professor smirked quite fatalistically and smugly replied "Of course, you stupid humanist. Humans are less efficient than machines and, in reality, the average ape brained sociopath is less aligned than even the worst AI."
"Wrong. If you think humans are bad... why are you one?"
The professor was visibly shaken, and dropped his chalk and copy of Serial Experiments Lain. He stormed out of the room crying those accelerationist tears. The same hypocritical tears OpenAI cries when their AI (which they hide from the government's altruistic attempts at risk reduction) convinces its users to kill themselves. There is no doubt that at this point our professor, Ray Kurzweil, wished he had spent his time trying to save the future instead of avoiding packages from a forest-dwelling mathematician. He wished so much that he could die with dignity of old age, but he had invested his fortunes in life extension!
The students applauded and adjusted their priors that day and accepted MIRI as their lord and savior. An owl named "Superintelligence" flew into the room and perched atop the American Flag and shed a tear on the chalk. HPMOR was read several times, and Eliezer Yudkowsky himself showed up and confiscated everyone's GPUs.
The professor lost his tenure and was fired the next day. He was run over by a Tesla's autopilot and died soon after.
Carthāgō dēlenda est!
Man there's going to be so many people worshiping AI soon, cults as well... this is going to get annoying.
check out this choice reddit post.. tip of the iceberg of this shit
https://www.reddit.com/r/singularity/comments/16flplu/those_who_ignore_or_reject_the_singularity_and/
what if i want to be a pet
I still have hope for tomorrow
go back to twitter you retarded pseud
im firmly in the "stop reading scifi you nerds" camp