>Bald
All the true computer geniuses have hair, so he's clearly a pseud
true
we allow one exception to prove the rule
hey, what about me?
He's just a businessman that has four words for you
Bjarne is balding but not bald. He has been in a perpetual state of balding his entire life, but it's asymptotic; he will always be balding but never bald.
Some say C++ will stop bloatmaxxing when Bjarne's balding completes, which will never happen.
LONGER
Make the world better; aspire to be like Kernighan, not Stroustrup.
I genuinely think young bjarne looks better in his state of severe balding than he would with a full head of hair, probably the only man in history where this is the case
>Taking a picture with a monitor
He truly was the most autistic of the autistic.
> All the true computer geniuses have hair
- sent from my Von Neumann architecture computer.
>Von Neumann architecture
Yeah, let's call everything obvious and basic and public domain after some kike, or at least sodomite.
What would we do without those?!
RIP
Maybe if the sliders could go further to the right. Do you think he can afford an ultrawide?
No but James will
Absolutely Jamespilled. I hope he wins so the global extermination can begin
sexo
>An AGI singularity will not arrive in the next 50 years, nor the next 100.
what qualifies someone to make predictions about what tech won't be possible in 100 years?
Someone in the future.
what qualifies someone to make predictions that agi singularity will arrive in less than 5 years?
Pretending to be a researcher on twitter has been sufficient qualification for like two years now bro.
extending trends forwards by 5 years, when they have held true for the past 50 years, is obviously safer than trying to predict something 100 years from now
We need a major shift in how we even make memory and processors. So how soon do you think a home computer will use quantum technology? How soon do you think every home will have limitless cheap electricity?
Existing semiconductors are already quantum technology
Me. I qualify myself.
t. Time traveler
Might you be able to talk about it? If so, which type of Time travel? Physical or Internet based?
Pic rel
AGI is not well-defined, so you can guess whatever you want and be right.
you're right that AGI is not well-defined, but 100 years from now there literally won't be anything that a machine can't do that a human can, including giving birth
lol that guy is a retard and you're retarded for saving it and posting it. I suggest deleting it now to save yourself from looking even dumber in the future.
That's just a retarded take, especially with agents in the picture now. Also, there's already AI that performs at superhuman levels.
Is your calculator AI?
>Is your calculator AI?
Yes?
If your dog could multiply numbers, you'd say it is very intelligent.
But a calculator is a manmade device, not a product of nature, so we say it is an artificial intelligence.
>Yes?
no you fucking retard
> But a calculator is a manmade device, not a product of nature, so we say it is an artificial intelligence
no we don't, brainlet.
Okay genius, give a definition of "intelligence" that doesn't presuppose biological brains.
Then give an example of something you think is an artificial intelligence, and explain why "intelligence" applies to it.
>Okay genius, give a definition of "intelligence" that doesn't presuppose biological brains.
artificial intelligence. Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks—such as discovering proofs for mathematical theorems or playing chess—with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match full human flexibility over wider domains or in tasks requiring much everyday knowledge.
> Then give an example of something you think is an artificial intelligence, and explain why "intelligence" applies to it
Theorem provers. They show advanced reasoning capabilities; https://ai.meta.com/blog/ai-math-theorem-proving/ is a recent example, but they've been around for a while, long before the ML boom. No, they're not the same as a fucking calculator.
forgot to add the link to the definition of AI but it was from Britannica. Pick any dictionary and it'll say more or less the same
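To make "proving" concrete: a theorem prover's output is a machine-checkable proof, not a numeric answer. Here's a trivial sketch in Lean 4 (the theorem name is made up, and it assumes a toolchain where the `omega` tactic is available; provers like Meta's search for far harder proofs than this):
```lean
-- a machine-checked proof that addition on naturals is commutative;
-- `omega` is a decision procedure for linear arithmetic that finds
-- the proof automatically instead of us writing it by hand
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  omega
```
A calculator evaluates 2 + 3 once; a prover certifies a + b = b + a for every a and b. That's the gap.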
I said give a definition of "intelligence", not "artificial intelligence". Your definition of "artificial intelligence" presupposes an understanding of what "intelligent beings" are. I would say that mental arithmetic is a task associated with intelligent humans, because the ability to add numbers in your head is highly correlated with IQ scores. I also think that a calculator is a type of digital computer (and indeed many modern handheld digital calculators do allow Turing complete programs to run on them).
> I said give a definition of "intelligence", not "artificial intelligence". Your definition of "artificial intelligence" presupposes an understanding of what "intelligent beings" are
ok then, intelligence, Google's definition:
>the ability to acquire and apply knowledge and skills.
Calculators don't do this.
> I would say that mental arithmetic is a task associated with intelligent humans, because the ability to add numbers in your head is highly correlated with IQ scores.
this is just one part of IQ, working memory. People with ADHD have dogshit working memory and ADHD isn't associated with lower (nor higher) IQ. Mental arithmetic isn't uniquely human either, some ducks can count their eggs, robins have been shown to count worms, and pigeons have been shown to be just as good as monkeys at basic math.
> I also think that a calculator is a type of digital computer (and indeed many modern handheld digital calculators do allow Turing complete programs to run on them).
yes but digital computers are not intelligent. Calculators are not fucking AI.
>the ability to acquire and apply knowledge and skills.
I would agree that intelligence is the ability to apply skills (and calculators can apply their hardcoded mathematical operations when instructed by a user), but I disagree with the idea that intelligence requires the ability to acquire new knowledge. If a human somehow knew all information in the universe, but couldn't learn anything new, they could still be very intelligent.
>Mental arithmetic isn't uniquely human either
Counting isn't arithmetic (except in the sense that "+ 1" is a single very specific arithmetic operation). Anyway, I didn't say that mental arithmetic is uniquely human, I just meant that being better at it correlates well with having higher IQ (just as humans generally have higher IQ than ducks).
>digital computers are not intelligent
They have "the ability to acquire and apply knowledge and skills", such as playing board games, or Atari games, getting better than their programmers, through self-play. So surely, by the definition you provided, they are intelligent (in some very narrow domains).
GPT-4 is more intelligent than most people. It's already at level 2, if not 3. Humans are already coping hard.
*Yawn* The core of an AGI skeptic's argument is unfalsifiable, so no matter what happens, they'll continue to deny AGI to save face. They move the goalposts each time a new milestone is reached. They're all dick-loving homosexuals, just like (You)
The goalposts have already moved. When these models become general enough that you can indeed call them AGI, these deniers will just say that what they meant was "ASI, it has to be better than humans".
Then, when the general models become superhuman and can do any task better than any human, they will once again move the goalposts and say that they meant "a self-aware artificial mind that thinks like a human and has feelings, desires, etc."
M(eme)L is a massive spook to well-poison and sabotage legitimate AGI by distracting an entire sector and then throwing massive amounts of money to legitimize the scam.
he's a star trek communist and
>shapiro
>he's a star trek communist
What would a capitalist post-labor economy even look like? Capitalism made a lot of sense during the industrial revolution when the barrier of entry was low, and there were lots of independent inventors and entrepreneurs competing with each other, but it's simply not a viable economic system when human labor is largely obsolete.
David Shapiro has pivoted to cashing in on being a hypester and early entrant into ChatGPT news. He's now got a cult following and a patron base, so he's just riding that wave now. His philosophical and alignment takes are also pretty juvenile. He's not really a credible pundit.
AGI is coming real soon with the agents now.
Who?
When in doubt, consult the phrenology chart.
Caption: N0GGY8
I don't trust bald men. Even their hair abandoned them.
codefags who think we are anywhere close to achieving true AI make me lol
guess what? LLMs are toy projects, like everything else in machine learning. it's just another dead end.
t. computational neuroscience fag
>true AI
What's the difference between "true" AI and "fake" AI?
Do you mean "human level AI, across all digital tasks"?
>toy projects
Do you mean they don't have any commercial value?
OpenAI's billion dollars of revenue would disagree with you.
>just another dead end.
Two more weeks and people will stop trying to beat GPT4, right?
the only dead end here is neuroscience
eceleb thread all fields
Who is he?
A retard who thinks he's building an Artificial General Intelligence by prompting GPT4.
>A retard who thinks he's building an Artificial General Intelligence
he's never claimed that, retard
Not if he's using backpropagation-through-time.
Can we just shut this shit down already?
It's obvious we're playing with fire
Is this guy joking, or is he genuinely retarded?
he's joking, but he is trying to make a serious point about how generalist AIs are harder to predict and control than narrow ones
GPT will never be AGI
>GPT will never be AGI
never is a long time
you're right, though, that OpenAI will probably change the name of the system before it truly starts running everything
I really like his philosophical videos about the post AI world and how we humans could live with super AI. I don't actually think he will figure it out, but he is a good forethinker.
His technoutopianism is retarded. He wants a world where humans get killed or overrun by AI, which I suppose is fair if that's what he wants, but then he's no longer a utopian, just another scammer like Musk.
GPT by itself will never be capable of being an AGI, as I see it. Language models just generate the most likely text to complete what you have written; they're very good probabilistic machines. That's not to say they don't have any uses, but it's clearly noticeable that when you start asking for more specific things, their answers become more generic and much less satisfying.
... That being said, that's why I said "GPT by ITSELF". Is AGI really that hard to make? I honestly don't know why we're treating AGI as if it's that hard to design. I'm going to generalize a little bit here, but as I see it we just need a multi-layered program (something I'm currently researching by myself, because I think a lot of AI research doesn't give enough importance to psychology, philosophy, and the study of language).
- Communication layer: a layer that reads the user's input and generates a comprehensible sentence to communicate the insights from the lower layers.
- Reasoning layer: a multi-modal layer that uses a GPT system to coordinate which models it uses: models that can do math, models that can analyze images, sentiment analysis, etc. It combines the results of those models through the GPT system at the center, which acts as the reasoning that gets passed up to the communication layer.
- Knowledge layer: a database split into several subdatabases for easier filtering (also through some GPT-like system that sees what a given query needs and knows where to look), perhaps using cosine similarity. This stores both documents AND previous interactions the AI has had, along with sentiment analysis of those conversations, so it knows whether it did well or not.
It's simplistic, but I think developing a system like this and defining its parts more clearly is the path to AGI. More complex GPTs just make better sentences, not better reasoning. We have the technology; we just need to design the architecture now.
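To make it concrete, here's a toy sketch of the three layers wired together. Everything in it is a made-up placeholder: `embed` stands in for a real sentence-embedding model, the keyword router stands in for the GPT coordinator, and the "specialists" are one-liners.
```python
import math

def embed(text):
    # toy stand-in for a sentence-embedding model: bag-of-characters vector
    vec = [0.0] * 128
    for ch in text.lower():
        vec[ord(ch) % 128] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class KnowledgeLayer:
    # stores documents AND past interactions, retrieved by cosine similarity
    def __init__(self):
        self.entries = []                       # (text, embedding) pairs

    def store(self, text):
        self.entries.append((text, embed(text)))

    def retrieve(self, query, k=3):
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

class ReasoningLayer:
    # routes each query to a specialist model and merges the results;
    # a real version would use a GPT as the router, not keyword matching
    def __init__(self, specialists):
        self.specialists = specialists          # name -> callable

    def run(self, query, context):
        name = "math" if any(c.isdigit() for c in query) else "general"
        return self.specialists[name](query, context)

class CommunicationLayer:
    # turns the reasoning output into a sentence for the user
    def reply(self, result):
        return f"Based on what I found: {result}"

# wiring the layers together
knowledge = KnowledgeLayer()
knowledge.store("the capital of France is Paris")
reasoning = ReasoningLayer({
    "math": lambda q, ctx: str(eval(q, {"__builtins__": {}})),  # toy "math model"
    "general": lambda q, ctx: ctx[0] if ctx else "no idea",
})
comms = CommunicationLayer()

query = "what is the capital of France"
print(comms.reply(reasoning.run(query, knowledge.retrieve(query))))
```
Obviously the hard part is that every placeholder above is where the actual work happens, but the plumbing between the layers really is this simple.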
>a GPT system to coordinate which models it uses
this is basically the approach OpenAI is betting on, by giving ChatGPT access to tools and other GPT-based agents
that may still fall slightly short of being able to do the most creative and brilliant thinking that the top 1% of humans are capable of, but it will be able to do 99% of all cognitive work
that will allow the company to generate enough money to employ thousands of researchers to solve the remaining 1% of the problem
>claims there will be agi by sep 2024
>says this is a conservative estimate
schizo or kino?
what will his excuse be next year when it doesn't happen?
"it was just a social experiment to see how you'd react"
"well, I actually define AGI to be 'whatever GPT5 can do', so I'm right"
"AGI has been achieved, according to credible sources, but it isn't released for safety reasons"
"my estimate was a little inaccurate, so just give it two more weeks"
It'll happen. He's right
2025 is the earliest we'll get something that won't have obvious limitations that people can point to (even if those limitations are extremely narrow and irrelevant). long adversarial Turing tests with expert judges will keep finding weird "tells" that give it away, at least through 2024, unless a big lab decides to focus on just achieving that.
That isn't John Carmack