Single-handedly one of the greatest, if not the greatest, programmers in history. He always tried to fix the abominations of Bell Labs hacks like Dennis, Ken and Bjarne, and he was right, always. Only genius like him could come up to things like UTF-8, Plan9 and Go. We, as a technology-loving board, give a big "Thank You" to Mr. Pipe for his contribution to humanity and the development of civilization.
Now he's a furry.
Unix: not even once
source? t. dennis
>Genius
>Go
Ha. Haha. Hahahahahahahahahahahaha
rust tranny detected
I think you'll find that I'm a Chad.
dilate
Go is the single most well-designed language of the last 20 years
There are no silver bullets, but there are great tradeoffs, and Go never missed the mark so badly as to completely kneecap itself in a domain, like e.g. Rust did with async
Kill yourself again, Uriel.
Uriel did more for the world of software than you ever will.
You walked into that one
Cope more about your insignificance. People still mention Uriel. Who will give a shit when you finally kick the bucket?
It's "Plan 9" not Plan9, dipshit
Imagine how often he must have typed if err != nil.
nothing is wrong with it
>Only genius like him could come up to things like UTF-8,
Ken Thompson came up with the actual encoding though.
They both did, at a diner. But I'd say they're at the same level. Thompson came up with the optimal implementation of regexes in the late 60s, and then everyone ignored it and used the shitty backtracking solution for 40 years. Pike came up with his own optimised version of the Thompson NFA, and that one was ignored too. No wonder the Bell Labs folks hate open sores.
I'm saying this because I read in one of the UTF-8 history documents that Ken came up with the bit patterns or something; Pike himself said this, I believe.
>Thompson came up with the optimal implementation of regexes in the late 60s
the NFA simulating a DFA stuff?
>Pike came up with his own optimised version of Thompson NFA, and that one was ignored too.
I didn't know that. What optimization did he make? I made my own little optimization when making an NFA-simulating DFA for a lexer. When running the NFA, instead of starting with the set of all the regexes (alternations), use the first byte to look up into a little vector that gives you the set of states after 1 byte. That way it removes almost all the useless states, particularly when lexing an operator. No need to iterate over 30 tokens just to find out there are only 3 possible tokens starting with * for example.
For the UTF-8 stuff, I remember it from that article on cat-v
And for regex stuff, from Russ Cox’s article on regexes
>https://swtch.com/~rsc/regexp/regexp1.html
>https://swtch.com/~rsc/regexp/regexp2.html
I'm going to re-read this when I can, I missed a lot of stuff the first time.
>While writing the text editor sam [6] in the early 1980s, Rob Pike wrote a new regular expression implementation, which Dave Presotto extracted into a library that appeared in the Eighth Edition. Pike's implementation incorporated submatch tracking into an efficient NFA simulation but, like the rest of the Eighth Edition source, was not widely distributed
Based on this and on the code in regexp2.html, it looks like it's a simple extension of Thompson's stuff. There is nothing fundamentally different in his changes.
I just find regexp1.html funny because it made the whole Perl community seethe.
His article is interesting, but it's still clickbait-tier shit and so is the conclusion. Every algorithm has a worst-case scenario, and that's exactly what he picked for the backtracking RE. The truth is that they're 2 algorithms with different tradeoffs and also different expressivity (therefore not even comparable in the first place). Neither of them is right or wrong.
>They Both did at a diner
A few years back I tracked down the exact location of the diner. It wasn't a diner anymore but a bakery.
Sad.
Read some of his books, he is a decent teacher.