If you're learning programming, learn C first. C will give you the discipline and insight into how computers fundamentally work.

If you're learning programming, learn C first. C will give you the discipline and insight into how computers fundamentally work. Master C and you will master programming.

  1. 3 weeks ago
    Anonymous

    >If you're learning programming, learn C first. C will give you the discipline and insight into how computers fundamentally work. Master C and you will master programming.

    • 3 weeks ago
      Anonymous

      Imagine saving this horrible meme on your hard drive

  2. 3 weeks ago
    Anonymous

    Learning C was the most eye-opening thing I have learned about how computer programs work.

    • 3 weeks ago
      Anonymous

      I think this perspective offers an interesting view on what language someone should learn. Do you want to learn how computer programs work? If so learn C. Do you want to achieve some result in the near-term? Learn something else.

      I make my living in C and love it, but it's challenging and not very user-friendly in many ways.

  3. 3 weeks ago
    Anonymous

    I'm too old to be taking this stuff seriously

    • 3 weeks ago
      Anonymous

      >I'm too old
      That's perfect for Cniles

  4. 3 weeks ago
    Anonymous

    >C will give you the discipline and insight into how computers fundamentally work. Master C and you will master programming.
    C has nothing to do with how computers work and you will only be learning C workarounds that don't have anything to do with computers.

    • 3 weeks ago
      Anonymous

      >C has nothing to do with how computers work
      Lol

  5. 3 weeks ago
    Anonymous

    >C will give you the discipline and insight into how computers fundamentally work.
    Your computer is not a fast PDP-11.

    Learning C at best teaches you how other programming languages work under the hood.

    • 3 weeks ago
      Anonymous

      >Learning C at best teaches you how other programming languages work under the hood.
      Only if they're implemented in C.

  6. 3 weeks ago
    Anonymous

    >C
    >how computers fundamentally work
    Hello, newfag. How was programming 101?

    • 3 weeks ago
      Anonymous

      >never passed a respectable CS course

      • 3 weeks ago
        Anonymous

        I'm well versed enough in computer science to know C is a high-level language. Now out of my way you drooling retard.

        • 3 weeks ago
          Anonymous

          I'll take that as me agreeing with you there, thank you.

        • 3 weeks ago
          Anonymous

          >no
          all I needed to know
          high or low depends on perspective, but on the scale of pl's you have machine code -> assembly -> C -> everything else
          because LULZ are retards: obviously im not saying everything else is based on c, just that nothing is comparable. if you had taken a real CS course, in your comparch class you would've understood the historical reason for C being so unsafe
          you might even have programmed a basic cpu on an FPGA
          >LULZ is a humor board for qualified engineers, thank you for your contribution

          • 3 weeks ago
            Anonymous

            >assembly -> C -> everything else
            >just that nothing is comparable
            >the historical reason for C being so unsafe
            This is wrong. C is not the only systems language that was designed to be "lower level" than most languages. There were BLISS, BCPL, PL360, PL/S, and other languages that are older than C and were designed to be systems languages. The problems with C are unique to C, like null-terminated strings and array decay. These came from PDP-7 and PDP-11 assembly. Even BCPL, which was a very primitive language, used length-prefixed strings.
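
            To make the contrast concrete, here is a minimal sketch of the two layouts in C; it is only an illustration, and the lpstr struct and helper names are invented for it, not taken from BCPL or any real codebase.

            #include <stddef.h>

            /* C convention: the length is implicit and found by scanning for the 0 byte. */
            size_t c_strlen(const char *s) {
                const char *p = s;
                while (*p) p++;
                return (size_t)(p - s);
            }

            /* Length-prefixed convention (BCPL/Pascal style): the length is stored up front. */
            struct lpstr {
                size_t len;
                char data[100];
            };

            size_t lp_strlen(const struct lpstr *s) {
                return s->len;   /* O(1), no scan, no terminator to forget */
            }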

            • 3 weeks ago
              Anonymous

              >Even BCPL which was a very primitive language used length-prefixed strings.
              but what if you didn't know the length of the string

              • 3 weeks ago
                Anonymous

                >but what if you didn't know the length of the string
                You always know the length of the string. When you allocate or declare something, there's a specific amount of space where you can put something.

              • 3 weeks ago
                Anonymous

                but what if you don't know if the user will overrun that space or not

              • 3 weeks ago
                Anonymous

                Then fuck them.
                >MUH HEEEEEEAP
                Go back to r*ddit.
                Callee responsibility ruined programming.
                Programming died the day pajeet decided his functions must do the needful for his callers

              • 3 weeks ago
                Anonymous

                Meme attitude.

                true.
                learned python first, it was a waste of time.
                had to learn everything from scratch again when i was learning C

                now that i know C, im almost language-agnostic.
                exception made of lisps and such, learning a new language is a breeze.

                then you do boundary checking:/

                [...]
                c++ has an oop component to it.
                its easier to learn functional style first, then graduate to oop

                >then you do boundary checking:/
                But what if needing more space than you initially provided is a valid use case you want to allow for, while the amount you allow for initially is still the sensible default?

              • 3 weeks ago
                Anonymous

                absolutely no problem
                you provide space on the ram with malloc

                its like you have an input
                and to store it you len the string and malloc accordingly
                ez pz
                C is absolute freedom
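
                Roughly what this means in code, as a hedged sketch (the function name is made up; error handling kept minimal):

                #include <stdlib.h>
                #include <string.h>

                /* Make a heap copy of an input string: measure it, allocate, copy. */
                char *store_input(const char *input) {
                    size_t len = strlen(input);      /* "len the string" */
                    char *copy = malloc(len + 1);    /* "malloc accordingly", +1 for the 0 byte */
                    if (copy != NULL)
                        memcpy(copy, input, len + 1);
                    return copy;                     /* caller frees */
                }

                (This is essentially what strdup does.)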

              • 3 weeks ago
                Anonymous

                did you forget we're talking about bcpl
                can bcpl also do this

              • 3 weeks ago
                Anonymous

                >bcpl
                baltimore county public library?
                kek jk
                i dont know.
                dont know bcpl
                if you got pointers and memory management it can

              • 3 weeks ago
                Anonymous

                if you need space allocated dynamically you can do vectors or linked lists
                how do you think higher-level languages do it?

                and theres another boon of C:
                it gives you an understanding of how higher-level languages work
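
                For instance, a growable array ("vector") in C looks roughly like this; a sketch only, with invented names, comparable to what higher-level languages hide behind their list types:

                #include <stdlib.h>

                struct vec { int *data; size_t len, cap; };

                /* Append one element, doubling the capacity when it runs out. */
                int vec_push(struct vec *v, int x) {
                    if (v->len == v->cap) {
                        size_t ncap = v->cap ? v->cap * 2 : 8;
                        int *nd = realloc(v->data, ncap * sizeof *nd);
                        if (!nd) return -1;          /* out of memory, element not added */
                        v->data = nd;
                        v->cap = ncap;
                    }
                    v->data[v->len++] = x;
                    return 0;
                }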

              • 3 weeks ago
                Anonymous

                i realize you can do this in c
                *i* do this in c
                i'm asking about bcpl
                why have you both forgotten this so quickly

              • 3 weeks ago
                Anonymous

                barely jumped into the discussion

              • 3 weeks ago
                Anonymous

                [...]

                Everyone itt is probably trolling but I will answer seriously

                This but C++
                [...]
                C is intentionally designed to be portable and to run fast on any architecture. C would be even better, more useful and faster if it were allowed to be specialised for one architecture (like x86), but the language is defined in a way that enables it to run on any machine without a performance cost. People who claim C is made around the PDP-11 have no idea what a computer is.

                c++ has an oop component to it.
                its easier to learn functional style first, then graduate to oop

            • 3 weeks ago
              Anonymous

              brainlets..
              are any of those commonly used today?
              like I said I also took a comparch class my fren, I know they exist, I don't feel like rewriting wikipedia at every post

              • 3 weeks ago
                Anonymous

                >but i was ONLY talking about languages COMMONLY USED TODAY!
                >D stands for Dead... yes i know it's well maintained... yes i know it has an active community... but you see... no one USES it! where's the BUZZ!
                >every language must be commonly used or else it doesn't count, ever, for anything!
                >OBVIOUSLY my post did not apply to the likes of [memelang]!
                this is what you sound like

              • 3 weeks ago
                Anonymous

                tp (You) myb because you are a language homosexual

              • 3 weeks ago
                Anonymous

                you look like this and you say this
                [picture deliberately omitted]

              • 3 weeks ago
                Anonymous

                >but i was ONLY talking about languages COMMONLY USED TODAY!
                read OP's post again, would you recommend a language that isn't on that list to someone learning to code?
                you sound like a spaz

          • 3 weeks ago
            Anonymous

            So you concede that C is not how computers fundamentally work. Good to know I'm right again.

            • 3 weeks ago
              Anonymous

              I was never saying different, fren

  7. 3 weeks ago
    Anonymous

    C is truly the foundational language of programming. Learn C, then Python or JavaScript.

  8. 3 weeks ago
    Anonymous

    Only meaningful advice if you are also taking a computer organization course.

  9. 3 weeks ago
    Anonymous

    C is not a low-level language

    https://dl.acm.org/doi/pdf/10.1145/3209212

    • 3 weeks ago
      Anonymous

      C is an incredibly outdated and low-level language.

      D is good for low-level system programming. C++ is good for medium-level.

    • 3 weeks ago
      Anonymous

      >C is not a low-level language
      this is a great argument for going back to college.

    • 3 weeks ago
      Anonymous

      [...]

      >implying the majority of you are not collegefags or people working in a corporate office to get that six-figure salary you think you deserve
      if you want to write software in 2022 you will be doing it in C/C++ or assembly, at least for the longest time. If you want to work in the big cities like Silicon Valley, you will be doing Python, Java and D

      • 3 weeks ago
        Anonymous

        swift/kotlin for mobile development

    • 3 weeks ago
      Anonymous

      C wasn't meant to be a low-level language. Searching for "Unix in a high-level language" turns up a lot of results. The problem is that C is a bad language and people mistakenly attribute stupidity in C to being how computers work.

      Some liars even say Unix was the first operating system written in a high level language which is not true. It was not even originally written in a high level language. Unix was originally written in PDP-7 assembly and then PDP-11 assembly and then it was rewritten in C after it was already an accepted idea to write an OS in a high level language. Burroughs MCP (ESPOL and ALGOL) and Multics (PL/I) were written in high level languages from the beginning when this was still controversial.

    • 3 weeks ago
      Anonymous

      >https://dl.acm.org/doi/pdf/10.1145/3209212
      Damn, this makes me wish that the Itanium architecture won

    • 3 weeks ago
      Anonymous

      About this, the guy who wrote it is a complete retard. The fact that there was a vulnerability in speculative execution has nothing to do with C; it was a hardware vulnerability, and every language would be affected. Speculative execution and instruction-level parallelism are done by the processor, so they happen in every single language. They were both added for performance reasons and have nothing to do with C. If you were writing in assembly you still couldn't control speculative execution or instruction-level parallelism, because the processor does them on its own. Cache size and levels are hidden in C because they differ from processor to processor (even within the same architecture) and C is designed to run on any machine. If, however, the programmer targets a specific processor, he knows the cache size and levels and can write the code accordingly. The fact that the compiler changes the code has nothing to do with it being high-level or not. You can write something in assembly and the compiler can emit different and faster code that does the same thing. The retard thinks parallel programming is easy because he has only been exposed to it through layers of abstraction (in functional languages, I assume)

      • 3 weeks ago
        Anonymous

        >compiler changes the code
        if thats an issue, dont ask the compiler to optimize it.

      • 3 weeks ago
        Anonymous

        >If you were writting in assembly you still can't control speculative execution or instruction level parallelism because the processor does them on its own
        So he is right. There is no language that resembles what a computer actually does at runtime. C code is far away from how computers actually work.

        • 3 weeks ago
          Anonymous

          >wire up a hardware-implemented c interpreter
          >the actual machine code is c, there's no lower-level execution model
          >it would be the worst thing ever
          >it would probably not fit in any smaller space than a cabinet
          >but you could do it
          what now, c-isn't-how-computers-actually-work-chuds

          • 3 weeks ago
            Anonymous

            There are Lisp machines that do that for Lisp.

            • 3 weeks ago
              Anonymous

              >lost in stupid parenthesis
              tell me why lisp is even a thing.
              im really open to discussion, its genuinely a mystery to me

              • 3 weeks ago
                Anonymous

                You don't necessarily need all those parentheses if your lisp is statically typed.
                Caml proved this

              • 3 weeks ago
                Anonymous

                ok, the parenthesis was a little jab at the syntax
                i mean logically, and materially, why lisp was invented?
                i looked into it, but its very alien and strange to me
                y lisp then?

              • 3 weeks ago
                Anonymous

                in moderation, organizing your code into functions is good practice to avoid repeating yourself
                functional programming languages reify this principle as an absolute ideology

              • 3 weeks ago
                Anonymous

                i can write oop-like in C tho
                and actually do so pretty often (i use pointers to functions as dynamic goto's, so i end up with structures encompassing data and relevant functions into objects- for all intents and purposes- oop. i even have inheritance thanks to a smart use of void *)
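
                A minimal sketch of the pattern described above (a struct of data plus function pointers, with void * standing in for the "base" type); every name here is invented for illustration:

                #include <stdio.h>

                /* "vtable" of operations shared by every shape */
                struct shape_ops {
                    double (*area)(const void *self);
                };

                struct circle {
                    const struct shape_ops *ops;   /* first member, so a circle can be passed around as a generic object */
                    double r;
                };

                static double circle_area(const void *self) {
                    const struct circle *c = self;
                    return 3.141592653589793 * c->r * c->r;
                }

                static const struct shape_ops circle_ops = { circle_area };

                int main(void) {
                    struct circle c = { &circle_ops, 2.0 };
                    void *obj = &c;                                          /* "inheritance" via void * */
                    const struct shape_ops *ops = *(const struct shape_ops **)obj;
                    printf("%f\n", ops->area(obj));                          /* dynamic dispatch */
                    return 0;
                }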

              • 3 weeks ago
                Anonymous

                Homoiconicity.
                McCarthy wanted to use M-expressions instead for Lisp-2 but his colleagues convinced him to stick with S-expressions.

                in moderation, organizing your code into functions is good practice to avoid repeating yourself
                functional programming languages reify this principle as an absolute ideology

                Code reuse is not the point of FP, and Lisp as an FP language is not even very accurate. Common Lisp is very imperative for example.

              • 3 weeks ago
                Anonymous

                >Homoiconicity
                nice, thanks

      • 3 weeks ago
        Anonymous

        The argument is twofold. First of all, Meltdown and Spectre are direct results of programmers wanting a simple, sequential instruction set architecture but also massive gains in performance from the microarchitecture. The engineers decided, hmm, in order to keep this performance we need to allow speculative execution to have observable microarchitectural effects (on the cache, TLB, RSB etc. which are the side channels used in the attacks), but there are no observable architectural effects so that's fine. If the ISA was closer to the microarchitectural reality, then maybe they would have been forced to choose between no speculative effects at all or to say that speculation can have architectural effects even in single-threaded code (today's ISAs have at least admitted that caching and speculation are absolutely things to worry about when writing *multi-threaded* programs), and the problem would have been avoided.

        Secondly, the reason that programmers still want a simple, sequential ISA is because it maps easily to most high level languages. The ISAs that tried something else (i.e. that had explicit instruction-level parallelism) failed in the end, because it was too difficult to write good compilers for them if the source language had too many ways to pretend that the computer was still sequential. So it's not really C's fault specifically, but C is the prime example of a language that people consider to be "how the computer really works" when it truly isn't, at least not any more. The argument is more that this mindset caused Meltdown and Spectre and not that C itself did.

        • 3 weeks ago
          Anonymous

          Thanks for the constructive post

        • 3 weeks ago
          Anonymous

          Based post.

        • 3 weeks ago
          Anonymous

          >but also massive gains in performance from the microarchitecture
          in my experience programmers want predictable machines
          speculative execution only exists for cheating benchmarks, i.e. marketers and management

          • 3 weeks ago
            Anonymous

            You don't know what speculative execution is.

            • 3 weeks ago
              Anonymous

              inherently leaky is what it is

              • 3 weeks ago
                Anonymous

                There's no inherent security issues with speculative execution. The problem was that, for example, speculative memory accesses would still cause cache lines to be fetched from memory, even if those accesses didn't result in any data actually being read or written due to a misprediction. If the addresses of those cache lines were based on some secret value that the correctly taken branch should never access, then that value would still effectively be measurable by other processes. It's entirely possible for a CPU to perform speculative execution without doing this, but it would come at a performance penalty that would be pointless 99.99% of the time. Alternatively, the ISA could have better ways to express whether an instruction is safe to execute speculatively or not (the deployed mitigations were far less precise and thus worse for performance), and make high level languages and compilers that support emitting those hints only when necessary.
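
                For reference, the bounds-check-bypass pattern from the Spectre paper looks roughly like this; a simplified sketch with our own variable names and an arbitrary 512-byte stride:

                #include <stddef.h>
                #include <stdint.h>

                extern uint8_t array1[];           /* attacker-indexable array */
                extern size_t  array1_size;
                extern uint8_t array2[256 * 512];  /* probe array, one region per possible byte value */
                static volatile uint8_t temp;

                void victim(size_t x) {
                    if (x < array1_size) {                 /* bounds check, may be mispredicted */
                        uint8_t secret = array1[x];        /* speculative out-of-bounds read */
                        temp &= array2[secret * 512];      /* leaves a secret-dependent cache line behind */
                    }
                }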

      • 3 weeks ago
        Anonymous

        >the guy who wrote it is a complete retard
        >Professor at the University of Cambridge
        >Maintainer for LLVM
        He's 100% more qualified (both academically and professionally) than you.

        • 3 weeks ago
          Anonymous

          Yeah academics are usually marxists and therefore retards.

          • 3 weeks ago
            Anonymous

            this is the kind of retard that visits this place.

            • 3 weeks ago
              Anonymous

              hes only half wrong tho

              • 3 weeks ago
                Anonymous

                Yeah but academic Marxism is the most boring ideology ever conceived, it doesn't bother giving TED talks.
                The people you see in these videos are the ones who think free market capitalism's only problem is less n-words.

              • 3 weeks ago
                Anonymous

                still academics.
                and some are extremely autistically retarded like sobel and canny.
                because computing square roots is soooooo efficient on computers

        • 3 weeks ago
          Anonymous

          him finding some idiots that are letting him teach doesn't say shit about his qualifications and maintainers are just janitors

          • 3 weeks ago
            Anonymous

            Cope. Pure, 100% cope.

      • 3 weeks ago
        Anonymous

        >The fact that there was a vulnerability in speculative execution has nothing to do with C

        [...]

        >The argument is more that this mindset caused Meltdown and Spectre and not that C itself did.
        what caused that mindset in the first place? the fact that C became popular... and that it didn't teach programmers to do parallel/concurrent programming. programmers are lazy and, just like everyone else, will only apply what they know. as you said, CPU engineers had to work around that mindset.
        how the fuck can you build the argument around the same fact you dismiss as an almost-non-issue?

        • 3 weeks ago
          Anonymous

          youre gonna have predictive caching whenever your program + data is bigger than the cache of the core

          the language used is irrelevant

          • 3 weeks ago
            Anonymous

            >youre gonna have predictive caching whenever your program + data is bigger than the cache of the core
            and how would these features work if, say, the ABI, data structures and calling conventions are different in other programming languages?

            >the language used is irrelevant
            it is not if you want your CPU to be as fast as possible. for instance, x86 CPUs have instructions that operate on C strings (null terminated). explain how that doesn't mean x86 CPUs were built to process C programs instead of, say, programs written in Pascal.

            • 3 weeks ago
              Anonymous

              >explain how that doesn't mean x86 CPUs were built to process C programs instead of, say, programs written in Pascal.
              Because null-terminated string processing is faster in a cpu. The cpu only has to run until it finds a 0 in memory. Otherwise, storing a number that has no known property, as it can be arbitrary (i.e. not necessarily a power of 2, or even, or anything else that could help), means you have to store a counter and slow down, consuming resources to check whether you've processed the whole string or not. These resources could have been used for other computations

              • 3 weeks ago
                Anonymous

                >Because null terminated string processing is faster in a cpu.
                No it's not. It's always faster if you know how long the string is ahead of time.

                >Otherwise storing a number that has no known property as it can be arbitrary (ie not necessarily a power of 2 or even or something that could help) means you have to store a counter and slow down, consume resources to see if you've processed the whole string or not.
                This is backwards. Having a counter is faster. CPUs can copy a lot of bytes at a time if you know the length.
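
                To illustrate the difference (a rough sketch, not a benchmark):

                #include <stddef.h>
                #include <string.h>

                /* Null-terminated: every byte must be inspected just to find the end. */
                void copy_cstr(char *dst, const char *src) {
                    while ((*dst++ = *src++) != '\0')
                        ;                            /* one byte at a time, data-dependent loop */
                }

                /* Length-known: the count is available up front, so the copy can be done
                 * in wide blocks (memcpy is typically vectorized by the C library). */
                void copy_counted(char *dst, const char *src, size_t n) {
                    memcpy(dst, src, n);
                }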

            • 3 weeks ago
              Anonymous

              >for instance, x86 CPUs have instructions that operate on C strings (null terminated).
              No they don't. All x86 string instructions take the length in CX/ECX/RCX. There is nothing for null-terminated strings in x86. Ctards "hack" it by setting the count to the maximum possible and pretend you have a 2^64 byte "string" but that's not how they're supposed to be used. They're supposed to copy/scan/store up to a maximum length.

              >explain how that doesn't mean x86 CPUs were built to process C programs instead of, say, programs written in Pascal.
              Those features are better for Pascal strings (and every other language) than C strings.

              I don't know if you just got this backwards or if you're intentionally lying and hoping nobody here knows anything about x86 assembly.

              • 3 weeks ago
                Anonymous

                actually... you are right, I had forgotten how those string instructions work in x86

            • 3 weeks ago
              Anonymous

              gonna have predictive caching whenever your program + data is bigger than the cache of the core
              >and how would these features work if, say, the ABI, data structures and calling conventions are different in other programming languages?

              implementation dependent. irrelevant.
              you ARE gonna have cache prediction whenever you have conditional jumps in your code and your code + data is bigger than the cache.
              i mean its friggin logic, isnt it? if something is bigger than your cache it is not gonna fit.
              simple as.
              and if your jumps depend on the dataset, you wont be sure what part of the program will run at what point, thence, as optimization- the predictive part

              >>the language used is irrelevant
              >it is not if you want your CPU to be as fast as possible. for instance, x86 CPUs have instructions that operate on C strings (null terminated). explain how that doesn't mean x86 CPUs were built to process C programs instead of, say, programs written in Pascal.

              with predictive caching the language chosen is perfectly irrelevant
              cf. previous paragraph

              and saying spectre and meltdown is caused by the use of C is peak intellectual dishonesty

  10. 3 weeks ago
    Anonymous

    The first language I learned is Java. I switched to Kotlin back in 2020 and I am still happy every single time I write code.

  11. 3 weeks ago
    Anonymous

    C is how computers really work? That's cool, I didn't know my computer was able to go back in time to undo all of the correct effects of my program and summon nasal demons instead when I accidentally add 1 to 2147483647.

    • 3 weeks ago
      Anonymous

      You should read about the representation of numbers in digital circuits and two's complement. Unsigned or signed is just how to print it. Word size is 4 or 8 bytes in modern processors, which is why the specific circuit will overflow.

      [...]

      Machine code is the same as assembly, except that for the latter you don't need a thousand-page translation manual to read it. C is unsafe if you don't program the safety mechanisms yourself, and when you don't, it's faster. Safety is not always required.

      • 3 weeks ago
        Anonymous

        >You should read about the representation of numbers in digital circuits and two's complement. Unsigned or signed is just how to print it. Word size is 4 or 8 bytes in modern processors which is why the specific curcuit will overflow
        I have not failed to understand that. You have clearly failed to understand that that is explicitly not how integers work in C.
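
        Concretely, the distinction looks like this (a small sketch; what the first function does on overflow is whatever the optimizer decides, because signed overflow is undefined in C):

        /* Undefined behavior: the standard does not promise two's-complement wraparound,
         * and compilers routinely optimize on the assumption that this overflow never happens. */
        int next_signed(int x) {
            return x + 1;              /* UB when x == INT_MAX */
        }

        /* Well defined: unsigned arithmetic is specified to wrap modulo 2^N. */
        unsigned next_unsigned(unsigned x) {
            return x + 1u;             /* UINT_MAX + 1 wraps to 0 */
        }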

        • 3 weeks ago
          Anonymous

          What are the differences?

          >People who claim C is made around the PDP-11 have no idea what a computer is
          C is made around the PDP-11. Computers today have converged around C and PDP-11-like features. No segmentation, no decimal numbers, no tagged memory, no string formatting, no OOP, no garbage collection, only word-sized or smaller types. This was not true at all in the 70s and 80s when C was being developed. There were Lisp machines, COBOL machines, ALGOL machines, etc.
          https://en.wikipedia.org/wiki/Burroughs_Medium_Systems
          >The B2500 and B3500 computers were announced in 1966. [1] They operated directly on COBOL-68's primary decimal data types: strings of up to 100 digits, with one EBCDIC or ASCII digit character or two 4-bit binary-coded decimal BCD digits per byte. Portable COBOL programs did not use binary integers at all, so the B2500 did not either, not even for memory addresses. Memory was addressed down to the 4-bit digit in big-endian style, using 5-digit decimal addresses. Floating point numbers also used base 10 rather than some binary base, and had up to 100 mantissa digits. A typical COBOL statement 'ADD A, B GIVING C' may use operands of different lengths, different digit representations, and different sign representations. This statement compiled into a single 12-byte instruction with 3 memory operands. [2] Complex formatting for printing was accomplished by executing a single EDIT instruction with detailed format descriptors.

          >No segmentation
          Bad idea, it did happen for a bit but was dropped
          >no decimal numbers
          Wrong
          https://en.m.wikipedia.org/wiki/Intel_BCD_opcode
          Also I think the pdp-11 might have had bcd but I don't remember
          >no tagged memory
          Sounds like a bad idea
          >no string formatting, no OOP, no garbage collection
          Yeah those are higher level stuff to be implemented by programming languages. You can't make circuits that do them unless you are essentially making a specialised machine, which would be extremely complicated and might not even fit in an integrated circuit
          >only word-sized or smaller types
          You know you can utilise the carry flag right?

          >compiler changes the code
          if thats an issue, dont ask the compiler to optimize it.

          Yeah, this too

          >If you were writting in assembly you still can't control speculative execution or instruction level parallelism because the processor does them on its own
          So he is right. There is no language that resembles what a computer actually does at runtime. C code is far away from how computers actually work.

          >C code is far away from how computers actually work.
          No, because the processor makes it look like it's executing assembly. Instruction-level parallelism will not execute the instructions in a way that would yield a different result than if they were executed in order. Speculative execution will be undone if there was a branch misprediction. In contrast, take for example the Java abstract machine: the processor works in a completely different way than Java thinks

          • 3 weeks ago
            Anonymous

            >>No segmentation
            >Bad idea, it did happen for a bit but was dropped
            It's a feature of x86. C is not how x86 works. You will get no insight into segmentation from C.

            >Wrong
            >https://en.m.wikipedia.org/wiki/Intel_BCD_opcode
            That's because x86 was designed before C became popular, maybe for COBOL. C is not how anything that supports BCD works.

            >>no tagged memory
            >Sounds like a bad idea
            It prevents a lot of bugs and makes garbage collection and dynamic typing easier, but C does not support it. C is not how one of those computers work.

            >>no string formatting, no OOP, no garbage collection
            >Yeah those are higher level stuff to be implemented by programming languages.
            Except there are computers that have those features. C is not how those computers work.

            >>only word-sized or smaller types
            >You know you can utilise the carry flag right?
            C doesn't have that. If your computer has a carry flag, C isn't how your computer works.
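
            To illustrate: portable C has to reconstruct the carry from comparisons rather than read a flag. A sketch:

            #include <stdint.h>

            /* Add two 64-bit words plus an incoming carry, producing an outgoing carry.
             * There is no carry flag in the language, so the carry is recomputed;
             * optimizers often turn this back into an add-with-carry instruction. */
            uint64_t add_with_carry(uint64_t a, uint64_t b, unsigned carry_in, unsigned *carry_out) {
                uint64_t sum = a + b;
                unsigned c = (sum < a);            /* carry out of a + b */
                sum += carry_in;
                c |= (sum < (uint64_t)carry_in);   /* carry from adding carry_in */
                *carry_out = c;
                return sum;
            }

            (GCC and Clang also provide builtins such as __builtin_add_overflow for this, but those are compiler extensions, not part of standard C.)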

            • 3 weeks ago
              Anonymous

              >segmentation
              I thought it was dropped. Still seems like an unclean idea to me
              >That's because x86 was designed before C became popular
              Nope, it's because financial institutions are required by law to keep some records in decimal numbers, else it wouldn't exist anymore
              >C doesn't have that. If your computer has a carry flag
                Yeah, my point with all of these was that today's fast architectures were not built around C. C also doesn't have a carry flag or any of the other things because it wants to be able to run on any machine, which might not have carry flags, for instance. C should still run on any of the machines you describe. C is still very close to how any processor works, especially when you compare it with Java, functional, or other oop languages, which are much more abstracted.
                Yeah, the processor can do stuff C can't; that's why C runs on any processor, and it also shows modern cpus are not made around C.

              • 3 weeks ago
                Anonymous

                >C should still run on any of the machines you describe. C is still very close to how any processor works especially when you compare it with Java, functional or other oop languages which are much more abstracted.
                How will C run on a computer where numbers are decimal or character strings from 1 to 100 digits? There's no binary arithmetic or bitwise instructions. How is that anything like C? There are also JVM processors that run JVM bytecode as their instruction set and computers designed for functional languages. All of these are closer to their languages than C is to a PDP-11-like computer.

                Sure it will. C runs close to the operating system. This principle doesn't depend on what the operating system was written in; if it can compile and run C at all, and doesn't do it deliberately horribly just to bait cniles, then it compiles and runs C close to itself. C makes more direct and explicit use of kernel features than other langs

                >This principle doesn't depend on what the operating system was written in
                Look at how Mezzano and Genera do it.

                >and doesn't do it deliberately horribly just to bait cniles, then it compiles and runs C close to itself.
                Then you will say that any computer where C doesn't run close to the operating system is "deliberately horribly just to bait cniles" like on a Lisp machine, but that's actually how C works there. It uses vectors and "C pointers" are a pair of vector reference and integer offset.

                >C makes more direct and explicit use of kernel features than other langs
                No it doesn't. The only "kernel features" it has is pointer arithmetic but that has nothing to do with kernels. It has no way to access hardware. The pointer arithmetic can actually be within a dynamically typed vector or maybe an 8-bit byte vector for PDP-11-like emulation.

              • 3 weeks ago
                Anonymous

                >The only "kernel features" it has is pointer arithmetic
                >what is malloc
                >what is mmap

              • 3 weeks ago
                Anonymous

                Those aren't kernel features and have nothing to do with how computers work.

              • 3 weeks ago
                Anonymous

                >Those aren't kernel features
                The allocator is a kernel feature.
                Cope
                >and have nothing to do with how computers work.
                We're talking about how OSes work now
                Try to keep up

              • 3 weeks ago
                Anonymous

                >>what is malloc
                A high-level abstraction over sbrk.
                >>what is mmap
                Available in Python.
                C and C++ and Rust give you more control over memory than most languages but you failed to point out how they do that. Hint: it's about the language, not the standard library.
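
                To make the "abstraction over sbrk" point concrete, a toy bump allocator might look like this; sbrk is a legacy Unix call and real malloc implementations are far more involved (and mostly use mmap today), so this is purely illustrative:

                #include <stddef.h>
                #include <stdint.h>
                #include <unistd.h>

                /* Toy allocator: extend the program break and hand out the new space.
                 * No free(), no real alignment guarantees, no thread safety. */
                void *toy_malloc(size_t size) {
                    size = (size + 15) & ~(size_t)15;     /* round the request up to 16 bytes */
                    void *p = sbrk((intptr_t)size);       /* ask the kernel to grow the heap */
                    return (p == (void *)-1) ? NULL : p;
                }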

              • 3 weeks ago
                Anonymous

                >Hint: it's about the language, not the standard library.
                not necessarily
                you can do pointer arithmetic in java
                if you find a bug in the jvm

              • 3 weeks ago
                Anonymous

                >How will C run on a computer where numbers are decimal or character strings from 1 to 100 digits? There's no binary arithmetic or bitwise instructions.
                That's an implementation question, and yes, it would be slower on such a processor. However, a java bytecode processor is probably slower than a core i9 or M1 pro. The latter are faster for implementation reasons (the circuits are cheaper, easier to make and faster). Since C is built around those circuits it runs faster on these processors, which are faster anyway. So it depends on the computer whether or not C is close to the metal, but most computers nowadays are ones where C is close to the metal

                We learned about signed number representation, 1s complement, 2s complement, 9s complement and 10s complement. Could've actually learned about programming instead of wasting my time on trivia

                Then maybe you shouldn't complain when you overflow a number

      • 3 weeks ago
        Anonymous

        We learned about signed number representation, 1s complement, 2s complement, 9s complement and 10s complement. Could've actually learned about programming instead of wasting my time on trivia

  12. 3 weeks ago
    Anonymous

    >C will give you the discipline and insight into how computers fundamentally work.
    Where does this meme originate from? I am willing to bet not one (ONE) cnile knows the first thing about Hardware Description language and how logic boards work.

    • 3 weeks ago
      Anonymous

      (1/?)
      >logic boards
      macfig detected
      anyway i know and use c and have dabbled in verilog and i know (concerning a cpu, not a motherboard, and yes it is called a motherboard you macfig) you can build a minimal clock by wiring a not gate to itself but most clocks don't actually work this way and i know the clock gets wired to a stepper though i am not sure how the stepper drives everything but i do know low and high voltages represent 0 and 1 but which is which can vary and there is also a zero-current third state Z used for bus arbitration which works because the "bus" is literally just any wire (or wire-equivalent pathway) and also the cpu has registers but ram is also made of registers, they are not (necessarily) a different technology, ram is just farther away, and the way a register works is you just make an infinity symbol with gates, i can't remember which gates but it effectively creates a physical manifestation of a tautological paradox, thus trapping a voltage in a subcircuit for as long as power remains, and that's how one bit of storage works, also the alu is made of a lot of gates that do logic and math, this is easy enough since gates already do logic, and math can be expressed as just convoluted combinatorial logic on many bits, btw i've described how a bus is one wire and a single bit of storage is one flipflop, so how does that turn into a byte that can go to the alu and get math done on it, well there are two ways, a parallel bus is just 2^N buses together where N>=3, a serial bus is just one bus but the distinct voltages making up the byte go through it one at a time and there may be timing protocol or oscillation necessary to distinguish adjacent like voltages, the instruction opcode multiplexes the joint alu/storage interface but that's not all there is to instruction execution, adjacent instructions run in parallel in a pipeline, i forget the exact pipeline, i think it was something like one instruction can be fetched, the one before th

      • 3 weeks ago
        Anonymous

        [...]

        >all this seething just to prove me right
        (You)

        • 3 weeks ago
          Anonymous

          The fuck are you talking about retard name one (1 (one)) thing i said that shows i don't know how motherboards work

          • 3 weeks ago
            Anonymous

            >motherboard
            >not logic board
            >not an apple user
            This proves you have no idea what you're talking about.
            I didn't read the rest

      • 3 weeks ago
        Anonymous

        [...]

        Confirmed retard

    • 3 weeks ago
      Anonymous

      >Where does this meme originate from?
      Because shills gaslit a bunch of Indians, Chinese, and Africans into thinking C was the origin of everything in programming and now they worship C.

    • 3 weeks ago
      Anonymous

      [...]

      (2/?)
      at can be decoded (bitparsed), the one before that can be executed, the one before that can store results to registers, and the one before that can operate on memory, all at the same time, anyway this complicates usage of the cache (which is a small imperfect modulo-view of memory wired closer to the cpu to preoptimize memory accesses), and furthermore can cause instructions to misbehave, i think(?) some architectures require you to do nop padding to work around this while others intelligently block instructions out of the pipeline until instructions on which they depend have reached appropriate stages, though i am not sure how this is done, btw memory doesn't mean ram, memory is the complete address space of the cpu and generally includes everything on the entire motherboard, this principle is called mmio, some data is not memory-mapped but exists on inaccessible buffers inside devices which instead present "controllers," bus interfaces which provide serial access, this principle is called pmio, well actually it's not, in theory ports exist as a completely different bus system from memory and the cpu has specialized instructions to access them, but in practice hardware devs just put ports into memory space instead i think(?) also at least one bus system definitely existing separately from memory space is the interrupt lines, the cpu has interrupt lines, hardware can send interrupts to the cpu which will cause it to asynchronously call interrupt handlers at addresses looked up in a register table called the uh.... interrupt descriptor table something like that anyway it's not really a call so you have to use a different instruction to return from it but it's semantically similar in that it won't forget where it was before it was forced to jump away and it will preserve your stack frame somehow i think or maybe you have to do that manually i'm not sure, also there is an "mmu," a chip that changes the apparent layout of devices in the addre

    • 3 weeks ago
      Anonymous

      [...]

      (3/?)
      ss space, the cpu uses this for context switches, it has a register table of "task descriptors" for it to use to tell the mmu what it wants where depending on what kind of code it's in, oses can use this feature to implement processes, and i think that's about the limit of my knowledge on low-level stuff, i can't think of anything else, oh yeah remember how i said serial buses can require timing protocols and/or voltage oscillation for disambiguation, well that applies to usb especially and usb is shit and i hate it, ok im done

  13. 3 weeks ago
    Anonymous

    [log in to view media]

    >If you're learning programming, learn C first. C will give you the discipline and insight into how computers fundamentally work. Master C and you will master programming.
    Is this your brain on C? Like, if you want to learn about computers and shit, go learn about them; no need to learn them through some shitty language with a gatekeeping community.

  14. 3 weeks ago
    Anonymous

    It is a troll thread, probably, but I agree. Started seriously learning programming from C++, and am very happy that I did. Learned a lot about PCs and writing efficient code, and had a lot of fun in the process. Never used it after tho, and rarely enjoy other languages as much.

  15. 3 weeks ago
    Anonymous

    The first language should be Common Lisp. Gotta learn how powerful a language can be and how C-like languages have rotted our world.

  16. 3 weeks ago
    Anonymous

    [log in to view media]

    Racket imo

  17. 3 weeks ago
    Anonymous

    > #include <http/http.h>
    > http_run_server(8080);

    Whoa.... I totally know how computers work now.

  18. 3 weeks ago
    Anonymous

    >how computers fundamentally work
    oh? do you do instruction reordering in your C code? no? hmm

  19. 3 weeks ago
    Anonymous

    Based Dre

  20. 3 weeks ago
    Anonymous

    >C
    >how computers fundamentally work
    by that logic, IMO you should be telling people to learn assembly, not C.
    C does help learn how an operating system works.

    • 3 weeks ago
      Anonymous

      >C does help learn how an operating system works.
      Only if the operating system is written in C. It won't help you learn how an operating system written in Lisp or Ada or anything else works.

      • 3 weeks ago
        Anonymous

        well, of course, yeah, and since the most relevant OSs are written in C or C++, you'll learn a bit about those

      • 3 weeks ago
        Anonymous

        Sure it will. C runs close to the operating system. This principle doesn't depend on what the operating system was written in; if it can compile and run C at all, and doesn't do it deliberately horribly just to bait cniles, then it compiles and runs C close to itself. C makes more direct and explicit use of kernel features than other langs
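
        a small illustration of that, assuming a POSIX system (so it's a sketch, not a claim about every OS): hello-world written against the kernel's write() interface directly, with no language runtime sitting in between. the point is just that C lets you sit right on top of the system call.

        #include <unistd.h>   /* write(): thin wrapper over the write system call */

        int main(void)
        {
            const char msg[] = "straight to file descriptor 1\n";
            /* fd 1 is stdout; sizeof msg - 1 drops the trailing '\0' */
            ssize_t n = write(1, msg, sizeof msg - 1);   /* bytes written, or -1 on error */
            return n < 0;
        }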

  21. 3 weeks ago
    Anonymous

    Is C++ okay too? I'm currently learning it.

    • 3 weeks ago
      Anonymous

      any statically typed language is decent really

    • 3 weeks ago
      Anonymous

      when learning c++ you will be guided towards using its oop features, which may complicate things if you don't know C.
      from a didactic standpoint, C is a simplified C++
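
      for example (the names here are made up, nothing deep): the plain-C version of the thing a C++ course would wrap in a class with a constructor and member functions straight away; just a struct plus a couple of free functions.

      #include <stdio.h>

      /* plain C: data is a struct, behaviour is ordinary functions taking a pointer */
      struct counter { int value; };

      static void counter_init(struct counter *c) { c->value = 0; }
      static void counter_bump(struct counter *c) { c->value++; }

      int main(void)
      {
          struct counter c;
          counter_init(&c);
          counter_bump(&c);
          counter_bump(&c);
          printf("%d\n", c.value);   /* prints 2 */
          return 0;
      }

      once this clicks, the C++ class version is recognisably the same thing with the first argument hidden behind "this".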

  22. 3 weeks ago
    Anonymous

    I'm not interested in learning how computers work. I just want to build shit.

    • 3 weeks ago
      Anonymous

      yesterday we made a program that finds swastika patterns in one billion hex digits of pi.

      one lad wrote it in python; it took 1200 secs to run.
      i built one in C. it took 17 secs and had a memory footprint of 500kB to do the same task

      there's pros and cons with every language. choose accordingly.

    • 3 weeks ago
      Anonymous

      after asking the compiler to optimize, it went down to 3 seconds.
      that's to parse 1B characters, turn them into a binary representation i could work with, and check against a specified pattern which was 25 bytes long.
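
      roughly the shape of the program being described, for anyone curious - not that anon's actual code; the file name, the pattern contents, and the assumption of clean lowercase hex input with no whitespace are all made up for this sketch.

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      #define PAT_LEN 25

      /* assumes the input is nothing but lowercase hex digits */
      static unsigned char hexval(int c)
      {
          return (unsigned char)(c <= '9' ? c - '0' : c - 'a' + 10);
      }

      int main(void)
      {
          FILE *f = fopen("pi_hex.txt", "rb");           /* hypothetical input file */
          if (!f) { perror("pi_hex.txt"); return 1; }

          /* pack two hex digits into each byte */
          size_t cap = 1u << 20, n = 0;
          unsigned char *buf = malloc(cap);
          int hi, lo;
          while (buf && (hi = fgetc(f)) != EOF && (lo = fgetc(f)) != EOF) {
              if (n == cap) {
                  unsigned char *tmp = realloc(buf, cap * 2);
                  if (!tmp) break;
                  buf = tmp;
                  cap *= 2;
              }
              buf[n++] = (unsigned char)(hexval(hi) << 4 | hexval(lo));
          }
          fclose(f);

          /* naive scan for a fixed 25-byte pattern (contents made up here) */
          unsigned char pattern[PAT_LEN] = {0};
          size_t hits = 0;
          for (size_t i = 0; buf && i + PAT_LEN <= n; i++)
              if (memcmp(buf + i, pattern, PAT_LEN) == 0)
                  hits++;

          printf("%zu matches in %zu packed bytes\n", hits, n);
          free(buf);
          return 0;
      }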

  23. 3 weeks ago
    Anonymous

    kinda relevant to the discussion, what's a good resource to learn x86 assembly?

    • 3 weeks ago
      Anonymous

      Start from 8085

  24. 3 weeks ago
    Anonymous

    Nah, I learnt C first and regret it; in industry I typically use Java/Groovy and Python, and would have started with Java if I could do it all again.

  25. 3 weeks ago
    Anonymous

    How about starting with C#?

    • 3 weeks ago
      Anonymous

      Not the same

      • 3 weeks ago
        Anonymous

        >muh pointers muh memory management
        unless you're an EE you don't need such things. c# is fine. garbage collection is a nice qol feature, and so are virtual machines

  26. 3 weeks ago
    Anonymous

    I'm learning coral and sql lmao

  27. 3 weeks ago
    Anonymous

    Let's not get ahead of ourselves. First, get a standard calculus text and dive in. You should also get linear algebra and discrete math books; make sure the discrete text is proof-based.

    Once you're a couple of chapters into your discrete book (you will want to have covered basic propositional and higher-order logic, and basic proofs), you may begin learning programming and computer architecture. As a litmus test, if you don't know what this statement is

    ∀P((0∈P∧∀i(i∈P→i+1∈P))→∀n(n∈P))

    you aren't ready to take the reins of a computer.

    Now, forget what you do know about computer programming:

    First, you learn boolean logic operations
    then, you learn transistor logic
    then, you learn how to build functional units from logic gates
    then, you learn CPU design
    then, and only then, you learn assembly language
    then, after you have mastered assembly language (not dabbled, but mastered it), you learn C,
    then you may learn the higher-level languages of your choice, but you will always use C and assembly as your primary languages because everything else is unnecessary bloat.

    By this time you should be finished with your first wave of math and ready for the next: abstract algebra, analysis, multivariate and vector calculus, and, after you have progressed a way in those, topology.

    Finally, you become familiar with topoi, and study the internal logic of categories
    then familiarize yourself with (general) type theory, and its applications to programming. I also recommend studying how to reformulate mathematics in terms of globular categories for use in automatic theorem proving, because there is an inherent programming-like 'feel' to it.

    • 3 weeks ago
      Anonymous

      This, but to actually grasp transistor logic you need electronics, and for electronics you need to start with linear circuits. But to start with linear circuits you also need differential equations. All the above is for using transistors as components in circuits, but for transistors to really make sense (to get how the basic gates operate, cmos logic etc) you will also need semiconductor physics and possibly materials science. The last two will also help you understand any kind of transistor besides bjts and mosfets (useful for new technologies like finFETs etc). To do semiconductor physics, however, you also need electromagnetism, specifically field theory, so you need multivariate and vector calculus before that. I would also suggest using all of the above knowledge to study memory technologies, but this may be included in what the previous anon termed "transistor logic". There is also a thing which is technically missing, which would be how to connect the cpu with peripherals (serial port, vga etc), but I don't know if this is that relevant if you only want to learn programming. However, if you decide to learn that too, you need to know about communication protocols, so we are definitely talking about a digital communications or telecom study at least, which may require digital signal processing but definitely requires electromagnetic field theory and coding theory, so we also need algebra here. Idk, am I forgetting something here fellas?
      Anyway if you follow my and the other anon's advice you will only have learned programming, so keep in mind you still have a long way to go (system programming, os, programming language theory, compilers, software engineering, algorithms, network programming, webdev etc)

      • 3 weeks ago
        Anonymous

        Forgot inorganic chemistry and quantum mechanics as requirements for materials science targeted at semiconductors and semiconductor physics, so you also need classical and relativistic mechanics before quantum (with their math requirements, which are calculus and differential equations). Sorry, I accidentally considered them common knowledge since they are like first-semester stuff that everyone does regardless of whether they learn programming or not lol. Again sorry

    • 3 weeks ago
      Anonymous

      If this is not bait, touch grass and have sex. Instead of going full LULZzo, you can grind ~100 algorithm puzzle problems for a few months and get literally over 200k yearly kek.

      • 3 weeks ago
        Anonymous

        >he thinks money motivates us for code
        Nice 100 IQ

        • 3 weeks ago
          Anonymous

          Money buys you more time to engage deeply in whatever autistic interest you might have.

          • 3 weeks ago
            Anonymous

            Programming is already the autistic interest

      • 3 weeks ago
        Anonymous

        I learned most of these things despite not even liking them, just because I was interested in getting the full picture. Now I know exactly what every single line of code does. Unemployed though, because I refuse to write "understandable" code; I only write efficient code (actual assembly sometimes, when C isn't enough). Also I never write comments or document anything since it's a waste of time

  28. 3 weeks ago
    Anonymous

    c is one of the easiest programming languages, with the fewest features and little to no abstraction over assembly
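
    a concrete bit of that: array indexing in C is defined as pointer arithmetic, so all three expressions below touch the same byte. how literally this maps to machine instructions is up to the compiler, but the model the language gives you is basically "add an offset to an address and dereference it".

    #include <stdio.h>

    int main(void)
    {
        char buf[4] = { 'a', 'b', 'c', 'd' };
        char *p = buf;

        /* buf[2], *(buf + 2) and p[2] are the same access by definition */
        printf("%c %c %c\n", buf[2], *(buf + 2), p[2]);   /* prints: c c c */
        return 0;
    }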

  29. 3 weeks ago
    Anonymous

    [log in to view media]

    C will give you insight into how the PDP-11 worked; for actual fundamental insight you need to read pic related and learn assembly together with an HDL.

  30. 3 weeks ago
    Anonymous

    i'm a 30 year old who's never had a job dude. either i learn javascript so i can be a webdev or i'm fucked. what would learning C do for me? the barrier to entry for an actual software engineering job is so high that nobody with my lack of job history and old age could do it.

    • 3 weeks ago
      Anonymous

      Well, not with that kind of attitude.

  31. 3 weeks ago
    Anonymous

    C is for lazy fucking retards that can't properly write their own code.

  32. 3 weeks ago
    Anonymous

    What if you learn golang first?
