It has the same, how should I put it, schizo writing quality. Lemme guess, your "strategy" was to shit up the thread where people made arguments against Rust so it would hit the bump limit, then create a new thread and samefag all over again? Well, I'm off to bed (insomnia is a bitch). Your posts are insanely off topic. Now, if I didn't know better, I would suspect some janitorial assistance. I wonder what a few of my frens will find for me.
Thank you for confirming something. I do know that the jannies are behind a lot of this now. I have to warn you; you've picked up an unusually determined adversary. As far as I'm concerned, it's open season on this board and on this site
Yep. Unfortunately compiler devs and standards authors have crippled any level of control the developer has over the program, so unless you're willing to restrict yourself to a single version of a single compiler like I do, you need to watch out for shit like this.
>MUH UNDEFINED BEHAVIOR!
Because the variable is in automatic storage and the write was optimized out, leaving the same effect as the initializer char buffer[MAX] = {0};. If he didn't want any optimization applied, he should have used "volatile".
Because the compiler expects you to use the buffer after writing to it. If you don't check the buffer contents after writing, it assumes the write is unnecessary and optimizes it out.
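For what it's worth, the usual workaround is to route the stores through a volatile-qualified pointer, which the optimizer must treat as an observable side effect. A minimal sketch (the name secure_zero is made up here):

```c
#include <stddef.h>

/* Writes through a volatile lvalue are observable side effects, so the
   compiler may not delete them as dead stores even when the buffer is
   never read afterwards. */
static void secure_zero(void *p, size_t n) {
    volatile unsigned char *vp = (volatile unsigned char *)p;
    while (n--) {
        *vp++ = 0;
    }
}
```

Unlike a plain memset followed by no further reads, these stores survive -O2.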
>the compiler is free to
Just because the compiler is "free to" do something doesn't mean they should.
And yet the GCC and Clang devs think that it is a good idea to delete the user's code.
>x + 1 will always be greater than x
this is demonstrably and verifiably false and i just gave you an example (
It's true if the number is about to overflow.
For a signed dword, 2147483647 + 1 == -2147483648 (which is obviously less than 2147483647).
).
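The wraparound itself can be observed without executing signed-overflow UB by doing the add in a fixed-width unsigned type and converting back. A sketch (the conversion back to a signed type is implementation-defined before C23, but every mainstream two's-complement target yields the wrapped value):

```c
#include <stdint.h>

/* Do the add in uint32_t, where wraparound is defined (mod 2^32),
   then reinterpret the bit pattern as a signed dword. */
static int32_t add_one_wrap32(int32_t x) {
    return (int32_t)((uint32_t)x + 1u);
}
```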
2 weeks ago
Anonymous
are you actually retarded?
2 weeks ago
Anonymous
if you know what you're doing, you'll use "volatile". if you don't know what you're doing, this is likely an unnecessary check you added because you're an idiot, in which case the compiler is right to remove it.
You seem to be using the mathematical definition of addition. Computers store numbers in fixed-width binary, which cannot represent an unlimited range. You cannot store an arbitrarily large number inside a normal integer, and integers overflow (or wrap around) when you try to add too much.
GCC and Clang compilers are broken because they incorrectly assume a mathematical model of numbers instead of acknowledging the reality of fixed width integers, as used by every computer in existence.
2 weeks ago
Anonymous
Let me explain what you did here:
>x + 1 will always be greater than x
this is demonstrably and verifiably false and i just gave you an example ( [...] ).
You have read the first part of the sentence >x + 1 will always be greater than x
and then completely ignored the second part of the sentence >unless you can't trust that (aka. mark it as volatile)
And based only on the first part, you re-iterated exactly the issue >this is demonstrably and verifiably false and i just gave you an example (
It's true if the number is about to overflow.
For a signed dword, 2147483647 + 1 == -2147483648 (which is obviously less than 2147483647). ).
that was addressed in the second part of the sentence.
2 weeks ago
Zanon
Again: if you can't trust the addition, if there's a possibility of overflow in whatever you are doing, you just mark the fucking variable as volatile and the compiler will do what you want.
Express your intent that you are expecting the integer to behave like a 32 bit integer that overflows and not just a number of integer type.
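As a sketch of what that looks like in code (the function name is invented here; note this only stops the compile-time folding, actually hitting INT_MAX would still be UB at run time):

```c
/* The volatile copy forces the compiler to load x for the comparison
   rather than folding "x + 1 < x" to the constant 0. */
static int check_wrap_volatile(int x_in) {
    volatile int x = x_in;
    return x + 1 < x;
}
```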
2 weeks ago
Anonymous
Let me explain what you did here: [...]
You have read the first part of the sentence >x + 1 will always be greater than x
and then completely ignored the second part of the sentence >unless you can't trust that (aka. mark it as volatile)
And based only on the first part, you re-iterated exactly the issue >this is demonstrably and verifiably false and i just gave you an example ( [...] ).
that was addressed in the second part of the sentence.
Volatile has a number of other effects, such as forcing the variable to be re-read from memory on every access instead of being cached in a register. This slows down the program for no reason.
Instead of using volatile, it is more effective to simply use a compiler that does not have this bug in it (i.e. don't use GCC or Clang).
2 weeks ago
Anonymous
this has nothing to do with your retardation, which I explained here:
Let me explain what you did here: [...]
You have read the first part of the sentence >x + 1 will always be greater than x
and then completely ignored the second part of the sentence >unless you can't trust that (aka. mark it as volatile)
And based only on the first part, you re-iterated exactly the issue >this is demonstrably and verifiably false and i just gave you an example ( [...] ).
that was addressed in the second part of the sentence.
so why did you quote me?
2 weeks ago
Anonymous
On a computer, x + 1 is not always greater than x. This is simply a fact.
If you deny this you are denying reality. Computing is not mathematics.
2 weeks ago
Anonymous
and now you're repeating the same mistake again. I seriously hope for your sake that you're only pretending to be retarded.
2 weeks ago
Anonymous
Again, I must repeat to you that x + 1 is not always greater than x.
Assuming otherwise is an error in the compiler, nothing more and nothing less.
2 weeks ago
Anonymous
>Again, I must repeat to you that x + 1 is not always greater than x.
Only because you're a brainlet. Otherwise your warped perception wouldn't force you to repeat that, because it'd be clear to you that nobody actually claimed so.
2 weeks ago
Anonymous
>nobody actually claimed so.
Refer:
No because x + 1 will always be greater than x, x + 1 < x will always be false unless you can't trust that (aka. mark it as volatile). Is that simple.
>because x + 1 will always be greater than x
2 weeks ago
Anonymous
Ironically, you're doing the same thing you accuse GCC and Clang of doing.
The compilers throw away checks whose conditions they assume can never be true and which therefore have no effect on the outcome.
You ignored the latter part of the sentence (for no discernible reason) and act as if that wouldn't change the meaning. And then you complain that the first part does not take into consideration what the second part (which you just ignored) actually did.
2 weeks ago
Anonymous
>The compilers trow away instructions that they assume to always be true
The compiler makes an incorrect assumption which is obviously, demonstrably and verifiably false, which is a bug in the compiler(s) which do so (GCC and Clang).
The best way to avoid this error in your programs is to use compilers, such as MSVC, TCC or CompCert, which do not contain this bug - rather than spamming volatile on every variable which might be affected.
I hope you understand and do not require additional clarification.
2 weeks ago
Anonymous
You make an incorrect assumption which is obviously, demonstrably and verifiably false, which is a bug in your brain which does so (You).
The best way to avoid this error in your case is to read all of a text and interpret it in whole instead of taking parts out of context.
I hope you understand and do not require additional clarification.
2 weeks ago
Anonymous
As per the standard, integer overflow is uNdEfInEd BeHaViOr and compilers are free to assume that it never happens.
2 weeks ago
Anonymous
Whether or not this is true is not important. The fact is, the compiler exists to produce working and useful code, not to follow the standards. If a compiler produces broken executables from sensible source code, there is a bug in the compiler.
2 weeks ago
Anonymous
GCC and Clang produce code for more than x86 and ARM. Integer overflow on some more esoteric (or future) architectures might cause wildly different results than wrapping, and you shouldn't rely on it if you want your code to be portable. (Note that standards-compliant C code can be run with a LISP interpreter!)
2 weeks ago
Anonymous
I write Windows applications. Windows doesn't run on those literally-who processors, which means I don't want to make their problems my problems by using a crappy compiler which prioritizes obscure architectures over real ones that people actually use.
>Your program might be linked with ubsan and get SIGTRAPped when an overflow occurs.
Linking third party code which breaks my program is out of scope for me. Simply don't do that and there will not be any bugs.
>Integer overflow is reliable and well defined on all processor architectures
But you know what is not reliable? Idiots who work with you who may decide that "int is not enough for that abomination I'm about to write, so I'll change it to long". Not to mention that this reliability only lasts for a short period of time. If you want your software to work on something that's going to be developed in the future, you should avoid this shit. This is bad practice in general and every sane programmer will always avoid wrapping.
x + 1 < x is a generic test condition which works for *all* integer data types, no matter their signedness or bit width. That is why it is better to use it than some other test, like x == INT_MAX. Relying on wrapping is the cleanest and most future-proof form of overflow check that exists (other than using the built-in overflow flag on the x86 architecture, but there is no way to express that in C).
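For completeness, GCC and Clang do expose the hardware overflow flag from C, via the `__builtin_add_overflow` family (a compiler extension, not standard C); it is type-generic like the wrapping check, but executes no UB:

```c
#include <limits.h>

/* Returns nonzero iff x + 1 would overflow int, using the compiler's
   overflow intrinsic instead of relying on wraparound. */
static int will_inc_overflow(int x) {
    int result;
    return __builtin_add_overflow(x, 1, &result);
}
```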
2 weeks ago
Anonymous
>i'm programming for windows
Well in that case you can do whatever you want. Don't clean up the heap, use goto for loops, don't catch exceptions, etc. You don't need all those "guidelines" on how to write actually good code and not some shit, because Windows will do anything that you forgot/haven't done for you.
2 weeks ago
Anonymous
>I don't want to make their problems my problems by using a crappy compiler which prioritizes obscure architectures over real ones that people actually use
you cant honestly be claiming that the MSVC compiler is better than gcc or clang, right?
2 weeks ago
Anonymous
The compiler's behavior is correct because it is obeying the standard: it can choose to do whatever it wants when it sees undefined behavior. So it is a one-to-many relationship between the source and the generated assembly.
2 weeks ago
Anonymous
>The compiler's behavior is correct because it is obeying the standard
The policeman's behavior [bashing you over the head repeatedly] is correct because he is obeying the law.
2 weeks ago
Anonymous
You are trying to compile C code which has a standard that dictates what the behavior of the program is supposed to be. The compiler implements the standard. Not my fault that you can't comprehend this.
2 weeks ago
Anonymous
If you violated the law then what is the policeman supposed to do? Forgive you?
Not to mention that computers can't assume what you had in mind. They are simply doing what they are told to do. If you want some code to work while it's shit then the problem is not on the computer side.
>return x == INT_MAX;
Fragile code which will break when you change the datatype of x. 0/10 bait, try again
>signed integers are not allowed to overflow by the standard
In reality, signed integers overflow all the time. The standard also does not say that signed integers "cannot" overflow; it merely says that signed overflow is undefined. This is a clause which is included to allow for obscure and/or outdated processor architectures in which signed overflow is undefined. However, any reasonable person would defer the behavior of signed overflow to the underlying architecture.
[...]
Windows only cleans up the heap upon process termination. If your program runs for any non-trivial amount of time, then heap clean-up is still necessary.
[...]
Refer to my previous points on signed overflow
>Fragile code which will break when you change the datatype of x.
It's a function, retard. If you changed the type then it's a type mismatch. -1/10 bait, try again.
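For reference, a check that tolerates an arbitrary increment (not just +1) can be built from the limit macros alone, with no overflow ever executed; a sketch with an invented name:

```c
#include <limits.h>

/* True iff x + y would overflow int.  Only comparisons against
   INT_MAX/INT_MIN are performed, so no UB is ever executed. */
static int add_would_overflow(int x, int y) {
    if (y > 0) return x > INT_MAX - y;
    if (y < 0) return x < INT_MIN - y;
    return 0;
}
```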
2 weeks ago
Anonymous
>The fact is, the compiler exists to produce working and useful code, not to follow the standards.
Then there is literally no point in a standard.
2 weeks ago
Anonymous
great, you figured it out.
2 weeks ago
Anonymous
You're right that it's an error in the compiler
You're wrong about x+1 not always being greater than x
x+1 is always greater than x unless there is integer overflow
Since integer overflow is UB, the compiler can do anything it wants. It can choose to wrap around, zero it, cast it as a long, etc.
The problem is that if the compiler chooses to wrap around, for example, then the assumption that x+1 is always greater than x is false, hence the compiler has inconsistent logic
but if it casts it to a long, then it can continue that assumption
The error is not the assumption per se, its the inconsistent compiler logic - which is indeed a compiler bug.
2 weeks ago
Anonymous
>You're right that it's an error in the compiler
wrong, you retard
>it's the inconsistent compiler logic - which is indeed a compiler bug
wrong, you retard
the logic is very consistent
you are not allowed to *cause* overflow in signed integer
period
signed integers are meant to operate only in their limits
and you as a programmer *must* ensure this
it's a bug in your program if you cause or rely on signed integer overflow
that's why, from the purely mathematical point of view, x + 1 < x can be assumed false for signed integers
so it's always false, and can be optimized out
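Concretely, this is the transformation being argued about (the folding behaviour is what the godbolt links in this thread show at -O2; the C itself is just the check):

```c
/* Under the assumption that signed overflow never happens, x + 1 < x
   can never hold, so GCC/Clang at -O2 fold this to "return 0". */
int check(int x) {
    return x + 1 < x;
}
```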
2 weeks ago
Anonymous
>you are not allowed to *cause* overflow in signed integer
>period
Yes you are. The program compiles, hence it's valid C.
The language of the C spec is: >If an exceptional condition occurs during the evaluation of an expression (that is, if the result is not mathematically defined or not in the range of representable values for its type), the behavior is undefined.
Hence it is the /compiler/ that is responsible for any bugs.
2 weeks ago
Anonymous
imbecile
literally
causing ub is YOUR BUG
2 weeks ago
Zanon
And your example wouldn't be cached in a register at all for ABI and calling convention reasons; just the return value of the function would be cached, at best, in rax.
2 weeks ago
Anonymous
My example is just an example, intended to be short and simple in order to demonstrate compiler bugs in GCC and Clang. Real world examples are more complex. For example, the overflow checking function could be inlined.
2 weeks ago
Zanon
>My example is just an example
Yeah, because in real world situation you would use the register specifier too and avoid non-inlined functions at all to ensure it will be cached in a register.
if you know what you're doing, you'll use "volatile". if you don't know what you're doing, this is likely an unnecessary check you added because you're an idiot, in which case the compiler is right to remove it.
Relying on wrapping is considered a bug not only in C++ but in any language. In this case you should write better code. If you want to check whether you've reached a type's limits, you can compare against the predefined macros and you'll never actually write this shit.
>Relying on wrapping is considered a bug
It is not a bug. Integer overflow is reliable and well defined on all processor architectures in use today. Compilers which insist on breaking the behavior are buggy and should not be used.
2 weeks ago
Anonymous
Your program might be linked with ubsan and get SIGTRAPped when an overflow occurs.
2 weeks ago
Anonymous
>Integer overflow is reliable and well defined on all processor architectures
But you know what is not reliable? Idiots who work with you who may decide that "int is not enough for that abomination I'm about to write, so I'll change it to long". Not to mention that this reliability only lasts for a short period of time. If you want your software to work on something that's going to be developed in the future, you should avoid this shit. This is bad practice in general and every sane programmer will always avoid wrapping.
2 weeks ago
Anonymous
>Integer overflow is reliable and well defined
wrong, you moron
UNSIGNED integer overflow is reliable and well defined
SIGNED integer overflow is not allowed by the standard, an undefined behavior, moronic and a BUG on your side
2.
msvc and others simply can't optimize - that's not a feature, that's a joke quality of a compiler
3.
the proper way of checking this:
int will_integer_overflow(int x) {
return x == INT_MAX;
}
compiles to 3 asm lines
https://godbolt.org/z/hreMb8rTP
4.
signed integers are not allowed to overflow by the standard
x + 1 in the context of x being a max_int is AN ERROR IN YOUR CODE and an UNDEFINED BEHAVIOR
you fucking imbecile
5. gcc/clang optimize code having standard in mind
so it is allowed to simply remove this shit code in this case (see 4.)
>return x == INT_MAX;
Fragile code which will break when you change the datatype of x. 0/10 bait, try again
>signed integers are not allowed to overflow by the standard
In reality, signed integers overflow all the time. The standard also does not say that signed integers "cannot" overflow; it merely says that signed overflow is undefined. This is a clause which is included to allow for obscure and/or outdated processor architectures in which signed overflow is undefined. However, any reasonable person would defer the behavior of signed overflow to the underlying architecture.
>i'm programming for windows
Well in that case you can do whatever you want. Don't clean up the heap, use goto for loops, don't catch exceptions, etc. You don't need all those "guidelines" on how to write actually good code and not some shit, because Windows will do anything that you forgot/haven't done for you.
Windows only cleans up the heap upon process termination. If your program runs for any non-trivial amount of time, then heap clean-up is still necessary.
>Integer overflow is reliable and well defined
wrong, you moron
UNSIGNED integer overflow is reliable and well defined
SIGNED integer overflow is not allowed by the standard, an undefined behavior, moronic and a BUG on your side
>any reasonable person would defer the behavior of signed overflow to the underlying architecture.
no reasonable person would write non-portable code based on undefined behavior
>non-portable
You are missing the forest for the trees.
If you target all the common architectures in use today (which means x86 and ARM), then your code is almost guaranteed to run on any hypothetical processors invented in the future, because processor designers always consider compatibility with existing code bases when designing their architectures.
Practically speaking your concerns with portability are irrelevant.
If you violated the law then what is the policeman supposed to do? Forgive you?
Not to mention that computers can't assume what you had in mind. They are simply doing what they are told to do. If you want some code to work while it's shit then the problem is not on the computer side.
[...]
>Fragile code which will break when you change the datatype of x.
It's a function, retard. If you changed the type then it's a type mismatch. -1/10 bait, try again.
>Not to mention that computers can't assume what you had in mind
The processor itself is not misinterpreting the executable code. The compiler is producing buggy executable code which does not accurately reflect the source code from which it was produced.
>If you violated the law then what is the policeman supposed to do? Forgive you?
If you are pulled over for a routine traffic check, the policeman can either check your license and registration, or he can do a full destructive search of your car while his friends beat you up on the ground for non-compliance with his vague and contradictory instructions. Both are fully legal but only one option is sane and reasonable.
I hope you understand my analogy and do not require any further clarification.
2 weeks ago
Anonymous
>Both are fully legal
Nope. At least not in my country. In my country, the second option is 5 to 10 years in jail for the policeman and his friends. In normal countries there's one law for one crime, overreacting or underreacting. Except in special cases where judgment by authorities applies, but that's never the case for anything computer-related.
>The processor itself
Yes, but we are talking about compilers. And compilers are the same as processors – they have standards. If your low-level language is based on assumptions, then the JS shithole is two floors lower; I think you are stranded.
2 weeks ago
Anonymous
>Nope. At least not in my country
Oh don't worry, I'm sure your country also has plenty of retarded laws that could theoretically be used to punish you for doing nothing wrong.
2 weeks ago
Anonymous
Never had problems with the law, though. Because I know the laws, and the same goes for standards. I don't try to find a loophole in something and get shot in the leg.
>3. the proper way of checking this:
That is fucking retarded and won't check for integer overflows:
int x = INT_MAX - 1;
will_integer_overflow(x); // false
x += 3; // Whoooooops
But it's functionally equivalent (if the compiler doesn't optimize) to
>the compiler is free to
Just because the compiler is "free to" do something doesn't mean they should.
And yet the GCC and Clang devs think that it is a good idea to delete the user's code.
>Integer overflow is reliable and well defined
wrong, you moron
UNSIGNED integer overflow is reliable and well defined
SIGNED integer overflow is not allowed by the standard, an undefined behavior, moronic and a BUG on your side
Integer overflows are undefined behaviour for the C virtual machine, as when C was designed there were still machines that used ones complement for some retarded reason.
Optimizing that call away seems reasonable as there only is that one edge case where it breaks.
If you want to have gcc/clang assume twos complement you can pass -fwrapv as a compiler flag, and now the compiler optimizes the call the same way [...] did manually, with the added benefit that you can change the datatype without having the issue described in [...].
https://godbolt.org/z/rdhq9n54P
>Integer overflows are undefined behaviour for the C virtual machine, as when C was designed there were still machines that used ones complement for some retarded reason.
No, it's specifically to allow optimizations. In real code, expressions like x+1>x are intended by the programmer to always return 1, but if the compiler has to account for overflow, it can't optimize them as such. Similar shit like x*2/2 should optimize to just x. If you want overflow to be defined then use unsigned. BTW, unsigned is defined to overflow like 2's complement.
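The asymmetry is visible in code: for unsigned operands the wraparound is defined, so the analogous comparison really can be false and the compiler must keep it (sketch):

```c
#include <limits.h>

/* u + 1u wraps to 0 at UINT_MAX by definition, so this function is not
   constant-foldable to 1 the way the signed version is. */
static int u_plus_one_greater(unsigned u) {
    return u + 1u > u;
}
```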
BASED
>the compiler is free to
Just because the compiler is "free to" do something doesn't mean they should.
And yet the GCC and Clang devs think that it is a good idea to delete the user's code.
>gcc bug >clang bug
it's only a bug in your dumb head
>need to use inline assembly to add 2 numbers because your shitty compiler doesn't support it without ugly hacks
the absolute state of gcc and clang users
2 weeks ago
Anonymous
Two numbers that might overflow and even kill people, yes. Not everyone is layer upon layer abstracted away from the microprocessor like you.
Integer overflows are undefined behaviour for the C virtual machine, as when C was designed there were still machines that used ones complement for some retarded reason.
Optimizing that call away seems reasonable as there only is that one edge case where it breaks.
If you want to have gcc/clang assume twos complement you can pass -fwrapv as a compiler flag, and now the compiler optimizes the call the same way
>pic
what a moron wrote this?
1.
gcc and clang optimize out moronic code
2.
msvc and others simply can't optimize - that's not a feature, that's a joke quality of a compiler
3.
the proper way of checking this:
int will_integer_overflow(int x) {
return x == INT_MAX;
}
compiles to 3 asm lines
https://godbolt.org/z/hreMb8rTP
4.
signed integers are not allowed to overflow by the standard
x + 1 in the context of x being a max_int is AN ERROR IN YOUR CODE and an UNDEFINED BEHAVIOR
you fucking imbecile
5. gcc/clang optimize code having standard in mind
so it is allowed to simply remove this shit code in this case (see 4.)
did manually, with the added benefit that you can change the datatype without having the issue described in
>return x == INT_MAX;
Fragile code which will break when you change the datatype of x. 0/10 bait, try again
>signed integers are not allowed to overflow by the standard
In reality, signed integers overflow all the time. The standard also does not say that signed integers "cannot" overflow; it merely says that signed overflow is undefined. This is a clause which is included to allow for obscure and/or outdated processor architectures in which signed overflow is undefined. However, any reasonable person would defer the behavior of signed overflow to the underlying architecture.
[...]
Windows only cleans up the heap upon process termination. If your program runs for any non-trivial amount of time, then heap clean-up is still necessary.
[...]
Refer to my previous points on signed overflow
>Integer overflows are undefined behaviour for the C virtual machine, as when C was designed there were still machines that used ones complement for some retarded reason.
No, it's specifically to allow optimizations. In real code, expressions like x+1>x are intended by the programmer to always return 1, but if the compiler has to account for overflow, it can't optimize them as such. Similar shit like x*2/2 should optimize to just x. If you want overflow to be defined then use unsigned. BTW, unsigned is defined to overflow like 2's complement.
That's why compilers do it now but the original reason was hardware differences I'm pretty sure. Hence the difference with unsigned.
C was not originally about crazy compiler optimizations. That's a modern thing.
>unsigned is defined to overflow like 2's complement.
That's because unsigned integers were consistent enough across architectures. It's not really "like 2's complement" any more than it's "like 1's complement".
>If you want overflow to be defined then use unsigned
What if I need negative numbers though?
>What if I need negative numbers though?
store the sign in an additional variable
2 weeks ago
Anonymous
>What if I need negative numbers though?
(int)((unsigned)x + 1)
or you could just use a compiler that doesn't play sneaky tricks to win in benchmarks
2 weeks ago
Anonymous
imbecile detected
see:
>pic
what a moron wrote this?
1.
gcc and clang optimize out moronic code
2.
msvc and others simply can't optimize - that's not a feature, that's a joke quality of a compiler
3.
the proper way of checking this:
int will_integer_overflow(int x) {
return x == INT_MAX;
}
compiles to 3 asm lines
https://godbolt.org/z/hreMb8rTP
4.
signed integers are not allowed to overflow by the standard
x + 1 in the context of x being a max_int is AN ERROR IN YOUR CODE and an UNDEFINED BEHAVIOR
you fucking imbecile
5. gcc/clang optimize code having standard in mind
so it is allowed to simply remove this shit code in this case (see 4.)
>Integer overflow is reliable and well defined
wrong, you moron
UNSIGNED integer overflow is reliable and well defined
SIGNED integer overflow is not allowed by the standard, an undefined behavior, moronic and a BUG on your side
Integer overflows are undefined behaviour for the C virtual machine, as when C was designed there were still machines that used ones complement for some retarded reason.
Optimizing that call away seems reasonable as there only is that one edge case where it breaks.
If you want to have gcc/clang assume twos complement you can pass -fwrapv as a compiler flag, and now the compiler optimizes the call the same way [...] did manually, with the added benefit that you can change the datatype without having the issue described in [...].
https://godbolt.org/z/rdhq9n54P
>Integer overflows are undefined behaviour for the C virtual machine, as when C was designed there were still machines that used ones complement for some retarded reason.
No, it's specifically to allow optimizations. In real code, expressions like x+1>x are intended by the programmer to always return 1, but if the compiler has to account for overflow, it can't optimize them as such. Similar shit like x*2/2 should optimize to just x. If you want overflow to be defined then use unsigned. BTW, unsigned is defined to overflow like 2's complement.
2 weeks ago
Anonymous
>or you could just use a compiler that doesn't comply with standards thus making undefined behavior where you do not expect it
Fixed.
2 weeks ago
Anonymous
Making signed integer overflow wrap is compliant with the standard. Running GCC with -fwrapv doesn't make it any less compliant with the standard.
The standard imposes no requirements on it.
2 weeks ago
Anonymous
I'm talking about other situations. I had a question on cppquiz where, after a quick check, I found out that M*VC literally disobeyed the standard.
If you think that's a joke, you should check out the Rust devs who are leaving the Rust project and requesting their names be removed from the repo. Literally people talking about filing GDPR requests against GitHub to have their names removed from the record of Rust development.
Volatile literally exists to tell the optimizer to fuck off
Memory access performed by memcpy is _not_ volatile. Plus it kills all optimizations; there are better ways to do this, they are just compiler-specific.
Wow, a security bug? If another process has access to the stack, that would be a security bug; it's better to just clear the stack after program termination.
The whole point of the original issue is that memory content must be destroyed. Stores must not be eliminated.
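One common C idiom for making sure the stores survive (where the optional Annex K `memset_s` isn't available) is to call `memset` through a volatile function pointer, so the optimizer can no longer prove the call is dead. A sketch:

```c
#include <string.h>

/* The volatile function pointer hides the callee from the optimizer,
   so the zeroing call cannot be eliminated as a dead store. */
static void *(*volatile memset_ptr)(void *, int, size_t) = memset;

static void secure_erase(void *p, size_t n) {
    memset_ptr(p, 0, n);
}
```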
2 weeks ago
Anonymous
No, the point is that the original values in the arrays are persisting, leading to security exploits. If the object is dropped altogether, the exploit does not happen to begin with.
God I love Rust so much it's unreal.
2 weeks ago
Anonymous
The object in the picture in the OP is dropped. Same as Rust, basically.
But because the backing memory isn't cleared it may be possible to access the old values if some other piece of code violates memory safety and does an out of bounds read.
Rust only protects against this insofar as it makes these out of bounds reads less likely. It doesn't make it easy to clear the backing memory. (The black_box trick helps but it's best-effort, not guaranteed.)
>That one's a vector, not a stack array. The stack array case does behave the same.
LLVM should be capable of optimizing it away with both dynamic and stack allocations (as long as deallocation is marked appropriately, e.g. as allocptr).
>But you can force the compiler to preserve the final state.
The same must be done in C, except you will need a GCC extension.
No, the point is that the original values in the arrays are persisting, leading to security exploits. If the object is dropped altogether, the exploit does not happen to begin with.
God I love Rust so much it's unreal.
The object will be dropped in C too. There is literally no difference. It is all done by LLVM for both Rust and Clang.
People who complain about basic optimizations like that don't realize how load-bearing they are in modern computing. Like, the developer's intent seems obvious in that one, but when the dead code is inside ten nested template/macro expansions, it might not be. If you could take those optimizations away from every program and library, like the programming language weenies want you to, the effect on performance would be so bad it'd make some things unusable.
I might be confusing this with something else, but doesn't a memset with 0 on a page-aligned memory region allow the kernel to replace the pages in question with pre-filled "zero" pages for increased performance?
Thus giving the pages with "secret" data to the next process that asks for memory?
>but doesn't a memset with 0 on a page aligned memory region have the ability to swap the pages in question with pre-filled "zero" pages for increased performance
Don't know of any implementation that does anything like that. This will degrade performance unless we are talking about really large buffers. The problem is that any destructive VM operations requires cross-CPU TLB flush. >Thus giving the pages with "secret" data just to the next process who asks for memory?
Any new memory given to a process are cleared for security reasons.
the buffer contain a password, the computer goes to sleep, glowies do a cold boot attack and steal the password because the password wasn't erased first
Your question would be sound if we were talking about a garbage collected language. In C, the program's allocated memory stays the way you set it during the lifetime of the program, nothing touches it unless your program explicitly does so.
Not OP but the language is defined by the standard and the optimizer is following the standard. The standard is fucked up in all kinds of ways. Maybe the language is actually dogshit. Either way, you're a dogshit programmer.
the optimizer fucking everything up and not any issue with the language
actually the OP fucking everything up and not any issue with the language nor compiler/optimizers
see:
>pic
what a moron wrote this?
1.
gcc and clang optimize out moronic code
2.
msvc and others simply can't optimize - that's not a feature, that's a joke quality of a compiler
3.
the proper way of checking this:
int will_integer_overflow(int x) {
return x == INT_MAX;
}
compiles to 3 asm lines
https://godbolt.org/z/hreMb8rTP
4.
signed integers are not allowed to overflow by the standard
x + 1 in the context of x being a max_int is AN ERROR IN YOUR CODE and an UNDEFINED BEHAVIOR
you fucking imbecile
5. gcc/clang optimize code having standard in mind
so it is allowed to simply remove this shit code in this case (see 4.)
>Integer overflow is reliable and well defined
wrong, you moron
UNSIGNED integer overflow is reliable and well defined
SIGNED integer overflow is not allowed by the standard, an undefined behavior, moronic and a BUG on your side
Integer overflows are undefined behaviour for the C virtual machine, as when C was designed there were still machines that used ones complement for some retarded reason.
Optimizing that call away seems reasonable as there only is that one edge case where it breaks.
If you want to have gcc/clang assume twos complement you can pass -fwrapv as a compiler flag, and now the compiler optimizes the call the same way [...] did manually, with the added benefit that you can change the datatype without having the issue described in [...].
https://godbolt.org/z/rdhq9n54P
>Integer overflows are undefined behaviour for the C virtual machine, as when C was designed there were still machines that used ones complement for some retarded reason.
No, its specifically to allow optimizations. Since in real code, expressions like x+1>x will be intended to always return 1 by the programmer, but if the compiler has to account for overflow, then it can't optimize as such. Similar shit like x*2/2 should optimize to just x. If you want overflow to be defined then use unsigned. BTW, unsigned is defined to overflow like 2's complement.
meanwhile rust doesn't even allow you to memset/clear memory without *unsafe* block
and in *any* other joke language like Java/C# you can't even know what memory you are operating on in any given moment, so the leaks can be under the hood and you won't even notice
You'd think these rustards would at least try to differentiate their gay little language from C. I guess that's too difficult for them.
2 weeks ago
Anonymous
Kinda hard to do if every time your brainrotted peanut brain sees a semicolon it thinks it's C. But hey, if it's literally C, why do cniles get filtered by the syntax?
Food for thought but I guess cniles lack the critical thinking part of the brain.
2 weeks ago
Anonymous
Spoken languages are substantially different from one another; rust can't even use a different statement terminator.
2 weeks ago
Anonymous
>Spoken languages are substantially different from one another;
Rust is not a spoken language. Go take your pseudo-intellectual takes back to /b/
2 weeks ago
Anonymous
What would be the point of using a different statement terminator? Being quirky?
Rust's statement terminator does have different semantics compared to C's, it's more of a separator. If the last statement of a block isn't terminated then the block evaluates to its value.
A postcondition is something that you want to be true once you're done.
They want the buffer to be cleared when the function returns to make sure that exploits can't read the data that was in it while it was still in use. (For example a password.)
So something else would have to go wrong for it to be a security issue. But these exploits do tend to crop up, so it's good to limit the damage they can do.
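One best-effort way to enforce that postcondition in C is the volatile-function-pointer trick (a sketch; the helper names are made up, and this is a common convention, not something the standard guarantees):

```c
#include <stddef.h>
#include <string.h>

/* Best-effort secure clear -- a sketch, not a standard guarantee.
 * Calling memset through a volatile function pointer stops the
 * compiler from proving the call dead, so the zeroing stores are
 * not removed by dead-store elimination. */
static void *(*volatile memset_v)(void *, int, size_t) = memset;

static void clear_buffer(void *buf, size_t len)
{
    memset_v(buf, 0, len);
}
```

Because the pointer is volatile, the compiler must assume the call could go anywhere and keep it, at the cost of an indirect call per clear.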
brainrotted cniles I wonder why they would even hold on to their legacy language! What's the point when they're senile and old??? IMAGINE CARING ABOUT LEGACY SUPPORT!
works for me
This you?
It has the same, how should I put it, schizo writing quality. Lemme guess, your "strategy" was to shit up the thread where people made arguments against rust so it would hit the bump limit. Then create a new thread and same fag all over again? Well I'm off to bed (insomnia is a bitch). Your posts are insanely off topic. Now if I didn't know better I would suspect some janitorial assistance. I wonder what a few of my frens will find for me.
>this post is extremely low quality
Thank you for confirming something. I do know that the jannies are behind a lot of this now. I have to warn you; you've picked up an unusually determined adversary. As far as I'm concerned, it's open season on this board and on this site
>Announcing reports
i love that webm, been ages since i last saw it
just char buf[MAX] = {0}; lmao
You can only do that at declaration time and it's useless unless you're doing some weird embedded shit.
it should be sizeof buffer since buffer is a variable, not a type.
Literally doesn't matter, people prefer consistency
Yep. Unfortunately compiler devs and standards authors have crippled any level of control the developer has over the program, so unless you're willing to restrict yourself to a single version of a single compiler like I do, then you need to watch out for shit like this.
>MUH UNDEFINED BEHAVIOR!
Which compiler & version do you use then?
He uses Windows anon.
I personally don't have a problem with that but I'm laughing at those who are using Windows as an excuse for shitty code.
Because the variable was optimized to automatic storage, to have the same effect as the initializer char buffer[MAX] = {0};. If he didn't want any optimization applied, he should use "volatile".
t. barely used C in my life
memset here is supposed to write 0 to all of buffer, so why would the compiler think it's redundant?
>why would the compiler think its redundant?
It's explained in the OP.
Because the compiler expects you to use the buffer after writing to it. If you don't check the buffer contents after writing, it assumes the write is unnecessary and optimizes it out.
So now a C compiler puts linear types on top of your code that was never meant to support that feature.
Unabomber was right
>the compiler is free to
Just because the compiler is "free to" do something doesn't mean they should.
And yet the GCC and Clang naggers think that it is a good idea to delete the user's code.
Again: that's the reason "volatile" exists, retarded.
Maybe the compiler devs should grow a brain and realize that if I'm explicitly checking for x + 1 < x, that means it shouldn't just delete the check.
No, because x + 1 will always be greater than x; x + 1 < x will always be false unless you can't trust that (aka mark it as volatile). It's that simple.
>x + 1 will always be greater than x
this is demonstrably and verifiably false and i just gave you an example (
).
are you actually retarded?
You seem to be using the mathematical definition of addition. Computers use binary storage for numbers which does not represent real numbers with an unlimited range. You cannot store an arbitrarily large number inside a normal integer, and integers overflow (or wrap around) when you try and add too much.
GCC and Clang compilers are broken because they incorrectly assume a mathematical model of numbers instead of acknowledging the reality of fixed width integers, as used by every computer in existence.
Let me explain what you did here:
You have read the first part of the sentence
>x + 1 will always be greater than x
and then completely ignored the second part of the sentence
>unless you can't trust that (aka. mark it as volatile)
And based only on the first part, you re-iterated exactly the issue
>this is demonstrably and verifiably false and i just gave you an example (
For a signed dword, 2147483647 + 1 == -2147483648 (which is obviously less than 2147483647). ).
that was addressed in the second part of the sentence.
Again: if you can't trust the addition, if there's a possibility of overflow in whatever you are doing, you just mark the fucking variable as volatile and the compiler will do what you want.
Express your intent that you are expecting the integer to behave like a 32 bit integer that overflows and not just a number of integer type.
Volatile has a number of other effects, such as forcing every read/write to go through memory instead of being cached in a register. This slows down the program for no reason.
Instead of using volatile, it is more effective to simply use a compiler that does not have this bug in it (i.e. don't use GCC or Clang).
this has nothing to do with your retardation I explained here:
so why did you quote me?
On a computer, x + 1 is not always greater than x. This is simply a fact.
If you deny this you are denying reality. Computing is not mathematics.
and now you're repeating the same mistake again. I seriously hope for you you're only pretending to be retarded.
Again, I must repeat to you that x + 1 is not always greater than x.
Assuming otherwise is an error in the compiler, nothing more and nothing less.
>Again, I must repeat to you that x + 1 is not always greater than x.
Only because you're a brainlet. Otherwise your warped perception wouldn't force you to repeat that, because it'd be clear to you that nobody actually claimed so.
>nobody actually claimed so.
Refer:
>because x + 1 will always be greater than x
Ironically, you're doing the same of what you accuse GCC and Clang of doing.
The compilers throw away instructions that they assume to always be true and therefore have no effect on the outcome.
You ignored the latter part of the sentence (for no discernible reason) and act as if it wouldn't change the meaning. And then you're complaining that the first part does not take into consideration what the second part (which you just ignored) actually did.
>The compilers throw away instructions that they assume to always be true
The compiler makes an incorrect assumption which is obviously, demonstrably and verifiably false, which is a bug in the compiler(s) which do so (GCC and Clang).
The best way to avoid this error in your programs is to use compilers, such as MSVC, TCC or Compcert, which do not contain this bug - rather than spam volatile on every variable which might be affected.
I hope you understand and do not require additional clarification.
You make an incorrect assumption which is obviously, demonstrably and verifiably false, which is a bug in your brain which does so (You).
The best way to avoid this error in your case is to read all of a text and interpret it in whole instead of taking parts out of context.
I hope you understand and do not require additional clarification.
As per the standard, integer overflow is uNdEfInEd BeHaViOr and compilers are free to assume that it never happens.
Whether or not this is true is not important. The fact is, the compiler exists to produce working and useful code, not to follow the standards. If a compiler produces broken executables from sensible source code, there is a bug in the compiler.
GCC and Clang produce code for more than x86 and ARM. Integer overflow on some more esoteric (or future) architectures might cause wildly different results than wrapping and you shouldn't rely on it if you want your code to be portable. (Note that standards-compliant C code can be run with a LISP interpreter!)
I write Windows applications. Windows doesn't run on those literally-who processors, which means I don't want to make their problems my problems by using a crappy compiler which prioritizes obscure architectures over real ones that people actually use.
Linking third party code which breaks my program is out of scope for me. Simply don't do that and there will not be any bugs.
x + 1 < x is a generic test condition which works for *all* integer data types, no matter their signedness or bit width. That is why it is better to use it than some other test, like x == INT_MAX. Relying on wrapping is the cleanest and most future-proof form of overflow check that exists (other than using the built-in overflow flag on the x86 architecture, but there is no way to express that in C).
>i'm programming for windows
Well in that case you can do whatever you want. Don't clean up heap, use goto for loops, don't catch exceptions, etc. You don't need all that "guidelines" on how to write actually good code and not some shit because Windows will do anything that you forgot/haven't done for you.
>I don't want to make their problems my problems by using a crappy compiler which prioritizes obscure architectures over real ones that people actually use
you can't honestly be claiming that the MSVC compiler is better than gcc or clang, right?
The compiler's behavior is correct because it is obeying the standard: it can choose to do whatever it wants when it sees undefined behavior. So it is a one-to-many relationship between the source and the generated assembly.
>The compiler's behavior is correct because it is obeying the standard
The policeman's behavior [bashing you over the head repeatedly] is correct because he is obeying the law.
You are trying to compile C code which has a standard that dictates what the behavior of the program is supposed to be. The compiler implements the standard. Not my fault that you can't comprehend this.
If you violated the law then what is the policeman supposed to do? Forgive you?
Not to mention that computers can't assume what you had in mind. They are simply doing what they are told to do. If you want some code to work while it's shit then the problem is not on the computer side.
>Fragile code which will break when you change the datatype of x.
It's a function retard. If you changed type then it's a type mismatch. -1/10 bait, try again.
>The fact is, the compiler exists to produce working and useful code, not to follow the standards.
Then there is literally no point in a standard.
great, you figured it out.
You're right that it's an error in the compiler
You're wrong about x+1 not always being greater than x
x+1 is always greater than x unless there is integer overflow
Since integer overflow is UB, the compiler can do anything it wants. It can choose to wrap around, zero it, cast it as a long, etc.
The problem is that if the compiler chooses to wrap around, for example, then the assumption that x+1 is always greater than x is false, hence the compiler has inconsistent logic
but if it casts it to a long, then it can continue that assumption
The error is not the assumption per se, it's the inconsistent compiler logic - which is indeed a compiler bug.
>You're right that it's an error in the compiler
wrong, you retard
>its the inconsistent compiler logic - which is indeed a compiler bug.
wrong, you retard
the logic is very consistent
you are not allowed to *cause* overflow in signed integer
period
signed integers are meant to operate only in their limits
and you as a programmer *must* ensure this
it's a bug in your program if you cause or rely on signed integer overflow
that's why x + 1 < x can be assumed from the purely mathematical point of view in signed integers
so it's always false, and can be optimized out
>you are not allowed to *cause* overflow in signed integer
>period
Yes you are. The program compiles, hence it's valid C.
The language of the C spec is:
>If an exceptional condition occurs during the evaluation of an expression (that is, if the result is not mathematically defined or not in the range of representable values for its type), the behavior is undefined.
Hence it is the /compiler/ that is responsible for any bugs.
imbecile
literally
causing ub is YOUR BUG
And your example wouldn't be cached in a register at all for ABI and calling convention reasons; just the return of the function would be cached, at best in rax.
My example is just an example, intended to be short and simple in order to demonstrate compiler bugs in GCC and Clang. Real world examples are more complex. For example, the overflow checking function could be inlined.
>My example is just an example
Yeah, because in real world situation you would use the register specifier too and avoid non-inlined functions at all to ensure it will be cached in a register.
TurboC sisters, it's our time!
if you know what you're doing, you'll use "volatile". if you don't know what you're doing, this is likely an unnecessary check you added because you're an idiot, in which case the compiler is right to remove it.
boa esl
isn't "x + 1 < x" 0 so the if statement is false?
It's true if the number is about to overflow.
For a signed dword, 2147483647 + 1 == -2147483648 (which is obviously less than 2147483647).
Relying on wrapping is considered a bug not only in C++ but in any language. In this case you should write better code. In case you want to check if you reached the type's limits you can compare with predefined macros and you'll never actually write this shit.
>Relying on wrapping is considered a bug
It is not a bug. Integer overflow is reliable and well defined on all processor architectures in use today. Compilers which insist on breaking the behavior are buggy and should not be used.
Your program might be linked with ubsan and get SIGTRAPped when an overflow occurs.
>Integer overflow is reliable and well defined on all processor architectures
But you know what is not reliable? Idiots who work with you who may decide that "int is not enough for that abomination I'm about to write so I'll change to long". Not to mention that this reliability lasts for a short period of time. If you want your software to work on something that's going to be developed in the future you should avoid this shit. This is bad practice in general and every sane programmer will always avoid wrapping.
>Integer overflow is reliable and well defined
wrong, you moron
UNSIGNED integer overflow is reliable and well defined
SIGNED integer overflow is not allowed by the standard, an undefined behavior, moronic and a BUG on your side
That's a bad example since it uses signed integer overflow, which is undefined, but unsigned integer overflow is defined and perfectly acceptable.
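A UB-free version of that check, along the lines several posts suggest, is to compare against the limits *before* adding, so no overflowing expression is ever evaluated (a sketch; the function name is made up):

```c
#include <limits.h>

/* UB-free overflow predicate for signed addition: test against
 * INT_MAX / INT_MIN before performing the add, so the overflowing
 * expression itself never gets evaluated. Works for any operands. */
static int add_would_overflow(int x, int y)
{
    if (y > 0)
        return x > INT_MAX - y;
    return x < INT_MIN - y;
}
```

Unlike x + 1 < x, this can't be optimized away, because it contains no expression the compiler may assume never overflows.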
>pic
what a moron wrote this?
1.
gcc and clang optimize out moronic code
2.
msvc and others simply can't optimize - that's not a feature, that's a joke quality of a compiler
3.
the proper way of checking this:
int will_integer_overflow(int x) {
    return x == INT_MAX;
}
compiles to 3 asm lines
https://godbolt.org/z/hreMb8rTP
4.
signed integers are not allowed to overflow by the standard
x + 1 in the context of x being a max_int is AN ERROR IN YOUR CODE and an UNDEFINED BEHAVIOR
you fucking imbecile
5. gcc/clang optimize code having standard in mind
so it is allowed to simply remove this shit code in this case (see 4.)
>return x == INT_MAX;
Fragile code which will break when you change the datatype of x. 0/10 bait, try again
>signed integers are not allowed to overflow by the standard
In reality, signed integers overflow all the time. The standard also does not say that signed integers "cannot" overflow; it merely says that signed overflow is undefined. This is a clause which is included to allow for obscure and/or outdated processor architectures in which signed overflow is undefined. However, any reasonable person would defer the behavior of signed overflow to the underlying architecture.
Windows only cleans up the heap upon process termination. If your program runs for any non-trivial amount of time, then heap clean-up is still necessary.
Refer to my previous points on signed overflow
>any reasonable person would defer the behavior of signed overflow to the underlying architecture.
no reasonable person would write non-portable code based on undefined behavior
seriously kill yourself at this point
>non-portable
You are missing the forest for the trees.
If you target all the common architectures which are in use today (which means x86 and ARM), then your code is almost guaranteed to run on any hypothetical future processors, because processor designers always consider compatibility with existing code-bases when they design their architecture.
Practically speaking your concerns with portability are irrelevant.
>Not to mention that computers can't assume what you had in mind
The processor itself is not misinterpreting the executable code. The compiler is producing buggy executable code which does not accurately reflect the source code from which it was produced.
>If you violated the law then what is the policeman supposed to do? Forgive you?
If you are pulled over for a routine traffic check, the policeman can either check your license and registration, or he can do a full destructive search of your car while his friends beat you up on the ground for non-compliance with his vague and contradictory instructions. Both are fully legal but only one option is sane and reasonable.
I hope you understand my analogy and do not require any further clarification. If you would like additional assistance, please call the suicide hotline in your country of residence, who will strive to help you terminate your own life in the most painless way possible.
>Both are fully legal
Nope. At least not in my country. In my country the second option is 5 to 10 years in jail for the policeman and his friends. In normal countries there's one law for one crime, no overreach and no underreach. Except special cases where judgment by the authorities applies, but that's never the case for anything computer-related.
>The processor itself
Yes but we are talking about compilers. And compilers are the same as processors – they have standards. If your low-level language is based on assumptions then the JS shithole is two floors down; I think you are stranded.
>Nope. At least not in my country
Oh don't worry, I'm sure your country also has plenty of retarded laws that could theoretically be used to punish you for doing nothing wrong.
Never had problems with the law though. Because I know the laws, and the same goes for standards. I don't try to find a loophole in something only to shoot myself in the foot.
>3. the proper way of checking this:
That is fucking retarded and won't check for integer overflows
int x = INT_MAX - 1;
will_integer_overflow(x); // false
x += 3; // Whoooooops
Same goes for x>x+1. We are simplifying things to increase ease of understanding.
But it's functionally equivalent (if the compiler doesn't optimize) to
, which is similarly retarded
retard has spoken
BASED
>gcc bug
>clang bug
it's only a bug in your dumb head
kys
what's more based is using Intel's safe math library or literally inlining the assembly code in critical infrastructure.
>need to use inline assembly to add 2 numbers because your shitty compiler doesn't support it without ugly hacks
the absolute state of gcc and clang naggers
2 numbers that might overflow and even kill people yes. not everyone is layer upon layer abstracted away from the microprocessors like you.
You can't check for (signed) integer overflow using signed integers, retard.
inline bool
ckdi64_add(int64_t lhs, int64_t rhs, int64_t* result)
{
    uint64_t r;
    r = (uint64_t) lhs + (uint64_t) rhs;
    result[0] = (int64_t) r;
    /* overflow iff both operands have the same sign and the
       result's sign differs from it */
    if ((~(lhs ^ rhs) & (lhs ^ r)) & ((uint64_t) 1 << 63))
        return 1;
    return 0;
}
Integer overflows are undefined behaviour for the C virtual machine, as when C was designed there were still machines that used ones complement for some retarded reason.
Optimizing that call away seems reasonable as there only is that one edge case where it breaks.
If you want to have gcc/clang assume twos complement you can pass -fwrapv as a compiler flag, and now the compiler optimizes the call the same way
did manually, with the added benefit that you can change the datatype without having the issue described in
.
https://godbolt.org/z/rdhq9n54P
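A minimal illustration of what -fwrapv buys you (a sketch; the function name is made up, and without the flag evaluating it at INT_MAX is undefined behavior):

```c
#include <limits.h>

/* Compile with: gcc -O2 -fwrapv overflow.c
 * Under -fwrapv signed overflow is defined to wrap, so gcc/clang
 * must keep this check instead of folding it to 0. Without the
 * flag, evaluating it at x == INT_MAX is undefined behavior. */
static int about_to_overflow(int x)
{
    return x + 1 < x;
}
```

The trade-off: -fwrapv applies to the whole translation unit and disables every optimization that relies on overflow never happening, not just this one.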
>Integer overflows are undefined behaviour for the C virtual machine, as when C was designed there were still machines that used ones complement for some retarded reason.
No, its specifically to allow optimizations. Since in real code, expressions like x+1>x will be intended to always return 1 by the programmer, but if the compiler has to account for overflow, then it can't optimize as such. Similar shit like x*2/2 should optimize to just x. If you want overflow to be defined then use unsigned. BTW, unsigned is defined to overflow like 2's complement.
That's why compilers do it now but the original reason was hardware differences I'm pretty sure. Hence the difference with unsigned.
C was not originally about crazy compiler optimizations. That's a modern thing.
>unsigned is defined to overflow like 2's complement.
That's because unsigned integers were consistent enough across architectures. It's not really "like 2's complement" any more than it's "like 1's complement".
>If you want overflow to be defined then use unsigned
What if I need negative numbers though?
>What if I need negative numbers though?
(int)((unsigned)x + 1)
I don't like these solutions very much
>What if I need negative numbers though?
store the sign in an additional variable
or you could just use a compiler that doesn't play sneaky tricks to win in benchmarks
imbecile detected
see:
and kys
seriously
>or you could just use a compiler that doesn't comply with standards thus making undefined behavior where you do not expect it
Fixed.
Making signed integer overflow wrap is compliant with the standard. Running GCC with -fwrapv doesn't make it any less compliant with the standard.
The standard imposes no requirements on it.
I'm talking about other situations. Had a question on cppquiz where, after a quick check, I found out that M*VC literally disobeyed the standard.
__builtin_add_overflow
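For reference, the builtin mentioned here is a GCC/Clang extension; a minimal wrapper (the wrapper name is made up):

```c
#include <limits.h>

/* __builtin_add_overflow (GCC/Clang extension): returns nonzero
 * if the addition overflowed and always stores the wrapped result.
 * It is type-generic, so changing the operand type doesn't break
 * the check the way x == INT_MAX does. */
static int checked_add(int a, int b, int *out)
{
    return __builtin_add_overflow(a, b, out);
}
```

On x86 this typically compiles down to an add plus a jump on the overflow flag, which is exactly the hardware check the thread keeps asking for.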
>-O2
Fucking retard.
get new bait passfag, I've deboonked this months ago. MSVC lost.
this isnt real
https://github.com/isocpp/CppCoreGuidelines/issues/11
Volatile literally exists to tell the optimizer to fuck off
according to compiler devs no one understands what volatile does, and they encourage banning it in your project
>you should fight the compiler to not eliminate semantically meaningful lines of code
Joke language.
If you think that's a joke. You should check out the rust devs who are leaving the rust project and requesting their names be removed from the repo. Literally people talking about filing gdpr requests against GitHub to have their names removed from the record of rust development.
I didn't even mention Rust once, I don't care about their dramas or a language created and maintained by mentally ill maggots.
If you're not pro-C you're pro-Rust, there are only two sides in this war, make sure you're not on the wrong one
No it doesn't, it exists for threading. Using it to wrangle the compiler is a hack.
This doesn't even work.
Memory access performed by memcpy is _not_ volatile. Plus it kills all optimizations; there are better ways to do this, they are just compiler-specific.
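One such compiler-specific way is an empty asm statement with a "memory" clobber (a sketch for GCC/Clang; the function name is made up):

```c
#include <stddef.h>
#include <string.h>

/* GCC/Clang-specific: the empty asm statement with a "memory"
 * clobber makes the optimizer assume the cleared buffer may still
 * be observed, so the memset is not eliminated, while optimization
 * of the surrounding code stays fully enabled. */
static void secure_clear(void *p, size_t n)
{
    memset(p, 0, n);
    __asm__ __volatile__("" : : "r"(p) : "memory");
}
```

Unlike volatile, this is a one-off barrier: it pins just this memset without pessimizing every other access to the buffer.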
Wow, a security bug? If another process has access to the stack that would be a security bug; it's better to just clear the stack after program termination.
Is this supposed to be some convoluted way to set all memory to 0?
Lmao thanks I'll stick with Rust.
Rust literally does the same thing.
Show assembly.
Check here
I see that the original vector is gone altogether. This is even better than resetting it.
The whole point of the original issue is that memory content must be destroyed. Stores must not be eliminated.
No, the point is that the original values in the arrays are persisting, leading to security exploits. If the object is dropped altogether, the exploit does not happen to begin with.
God I love Rust so much it's unreal.
The object in the picture in the OP is dropped. Same as Rust, basically.
But because the backing memory isn't cleared it may be possible to access the old values if some other piece of code violates memory safety and does an out of bounds read.
Rust only protects against this insofar as it makes these out of bounds reads less likely. It doesn't make it easy to clear the backing memory. (The black_box trick helps but it's best-effort, not guaranteed.)
That one's a vector, not a stack array. The stack array case does behave the same.
But you can force the compiler to preserve the final state.
Wouldn't make any difference, leading underscores only affect the linting. (Plain _ is special though.)
>That one's a vector, not a stack array. The stack array case does behave the same.
LLVM should be capable of optimizing it away with both dynamic and stack allocations (as long as deallocation is marked appropriately e.g. as allocptr).
>But you can force the compiler to preserve the final state.
Same must be done in C, except you will need GCC extension.
The object will be dropped in C too. There is literally no difference. It is all done by LLVM for both Rust and Clang.
People who complain about basic optimizations like that don't realize how load-bearing they are in modern computing. Like, the developer's intent seems obvious in that one, but when the dead code is inside ten nested template/macro expansions, it might not be. If you could take those optimizations away from every program and library, like the programming language weenies want you to, the effect on performance would be so bad it'd make some things unusable.
you are a joke, kill yourself
compiler settings issue, also it's retarded to work with sensitive data in local buffers
Everything you use is done in C
works on my machine, you should get a proper computer instead of a toy
still using c++
I might be confusing this with something else, but doesn't a memset with 0 on a page aligned memory region have the ability to swap the pages in question with pre-filled "zero" pages for increased performance?
Thus giving the pages with "secret" data just to the next process who asks for memory?
>Thus giving the pages with "secret" data just to the next process who asks for memory?
The OS clears them anon
I don't know if memset does it, but calloc absolutely does.
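calloc is in fact required to return zeroed memory; for large allocations libc typically gets fresh zero-filled pages from the kernel and skips the memset entirely. A quick (non-exhaustive) check, with a made-up helper name:

```c
#include <stdlib.h>

/* Verifies that calloc'd memory reads back as all zeros.
 * The standard guarantees this; for page-sized allocations the
 * zeroing is usually done by the kernel's zero-fill-on-demand. */
static int all_zero(size_t n)
{
    unsigned char *p = calloc(n, 1);
    if (p == NULL)
        return 0;
    int ok = 1;
    for (size_t i = 0; i < n; i++)
        if (p[i] != 0)
            ok = 0;
    free(p);
    return ok;
}
```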
>but doesn't a memset with 0 on a page aligned memory region have the ability to swap the pages in question with pre-filled "zero" pages for increased performance
Don't know of any implementation that does anything like that. This would degrade performance unless we are talking about really large buffers. The problem is that any destructive VM operation requires a cross-CPU TLB flush.
>Thus giving the pages with "secret" data just to the next process who asks for memory?
Any new memory given to a process is cleared for security reasons.
That's why memset_s exists btw
Imagine that you are not allowed to access your own hardware's memory; it's on par with supporting censorship. Shame on you.
You can't access "hardware memory" in C either
that depends on the os (or lack of)
>tried the memset
>compiler optimized to rep stosq
What's your point?
how is the OPs pic a bug if the buffer isn't used after being set to 0
the buffer contains a password, the computer goes to sleep, glowies do a cold boot attack and steal the password because the password wasn't erased first
Your question would be sound if we were talking about a garbage collected language. In C, the program's allocated memory stays the way you set it during the lifetime of the program, nothing touches it unless your program explicitly does so.
you're fucking blind and/or retarded
Care to explain, you blithering idiot?
why would I explain anything to someone who doesn't even understand the lifetime of a local variable in C, you're a waste of oxygen
So you do have no idea what you're talking about, thanks for confirming.
it's called dead store elimination, use volatile or memset_s
>Joke language
>actually the optimizer fucking everything up and not any issue with the language
Not OP but the language is defined by the standard and the optimizer is following the standard. The standard is fucked up in all kinds of ways. Maybe the language is actually dogshit. Either way, you're a dogshit programmer.
the optimizer fucking everything up and not any issue with the language
actually the OP fucking everything up and not any issue with the language nor compiler/optimizers
see:
maybe its just a shit compiler
meanwhile rust doesn't even allow you to memset/clear memory without *unsafe* block
and in *any* other joke language like Java/C# you can't even know what memory you are operating on at any given moment, so the leaks can be under the hood and you won't even notice
In Rust this is just
let mut vec = vec![0; len];
vec.clear();
Exceedingly common Rust W.
>404 Logo not found
>In ©Rust this is just®™
I need the work, what do you say? Can I be the logoman?
Go for it. Add a cute crab too.
cute crab snipping up some C code above his head like the swedish chef tosses salad?
um Jim Henson's Muppets Swedish Chef
No, cute crab holding a bell pepper and RMS is watching it from afar.
What does this cute crab logo you are envisioning represent exactly? What is the logo for in your future for it?
this one.
Do you want me to make the logo a pdf that when you click the coin it takes them to a donation page?
Apologies to anyone who might have used the above term in a search engine without the added clarifiers and with safe search turned off like I did.
>zero some arbitrary region of unused heap memory, then reset the length counter
>the optimizer removes it all
woag
>>the optimizer removes it all
So it's working as intended. lmao Rustchads can't stop winning.
Try with underscore. _vec
arguing about C/C++ undefined behavior is the mutt's law of LULZ
That's extremely stupid and the compiler shouldn't do things like that, but why wouldn't you just char buffer[MAX]{};?
It appears that Rust does call memset.
Rustbros we win again.
Bro why'd you copy C++ syntax?
In what world is [3; 100]; a C++ syntax, lilbro?
>;
>f() {}
>std::
>';' must be C++
>brackets must be C++
>std must be C++
Brainrot
You'd think these rustards would at least try to differentiate their gay little language from C. I guess that's too difficult for them.
Kinda hard to do if every time your brainrotted peanut brain sees a semi-colon it thinks it's C. But hey, if it's literally C, why do cniles get filtered by the syntax?
Food for thought but I guess cniles lack the critical thinking part of the brain.
Spoken languages are substantially different from one another; rust can't even use a different statement terminator.
>Spoken languages are substantially different from one another;
Rust is not a spoken language. Go take your pseudo intellectual takes back to /b/
What would be the point of using a different statement terminator? Being quirky?
Rust's statement terminator does have different semantics compared to C's, it's more of a separator. If the last statement of a block isn't terminated then the block evaluates to its value.
It's just cnile cope.
even better
Damn, do cniles really need to call assembly if they want to clear an array?
I'll be sticking to rust thank you.
Do you see any assembly instructions? Exactly.
>cniles
What the hell is wrong with you?
what the fuck is a "postcondition" or "postcondition that the buffer should be cleared" ? and why is this a security issue?
A postcondition is something that you want to be true once you're done.
They want the buffer to be cleared when the function returns to make sure that exploits can't read the data that was in it while it was still in use. (For example a password.)
So something else would have to go wrong for it to be a security issue. But these exploits do tend to crop up, so it's good to limit the damage they can do.
brainrotted cniles I wonder why they would even hold on to their legacy language! What's the point when they're senile and old??? IMAGINE CARING ABOUT LEGACY SUPPORT!
>ITT: terminally incompetent C trannies discover that "C IS JUST LIKE PORTABLE ASSEMBLY" is nothing but a god damn lie.